/* This file is part of libDAI - http://www.libdai.org/
 *
 * Copyright (c) 2006-2011, The libDAI authors. All rights reserved.
 *
 * Use of this source code is governed by a BSD-style license that can be found in the LICENSE file.
 */


/** \file
 * \brief Contains additional doxygen documentation
 *
 * \idea Adapt (part of the) guidelines in http://www.boost.org/development/requirements.html#Design_and_Programming
 *
 * \idea Use "gcc -MM" to generate dependencies for targets: http://make.paulandlesley.org/autodep.html
 *
 * \idea Disentangle structures. In particular, ensure that graphical properties are not
 * entangled with probabilistic properties. For example, a FactorGraph contains several components:
 * - a BipartiteGraph
 * - an array of variable labels
 * - an array of variable state space sizes
 * - an array of pointers to factor value vectors
 * In this way, each factor could be implemented differently, e.g., we could have
 * some sparse factors, some noisy-OR factors, some dense factors, some arbitrary
 * precision factors, etcetera.
 *
 * \idea Use the boost::uBLAS framework to deal with matrices, especially with 2D sparse matrices.
 * See http://www.boost.org/libs/numeric/ublas/doc/matrix_sparse.htm
 * However: I read somewhere that boost::uBLAS concentrates more on correct implementation than on performance.
 **/


/** \mainpage Reference manual for libDAI - A free/open source C++ library for Discrete Approximate Inference methods
 * \author Joris Mooij (with contributions of Frederik Eaton)
 * \version git HEAD
 * \date July 12, 2011 - or later
 *
 * <hr size="1">
 * \section about About libDAI
 * libDAI is a free/open source C++ library that provides implementations of
 * various (approximate) inference methods for discrete graphical models. libDAI
 * supports arbitrary factor graphs with discrete variables; this includes
 * discrete Markov Random Fields and Bayesian Networks.
 *
 * The library is targeted at researchers. To be able to use the library, a
 * good understanding of graphical models is needed.
 * The best way to use libDAI is by writing C++ code that invokes the library;
 * in addition, part of the functionality is accessible by using the
 * - command line interface
 * - (limited) MatLab interface
 * - (experimental) python interface
 * - (experimental) octave interface.
 *
 * libDAI can be used to implement novel (approximate) inference algorithms
 * and to easily compare their accuracy and performance with algorithms
 * that have already been implemented.
 *
 * A solver using libDAI was amongst the three winners of the UAI 2010 Approximate
 * Inference Challenge (see http://www.cs.huji.ac.il/project/UAI10/ for more
 * information). The full source code is provided as part of the library.
 *
 * \section features Features
 * Currently, libDAI supports the following (approximate) inference methods:
 * - Exact inference by brute force enumeration;
 * - Exact inference by junction-tree methods;
 * - Mean Field;
 * - Loopy Belief Propagation [\ref KFL01];
 * - Fractional Belief Propagation [\ref WiH03];
 * - Tree-Reweighted Belief Propagation [\ref WJW03];
 * - Tree Expectation Propagation [\ref MiQ04];
 * - Generalized Belief Propagation [\ref YFW05];
 * - Double-loop GBP [\ref HAK03];
 * - Various variants of Loop Corrected Belief Propagation
 *   [\ref MoK07, \ref MoR05];
 * - Gibbs sampler;
 * - Conditioned Belief Propagation [\ref EaG09];
 * - Decimation algorithm.
 *
 * These inference methods can be used to calculate partition sums, marginals
 * over subsets of variables, and MAP states (the joint state of variables that
 * has maximum probability).
 *
 * In addition, libDAI supports parameter learning of conditional probability
 * tables by Expectation Maximization.
 *
 * \section limitations Limitations
 * libDAI is not intended to be a complete package for approximate inference.
 * Instead, it should be considered as an "inference engine", providing
 * various inference methods. In particular, it contains no GUI, currently
 * only supports its own file format for input and output (although support
 * for standard file formats may be added later), and provides very limited
 * visualization functionalities. The only learning method supported currently
 * is Expectation Maximization (or Maximum Likelihood if no data is missing)
 * for learning factor parameters.
 *
 * \section rationale Rationale
 *
 * In my opinion, the lack of open source "reference" implementations hampers
 * progress in research on approximate inference. Methods differ widely in terms
 * of quality and performance characteristics, which also depend in different
 * ways on various properties of the graphical models. Finding the best
 * approximate inference method for a particular application therefore often
 * requires empirical comparisons. However, implementing and debugging these
 * methods takes a lot of time which could otherwise be spent on research. I hope
 * that this code will help researchers to easily compare various
 * (existing as well as new) approximate inference methods, in this way
 * accelerating research and stimulating real-world applications of approximate
 * inference.
 *
 * \section language Language
 * Because libDAI is implemented in C++, it is very fast compared with
 * implementations in MatLab (a speedup by a factor of 1000 is not uncommon).
 * libDAI does provide a (limited) MatLab interface for easy integration with MatLab.
 * It also provides a command line interface and experimental python and octave
 * interfaces (thanks to Patrick Pletscher).
 *
 * \section compatibility Compatibility
 *
 * The code has been developed under Debian GNU/Linux with the GCC compiler suite.
 * libDAI compiles successfully with g++ versions 3.4 up to 4.7 (both 32 and 64 bits).
 *
 * libDAI has also been successfully compiled with MS Visual Studio 2008 under Windows,
 * MS Visual Studio 2010 under Windows 64, and with Cygwin under Windows.
 *
 * Finally, libDAI has been compiled successfully on MacOS X (both 32 and 64 bits).
 *
 * \section download Downloading libDAI
 * The libDAI sources and documentation can be downloaded from the libDAI website:
 * http://www.libdai.org.
 *
 * \section support Mailing list
 * The Google group "libDAI" (http://groups.google.com/group/libdai)
 * can be used for getting support and discussing development issues.
 */


/** \page license License
 * <hr size="1">
 * \section license-license License
 *
 * libDAI is free software; you can redistribute it and/or modify it under the
 * terms of the BSD 2-clause license (also known as the FreeBSD license), which
 * can be found in the accompanying LICENSE file.
 *
 * [Note: up to and including version 0.2.7, libDAI was licensed under the GNU
 * General Public License (GPL) version 2 or higher.]
 *
 * <hr size="1">
 * \section license-freebsd libDAI license (similar to FreeBSD license)
 *
 * \verbinclude LICENSE
 */


/** \page citations Citing libDAI
 * <hr size="1">
 * \section citations-citations Citing libDAI
 *
 * If you write a scientific paper describing research that made substantive use
 * of this library, please cite the following paper describing libDAI:\n
 *
 * Joris M. Mooij;\n
 * libDAI: A free & open source C++ library for Discrete Approximate Inference in graphical models;\n
 * Journal of Machine Learning Research, 11(Aug):2169-2173, 2010.\n
 *
 * In BibTeX format (for your convenience):\n
 *
 * <pre>
 * \@article{Mooij_libDAI_10,
 *   author  = {Joris M. Mooij},
 *   title   = {lib{DAI}: A Free and Open Source {C++} Library for Discrete Approximate Inference in Graphical Models},
 *   journal = {Journal of Machine Learning Research},
 *   year    = 2010,
 *   month   = Aug,
 *   volume  = 11,
 *   pages   = {2169-2173},
 *   url     = "http://www.jmlr.org/papers/volume11/mooij10a/mooij10a.pdf"
 * }</pre>
 *
 * Moreover, as a personal note, I would appreciate being informed about any
 * publications using libDAI at joris dot mooij at libdai dot org.
 */


/** \page authors Authors
 * \section authors-authors People who contributed to libDAI
 *
 * \verbinclude AUTHORS
 */


/** \page build Building libDAI
 * <hr size="1">
 * \section build-unix Building libDAI under UNIX variants (Linux / Cygwin / Mac OS X)
 *
 * \subsection build-unix-preparations Preparations
 *
 * You need:
 * - a recent version of gcc (at least version 3.4)
 * - GNU make
 * - recent boost C++ libraries (at least version 1.37; however,
 *   version 1.37 shipped with Ubuntu 9.04 is known not to work)
 * - GMP library (or its Windows port MPIR; for 64-bit builds, MPIR 2.5.0 or higher is needed)
 * - doxygen (only for building the documentation)
 * - graphviz (only for using some of the libDAI command line utilities)
 * - CImg library (only for building the image segmentation example)
 *
 * On Debian/Ubuntu, you can easily install the required packages with a single command:
 * <pre>  apt-get install g++ make doxygen graphviz libboost-dev libboost-graph-dev libboost-program-options-dev libboost-test-dev libgmp-dev cimg-dev</pre>
 * (root permissions needed).
 *
 * On Mac OS X (10.4 is known to work), these packages can be installed easily via MacPorts.
 * If MacPorts is not already installed, install it according to the instructions at http://www.macports.org/.
 * Then, a simple
 * <pre>  sudo port install gmake boost gmp doxygen graphviz</pre>
 * should be enough to install everything that is needed.
 *
 * On Cygwin, the prebuilt Cygwin package boost-1.33.1-x is known not to work.
 * You can however obtain the latest boost version (you need at least 1.37.0)
 * from http://www.boost.org/ and build it as described in the next subsection.
 *
 * \subsubsection build-unix-boost Building boost under Cygwin
 *
 * - Download the latest boost libraries from http://www.boost.org
 * - Build the required boost libraries using:
 *   <pre>
 *   ./bootstrap.sh --with-libraries=program_options,math,graph,test --prefix=/boost_root/
 *   ./bjam</pre>
 * - In order to use dynamic linking, the boost .dll's should be somewhere in the path.
 *   This can be achieved by a command like:
 *   <pre>
 *   export PATH=$PATH:/boost_root/stage/lib</pre>
 *
 *
 * \subsection build-unix-libdai Building libDAI
 *
 * To build the libDAI source, first copy a template Makefile.* to Makefile.conf
 * (for example, copy Makefile.LINUX to Makefile.conf if you use GNU/Linux).
 * Then, edit the Makefile.conf template to adapt it to your local setup. In case
 * you want to use Boost libraries which are installed in non-standard locations,
 * you have to tell the compiler and linker about their locations (using the
 * -I, -L flags for GCC; also you may need to set the LD_LIBRARY_PATH environment
 * variable correctly before running libDAI binaries). Platform-independent build
 * options can be set in Makefile.ALL. Finally, run
 * <pre>  make</pre>
 * The build includes a regression test, which may take a while to complete.
 *
 * If the build is successful, you can test the example program:
 * <pre>  examples/example tests/alarm.fg</pre>
 * or the more extensive test program:
 * <pre>  tests/testdai --aliases tests/aliases.conf --filename tests/alarm.fg --methods JTREE_HUGIN BP_SEQMAX</pre>
 *
 *
 * <hr size="1">
 * \section build-windows Building libDAI under Windows
 *
 * \subsection build-windows-preparations Preparations
 *
 * You need:
 * - A recent version of Microsoft Visual Studio (2008 is known to work)
 * - recent boost C++ libraries (version 1.37 or higher)
 * - GMP or MPIR library (for 64-bit builds, MPIR 2.5.0 or higher is needed)
 * - GNU make (can be obtained from http://gnuwin32.sourceforge.net)
 * - CImg library (only for building the image segmentation example)
 *
 * For the regression test, you need:
 * - GNU diff, GNU sed (can be obtained from http://gnuwin32.sourceforge.net)
 *
 * \subsubsection build-windows-boost Building boost under Windows
 *
 * Because building boost under Windows is tricky, I provide some guidance here.
 *
 * - Download the boost zip file from http://www.boost.org/users/download
 *   and unpack it somewhere.
 * - Download the bjam executable from http://www.boost.org/users/download
 *   and unpack it somewhere else.
 * - Download Boost.Build (v2) from http://www.boost.org/docs/tools/build/index.html
 *   and unpack it in yet another location.
 * - Edit the file \c boost-build.jam in the main boost directory to change the
 *   \c BOOST_BUILD directory to the place where you put Boost.Build (use UNIX
 *   / instead of Windows \ in pathnames).
 * - Copy the \c bjam.exe executable into the main boost directory.
 *   Now if you issue <tt>"bjam --version"</tt> you should get a version and no errors.
 *   Issuing <tt>"bjam --show-libraries"</tt> will show the libraries that will be built.
 * - The following command builds the boost libraries that are relevant for libDAI:
 *   <pre>
 *   bjam --with-graph --with-math --with-program_options --with-test link=static runtime-link=shared</pre>
 *
 * \subsection build-windows-gmp Building GMP or MPIR under Windows
 *
 * Information about how to build GMP or MPIR under Windows can be found on the internet.
 * The user has to update Makefile.WINDOWS in order to link with the GMP/MPIR libraries.
 * Note that for 64-bit builds, MPIR 2.5.0 or higher is needed.
 *
 * \subsection build-windows-libdai Building libDAI
 *
 * To build the source, copy Makefile.WINDOWS to Makefile.conf. Then, edit
 * Makefile.conf to adapt it to your local setup. Platform-independent
 * build options can be set in Makefile.ALL. Finally, run (from the command line)
 * <pre>  make</pre>
 * The build includes a regression test, which may take a while to complete.
 *
 * If the build is successful, you can test the example program:
 * <pre>  examples\\example tests\\alarm.fg</pre>
 * or the more extensive test program:
 * <pre>  tests\\testdai --aliases tests\\aliases.conf --filename tests\\alarm.fg --methods JTREE_HUGIN BP_SEQMAX</pre>
 *
 *
 * <hr size="1">
 * \section build-matlab Building the libDAI MatLab interface
 *
 * You need:
 * - MatLab
 * - The platform-dependent requirements described above
 *
 * First, you need to build the libDAI source as described above for your
 * platform. By default, the MatLab interface is disabled, so before compiling the
 * source, you have to enable it in Makefile.ALL by setting
 * <pre>  WITH_MATLAB=true</pre>
 * Also, you have to configure the MatLab-specific parts of
 * Makefile.conf to match your system (in particular, the Makefile variables ME,
 * MATLABDIR and MEX). The MEX file extension depends on your platform; for a
 * 64-bit Linux x86_64 system this would be "ME=.mexa64", for a 32-bit Linux x86
 * system "ME=.mexglx". If you are unsure about your MEX file
 * extension: it needs to be the same as what the MatLab command "mexext" returns.
 * The required MEX files are built by issuing
 * <pre>  make</pre>
 * from the command line. The MatLab interface is much less powerful than using
 * libDAI from C++. There are two reasons for this: (i) it is boring to write MEX
 * files; (ii) a large performance penalty is paid when large data structures (like
 * factor graphs) have to be converted from their native C++ representation into
 * something that MatLab understands.
 *
 * A simple example of how to use the MatLab interface is the following (entered
 * at the MatLab prompt), which performs exact inference by the junction tree
 * algorithm and approximate inference by belief propagation on the ALARM network:
 * <pre>  cd path_to_libdai/matlab
 *  [psi] = dai_readfg ('../tests/alarm.fg');
 *  [logZ,q,md,qv,qf] = dai (psi, 'JTREE', '[updates=HUGIN,verbose=0]')
 *  [logZ,q,md,qv,qf] = dai (psi, 'BP', '[updates=SEQMAX,tol=1e-9,maxiter=10000,logdomain=0]')</pre>
 * where "path_to_libdai" has to be replaced with the directory in which libDAI
 * was installed. For other algorithms and some default parameters, see the file
 * tests/aliases.conf.
 *
 * <hr size="1">
 * \section build-doxygen Building the documentation
 *
 * Install doxygen, graphviz and a TeX distribution and use
 * <pre>  make doc</pre>
 * to build the documentation. If the documentation is not clear enough, feel free
 * to send me an email (or even better, to improve the documentation and send a patch!).
 * The documentation can also be browsed online at http://www.libdai.org.
 */


/** \page changelog Change Log
 * \verbinclude ChangeLog
 */


/** \page terminology Terminology and conventions
 *
 * \section terminology-graphicalmodels Graphical models
 *
 * Commonly used graphical models are Bayesian networks and Markov random fields.
 * In libDAI, both types of graphical models are represented by a slightly more
 * general type of graphical model: a factor graph [\ref KFL01].
 *
 * An example of a Bayesian network is:
 * \dot
 * digraph bayesnet {
 *     size="1,1";
 *     x0 [label="0"];
 *     x1 [label="1"];
 *     x2 [label="2"];
 *     x3 [label="3"];
 *     x4 [label="4"];
 *     x0 -> x1;
 *     x0 -> x2;
 *     x1 -> x3;
 *     x1 -> x4;
 *     x2 -> x4;
 * }
 * \enddot
 * The probability distribution of a Bayesian network factorizes as:
 * \f[ P(\mathbf{x}) = \prod_{i\in\mathcal{V}} P(x_i \,|\, x_{\mathrm{pa}(i)}) \f]
 * where \f$\mathrm{pa}(i)\f$ are the parents of node \a i in a DAG.
 *
 * The same probability distribution can be represented as a Markov random field:
 * \dot
 * graph mrf {
 *     size="1.5,1.5";
 *     x0 [label="0"];
 *     x1 [label="1"];
 *     x2 [label="2"];
 *     x3 [label="3"];
 *     x4 [label="4"];
 *     x0 -- x1;
 *     x0 -- x2;
 *     x1 -- x2;
 *     x1 -- x3;
 *     x1 -- x4;
 *     x2 -- x4;
 * }
 * \enddot
 *
 * The probability distribution of a Markov random field factorizes as:
 * \f[ P(\mathbf{x}) = \frac{1}{Z} \prod_{C\in\mathcal{C}} \psi_C(x_C) \f]
 * where \f$ \mathcal{C} \f$ are the cliques of an undirected graph,
 * \f$ \psi_C(x_C) \f$ are "potentials" or "compatibility functions", and
 * \f$ Z \f$ is the partition sum which properly normalizes the probability
 * distribution.
 *
 * Finally, the same probability distribution can be represented as a factor graph:
 * \dot
 * graph factorgraph {
 *     size="1.8,1";
 *     x0 [label="0"];
 *     x1 [label="1"];
 *     x2 [label="2"];
 *     x3 [label="3"];
 *     x4 [label="4"];
 *     f01 [shape="box",label=""];
 *     f02 [shape="box",label=""];
 *     f13 [shape="box",label=""];
 *     f124 [shape="box",label=""];
 *     x0 -- f01;
 *     x1 -- f01;
 *     x0 -- f02;
 *     x2 -- f02;
 *     x1 -- f13;
 *     x3 -- f13;
 *     x1 -- f124;
 *     x2 -- f124;
 *     x4 -- f124;
 * }
 * \enddot
 *
 * The probability distribution of a factor graph factorizes as:
 * \f[ P(\mathbf{x}) = \frac{1}{Z} \prod_{I\in \mathcal{F}} f_I(x_I) \f]
 * where \f$ \mathcal{F} \f$ are the factor nodes of a factor graph (a
 * bipartite graph consisting of variable nodes and factor nodes),
 * \f$ f_I(x_I) \f$ are the factors, and \f$ Z \f$ is the partition sum
 * which properly normalizes the probability distribution.
 *
 * Looking at the expressions for the joint probability distributions,
 * it is obvious that Bayesian networks and Markov random fields can
 * both be easily represented as factor graphs. Factor graphs most
 * naturally express the factorization structure of a probability
 * distribution, and hence are a convenient representation for approximate
 * inference algorithms, which all try to exploit this factorization.
 * This is why libDAI uses a factor graph as representation of a
 * graphical model, implemented in the dai::FactorGraph class.
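 *
 * As an illustration, here is a minimal sketch (assuming only the documented
 * dai::Var, dai::Factor and dai::FactorGraph interfaces) of constructing a tiny
 * factor graph with two binary variables connected by one pairwise factor:
 * \code
 * #include <dai/factorgraph.h>
 * #include <vector>
 * using namespace dai;
 *
 * int main() {
 *     Var x0( 0, 2 );                  // variable with label 0, two states
 *     Var x1( 1, 2 );                  // variable with label 1, two states
 *     Factor f01( VarSet( x0, x1 ) );  // factor depending on x0 and x1
 *     f01.set( 0, 0.3 );               // entry for (x0,x1) = (0,0)
 *     f01.set( 1, 0.7 );               // entry for (x0,x1) = (1,0)
 *     f01.set( 2, 0.4 );               // entry for (x0,x1) = (0,1)
 *     f01.set( 3, 0.6 );               // entry for (x0,x1) = (1,1)
 *     std::vector<Factor> factors;
 *     factors.push_back( f01 );
 *     FactorGraph fg( factors );       // build the factor graph
 *     return 0;
 * }
 * \endcode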
 *
 * \section terminology-inference Inference tasks
 *
 * Given a factor graph, specified by the variable nodes \f$\{x_i\}_{i\in\mathcal{V}}\f$,
 * the factor nodes \f$ \mathcal{F} \f$, the graph structure, and the factors
 * \f$\{f_I(x_I)\}_{I\in\mathcal{F}}\f$, the following tasks are important:
 *
 * - Calculating the partition sum:
 *   \f[ Z = \sum_{\mathbf{x}_{\mathcal{V}}} \prod_{I \in \mathcal{F}} f_I(x_I) \f]
 * - Calculating the marginal distribution of a subset of variables
 *   \f$\{x_i\}_{i\in A}\f$:
 *   \f[ P(\mathbf{x}_{A}) = \frac{1}{Z} \sum_{\mathbf{x}_{\mathcal{V}\setminus A}} \prod_{I \in \mathcal{F}} f_I(x_I) \f]
 * - Calculating the MAP state, which has the maximum probability mass:
 *   \f[ \mathrm{argmax}_{\mathbf{x}}\,\prod_{I\in\mathcal{F}} f_I(x_I) \f]
 *
 * libDAI offers several inference algorithms, which solve (a subset of) these tasks either
 * approximately or exactly, for factor graphs with discrete variables. The following
 * algorithms are implemented:
 *
 * Exact inference:
 * - Brute force enumeration: dai::ExactInf
 * - Junction-tree method: dai::JTree
 *
 * Approximate inference:
 * - Mean Field: dai::MF
 * - (Loopy) Belief Propagation: dai::BP [\ref KFL01]
 * - Fractional Belief Propagation: dai::FBP [\ref WiH03]
 * - Tree-Reweighted Belief Propagation: dai::TRWBP [\ref WJW03]
 * - Tree Expectation Propagation: dai::TreeEP [\ref MiQ04]
 * - Generalized Belief Propagation: dai::HAK [\ref YFW05]
 * - Double-loop GBP: dai::HAK [\ref HAK03]
 * - Loop Corrected Belief Propagation: dai::MR [\ref MoR05] and dai::LC [\ref MoK07]
 * - Gibbs sampling: dai::Gibbs
 * - Conditioned Belief Propagation: dai::CBP [\ref EaG09]
 * - Decimation algorithm: dai::DecMAP
 *
 * Not all inference tasks are implemented by each method: calculating MAP states
 * is only possible with dai::JTree, dai::BP and dai::DecMAP; calculating partition sums is
 * not possible with dai::MR, dai::LC and dai::Gibbs.
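 *
 * As a minimal sketch of how these algorithms are invoked (closely following the
 * example programs shipped with libDAI; the option values below are merely
 * illustrative), exact inference with dai::JTree and approximate inference with
 * dai::BP on the same factor graph could look like this:
 * \code
 * #include <dai/alldai.h>
 * #include <iostream>
 * using namespace dai;
 * using namespace std;
 *
 * int main() {
 *     FactorGraph fg;
 *     fg.ReadFromFile( "tests/alarm.fg" );  // load a factor graph from file
 *
 *     PropertySet opts;                     // stores the algorithm options
 *     opts.set( "maxiter", (size_t)10000 ); // maximum number of iterations
 *     opts.set( "tol", Real(1e-9) );        // convergence tolerance
 *     opts.set( "verbose", (size_t)0 );     // no output to stdout
 *
 *     // Exact inference with the junction tree algorithm
 *     JTree jt( fg, opts("updates",string("HUGIN")) );
 *     jt.init();
 *     jt.run();
 *
 *     // Approximate inference with loopy belief propagation
 *     BP bp( fg, opts("updates",string("SEQRND"))("logdomain",false) );
 *     bp.init();
 *     bp.run();
 *
 *     // Compare the exact and approximate single-variable marginals
 *     for( size_t i = 0; i < fg.nrVars(); i++ )
 *         cout << jt.belief( fg.var(i) ) << " vs. " << bp.belief( fg.var(i) ) << endl;
 *     return 0;
 * }
 * \endcode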
 *
 * \section terminology-learning Parameter learning
 *
 * In addition, libDAI supports parameter learning of conditional probability
 * tables by Expectation Maximization (or Maximum Likelihood, if there is no
 * missing data). This is implemented in dai::EMAlg.
 *
 * \section terminology-variables-states Variables and states
 *
 * Linear states are a concept that is used often in libDAI, for example for storing
 * and accessing factors, which are functions mapping from states of a set of variables
 * to the real numbers. Internally, a factor is stored as an array, and the array index
 * of an entry corresponds with the linear state of the set of variables. Below we will
 * define variables, states and linear states of (sets of) variables.
 *
 * \subsection terminology-variables Variables
 *
 * Each (random) \a variable has a unique identifier, its \a label (which has
 * a non-negative integer value). If two variables have the same
 * label, they are considered identical. A variable can take on a finite
 * number of different values or \a states.
 *
 * We use the following notational conventions. The discrete
 * random variable with label \f$l\f$ is denoted as \f$x_l\f$, and the number
 * of possible values of this variable as \f$S_{x_l}\f$ or simply \f$S_l\f$.
 * The set of possible values of variable \f$x_l\f$ is denoted
 * \f$X_l := \{0,1,\dots,S_l-1\}\f$ and called its \a state \a space.
 *
 * \subsection terminology-variable-sets Sets of variables and the canonical ordering
 *
 * Let \f$A := \{x_{l_1},x_{l_2},\dots,x_{l_n}\}\f$ be a set of variables.
 *
 * The \a canonical \a ordering of the variables in \a A is induced by their labels.
 * That is: if \f$l_1 < l_2\f$, then \f$x_{l_1}\f$ occurs before \f$x_{l_2}\f$ in the
 * canonical ordering. Below, we will assume that \f$(l_i)_{i=1}^n\f$ is
 * ordered according to the canonical ordering, i.e., \f$l_1 < l_2 < \dots < l_n\f$.
 *
 * \subsection terminology-variable-states States and linear states of sets of variables
 *
 * A \a state of the variables in \a A refers to a joint assignment of the
 * variables, or in other words, to an element of the Cartesian product
 * \f$ \prod_{i=1}^n X_{l_i}\f$ of the state spaces of the variables in \a A.
 * Note that a state can also be interpreted as a mapping from variables (or
 * variable labels) to the natural numbers, which assigns to a variable (or its
 * label) the corresponding state of the variable.
 *
 * A state of \a n variables can be represented as an n-tuple of
 * non-negative integers: \f$(s_1,s_2,\dots,s_n)\f$ corresponds to the
 * joint assignment \f$x_{l_1} = s_1, \dots, x_{l_n} = s_n\f$.
 * Alternatively, a state can be represented compactly as one non-negative integer;
 * this representation is called a \a linear \a state. The linear state
 * \a s corresponding to the state \f$(s_1,s_2,\dots,s_n)\f$ would be:
 * \f[
 *   s := \sum_{i=1}^n s_i \prod_{j=1}^{i-1} S_{l_j}
 *      = s_1 + s_2 S_{l_1} + s_3 S_{l_1} S_{l_2} + \dots + s_n S_{l_1} \cdots S_{l_{n-1}}.
 * \f]
 *
 * Vice versa, given a linear state \a s for the variables \a A, the
 * corresponding state \f$s_i\f$ of the \a i 'th variable \f$x_{l_i}\f$ (according to
 * the canonical ordering of the variables in \a A) is given by
 * \f[
 *   s_i = \left\lfloor\frac{s \,\mathrm{mod}\, \prod_{j=1}^i S_{l_j}}{\prod_{j=1}^{i-1} S_{l_j}}\right\rfloor.
 * \f]
 *
 * Finally, the \a number \a of \a states of the set of variables \a A is simply the
 * number of different joint assignments of the variables, that is, \f$\prod_{i=1}^n S_{l_i}\f$.
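 *
 * The following sketch (plain C++, independent of the libDAI API) shows how a
 * linear state can be computed from a tuple of states and decoded again, given
 * the state space sizes in canonical ordering:
 * \code
 * #include <cstddef>
 * #include <vector>
 *
 * // Encode a state tuple (s_1,...,s_n) as a linear state; S[i] is the
 * // number of possible values of the i'th variable (canonical ordering).
 * std::size_t linearState( const std::vector<std::size_t>& s,
 *                          const std::vector<std::size_t>& S ) {
 *     std::size_t ls = 0, prod = 1;
 *     for( std::size_t i = 0; i < s.size(); i++ ) {
 *         ls += s[i] * prod;   // adds s_i * S_{l_1} * ... * S_{l_{i-1}}
 *         prod *= S[i];
 *     }
 *     return ls;
 * }
 *
 * // Decode a linear state back into the corresponding state tuple.
 * std::vector<std::size_t> stateTuple( std::size_t ls,
 *                                      const std::vector<std::size_t>& S ) {
 *     std::vector<std::size_t> s( S.size() );
 *     for( std::size_t i = 0; i < S.size(); i++ ) {
 *         s[i] = ls % S[i];    // remainder is the state of the i'th variable
 *         ls /= S[i];
 *     }
 *     return s;
 * }
 * \endcode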
 */


/** \page fileformats libDAI file formats
 *
 * \section fileformats-factorgraph Factor graph (.fg) file format
 *
 * This section describes the .fg file format used in libDAI to store factor graphs.
 * Markov Random Fields are special cases of factor graphs, as are Bayesian
 * networks. A factor graph can be specified as follows: for each factor, one has
 * to specify which variables occur in the factor, what their respective
 * cardinalities (i.e., number of possible values) are, and a table listing all
 * the values of that factor for all possible configurations of these variables.
 *
 * A .fg file is not much more than that. It starts with a line containing the
 * number of factors in that graph, followed by an empty line. Then all factors
 * are specified, using one block for each factor, where the blocks are separated
 * by empty lines. Each variable occurring in the factor graph has a unique
 * identifier, its label (which should be a nonnegative integer). Comment lines,
 * which start with #, are ignored.
 *
 * \subsection fileformats-factorgraph-factor Factor block format
 *
 * Each block describing a factor starts with a line containing the number of
 * variables in that factor. The second line contains the labels of these
 * variables, separated by spaces (labels are nonnegative integers and to avoid
 * confusion, it is suggested to start counting at 0). The third line contains
 * the number of possible values of each of these variables, also separated by
 * spaces. Note that there is some redundancy here, since if a variable appears
 * in more than one factor, the cardinality of that variable appears several
 * times in the .fg file; obviously, these cardinalities should be consistent.
 * The fourth line contains the number of nonzero entries
 * in the factor table. The rest of the lines contain these nonzero entries;
 * each line consists of a table index, followed by white-space, followed by the
 * value corresponding to that table index. The most difficult part is getting
 * the indexing right. The convention that is used is that the left-most
 * variables cycle through their values the fastest (similar to MatLab indexing
 * of multidimensional arrays).
 * \subsubsection fileformats-factorgraph-factor-example Example
 *
 * An example block describing one factor is:
 *
 * <pre>
 * 3
 * 4 8 7
 * 3 2 2
 * 11
 * 0 0.1
 * 1 3.5
 * 2 2.8
 * 3 6.3
 * 4 8.4
 * 6 7.4
 * 7 2.4
 * 8 8.9
 * 9 1.3
 * 11 2.6
 * 10 1.6
 * </pre>
 *
 * which corresponds to the following factor:
 *
 * \f[
 * \begin{array}{ccc|c}
 * x_4 & x_8 & x_7 & \mbox{value}\\
 * \hline
 * 0 & 0 & 0 & 0.1\\
 * 1 & 0 & 0 & 3.5\\
 * 2 & 0 & 0 & 2.8\\
 * 0 & 1 & 0 & 6.3\\
 * 1 & 1 & 0 & 8.4\\
 * 2 & 1 & 0 & 0.0\\
 * 0 & 0 & 1 & 7.4\\
 * 1 & 0 & 1 & 2.4\\
 * 2 & 0 & 1 & 8.9\\
 * 0 & 1 & 1 & 1.3\\
 * 1 & 1 & 1 & 1.6\\
 * 2 & 1 & 1 & 2.6
 * \end{array}
 * \f]
 *
 * Note that the value of \f$x_4\f$ changes fastest, followed by that of \f$x_8\f$, and \f$x_7\f$
 * varies the slowest, corresponding to the second line of the block ("4 8 7").
 * Further, \f$x_4\f$ can take on three values, and \f$x_8\f$ and \f$x_7\f$ each have two possible
 * values, as described in the third line of the block ("3 2 2"). The table
 * contains 11 non-zero entries (all except the entry with linear state 5, which
 * is zero and therefore not listed). Note that the entries for linear states 10
 * and 11 appear in interchanged order in the block: the nonzero entries may be
 * listed in any order.
 *
 * A final note: the internal representation in libDAI of the factor above is
 * different, because the variables are ordered according to their indices
 * (i.e., the ordering would be \f$x_4 x_7 x_8\f$) and the values of the table are
 * stored accordingly, with the variable having the smallest index changing
 * fastest:
 *
 * \f[
 * \begin{array}{ccc|c}
 * x_4 & x_7 & x_8 & \mbox{value}\\
 * \hline
 * 0 & 0 & 0 & 0.1\\
 * 1 & 0 & 0 & 3.5\\
 * 2 & 0 & 0 & 2.8\\
 * 0 & 1 & 0 & 7.4\\
 * 1 & 1 & 0 & 2.4\\
 * 2 & 1 & 0 & 8.9\\
 * 0 & 0 & 1 & 6.3\\
 * 1 & 0 & 1 & 8.4\\
 * 2 & 0 & 1 & 0.0\\
 * 0 & 1 & 1 & 1.3\\
 * 1 & 1 & 1 & 1.6\\
 * 2 & 1 & 1 & 2.6
 * \end{array}
 * \f]
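 *
 * One does not normally have to deal with this reordering by hand: parsing and
 * writing .fg files is handled by dai::FactorGraph itself. A minimal sketch
 * (the file name "example.fg" is hypothetical) that round-trips a factor graph
 * through the .fg format:
 * \code
 * #include <dai/factorgraph.h>
 * #include <iostream>
 * using namespace dai;
 * using namespace std;
 *
 * int main() {
 *     FactorGraph fg;
 *     fg.ReadFromFile( "example.fg" );  // parse the .fg file
 *     cout << fg;                       // write it back out in .fg format
 *     return 0;
 * }
 * \endcode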
 *
 *
 * \section fileformats-evidence Evidence (.tab) file format
 *
 * This section describes the .tab file format used in libDAI to store "evidence",
 * i.e., a data set consisting of multiple samples, where each sample is the
 * observed joint state of some variables.
 *
 * A .tab file is a tabular data file, consisting of a header line, followed by
 * an empty line, followed by the data points, with one line for each data point.
 * Each line (apart from the empty one) should have the same number of columns,
 * where columns are separated by one tab character. Each column corresponds to
 * a variable. The header line consists of the variable labels (corresponding to
 * dai::Var::label()). The other lines are observed joint states of the variables, i.e.,
 * each line corresponds to a joint observation of the variables, and each column
 * of a line contains the state of the variable associated with that column.
 * Missing data is handled simply by having two consecutive tab characters,
 * without any characters in between.
 *
 * \subsection fileformats-evidence-example Example
 *
 * <pre>
 * 1   3   2
 *
 * 0   0   1
 * 1   0   1
 * 1       1
 * </pre>
 *
 * This would correspond to a data set consisting of three observations concerning
 * the variables with labels 1, 3 and 2; the first observation being
 * \f$x_1 = 0, x_3 = 0, x_2 = 1\f$, the second observation being
 * \f$x_1 = 1, x_3 = 0, x_2 = 1\f$, and the third observation being
 * \f$x_1 = 1, x_2 = 1\f$ (where the state of \f$x_3\f$ is missing).
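 *
 * A minimal sketch of reading such an evidence file with libDAI (the file names
 * "model.fg" and "data.tab" below are hypothetical; a matching factor graph is
 * assumed to exist):
 * \code
 * #include <dai/alldai.h>
 * #include <dai/evidence.h>
 * #include <fstream>
 * #include <iostream>
 * using namespace dai;
 * using namespace std;
 *
 * int main() {
 *     FactorGraph fg;
 *     fg.ReadFromFile( "model.fg" );       // the corresponding factor graph
 *     Evidence e;
 *     ifstream estream( "data.tab" );
 *     e.addEvidenceTabFile( estream, fg ); // parse the tab-separated samples
 *     cout << "Number of samples: " << e.nrSamples() << endl;
 *     return 0;
 * }
 * \endcode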
 *
 * \section fileformats-emalg Expectation Maximization (.em) file format
 *
 * This section describes the file format of .em files, which are used
 * to specify a particular EM algorithm. The .em files are complementary
 * to .fg files; in other words, an .em file without a corresponding .fg
 * file is useless. Furthermore, one also needs a corresponding .tab file
 * containing the data used for parameter learning.
 *
 * An .em file starts with a line specifying the number of maximization steps,
 * followed by an empty line. Then, each maximization step is described in a
 * block, which should satisfy the format described in the next subsection.
 *
 * \subsection fileformats-emalg-maximizationstep Maximization Step block format
 *
 * A maximization step block of an .em file starts with a single line
 * describing the number of shared parameters blocks that will follow.
 * Then, each shared parameters block follows, in the format described in
 * the next subsection.
 *
 * \subsection fileformats-emalg-sharedparameters Shared parameters block format
 *
 * A shared parameters block of an .em file starts with a single line
 * consisting of the name of a ParameterEstimation subclass
 * and its parameters in the format of a PropertySet. For example:
 * <pre>  CondProbEstimation [target_dim=2,total_dim=4,pseudo_count=1]</pre>
 * The next line contains the number of factors that share their parameters.
 * Then, each of these factors is specified on separate lines (possibly
 * separated by empty lines), where each line consists of several fields
 * separated by a space or a tab character. The first field contains
 * the index of the factor in the factor graph. The following fields should
 * contain the variable labels of the variables on which that factor depends,
 * in a specific ordering. This ordering can be different from the canonical
 * ordering of the variables used internally in libDAI (which would be sorted
 * ascendingly according to the variable labels). The ordering of the variables
 * specifies the implicit ordering of the shared parameters: when iterating
 * over all shared parameters, the corresponding index of the first variable
 * changes fastest (in the inner loop), and the corresponding index of the
 * last variable changes slowest (in the outer loop). By choosing the right
 * ordering, it is possible to let different factors (depending on different
 * variables) share parameters in parameter learning using EM. This convention
 * is similar to the convention used in factor blocks in a factor graph .fg
 * file (see \ref fileformats-factorgraph-factor).
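 *
 * Tying the three file types together, here is a sketch of running EM (modeled
 * on the EM example shipped with libDAI; the file names "model.fg", "data.tab"
 * and "model.em" are hypothetical):
 * \code
 * #include <dai/alldai.h>
 * #include <dai/evidence.h>
 * #include <dai/emalg.h>
 * #include <fstream>
 * #include <iostream>
 * using namespace dai;
 * using namespace std;
 *
 * int main() {
 *     FactorGraph fg;
 *     fg.ReadFromFile( "model.fg" );            // the factor graph
 *
 *     Evidence e;
 *     ifstream estream( "data.tab" );
 *     e.addEvidenceTabFile( estream, fg );      // the data
 *
 *     PropertySet infprops;
 *     infprops.set( "verbose", (size_t)0 );
 *     infprops.set( "updates", string("HUGIN") );
 *     InfAlg* inf = newInfAlg( "JTREE", fg, infprops );  // E-step inference
 *     inf->init();
 *
 *     ifstream emstream( "model.em" );          // the EM specification
 *     EMAlg em( e, *inf, emstream );
 *     while( !em.hasSatisfiedTermConditions() ) {
 *         Real l = em.iterate();                // one EM iteration
 *         cout << "Iteration " << em.Iterations() << " likelihood: " << l << endl;
 *     }
 *     delete inf;
 *     return 0;
 * }
 * \endcode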
 *
 * \section fileformats-aliases Aliases file format
 *
 * An aliases file is basically a list of "macros" and the strings that they
 * should be substituted with.
 *
 * Each line of the aliases file can be either empty, contain a comment
 * (if the first character is a '#') or contain an alias. In the latter case,
 * the line should contain a colon; the part before the colon contains the
 * name of the alias, the part after the colon the string that it should be
 * substituted with. Any whitespace before and after the colon is ignored.
 *
 * For example, the following line would define the alias \c BP_SEQFIX
 * as a shorthand for "BP[updates=SEQFIX,tol=1e-9,maxiter=10000,logdomain=0]":
 * <pre>
 * BP_SEQFIX:  BP[updates=SEQFIX,tol=1e-9,maxiter=10000,logdomain=0]
 * </pre>
 *
 * Aliases files can be used to store default options for algorithms.
 */

/** \page bibliography Bibliography
 * \anchor EaG09 \ref EaG09
 * F. Eaton and Z. Ghahramani (2009):
 * "Choosing a Variable to Clamp",
 * <em>Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics (AISTATS 2009)</em> 5:145-152,
 * http://jmlr.csail.mit.edu/proceedings/papers/v5/eaton09a/eaton09a.pdf
 *
 * \anchor EMK06 \ref EMK06
 * G. Elidan and I. McGraw and D. Koller (2006):
 * "Residual Belief Propagation: Informed Scheduling for Asynchronous Message Passing",
 * <em>Proceedings of the 22nd Annual Conference on Uncertainty in Artificial Intelligence (UAI-06)</em>,
 * http://uai.sis.pitt.edu/papers/06/UAI2006_0091.pdf
 *
 * \anchor HAK03 \ref HAK03
 * T. Heskes and C. A. Albers and H. J. Kappen (2003):
 * "Approximate Inference and Constrained Optimization",
 * <em>Proceedings of the 19th Annual Conference on Uncertainty in Artificial Intelligence (UAI-03)</em> pp. 313-320,
 * http://www.snn.ru.nl/reports/Heskes.uai2003.ps.gz
 *
 * \anchor KFL01 \ref KFL01
 * F. R. Kschischang and B. J. Frey and H.-A. Loeliger (2001):
 * "Factor Graphs and the Sum-Product Algorithm",
 * <em>IEEE Transactions on Information Theory</em> 47(2):498-519,
 * http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=910572
 *
 * \anchor KoF09 \ref KoF09
 * D. Koller and N. Friedman (2009):
 * <em>Probabilistic Graphical Models - Principles and Techniques</em>,
 * The MIT Press, Cambridge, Massachusetts, London, England.
 *
 * \anchor Min05 \ref Min05
 * T. Minka (2005):
 * "Divergence measures and message passing",
 * <em>Microsoft Research Technical Report</em> MSR-TR-2005-173,
 * http://research.microsoft.com/en-us/um/people/minka/papers/message-passing/minka-divergence.pdf
 *
 * \anchor MiQ04 \ref MiQ04
 * T. Minka and Y. Qi (2004):
 * "Tree-structured Approximations by Expectation Propagation",
 * <em>Advances in Neural Information Processing Systems</em> (NIPS) 16,
 * http://books.nips.cc/papers/files/nips16/NIPS2003_AA25.pdf
 *
 * \anchor MoK07 \ref MoK07
 * J. M. Mooij and H. J. Kappen (2007):
 * "Loop Corrections for Approximate Inference on Factor Graphs",
 * <em>Journal of Machine Learning Research</em> 8:1113-1143,
 * http://www.jmlr.org/papers/volume8/mooij07a/mooij07a.pdf
 *
 * \anchor MoK07b \ref MoK07b
 * J. M. Mooij and H. J. Kappen (2007):
 * "Sufficient Conditions for Convergence of the Sum-Product Algorithm",
 * <em>IEEE Transactions on Information Theory</em> 53(12):4422-4437,
 * http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=4385778
 *
 * \anchor Moo08 \ref Moo08
 * J. M. Mooij (2008):
 * "Understanding and Improving Belief Propagation",
 * <em>Ph.D. Thesis</em>, Radboud University Nijmegen,
 * http://webdoc.ubn.ru.nl/mono/m/mooij_j/undeanimb.pdf
 *
 * \anchor MoR05 \ref MoR05
 * A. Montanari and T. Rizzo (2005):
 * "How to Compute Loop Corrections to the Bethe Approximation",
 * <em>Journal of Statistical Mechanics: Theory and Experiment</em> 2005(10)-P10011,
 * http://stacks.iop.org/1742-5468/2005/P10011
 *
 * \anchor StW99 \ref StW99
 * A. Steger and N. C. Wormald (1999):
 * "Generating Random Regular Graphs Quickly",
 * <em>Combinatorics, Probability and Computing</em> Vol 8, Issue 4, pp. 377-396,
 * http://www.math.uwaterloo.ca/~nwormald/papers/randgen.pdf
 *
 * \anchor WiH03 \ref WiH03
 * W. Wiegerinck and T. Heskes (2003):
 * "Fractional Belief Propagation",
 * <em>Advances in Neural Information Processing Systems</em> (NIPS) 15, pp. 438-445,
 * http://books.nips.cc/papers/files/nips15/LT16.pdf
 *
 * \anchor WJW03 \ref WJW03
 * M. J. Wainwright, T. S. Jaakkola and A. S. Willsky (2003):
 * "Tree-reweighted belief propagation algorithms and approximate ML estimation by pseudo-moment matching",
 * <em>9th Workshop on Artificial Intelligence and Statistics</em>,
 * http://www.eecs.berkeley.edu/~wainwrig/Papers/WJW_AIStat03.pdf
 *
 * \anchor YFW05 \ref YFW05
 * J. S. Yedidia and W. T. Freeman and Y. Weiss (2005):
 * "Constructing Free-Energy Approximations and Generalized Belief Propagation Algorithms",
 * <em>IEEE Transactions on Information Theory</em> 51(7):2282-2312,
 * http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=1459044
 */


/** \page discussion Ideas not worth exploring
 * \section discuss_extendedgraphs Extended factorgraphs/regiongraphs
 *
 * A FactorGraph and a RegionGraph are often equipped with
 * additional properties for nodes and edges. The code to initialize those
 * is often quite similar. Maybe one could abstract this, e.g.:
 * \code
 * template <typename Node1Properties, typename Node2Properties, typename EdgeProperties>
 * class ExtFactorGraph : public FactorGraph {
 *     public:
 *         std::vector<Node1Properties> node1Props;
 *         std::vector<Node2Properties> node2Props;
 *         std::vector<std::vector<EdgeProperties> > edgeProps;
 *         // ...
 * };
 * \endcode
 *
 * Advantages:
 * - Less code duplication.
 * - Easier maintainability.
 * - Easier to write new inference algorithms.
 *
 * Disadvantages:
 * - Cache performance may be worse.
 * - A problem is the case where there are no properties for either type of nodes or for edges.
 *   Maybe this can be solved using specializations, or using variadic template arguments?
 *   Another possible solution would be to define a "class Empty {}", and add some code
 *   that checks for the typeid, comparing it with Empty, and doing something special in that case
 *   (e.g., not allocating memory).
 * - The main disadvantage of this approach seems to be that it leads to even more entanglement.
 *   Therefore this is probably a bad idea.
 *
 * \section discuss_templates Polymorphism by template parameterization
 *
 * Instead of polymorphism by inheritance, use polymorphism by template parameterization.
 * For example, the real reason for introducing the complicated inheritance scheme of dai::InfAlg
 * was for functions like dai::calcMarginal. Instead, one could use a template function:
 * \code
 * template<typename InfAlg>
 * Factor calcMarginal( const InfAlg &obj, const VarSet &ns, bool reInit );
 * \endcode
 * This would assume that the type InfAlg supports certain methods. Ideally, one would use
 * concepts to define different classes of inference algorithms with different capabilities,
 * for example the ability to calculate logZ, the ability to calculate marginals, the ability to
 * calculate bounds, the ability to calculate MAP states, etc. Then, one would use traits
 * classes in order to be able to query the capabilities of the model. For example, one would be
 * able to query whether the inference algorithm supports calculation of logZ. Unfortunately,
 * this is compile-time polymorphism, whereas tests/testdai needs runtime polymorphism.
 * Therefore this is probably a bad idea.
 */