BipartiteGraph and FactorGraph (which will be obsoleted soon)
* [Frederik Eaton] Added BP_dual, BBP and CBP algorithms
* [Frederik Eaton] Added Gibbs::state() accessors/mutators
* [Frederik Eaton] Fixed Gibbs::getVarDist(size_t) to return a uniform
distribution if no state is allowed
* [Frederik Eaton] Improved parsing code in tests/testdai to allow recursive
alias substitutions
operators
* [Frederik Eaton] Changed cout to cerr in warnings and error messages
* [Giuseppe Passino] Optimised maximum-residual BP by using a reversed ordered
  set instead of the linear search (which can yield enormous speedups - a
factor 500 has been measured on a binary Ising grid with 128x128 variables!)
* Added debug assertions to Var which check for inconsistent dimensions of
variables with the same labels
* [Giuseppe Passino] Added prefix ++ operator to State (State::operator++())
* [Giuseppe Passino] Added iterators to FactorGraph (FactorGraph::begin,
FactorGraph::end)
* [Giuseppe Passino] Added iterators to TFactor (TFactor::begin, TFactor::end)
* [Giuseppe Passino] Added iterators to TProb (TProb::begin, TProb::end)
* [Giuseppe Passino] Added BP::findMaximum(), which can be used after running
max-product BP to construct a global state with maximum probability
* Improved Makefile (libDAI now also builds out-of-the-box on MacOSX,
thanks to Dan Preston; merged Makefile.win and Makefile.shared into Makefile)
* Fixed bug in calcMarginal, calcPairBeliefs, calcPairBeliefsNew where
states with zero probability mass were clamped, leading to NaNs or assertion
var.h, varset.h, factor.h, enum.h} and of examples/example.cpp
Merged TODO and FILEFORMAT into doxygen documentation
* examples/
  - Moved example.cpp to examples/
- Added examples/example_bipgraph.cpp
- Added examples/example_varset.cpp
* Interface changes:
* Added damping to various algorithms to improve convergence properties.
* Added more features to utils/createfg for creating factor graphs.
* Added ExactInf class for brute force exact inference.
* [Giuseppe Passino] Added "logdomain" property to BP, a boolean that controls
whether calculations are done in the log-domain or in the linear domain;
doing calculations in the log-domain may help if the numerical range
of a double is too small.
Improved architecture:
* Added Exceptions framework.
* Pervasive change of BipartiteGraph implementation (based on an idea by
  Giuseppe Passino). BipartiteGraph no longer stores the node properties
  (former _V1 and _V2), nor does it store a dense adjacency matrix anymore,
  nor an edge list. Instead, it stores the graph structure as lists of
  neighboring nodes. This yields a significant memory/speed improvement for
large factor graphs, and is more elegant as well. Iterating over neighbors is
made easy by using boost::foreach.
* Added conditional compilation of inference methods.
* Replaced sub_nb class in mr.h by boost::dynamic_bitset.
* Improved index.h:
- Renamed Index -> IndexFor
  - Added some .reserve()'s to IndexFor methods which yields a
25% speedup of testregression
- Replaced multind by Permute
- Added MultiFor
- Added State
* New functionality of factor.h.
* Moved Properties and MaxDiff frameworks from InfAlg to each individual
  inference algorithm, because the Properties framework was not as
convenient as I hoped, and not every inference algorithm needs a maxdiff
variable. Also, replaced some FactorGraph functionality in InfAlg by a
  function that returns the FactorGraph. The result is cleaner (less
entangled) code.
* Removed x2x.
* Replaced Complex with real numbers (negative potentials are just too rare
to warrant the additional "complexity" :)).
Miscellaneous improvements:
* Now compiles also with MS Visual C++ (thanks to Jiuxiang Hu) and with
GCC under cygwin.
* Contributions by Peter Gober:
- Renamed variable _N in mr.* for compatibility with g++ under cygwin.
- moved header files in include/dai and sources in src;
- changed #ifndefs to GNU style;
- added extra warning checks (-W -Wextra) and fixed resulting warnings;
  - dai::TProb:
o removed copy constructor and assignment operators (redundant);
o implementation of some methods via STL algorithms;
o added methods takeExp, takeLog, takeLog0 for transformation in-place;
and variables;
- Optimization of FactorGraph constructors using tr1::unordered_map.
* FactorGraph constructors no longer check for short loops (huge speed
  increase for large factor graphs), nor for negative entries. Also, the
normtype is now Prob::NORMPROB by default.
* Improved MaxSpanningTreePrims algorithm (uses boost::graph).
General framework
- DAIAlg is now a template class; typedefs for DAIAlg<FactorGraph> and for
DAIAlg<RegionGraph> are provided. In this way, we do not have to write "wrapper"
functions to forward functionality from either FactorGraph or RegionGraph
to DAIAlg. Functionality like clamping can be implemented in FactorGraph
different ways and an approximation method is run for each clamping; using the
saveProbs/undoProbs can give a significant speed increase.
- Switched to a general Properties framework that handles the parameters of
  all inference methods in a uniform manner. The Properties class is a map of
  several properties in boost::any objects, indexed by their names (strings).
It can read from a stream and write to a stream. It is recursive, in the sense
that a Properties object can hold a variable of type Properties as well.
- Added a generic way of constructing inference algorithms from a factor graph,
  name and properties object. Added the newInfAlg function which constructs
the requested object. This is used by LCBP, the Matlab interface and the
command line (test) interface.
- Added a generic enum framework for enum parameters. Although implemented as a
- Corrected two bugs in operator&& and operator|| in VarSet (they returned
the logical NOT of what they should return).
- Fixed bug in RegionGraph::RecomputeOR(s).
- Fixed bug in utils/create_dreg_fg:
graph structure was not random for given parameters (forgot to call srand()).
- TreeEP bug workaround: use the complete junction tree instead of a subtree.
- Fixed bug in JTree::HUGIN() and JTree::ShaferShenoy() in case of a junction tree
that consists of one outer region only.
- Fixed INIT bug in LCBP2::UpdatePancake().
- Fixed MaxDiffs flow (except for MR).
weighted graph where the weight between neighbours i and j is given by
N(psi,i,j), where psi is the product of all the factors involving both i and j
(which is an upper bound on the effective interaction between i and j).
- Implemented MR (MontanariRizzo) based on Bastian's code, but extended it
to be able to handle connectivities larger than 3 (in principle, up to 32).
It supports different initialization methods (the original RESPPROP,
the CLAMPING method and EXACT which uses JTree) and different update methods
(FULL and LINEAR).
- Implemented LCBP2, an analogue of LCBP which represents pancakes as little
  networks and uses some approximate inference method on them for calculating
marginals.
- Now there are several LCBP variants (LCBP, LCBPI, LCBPJ, LCBPK, LCBPL);
LCBPJ works only for pairwise, LCBPK is LCBP improved for higher order
- Wrote utility to visualize factor graphs using graphviz.
(it uses the BOOST Program Options library)
- Added fginfo utility that displays some info about a .fg file.
- Implemented Factor::strength function that calculates the potential strength
N(psi,i,j) between variables i and j as described in cs.IT:0504030
- Wrote a general MatLab interface matlab/dai (similar to tests/test);
this unified the matlab functions dai, dai_bp, dai_mf, dai_jt, dai_tep, dai_cvm.
Improvements of existing code
- Reimplemented RegionGraph and descendants: a RegionGraph ISA FactorGraph
  and also a BipartiteGraph<FRegion,Region>. It now also keeps a map that
  associates outer region indices to factor indices (no powers yet, this
  is deemed superfluous) and provides functions to recompute (some of) the
outer regions from the factors.
- InfAlg descendants' run() methods now stop immediately and return NAN in case
they detect NANs. Only BP does not do NAN checking for performance reasons.
- HAK, GBP and DoubleLoop now conform to the standards for verbose reporting,
timing and convergence criteria.
- Implemented logZ() for JTree. It does the calculation during message-passing.
- Marginal2ndO now optionally divides by the single node beliefs (to the power n-2);
hopefully this will give more accurate approximations.
- Marginal and Marginal2ndO (optionally) use the new saveProbs/undoProbs functionality
  for a faster way of calculating marginals, which does not require a call to init()
nor cloning the whole object for each clamping. This leads to a significant speedup.
- LCBP (and LCBP2) now have complete flexibility in the specification of the
  inner method, i.e. the method used to generate the initial cavity approximations.
  One can pass two strings, a name and a properties string, and LCBP simply invokes
newInfAlg to construct the corresponding inference algorithm and uses the generic
marginal functions to approximate cavity marginals.
- Replaced the global "method" variable by local properties and removed ai.h
makeFacCavity -> makeFactorCavity
- LCBP_SEQMAXRES has been removed because it did strange things.
- Implemented RandomDRegularGraph
- Implemented JTree::calcMarginal for marginals that are not confined
within one cluster (using cut-set conditioning).
- Added isConnected() method to FactorGraph (some methods do not work with
disconnected factor graphs).
- Made a new and significantly improved testing framework that provides most
functionality from the command line.
- The basis is provided by tests/test, which uses the newInfAlg functionality
  and enables the user to easily compare from the command line different
  inference methods on a given factor graph. All parameters can be specified.
  Output consists of CPU time, average and maximum single variable marginal
  errors, relative logZ error and MaxDiff().
- tests/aliases.conf contains aliases for standard combinations of methods
and their options (which can be used in tests/test).
- tests/large contains several bash/python scripts that create random factor
graphs, compare several approximate inference algorithms (using tests/test) and
allow for easy visualization of the results using PyX.
- Added several .fg files for test purposes to /tests (e.g. two ALARM versions
  alarm.fg and alarm_bnt.fg; testfast.fg, a 4x4 periodic Ising grid for
regression testing).
- Added a regression test to the Makefile which is included in the standard
target. It compares all inference methods on tests/testfast.fg with the
Functionality
- Added RegionGraph, GBP, CVM and HAK (double-loop).
- Added JunctionTree (with two update algorithms, HUGIN and Shafer-Shenoy), which is a
RegionGraph.
- NormType is now chosen automatically (in case of negative factors, Prob::NORMLINF is used,
  otherwise the default Prob::NORMPROB is used). Also, in case of negative factors, the
  RegionGraph constructors assign each Factor to a unique outer region instead of dividing
it over all subsuming outer regions. See README for when negative factors are known to work
and when not.
- FactorGraph::FactorGraph(const vector<Factor>) only gives a warning in case of short loops,
- Made a separate class ClusterGraph, which is only used by the junction tree
  code. Its main purpose is a graph-theoretical variable elimination algorithm.
- Implemented the heuristic "minimum-new-edges-in-clique-graph" for variable elimination.
- Massive code cleanup, moving towards "generic" programming style, using
multiple inheritance and polymorphism.
o BP, LCBP, MF, HAK and JunctionTree now inherit from a common DAIAlg class
o Made generic functions Marginal, Marginal2ndO, calcCavityDist, calcCavityDist2ndO, clamp
0.1.1 (2006-02-28)
--------------------
- The constructors of FactorGraph and LogFactorGraph from a
vector<(Log)Potential> now merge potentials to prevent short loops (of length
4) in the factor graph. These are used in ai to construct the factor graphs
from the psi argument. If compiled with DEBUG defined, the method calc_nb()
of BipGraph checks for the existence of short loops.
- Changed calling syntax of ai (now the actual syntax *does* correspond to its
# Copyright (C) 2006-2009 Joris Mooij [joris dot mooij at tuebingen dot mpg dot de]
# Radboud University Nijmegen, The Netherlands /
# Max Planck Institute for Biological Cybernetics, Germany
#
# This file is part of libDAI.
#
# libDAI is free software; you can redistribute it and/or modify
# Define build targets
TARGETS=tests utils lib examples testregression testem
ifdef WITH_DOC
  TARGETS:=$(TARGETS) doc
endif
ifdef WITH_MATLAB
TARGETS:=$(TARGETS) matlabs
utils : utils/createfg$(EE) utils/fg2dot$(EE) utils/fginfo$(EE)
lib: $(LIB)/libdai$(LE)
# OBJECTS
# This template contains configurations for compiling libDAI under CygWin
#
# To use it, simply copy this file to 'Makefile.conf' and adapt 'Makefile.conf'
# to your local setup
#
# This template contains configurations for compiling libDAI under GNU/Linux
# and other UNIX variants
#
# To use it, simply copy this file to 'Makefile.conf' and adapt 'Makefile.conf'
# to your local setup
#
# This template contains configurations for compiling libDAI under Mac OS X
#
# To use it, simply copy this file to 'Makefile.conf' and adapt 'Makefile.conf'
# to your local setup
#
# This template contains configurations for compiling libDAI with Visual C++
# under Windows (and GNU Make)
#
# To use it, simply copy this file to 'Makefile.conf' and adapt 'Makefile.conf'
# to your local setup
#
# LINKER
# Standard libraries to include
LIBS=/link $(LIB)/libdai$(LE)
# For linking with the BOOST Program Options library
BOOSTLIBS=/LIBPATH:C:\boost_1_36_0\stage\lib
# Additional library search paths for linker
libDAI-0.2 December 1, 2006
libDAI-0.2.1 May 26, 2008
libDAI-0.2.2 September 30, 2008
Acknowledgments
Var S(1, 2); // Define binary variable Sprinkler (with label 1)
Var R(2, 2); // Define binary variable Rain (with label 2)
Var W(3, 2); // Define binary variable Wetgrass (with label 3)
// Define probability distribution for C
Factor P_C( C );
P_C[0] = 0.5; // C = 0
cout << "P(S=1 | W=1) = " << P.marginal( VarSet( S, W ) )[3] / denom << endl;
cout << "P(R=1 | W=1) = " << P.marginal( VarSet( R, W ) )[3] / denom << endl;
    return 0;
}
int main() {
Var x0(0, 2); // Define binary variable x0 (with label 0)
Var x1(1, 3); // Define ternary variable x1 (with label 1)
// Define set X = {x0, x1}
VarSet X; // empty
X |= x1; // X = {x1}
assert( X.calcState(X.calcStates(S)) == S );
}
    return 0;
}
static const char* DAINames[] = {
ExactInf::Name,
#ifdef DAI_WITH_BP
    BP::Name,
#endif
#ifdef DAI_WITH_MF
MF::Name,
//@{
/// Calculates _indices, which is a cache of IndexFor @see bp.cpp
void RegenerateInds();
/// Index type
typedef std::vector<size_t> _ind_t;
/// Cached indices (indexed [i][_I])
    std::vector<std::vector<_ind_t> > _indices;
/// Returns an index from the cache
const _ind_t& _index(size_t i, size_t _I) const { return _indices[i][_I]; }
//@}
// DISABLED BECAUSE IT IS BUGGY:
// void updateSeqMsgM( size_t i, size_t _I );
/// Sets normalized factor->variable message adjoint and calculates the corresponding unnormalized adjoint
    void setSeqMsgM( size_t i, size_t _I, const Prob &p );
/// Implements routine Send-n in Figure 5 in [\ref EaG09]
void sendSeqMsgN( size_t i, size_t _I, const Prob &f );
/// Implements routine Send-m in Figure 5 in [\ref EaG09]
_adj_b_F = adj_b_F;
_init_adj_psi_V = adj_psi_V;
_init_adj_psi_F = adj_psi_F;
        Regenerate();
}
/// Initializes belief adjoints, with zero initial factor adjoints, and regenerates
/// Returns factor->variable message adjoint
DAI_ACCMUT(Prob& adj_m(size_t i, size_t _I), { return _adj_m[i][_I]; });
  public:
/// Parameters of this algorithm
/* PROPERTIES(props,BBP) {
/// Enumeration of possible update schedules
// DISABLED BECAUSE IT IS BUGGY:
// bool clean_updates;
    }
*/
/* {{{ GENERATED CODE: DO NOT EDIT. Created by
    ./scripts/regenerate-properties include/dai/bbp.h src/bbp.cpp
*/
struct Properties {
/// Enumeration of possible update schedules
/// Represents the neighborhood structure of nodes in a bipartite graph.
/** A bipartite graph has two types of nodes: type 1 and type 2. Edges can occur only between
* nodes of different type. Nodes are indexed by an unsigned integer. If there are nr1()
 * nodes of type 1 and nr2() nodes of type 2, the nodes of type 1 are numbered
* 0,1,2,...,nr1()-1 and the nodes of type 2 are numbered 0,1,2,...,nr2()-1. An edge
* between node \a n1 of type 1 and node \a n2 of type 2 is represented by a BipartiteGraph::Edge(\a n1,\a n2).
*
* A BipartiteGraph is implemented as a sparse adjacency list, i.e., it stores for each node a list of
* its neighboring nodes. In particular, it stores for each node of type 1 a vector of Neighbor structures
 * (accessible by the nb1() method) describing the neighboring nodes of type 2; similarly, for each node
 * of type 2 it stores a vector of Neighbor structures (accessible by the nb2() method) describing the
 * neighboring nodes of type 1.
* Thus, each node has an associated variable of type BipartiteGraph::Neighbors, which is a vector of
* Neighbor structures, describing its neighboring nodes of the other type.
* \idea Cache second-order neighborhoods in BipartiteGraph.
* a node as a neighbor of its neighbor (the \a dual member).
*
* By convention, variable identifiers naming indices into a
 * vector of neighbors are prefixed with an underscore ("_").
* The neighbor list which they point into is then understood
* from context. For example:
*
/// Constructs BipartiteGraph from a range of edges.
/** \tparam EdgeInputIterator Iterator that iterates over instances of BipartiteGraph::Edge.
* \param nr1 The number of nodes of type 1.
 * \param nr2 The number of nodes of type 2.
* \param begin Points to the first edge.
* \param end Points just beyond the last edge.
*/
/// (Re)constructs BipartiteGraph from a range of edges.
/** \tparam EdgeInputIterator Iterator that iterates over instances of BipartiteGraph::Edge.
* \param nr1 The number of nodes of type 1.
 * \param nr2 The number of nodes of type 2.
* \param begin Points to the first edge.
* \param end Points just beyond the last edge.
*/
void construct( size_t nr1, size_t nr2, EdgeInputIterator begin, EdgeInputIterator end );
/// Returns constant reference to the _i2'th neighbor of node i1 of type 1
    const Neighbor & nb1( size_t i1, size_t _i2 ) const {
DAI_DEBASSERT( i1 < _nb1.size() );
DAI_DEBASSERT( _i2 < _nb1[i1].size() );
        return _nb1[i1][_i2];
}
/// Returns reference to the _i2'th neighbor of node i1 of type 1
Neighbor & nb1( size_t i1, size_t _i2 ) {
DAI_DEBASSERT( i1 < _nb1.size() );
DAI_DEBASSERT( _i2 < _nb1[i1].size() );
        return _nb1[i1][_i2];
}
/// Returns constant reference to the _i1'th neighbor of node i2 of type 2
    const Neighbor & nb2( size_t i2, size_t _i1 ) const {
DAI_DEBASSERT( i2 < _nb2.size() );
DAI_DEBASSERT( _i1 < _nb2[i2].size() );
        return _nb2[i2][_i1];
}
/// Returns reference to the _i1'th neighbor of node i2 of type 2
    Neighbor & nb2( size_t i2, size_t _i1 ) {
DAI_DEBASSERT( i2 < _nb2.size() );
DAI_DEBASSERT( _i1 < _nb2[i2].size() );
        return _nb2[i2][_i1];
}
/// Returns constant reference to all neighbors of node i1 of type 1
    const Neighbors & nb1( size_t i1 ) const {
DAI_DEBASSERT( i1 < _nb1.size() );
        return _nb1[i1];
}
/// Returns reference to all neighbors of node i1 of type 1
    Neighbors & nb1( size_t i1 ) {
DAI_DEBASSERT( i1 < _nb1.size() );
        return _nb1[i1];
}
/// Returns constant reference to all neighbors of node i2 of type 2
    const Neighbors & nb2( size_t i2 ) const {
DAI_DEBASSERT( i2 < _nb2.size() );
        return _nb2[i2];
}
/// Returns reference to all neighbors of node i2 of type 2
    Neighbors & nb2( size_t i2 ) {
DAI_DEBASSERT( i2 < _nb2.size() );
        return _nb2[i2];
}
/// Returns number of nodes of type 1
size_t nr1() const { return _nb1.size(); }
/// Returns number of nodes of type 2
size_t nr2() const { return _nb2.size(); }
/// Calculates the number of edges, time complexity: O(nr1())
size_t nrEdges() const {
size_t sum = 0;
sum += nb1(i1).size();
return sum;
}
/// Adds a node of type 1 without neighbors.
void add1() { _nb1.push_back( Neighbors() ); }
/// Adds a node of type 2 without neighbors.
void add2() { _nb2.push_back( Neighbors() ); }
_edge_indexed = true;
}
const Edge& edge(size_t e) const {
assert(_edge_indexed);
return _edges[e];
}
const std::vector<Edge>& edges() const {
return _edges;
}
size_t VV2E(size_t n1, size_t n2) const {
assert(_edge_indexed);
Edge e(n1,n2);
* 12 -- 21;
* }
* \enddot
 * It has three nodes of type 1 (drawn as circles) and two nodes of type 2 (drawn as rectangles).
* Node 0 of type 1 has only one neighbor (node 0 of type 2), but node 0 of type 2 has three neighbors (nodes 0,1,2 of type 1).
* The example code shows how to construct a BipartiteGraph object representing this bipartite graph and
* how to iterate over nodes and their neighbors.
class BP : public DAIAlgFG {
private:
typedef std::vector<size_t> ind_t;
    typedef std::multimap<double, std::pair<std::size_t, std::size_t> > LutType;
struct EdgeProp {
ind_t index;
Prob message;
size_t _iters;
/// The history of message updates (only recorded if recordSentMessages is true)
std::vector<std::pair<std::size_t, std::size_t> > _sentMessages;
public:
/// Parameters of this inference algorithm
struct Properties {
/// Enumeration of inference variants
DAI_ENUM(InfType,SUMPROD,MAXPROD);
/// Verbosity
size_t verbose;
/// Copy constructor
BP( const BP &x ) : DAIAlgFG(x), _edges(x._edges), _edge2lut(x._edge2lut),
        _lut(x._lut), _maxdiff(x._maxdiff), _iters(x._iters), _sentMessages(x._sentMessages),
        props(x.props), recordSentMessages(x.recordSentMessages)
{
for( LutType::iterator l = _lut.begin(); l != _lut.end(); ++l )
_edge2lut[l->second.first][l->second.second] = l;
/// Calculates both types of BP messages and their normalizers from an InfAlg.
/** BP_dual calculates "dual" versions of BP messages (both messages from factors
 * to variables and messages from variables to factors), and normalizers, given an InfAlg.
* These are computed from the variable and factor beliefs of the InfAlg.
* This class is used primarily by BBP.
*/
struct _edges_t : public std::vector<std::vector<T> > {};
/// Groups together the data structures for storing the two types of messages and their normalizers
    struct messages {
/// Unnormalized variable->factor messages
_edges_t<Prob> n;
/// Normalizers of variable->factor messages
/// Pointer to the InfAlg object
const InfAlg *_ia;
/// Does all necessary preprocessing
void init();
/// Allocates space for _msgs
void calcBeliefF(size_t I);
public:
    /// Construct BP_dual object from (converged) InfAlg object's beliefs and factors.
    /* A pointer to the InfAlg object is stored,
* so the object must not be destroyed before the BP_dual is destroyed.
*/
BP_dual( const InfAlg *ia ) : _ia(ia) { init(); }
namespace dai {
/// Find a variable to clamp using BBP (goes with maximum adjoint)
/// \see BBP
std::pair<size_t, size_t> bbpFindClampVar( const InfAlg &in_bp, bool clampingVar, const PropertySet &bbp_props, bbp_cfn_t cfn, Real *maxVarOut );
/// Prints beliefs, variables and partition sum, in case of a debugging build
void printDebugInfo();
    /// Called by 'run', and by itself. Implements the main algorithm.
/** Chooses a variable to clamp, recurses, combines the logZ and
* beliefs estimates of the children, and returns the improved
* estimates in \a lz_out and \a beliefs_out to its parent
*/
virtual bool chooseNextClampVar( InfAlg* bp, std::vector<size_t> &clamped_vars_list, size_t &i, std::vector<size_t> &xis, Real *maxVarOut );
    /// Return the InfAlg to use at each step of the recursion.
/// \todo At present, only returns a BP instance
InfAlg* getInfAlg();
DAI_ENUM(ChooseMethodType,CHOOSE_RANDOM,CHOOSE_MAXENT,CHOOSE_BBP,CHOOSE_BP_L1,CHOOSE_BP_CFN);
/// Enumeration of possible clampings: variables or factors
DAI_ENUM(ClampType,CLAMP_VAR,CLAMP_FACTOR);
/// Verbosity
size_t verbose = 0;
std::string clamp_outfile = "";
}
*/
/* {{{ GENERATED CODE: DO NOT EDIT. Created by
    ./scripts/regenerate-properties include/dai/cbp.h src/cbp.cpp
*/
struct Properties {
/// Enumeration of possible update schedules
/// A ClusterGraph is a hypergraph with VarSets as nodes.
/** It is implemented as bipartite graph with variable (Var) nodes
 * and cluster (VarSet) nodes.
*/
class ClusterGraph {
public:
/// Construct from vector<VarSet>
ClusterGraph( const std::vector<VarSet> & cls );
/// Returns true if cluster I is not contained in a larger cluster
bool isMaximal( size_t I ) const {
DAI_DEBASSERT( I < G.nr2() );
}
return result;
}
/// Returns union of clusters that contain the variable with index i
VarSet Delta( size_t i ) const {
VarSet result;
VarSet delta( size_t i ) const {
return Delta( i ) / vars[i];
}
/// Erases all clusters that contain n where n is the variable with index i
ClusterGraph& eraseSubsuming( size_t i ) {
while( G.nb1(i).size() ) {
}
return *this;
}
/// Returns a const reference to the clusters
const std::vector<VarSet> & toVector() const { return clusters; }
/// InfAlg is an abstract base class, defining the common interface of all inference algorithms in libDAI.
/** \todo General marginalization functions like calcMarginal now copy a complete InfAlg object. Instead,
 * it would make more sense that they construct a new object without copying the FactorGraph or RegionGraph.
* Or they can simply be made methods of the general InfAlg class.
* \idea Use a PropertySet as output of an InfAlg, instead of functions like maxDiff() and Iterations().
*/
/// Combines an InfAlg and a graphical model, e.g., a FactorGraph or RegionGraph
/** \tparam GRM Should be castable to FactorGraph
* \todo A DAIAlg should not inherit from a FactorGraph or RegionGraph, but should
- * store a reference to the graphical model object. This prevents needless copying
- * of (possibly large) data structures. Disadvantage: the caller must not change
- * the graphical model between calls to the inference algorithm (maybe a smart_ptr
- * or some locking mechanism would help here?).
+ * store a reference to the graphical model object. This prevents needless copying
+ * of (possibly large) data structures. Disadvantage: the caller must not change
+ * the graphical model between calls to the inference algorithm (maybe a smart_ptr
+ * or some locking mechanism would help here?).
*/
template <class GRM>
class DAIAlg : public InfAlg, public GRM {
public:
/// Default constructor
DAIAlg() : InfAlg(), GRM() {}
-
- /// Construct from GRM
+
+ /// Construct from GRM
DAIAlg( const GRM &grm ) : InfAlg(), GRM(grm) {}
/// Save factor I
// OBSOLETE
/// Only for backwards compatibility (to be removed soon)
- void clamp( const Var &v, size_t x, bool backup = false ) {
+ void clamp( const Var &v, size_t x, bool backup = false ) {
GRM::clamp( v, x, backup );
std::cerr << "Warning: this DAIAlg<...>::clamp(const Var&,...) interface is obsolete!" << std::endl;
}
* some sparse factors, some noisy-OR factors, some dense factors, some arbitrary
* precision factors, etc.
*
- * \idea Use Boost::uBLAS framework to deal with matrices, especially, with 2D sparse matrices.
+ * \idea Use Boost::uBLAS framework to deal with matrices, especially, with 2D sparse matrices.
* See http://www.boost.org/libs/numeric/ublas/doc/matrix_sparse.htm
*
* \idea Introduce naming scheme:
* \code
* template <typename Node1Properties, typename Node2Properties, typename EdgeProperties>
* class ExtFactorGraph : public FactorGraph {
- * public:
- * std::vector<Node1Properties> node1Props;
- * std::vector<Node2Properties> node2Props;
- * std::vector<std::vector<EdgeProperties> > edgeProps;
- * // ...
+ * public:
+ * std::vector<Node1Properties> node1Props;
+ * std::vector<Node2Properties> node2Props;
+ * std::vector<std::vector<EdgeProperties> > edgeProps;
+ * // ...
* }
* \endcode
*
- * Advantages:
+ * Advantages:
* - Less code duplication.
* - Easier maintainability.
* - Easier to write new inference algorithms.
 *
 * Disadvantages:
 * - Cachability may be worse.
* - A problem is the case where there are no properties for either type of nodes or for edges.
 *   Maybe this can be solved using specializations, or using variadic template arguments?
- * Another possible solution would be to define a "class Empty {}", and add some code
- * that checks for the typeid, comparing it with Empty, and doing something special in that case
+ * Another possible solution would be to define a "class Empty {}", and add some code
+ * that checks for the typeid, comparing it with Empty, and doing something special in that case
* (e.g., not allocating memory).
* - The main disadvantage of this approach seems to be that it leads to even more entanglement.
* Therefore this is probably a bad idea.
*
* \section discuss_templates Polymorphism by template parameterization
- * Instead of polymorphism by inheritance, use polymorphism by template parameterization.
+ * Instead of polymorphism by inheritance, use polymorphism by template parameterization.
* For example, the real reason for introducing the complicated inheritance scheme of dai::InfAlg
* was for functions like dai::calcMarginal. Instead, one could use a template function:
* \code
* template<typename InfAlg>
* Factor calcMarginal( const InfAlg &obj, const VarSet &ns, bool reInit );
* \endcode
- * This would assume that the type InfAlg supports certain methods. Ideally, one would use
- * concepts to define different classes of inference algorithms with different capabilities,
+ * This would assume that the type InfAlg supports certain methods. Ideally, one would use
+ * concepts to define different classes of inference algorithms with different capabilities,
* for example the ability to calculate logZ, the ability to calculate marginals, the ability to
- * calculate bounds, the ability to calculate MAP states, etc. Then, one would use traits
- * classes in order to be able to query the capabilities of the model. For example, one would be
- * able to query whether the inference algorithm supports calculation of logZ. Unfortunately,
+ * calculate bounds, the ability to calculate MAP states, etc. Then, one would use traits
+ * classes in order to be able to query the capabilities of the model. For example, one would be
+ * able to query whether the inference algorithm supports calculation of logZ. Unfortunately,
* this is compile-time polymorphism, whereas tests/testdai needs runtime polymorphism.
* Therefore this is probably a bad idea.
*/
*
* \section about About libDAI
* libDAI is a free/open source C++ library (licensed under GPLv2+) that provides
- * implementations of various (approximate) inference methods for discrete
- * graphical models. libDAI supports arbitrary factor graphs with discrete
- * variables; this includes discrete Markov Random Fields and Bayesian
+ * implementations of various (approximate) inference methods for discrete
+ * graphical models. libDAI supports arbitrary factor graphs with discrete
+ * variables; this includes discrete Markov Random Fields and Bayesian
* Networks.
*
- * The library is targeted at researchers. To be able to use the library, a
- * good understanding of graphical models is needed.
+ * The library is targeted at researchers. To be able to use the library, a
+ * good understanding of graphical models is needed.
*
* \section limitations Limitations
- * libDAI is not intended to be a complete package for approximate inference.
- * Instead, it should be considered as an "inference engine", providing
- * various inference methods. In particular, it contains no GUI, currently
- * only supports its own file format for input and output (although support
+ * libDAI is not intended to be a complete package for approximate inference.
+ * Instead, it should be considered as an "inference engine", providing
+ * various inference methods. In particular, it contains no GUI, currently
+ * only supports its own file format for input and output (although support
* for standard file formats may be added later), and provides very limited
* visualization functionalities. The only learning method supported currently
* is EM for learning factor parameters.
* \section language Why C++?
* Because libDAI is implemented in C++, it is very fast compared with
* implementations in MatLab (a factor 1000 faster is not uncommon).
- * libDAI does provide a MatLab interface for easy integration with MatLab.
+ * libDAI does provide a MatLab interface for easy integration with MatLab.
*/
/** \page fileformat libDAI factorgraph file format
- *
+ *
 * This page describes the .fg file format used in libDAI to store factor graphs.
* Markov Random Fields are special cases of factor graphs, as are Bayesian
* networks. A factor graph can be specified as follows: for each factor, one has
* empty lines. Each variable occurring in the factor graph has a unique
* identifier, its index (which should be a nonnegative integer). Comment lines
* start with #.
- *
+ *
* Each block starts with a line containing the number of variables in that
 * factor. The second line contains the indices of these variables, separated by
* spaces (indices are nonnegative integers and to avoid confusion, it is
* convention that is used is that the left-most variables cycle through their
* values the fastest (similar to MATLAB indexing of multidimensional arrays). An
* example block describing one factor is:
- *
+ *
* 3\n
* 4 8 7\n
* 3 2 2\n
* 10 1.6\n
* 12 6.4\n
* 11 2.6\n
- *
+ *
* which corresponds to the following factor:
- *
+ *
* \f[
* \begin{array}{ccc|c}
* x_4 & x_8 & x_7 & \mbox{value}\\
* values, as described in the third line of the block ("3 2 2"). The table
* contains 11 non-zero entries (all except for the fifth entry). Note that the
 * eleventh and twelfth entries are interchanged.
- *
+ *
* A final note: the internal representation in libDAI of the factor above is
* different, because the variables are ordered according to their indices
* (i.e., the ordering would be x_4 x_7 x_8) and the values of the table are
* stored accordingly, with the variable having the smallest index changing
* fastest:
- *
+ *
* \f[
* \begin{array}{ccc|c}
* x_4 & x_7 & x_8 & \mbox{value}\\
namespace dai {
-/// Interface for a parameter estimation method.
-/** This parameter estimation interface is based on sufficient statistics.
- * Implementations are responsible for collecting data from a probability
+/// Interface for a parameter estimation method.
+/** This parameter estimation interface is based on sufficient statistics.
+ * Implementations are responsible for collecting data from a probability
* vector passed to it from a SharedParameters container object.
*
* Implementations of this interface should register a factory function
* "ConditionalProbEstimation".
*/
class ParameterEstimation {
- public:
+ public:
/// A pointer to a factory function.
typedef ParameterEstimation* (*ParamEstFactory)( const PropertySet& );
public:
/// Constructor
- /** For a conditional probability \f$ P( X | Y ) \f$,
+ /** For a conditional probability \f$ P( X | Y ) \f$,
* \param target_dimension should equal \f$ | X | \f$
* \param pseudocounts has length \f$ |X| \cdot |Y| \f$
*/
/// Virtual constructor, using a PropertySet.
/** Some keys in the PropertySet are required.
- * For a conditional probability \f$ P( X | Y ) \f$,
+ * For a conditional probability \f$ P( X | Y ) \f$,
* - target_dimension should be equal to \f$ | X | \f$
* - total_dimension should be equal to \f$ |X| \cdot |Y| \f$
- *
+ *
* An optional key is:
* - pseudo_count, which specifies the initial counts (defaults to 1)
*/
static ParameterEstimation* factory( const PropertySet &p );
-
+
/// Virtual copy constructor
virtual ParameterEstimation* clone() const { return new CondProbEstimation( _target_dim, _initial_stats ); }
/// Virtual destructor
virtual ~CondProbEstimation() {}
-
+
/// Returns an estimate of the conditional probability distribution.
- /** The format of the resulting Prob keeps all the values for
+ /** The format of the resulting Prob keeps all the values for
* \f$ P(X | Y=y) \f$ in sequential order in the array.
*/
virtual Prob estimate();
-
+
/// Accumulate sufficient statistics from the expectations in p.
virtual void addSufficientStatistics( const Prob &p );
-
+
/// Returns the required size for arguments to addSufficientStatistics.
virtual size_t probSize() const { return _stats.size(); }
};
/// A single factor or set of factors whose parameters should be estimated.
/** To ensure that parameters can be shared between different factors during
- * EM learning, each factor's values are reordered to match a desired variable
- * ordering. The ordering of the variables in a factor may therefore differ
+ * EM learning, each factor's values are reordered to match a desired variable
+ * ordering. The ordering of the variables in a factor may therefore differ
* from the canonical ordering used in libDAI. The SharedParameters
* class couples one or more factors (together with the specified orderings
* of the variables) with a ParameterEstimation object, taking care of the
/// Copy constructor
SharedParameters( const SharedParameters &sp );
- /// Constructor
+ /// Constructor
/** \param varorders all the factor orientations for this parameter
* \param estimation a pointer to the parameter estimation method
* \param deletePE whether the parameter estimation object should be deleted in the destructor
- */
+ */
SharedParameters( const FactorOrientations &varorders, ParameterEstimation *estimation, bool deletePE=false );
/// Constructor for making an object from a stream and a factor graph
SharedParameters( std::istream &is, const FactorGraph &fg_varlookup );
/// Destructor
- ~SharedParameters() {
+ ~SharedParameters() {
if( _deleteEstimation )
delete _estimation;
}
/// A MaximizationStep groups together several parameter estimation tasks into a single unit.
-class MaximizationStep {
+class MaximizationStep {
private:
std::vector<SharedParameters> _params;
MaximizationStep() : _params() {}
/// Constructor from a vector of SharedParameters objects
- MaximizationStep( std::vector<SharedParameters> &maximizations ) : _params(maximizations) {}
+ MaximizationStep( std::vector<SharedParameters> &maximizations ) : _params(maximizations) {}
/// Constructor from an input stream and a corresponding factor graph
MaximizationStep( std::istream &is, const FactorGraph &fg_varlookup );
* - InfAlg for performing the E-step, which includes the factor graph,
 * - a vector of MaximizationSteps to be performed.
*
- * This implementation can perform incremental EM by using multiple
+ * This implementation can perform incremental EM by using multiple
* MaximizationSteps. An expectation step is performed between execution
* of each MaximizationStep. A call to iterate() will cycle through all
* MaximizationSteps.
* Having multiple and separate maximization steps allows for maximizing some
* parameters, performing another E step, and then maximizing separate
* parameters, which may result in faster convergence in some cases.
- */
+ */
class EMAlg {
private:
/// All the data samples used during learning
static const Real LOG_Z_TOL_DEFAULT;
/// Construct an EMAlg from all these objects
- EMAlg( const Evidence &evidence, InfAlg &estep, std::vector<MaximizationStep> &msteps, const PropertySet &termconditions )
+ EMAlg( const Evidence &evidence, InfAlg &estep, std::vector<MaximizationStep> &msteps, const PropertySet &termconditions )
: _evidence(evidence), _estep(estep), _msteps(msteps), _iters(0), _lastLogZ(), _max_iters(MAX_ITERS_DEFAULT), _log_z_tol(LOG_Z_TOL_DEFAULT)
- {
+ {
setTermConditions( termconditions );
}
-
+
/// Construct an EMAlg from an Evidence object, an InfAlg object, and an input stream
EMAlg( const Evidence &evidence, InfAlg &estep, std::istream &mstep_file );
/// Determine if the termination conditions have been met.
/** There are two sufficient termination conditions:
* -# the maximum number of iterations has been performed
- * -# the ratio of logZ increase over previous logZ is less than the
+ * -# the ratio of logZ increase over previous logZ is less than the
* tolerance, i.e.,
* \f$ \frac{\log(Z_t) - \log(Z_{t-1})}{| \log(Z_{t-1}) | } < \mathrm{tol} \f$.
*/
/// Get all observations
const std::map<Var, size_t>& observations() const { return _obs; }
-
+
/// Add an observation
void addObservation( Var node, size_t setting );
-
+
/// Clamp variables in the graphical model to their observed values
void applyEvidence( InfAlg& alg ) const;
};
public:
/// Default constructor
Evidence() : _samples() {}
-
+
/// Construct from existing samples
Evidence( std::vector<Observation> &samples ) : _samples(samples) {}
- /// Read in tabular data from a stream.
+ /// Read in tabular data from a stream.
/** Each line contains one sample, and the first line is a header line with names.
*/
void addEvidenceTabFile( std::istream& is, std::map<std::string, Var> &varMap );
-
- /// Read in tabular data from a stream.
- /** Each line contains one sample, and the first line is a header line with
+
+ /// Read in tabular data from a stream.
+ /** Each line contains one sample, and the first line is a header line with
* variable labels which should correspond with a subset of the variables in fg.
*/
void addEvidenceTabFile( std::istream& is, FactorGraph& fg );
-
+
/// Returns number of stored samples
size_t nrSamples() const { return _samples.size(); }
construct();
}
-
+
/// @name General InfAlg interface
//@{
virtual ExactInf* clone() const { return new ExactInf(*this); }
virtual double maxDiff() const { DAI_THROW(NOT_IMPLEMENTED); return 0.0; }
virtual size_t Iterations() const { DAI_THROW(NOT_IMPLEMENTED); return 0; }
//@}
-
+
/// @name Additional interface specific for ExactInf
- //@{
+ //@{
Factor beliefV( size_t i ) const { return _beliefsV[i]; }
Factor beliefF( size_t I ) const { return _beliefsF[I]; }
//@}
NUM_ERRORS}; // NUM_ERRORS should be the last entry
/// Constructor
- Exception( Code _code, const std::string& msg="", const std::string& detailedMsg="" ) : std::runtime_error(ErrorStrings[_code] + " [" + msg + "]"), errorcode(_code) {
- if( !detailedMsg.empty() )
- std::cerr << "ERROR: " << detailedMsg << std::endl;
+ Exception( Code _code, const std::string& msg="", const std::string& detailedMsg="" ) : std::runtime_error(ErrorStrings[_code] + " [" + msg + "]"), errorcode(_code) {
+ if( !detailedMsg.empty() )
+ std::cerr << "ERROR: " << detailedMsg << std::endl;
}
-
+
/// Copy constructor
Exception( const Exception &e ) : std::runtime_error(e), errorcode(e.errorcode) {}
* variables to the nonnegative real numbers.
* More formally, denoting a discrete variable with label \f$l\f$ by
* \f$x_l\f$ and its state space by \f$X_l = \{0,1,\dots,S_l-1\}\f$,
- * then a factor depending on the variables \f$\{x_l\}_{l\in L}\f$ is
+ * then a factor depending on the variables \f$\{x_l\}_{l\in L}\f$ is
* a function \f$f_L : \prod_{l\in L} X_l \to [0,\infty)\f$.
- *
+ *
* In libDAI, a factor is represented by a TFactor<\a T> object, which has two
- * components:
- * \arg a VarSet, corresponding with the set of variables \f$\{x_l\}_{l\in L}\f$
+ * components:
+ * \arg a VarSet, corresponding with the set of variables \f$\{x_l\}_{l\in L}\f$
* that the factor depends on;
- * \arg a TProb<\a T>, a vector containing the value of the factor for each possible
+ * \arg a TProb<\a T>, a vector containing the value of the factor for each possible
* joint state of the variables.
*
* The factor values are stored in the entries of the TProb<\a T> in a particular
- * ordering, which is defined by the one-to-one correspondence of a joint state
- * in \f$\prod_{l\in L} X_l\f$ with a linear index in
+ * ordering, which is defined by the one-to-one correspondence of a joint state
+ * in \f$\prod_{l\in L} X_l\f$ with a linear index in
* \f$\{0,1,\dots,\prod_{l\in L} S_l-1\}\f$ according to the mapping \f$\sigma\f$
* induced by VarSet::calcState(const std::map<Var,size_t> &).
*
public:
/// Iterator over factor entries
- typedef typename TProb<T>::iterator iterator;
+ typedef typename TProb<T>::iterator iterator;
/// Const iterator over factor entries
- typedef typename TProb<T>::const_iterator const_iterator;
+ typedef typename TProb<T>::const_iterator const_iterator;
/// Constructs TFactor depending on no variables, with value p
TFactor ( Real p = 1.0 ) : _vs(), _p(1,p) {}
/// Constructs TFactor depending on variables in vars, with uniform distribution
TFactor( const VarSet& vars ) : _vs(vars), _p(_vs.nrStates()) {}
-
+
/// Constructs TFactor depending on variables in vars, with all values set to p
TFactor( const VarSet& vars, Real p ) : _vs(vars), _p(_vs.nrStates(),p) {}
-
+
/// Constructs TFactor depending on variables in vars, copying the values from the range starting at begin
/** \param vars contains the variables that the new TFactor should depend on.
* \tparam Iterator Iterates over instances of type T; should support addition of size_t.
for( size_t li = 0; li < p.size(); ++li )
_p[permindex.convert_linear_index(li)] = p[li];
}
-
+
/// Constructs TFactor depending on the variable v, with uniform distribution
TFactor( const Var &v ) : _vs(v), _p(v.states()) {}
/// Returns a reference to the i'th entry of the value vector
T& operator[] (size_t i) { return _p[i]; }
-
+
/// Returns iterator pointing to first entry
iterator begin() { return _p.begin(); }
/// Returns const iterator pointing to first entry
- const_iterator begin() const { return _p.begin(); }
- /// Returns iterator pointing beyond last entry
- iterator end() { return _p.end(); }
- /// Returns const iterator pointing beyond last entry
- const_iterator end() const { return _p.end(); }
+ const_iterator begin() const { return _p.begin(); }
+ /// Returns iterator pointing beyond last entry
+ iterator end() { return _p.end(); }
+ /// Returns const iterator pointing beyond last entry
+ const_iterator end() const { return _p.end(); }
/// Sets all values to p
TFactor<T> & fill (T p) { _p.fill( p ); return(*this); }
}
/// Adds scalar t to *this
- TFactor<T>& operator+= (T t) {
+ TFactor<T>& operator+= (T t) {
_p += t;
return *this;
}
/// Subtracts scalar t from *this
- TFactor<T>& operator-= (T t) {
+ TFactor<T>& operator-= (T t) {
_p -= t;
return *this;
}
/// Returns sum of *this and scalar t
TFactor<T> operator+ (T t) const {
- TFactor<T> result(*this);
- result._p += t;
- return result;
+ TFactor<T> result(*this);
+ result._p += t;
+ return result;
}
/// Returns *this minus scalar t
TFactor<T> operator- (T t) const {
- TFactor<T> result(*this);
- result._p -= t;
- return result;
+ TFactor<T> result(*this);
+ result._p -= t;
+ return result;
}
/// Returns *this raised to the power a
- TFactor<T> operator^ (Real a) const {
- TFactor<T> x;
- x._vs = _vs;
- x._p = _p^a;
- return x;
+ TFactor<T> operator^ (Real a) const {
+ TFactor<T> x;
+ x._vs = _vs;
+ x._p = _p^a;
+ return x;
}
/// Multiplies *this with the TFactor f
- TFactor<T>& operator*= (const TFactor<T>& f) {
+ TFactor<T>& operator*= (const TFactor<T>& f) {
if( f._vs == _vs ) // optimize special case
_p *= f._p;
else
- *this = (*this * f);
+ *this = (*this * f);
return *this;
}
/// Divides *this by the TFactor f
- TFactor<T>& operator/= (const TFactor<T>& f) {
+ TFactor<T>& operator/= (const TFactor<T>& f) {
if( f._vs == _vs ) // optimize special case
_p /= f._p;
else
- *this = (*this / f);
+ *this = (*this / f);
return *this;
}
/// Adds the TFactor f to *this
- TFactor<T>& operator+= (const TFactor<T>& f) {
+ TFactor<T>& operator+= (const TFactor<T>& f) {
if( f._vs == _vs ) // optimize special case
_p += f._p;
else
- *this = (*this + f);
+ *this = (*this + f);
return *this;
}
/// Subtracts the TFactor f from *this
- TFactor<T>& operator-= (const TFactor<T>& f) {
+ TFactor<T>& operator-= (const TFactor<T>& f) {
if( f._vs == _vs ) // optimize special case
_p -= f._p;
else
- *this = (*this - f);
+ *this = (*this - f);
return *this;
}
/// Returns product of *this with the TFactor f
- /** The product of two factors is defined as follows: if
+ /** The product of two factors is defined as follows: if
* \f$f : \prod_{l\in L} X_l \to [0,\infty)\f$ and \f$g : \prod_{m\in M} X_m \to [0,\infty)\f$, then
* \f[fg : \prod_{l\in L\cup M} X_l \to [0,\infty) : x \mapsto f(x_L) g(x_M).\f]
*/
}
/// Returns quotient of *this by the TFactor f
- /** The quotient of two factors is defined as follows: if
+ /** The quotient of two factors is defined as follows: if
* \f$f : \prod_{l\in L} X_l \to [0,\infty)\f$ and \f$g : \prod_{m\in M} X_m \to [0,\infty)\f$, then
* \f[\frac{f}{g} : \prod_{l\in L\cup M} X_l \to [0,\infty) : x \mapsto \frac{f(x_L)}{g(x_M)}.\f]
*/
}
/// Returns sum of *this and the TFactor f
- /** The sum of two factors is defined as follows: if
+ /** The sum of two factors is defined as follows: if
* \f$f : \prod_{l\in L} X_l \to [0,\infty)\f$ and \f$g : \prod_{m\in M} X_m \to [0,\infty)\f$, then
* \f[f+g : \prod_{l\in L\cup M} X_l \to [0,\infty) : x \mapsto f(x_L) + g(x_M).\f]
*/
}
/// Returns *this minus the TFactor f
- /** The difference of two factors is defined as follows: if
+ /** The difference of two factors is defined as follows: if
* \f$f : \prod_{l\in L} X_l \to [0,\infty)\f$ and \f$g : \prod_{m\in M} X_m \to [0,\infty)\f$, then
* \f[f-g : \prod_{l\in L\cup M} X_l \to [0,\infty) : x \mapsto f(x_L) - g(x_M).\f]
*/
_p.makePositive( epsilon );
return *this;
}
-
+
/// Returns pointwise inverse of *this.
/** If zero == true, uses 1 / 0 == 0; otherwise 1 / 0 == Inf.
*/
- TFactor<T> inverse(bool zero=true) const {
- TFactor<T> inv;
- inv._vs = _vs;
+ TFactor<T> inverse(bool zero=true) const {
+ TFactor<T> inv;
+ inv._vs = _vs;
inv._p = _p.inverse(zero);
- return inv;
+ return inv;
}
/// Returns pointwise exp of *this
- TFactor<T> exp() const {
- TFactor<T> e;
- e._vs = _vs;
- e._p = _p.exp();
- return e;
+ TFactor<T> exp() const {
+ TFactor<T> e;
+ e._vs = _vs;
+ e._p = _p.exp();
+ return e;
}
/// Returns pointwise logarithm of *this
/** If zero==true, uses log(0)==0; otherwise, log(0)=-Inf.
*/
TFactor<T> log(bool zero=false) const {
- TFactor<T> l;
- l._vs = _vs;
- l._p = _p.log(zero);
- return l;
+ TFactor<T> l;
+ l._vs = _vs;
+ l._p = _p.log(zero);
+ return l;
}
/// Returns pointwise absolute value of *this
- TFactor<T> abs() const {
- TFactor<T> e;
- e._vs = _vs;
- e._p = _p.abs();
- return e;
+ TFactor<T> abs() const {
+ TFactor<T> e;
+ e._vs = _vs;
+ e._p = _p.abs();
+ return e;
}
/// Normalizes *this TFactor according to the specified norm
T normalize( typename Prob::NormType norm=Prob::NORMPROB ) { return _p.normalize( norm ); }
/// Returns a normalized copy of *this, according to the specified norm
- TFactor<T> normalized( typename Prob::NormType norm=Prob::NORMPROB ) const {
+ TFactor<T> normalized( typename Prob::NormType norm=Prob::NORMPROB ) const {
TFactor<T> result;
result._vs = _vs;
result._p = _p.normalized( norm );
* \pre \a nsState < ns.states()
*
* The result is a TFactor that depends on the variables in this->vars() except those in \a ns,
- * obtained by setting the variables in \a ns to the joint state specified by the linear index
+ * obtained by setting the variables in \a ns to the joint state specified by the linear index
* \a nsState. Formally, if *this corresponds with the factor \f$f : \prod_{l\in L} X_l \to [0,\infty)\f$,
* \f$M \subset L\f$ corresponds with \a ns and \a nsState corresponds with a mapping \f$s\f$ that
- * maps a variable \f$x_m\f$ with \f$m\in M\f$ to its state \f$s(x_m) \in X_m\f$, then the slice
+ * maps a variable \f$x_m\f$ with \f$m\in M\f$ to its state \f$s(x_m) \in X_m\f$, then the slice
* returned corresponds with the factor \f$g : \prod_{l \in L \setminus M} X_l \to [0,\infty)\f$
* defined by \f$g(\{x_l\}_{l\in L \setminus M}) = f(\{x_l\}_{l\in L \setminus M}, \{s(x_m)\}_{m\in M})\f$.
*/
assert( ns << _vs );
VarSet nsrem = _vs / ns;
TFactor<T> result( nsrem, T(0) );
-
+
// OPTIMIZE ME
IndexFor i_ns (ns, _vs);
IndexFor i_nsrem (nsrem, _vs);
* If *this corresponds with \f$f : \prod_{l\in L} X_l \to [0,\infty)\f$ and \f$L \subset M\f$, then
* the embedded factor corresponds with \f$g : \prod_{m\in M} X_m \to [0,\infty) : x \mapsto f(x_L)\f$.
*/
- TFactor<T> embed(const VarSet & ns) const {
+ TFactor<T> embed(const VarSet & ns) const {
assert( ns >> _vs );
if( _vs == ns )
return *this;
/// Apply binary operator pointwise on two factors
template<typename T, typename binaryOp> TFactor<T> pointwiseOp( const TFactor<T> &f, const TFactor<T> &g, binaryOp op ) {
    if( f.vars() == g.vars() ) { // optimize special case
- TFactor<T> result(f);
+ TFactor<T> result(f);
for( size_t i = 0; i < result.states(); i++ )
result[i] = op( result[i], g[i] );
- return result;
+ return result;
} else {
TFactor<T> result( f.vars() | g.vars(), 0.0 );
for( size_t alpha1 = 0; alpha1 < i.states(); alpha1++ )
for( size_t alpha2 = 0; alpha2 < i.states(); alpha2++ )
if( alpha2 != alpha1 )
- for( size_t beta1 = 0; beta1 < j.states(); beta1++ )
+ for( size_t beta1 = 0; beta1 < j.states(); beta1++ )
for( size_t beta2 = 0; beta2 < j.states(); beta2++ )
if( beta2 != beta1 ) {
size_t as = 1, bs = 1;
if( f > max )
max = f;
}
-
+
return std::tanh( 0.25 * std::log( max ) );
}
/// Calculates the mutual information between the two variables that f depends on, under the distribution given by f
/** \relates TFactor
* \pre f.vars().size() == 2
- */
+ */
template<typename T> Real MutualInfo(const TFactor<T> &f) {
assert( f.vars().size() == 2 );
VarSet::const_iterator it = f.vars().begin();
/// Represents a factor graph.
-/** Both Bayesian Networks and Markov random fields can be represented in a
- * unifying representation, called <em>factor graph</em> [\ref KFL01],
+/** Both Bayesian Networks and Markov random fields can be represented in a
+ * unifying representation, called <em>factor graph</em> [\ref KFL01],
* implemented in libDAI by the FactorGraph class.
*
- * Consider a probability distribution over \f$N\f$ discrete random variables
+ * Consider a probability distribution over \f$N\f$ discrete random variables
 * \f$x_0,x_1,\dots,x_{N-1}\f$ that factorizes as a product of factors, each of
* which depends on some subset of the variables:
* \f[
 * of variables \f$X_I \subset \{x_0,x_1,\dots,x_{N-1}\}\f$ to the nonnegative
* real numbers.
*
- * For a Bayesian network, each factor corresponds to a (conditional)
- * probability table, whereas for a Markov random field, each factor
+ * For a Bayesian network, each factor corresponds to a (conditional)
+ * probability table, whereas for a Markov random field, each factor
* corresponds to a maximal clique of the undirected graph.
*
* Factor graphs explicitly express the factorization structure of the
*
* \todo Alternative implementation of undo factor changes: the only things that have to be
* undone currently are setting a factor to 1 and setting a factor to a Kronecker delta. This
- * could also be implemented in the TFactor itself, which could maintain its state
+ * could also be implemented in the TFactor itself, which could maintain its state
* (ones/delta/full) and act accordingly.
*/
class FactorGraph {
/// Shorthand for BipartiteGraph::Edge
typedef BipartiteGraph::Edge Edge;
-
+
/// Iterator over factors
typedef std::vector<Factor>::iterator iterator;
-
+
/// Const iterator over factors
typedef std::vector<Factor>::const_iterator const_iterator;
-
+
private:
std::vector<Var> _vars;
template<typename FactorInputIterator, typename VarInputIterator>
FactorGraph(FactorInputIterator fact_begin, FactorInputIterator fact_end, VarInputIterator var_begin, VarInputIterator var_end, size_t nr_fact_hint = 0, size_t nr_var_hint = 0 );
- /// Destructor
+ /// Destructor
virtual ~FactorGraph() {}
/// Clone *this (virtual copy constructor)
/// Set the content of the I'th factor and make a backup of its old content if backup == true
virtual void setFactor( size_t I, const Factor &newFactor, bool backup = false ) {
- assert( newFactor.vars() == factor(I).vars() );
+ assert( newFactor.vars() == factor(I).vars() );
if( backup )
backupFactor( I );
- _factors[I] = newFactor;
+ _factors[I] = newFactor;
}
/// Set the contents of all factors as specified by facs and make a backup of the old contents if backup == true
// OBSOLETE
/// Only for backwards compatibility (to be removed soon)
- virtual void clamp( const Var &v, size_t x, bool backup = false ) {
+ virtual void clamp( const Var &v, size_t x, bool backup = false ) {
std::cerr << "Warning: this FactorGraph::clamp(const Var&,...) interface is obsolete!" << std::endl;
clamp( findVar(v), x, backup );
}
/// Writes a FactorGraph to a GraphViz .dot file
void printDot( std::ostream& os ) const;
-
+
/// Returns the cliques in this FactorGraph
std::vector<VarSet> Cliques() const;
/// @name Additional interface specific for HAK
- //@{
+ //@{
Factor & muab( size_t alpha, size_t _beta ) { return _muab[alpha][_beta]; }
Factor & muba( size_t alpha, size_t _beta ) { return _muba[alpha][_beta]; }
const Factor& Qa( size_t alpha ) const { return _Qa[alpha]; };
* and (long)i is equal to the linear index of the corresponding
* state of indexVars, where the variables in indexVars that are
 * not in forVars assume their zeroth value.
- * \idea Optimize all indices as follows: keep a cache of all (or only
- * relatively small) indices that have been computed (use a hash). Then,
+ * \idea Optimize all indices as follows: keep a cache of all (or only
+ * relatively small) indices that have been computed (use a hash). Then,
* instead of computing on the fly, use the precomputed ones.
*/
class IndexFor {
/// For each variable in forVars, the current state
std::vector<size_t> _count;
-
+
/// For each variable in forVars, its number of possible values
std::vector<size_t> _dims;
public:
/// Default constructor
- IndexFor() {
- _index = -1;
+ IndexFor() {
+ _index = -1;
}
/// Constructor
}
/// Conversion to long
- operator long () const {
- return( _index );
+ operator long () const {
+ return( _index );
}
/// Pre-increment operator
i++;
}
- if( i == _count.size() )
+ if( i == _count.size() )
_index = -1;
}
return( *this );
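The IndexFor hunks above implement the pattern of enumerating all joint states of a variable set while tracking the induced linear index in a subset (with `_index == -1` signalling exhaustion). A minimal standalone sketch of that idea, with illustrative names rather than the libDAI API:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch of the IndexFor idea: while enumerating all joint states of a set of
// "outer" variables (dims), track the linear index induced in a subset of
// "inner" variables. Each outer variable carries its stride in the inner
// space; variables not in the subset have stride 0 and do not advance the
// inner index. index becomes -1 once all states have been visited.
struct IndexSketch {
    std::vector<size_t> count;   // current state of each outer variable
    std::vector<size_t> dims;    // number of states of each outer variable
    std::vector<long>   stride;  // stride in the inner space (0 if absent)
    long index;                  // current inner linear index (-1 when done)

    IndexSketch(const std::vector<size_t>& d, const std::vector<long>& s)
        : count(d.size(), 0), dims(d), stride(s), index(0) {}

    IndexSketch& operator++() {
        size_t i = 0;
        while (i < count.size()) {
            index += stride[i];
            if (++count[i] < dims[i])
                break;
            index -= stride[i] * (long)dims[i];  // wrap this digit around
            count[i] = 0;
            i++;
        }
        if (i == count.size())
            index = -1;                          // all states enumerated
        return *this;
    }
};
```

With `dims = {2,3}` and the inner set containing only the first variable (`stride = {1,0}`), the inner index cycles 0,1,0,1,0,1 over the six outer states before hitting -1.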
MultiFor( const std::vector<size_t> &d ) : _dims(d), _states(d.size(),0), _state(0) {}
/// Return linear state
- operator size_t() const {
+ operator size_t() const {
assert( valid() );
return( _state );
}
Permute( const std::vector<size_t> &d, const std::vector<size_t> &sigma ) : _dims(d), _sigma(sigma) {
assert( _dims.size() == _sigma.size() );
}
-
+
/// Construct from vector of variables
Permute( const std::vector<Var> &vars ) : _dims(vars.size()), _sigma(vars.size()) {
VarSet vs( vars.begin(), vars.end(), vars.size() );
/// Calculates a permuted linear index.
/** Converts the linear index li to a vector index
- * corresponding with the dimensions in _dims, permutes it according to sigma,
+ * corresponding with the dimensions in _dims, permutes it according to sigma,
* and converts it back to a linear index according to the permuted dimensions.
*/
size_t convert_linear_index( size_t li ) const {
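The doc comment above describes the conversion as: decompose the linear index into a vector index over `_dims`, permute it according to `sigma`, and re-linearise over the permuted dimensions. A free-function sketch of that mixed-radix round trip (illustrative name, not the libDAI signature):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch of a permuted linear-index conversion: decompose li into a
// mixed-radix vector index over dims, then re-linearise it in the order given
// by sigma, using the permuted dimensions dims[sigma[k]] as the new radices.
size_t permuteLinearIndex(const std::vector<size_t>& dims,
                          const std::vector<size_t>& sigma, size_t li) {
    assert(dims.size() == sigma.size());
    // vector index in the original ordering
    std::vector<size_t> vi(dims.size());
    for (size_t k = 0; k < dims.size(); ++k) {
        vi[k] = li % dims[k];
        li /= dims[k];
    }
    // linear index in the permuted ordering
    size_t out = 0, prod = 1;
    for (size_t k = 0; k < dims.size(); ++k) {
        out += vi[sigma[k]] * prod;
        prod *= dims[sigma[k]];
    }
    return out;
}
```

The identity permutation maps every index to itself; swapping two dimensions of sizes 2 and 3 maps index 1 (state (1,0)) to index 3.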
long state;
states_type states;
-
+
public:
/// Default constructor
State() : state(0), states() {}
}
/// Return linear state
- operator size_t() const {
+ operator size_t() const {
assert( valid() );
return( state );
}
for( VarSet::const_iterator v = vs.begin(); v != vs.end(); v++ ) {
states_type::const_iterator entry = states.find( *v );
if( entry != states.end() )
- vs_state += entry->second * prod;
+ vs_state += entry->second * prod;
prod *= v->states();
}
return vs_state;
state = -1;
}
}
-
+
/// Postfix increment operator
void operator++( int ) {
- operator++();
+ operator++();
}
/// Returns true if the current state is valid
public:
/// Rooted tree
DEdgeVec RTree;
-
+
/// Outer region beliefs
std::vector<Factor> Qa;
-
+
/// Inner region beliefs
std::vector<Factor> Qb;
/// @name Additional interface specific for JTree
- //@{
+ //@{
void GenerateJT( const std::vector<VarSet> &Cliques );
/// Returns reference the message from outer region alpha to its _beta'th neighboring inner region
- Factor & message( size_t alpha, size_t _beta ) { return _mes[alpha][_beta]; }
+ Factor & message( size_t alpha, size_t _beta ) { return _mes[alpha][_beta]; }
/// Returns const reference to the message from outer region alpha to its _beta'th neighboring inner region
- const Factor & message( size_t alpha, size_t _beta ) const { return _mes[alpha][_beta]; }
+ const Factor & message( size_t alpha, size_t _beta ) const { return _mes[alpha][_beta]; }
/// Runs junction-tree with HUGIN updates
void runHUGIN();
Factor beliefV( size_t i ) const { return _beliefs[i]; }
/// @name Additional interface specific for LC
- //@{
+ //@{
double CalcCavityDist( size_t i, const std::string &name, const PropertySet &opts );
double InitCavityDists( const std::string &name, const PropertySet &opts );
long SetCavityDists( std::vector<Factor> &Q );
/// @name Additional interface specific for MF
- //@{
+ //@{
Factor beliefV( size_t i ) const;
//@}
-
+
private:
void construct();
void setProperties( const PropertySet &opts );
std::vector<std::vector<double> > M; // M[i][_j] is M^{(i)}_j
std::vector<std::vector<size_t> > kindex; // the _j'th neighbour of spin i has spin i as its kindex[i][_j]'th neighbour
std::vector<std::vector<std::vector<double> > > cors;
-
+
static const size_t kmax = 31;
typedef boost::dynamic_bitset<> sub_nb;
-
+
size_t N;
std::vector<double> Mag;
/// @name Additional interface specific for MR
- //@{
+ //@{
//@}
-
+
private:
void init(size_t Nin, double *_w, double *_th);
void makekindex();
void sum_subs(size_t j, sub_nb A, double *sum_even, double *sum_odd);
double sign(double a) { return (a >= 0) ? 1.0 : -1.0; }
-
+
void setProperties( const PropertySet &opts );
PropertySet getProperties() const;
std::string printProperties() const;
-};
+};
} // end of namespace dai
/// Represents a vector with entries of type \a T.
/** A TProb<T> is a std::vector<T> with an interface designed for dealing with probability mass functions.
- * It is mainly used for representing measures on a finite outcome space, e.g., the probability
+ * It is mainly used for representing measures on a finite outcome space, e.g., the probability
* distribution of a discrete random variable.
* \tparam T Should be a scalar that is castable from and to double and should support elementary arithmetic operations.
*/
public:
/// Iterator over entries
- typedef typename std::vector<T>::iterator iterator;
+ typedef typename std::vector<T>::iterator iterator;
/// Const iterator over entries
- typedef typename std::vector<T>::const_iterator const_iterator;
+ typedef typename std::vector<T>::const_iterator const_iterator;
/// Enumerates different ways of normalizing a probability measure.
- /**
+ /**
* - NORMPROB means that the sum of all entries should be 1;
* - NORMLINF means that the maximum absolute value of all entries should be 1.
*/
typedef enum { NORMPROB, NORMLINF } NormType;
/// Enumerates different distance measures between probability measures.
- /**
+ /**
* - DISTL1 is the L-1 distance (sum of absolute values of pointwise difference);
* - DISTLINF is the L-inf distance (maximum absolute value of pointwise difference);
* - DISTTV is the Total Variation distance;
* - DISTKL is the Kullback-Leibler distance.
*/
typedef enum { DISTL1, DISTLINF, DISTTV, DISTKL } DistType;
-
+
/// Default constructor
TProb() : _p() {}
-
+
/// Construct uniform distribution over n outcomes, i.e., a vector of length n with each entry set to 1/n
explicit TProb( size_t n ) : _p(std::vector<T>(n, 1.0 / n)) {}
-
+
/// Construct vector of length n with each entry set to p
explicit TProb( size_t n, Real p ) : _p(n, (T)p) {}
-
+
/// Construct vector from a range
/** \tparam Iterator Iterates over instances that can be cast to T.
* \param begin Points to first instance to be added.
_p.reserve( sizeHint );
_p.insert( _p.begin(), begin, end );
}
-
+
/// Returns a const reference to the vector
const std::vector<T> & p() const { return _p; }
/// Returns a reference to the vector
std::vector<T> & p() { return _p; }
-
+
/// Returns a copy of the i'th entry
- T operator[]( size_t i ) const {
+ T operator[]( size_t i ) const {
#ifdef DAI_DEBUG
return _p.at(i);
#else
return _p[i];
#endif
}
-
+
/// Returns reference to the i'th entry
T& operator[]( size_t i ) { return _p[i]; }
-
+
/// Returns iterator pointing to first entry
iterator begin() { return _p.begin(); }
const_iterator end() const { return _p.end(); }
/// Sets all entries to x
- TProb<T> & fill(T x) {
+ TProb<T> & fill(T x) {
std::fill( _p.begin(), _p.end(), x );
return *this;
}
/// Draws all entries i.i.d. from a uniform distribution on [0,1)
- TProb<T> & randomize() {
+ TProb<T> & randomize() {
std::generate(_p.begin(), _p.end(), rnd_uniform);
return *this;
}
_p[i] = 0;
return *this;
}
-
+
/// Set all entries to 1.0/size()
TProb<T>& setUniform () {
fill(1.0/size());
std::transform( _p.begin(), _p.end(), q._p.begin(), _p.begin(), std::multiplies<T>() );
return *this;
}
-
+
/// Return product of *this with q (sizes should be identical)
TProb<T> operator* (const TProb<T> & q) const {
DAI_DEBASSERT( size() == q.size() );
std::transform( _p.begin(), _p.end(), q._p.begin(), _p.begin(), std::plus<T>() );
return *this;
}
-
+
/// Returns sum of *this and q (sizes should be identical)
TProb<T> operator+ (const TProb<T> & q) const {
DAI_DEBASSERT( size() == q.size() );
sum += q;
return sum;
}
-
+
/// Pointwise subtraction of q (sizes should be identical)
TProb<T>& operator-= (const TProb<T> & q) {
DAI_DEBASSERT( size() == q.size() );
}
return *this;
}
-
+
/// Pointwise division by q, where division by 0 yields +Inf (sizes should be identical)
TProb<T>& divide (const TProb<T> & q) {
DAI_DEBASSERT( size() == q.size() );
std::transform( _p.begin(), _p.end(), q._p.begin(), _p.begin(), std::divides<T>() );
return *this;
}
-
+
/// Returns quotient of *this with q (sizes should be identical)
TProb<T> operator/ (const TProb<T> & q) const {
DAI_DEBASSERT( size() == q.size() );
result.normalize( norm );
return result;
}
-
+
/// Returns true if one or more entries are NaN
bool hasNaNs() const {
bool foundnan = false;
bool hasNegatives() const {
return (std::find_if( _p.begin(), _p.end(), std::bind2nd( std::less<Real>(), 0.0 ) ) != _p.end());
}
-
+
/// Returns entropy of *this
Real entropy() const {
Real S = 0.0;
T s = 0;
for( size_t i = 0; i < size(); i++ ) {
s += _p[i];
- if( s > x )
+ if( s > x )
return i;
}
return( size() - 1 );
for( size_t i = 0; i < p.size(); i++ )
result += fabs((Real)p[i] - (Real)q[i]);
break;
-
+
case TProb<T>::DISTLINF:
for( size_t i = 0; i < p.size(); i++ ) {
Real z = fabs((Real)p[i] - (Real)q[i]);
}
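The `dist` fragment above accumulates DISTL1 as a sum of pointwise absolute differences and DISTLINF as their maximum. A standalone sketch of just these two measures on plain `double` vectors:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// DISTL1: sum of absolute values of the pointwise differences.
double distL1(const std::vector<double>& p, const std::vector<double>& q) {
    assert(p.size() == q.size());
    double result = 0.0;
    for (size_t i = 0; i < p.size(); ++i)
        result += std::fabs(p[i] - q[i]);
    return result;
}

// DISTLINF: maximum absolute value of the pointwise differences.
double distLinf(const std::vector<double>& p, const std::vector<double>& q) {
    assert(p.size() == q.size());
    double result = 0.0;
    for (size_t i = 0; i < p.size(); ++i) {
        double z = std::fabs(p[i] - q[i]);
        if (z > result)
            result = z;
    }
    return result;
}
```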
/// Gets a property
- const PropertyValue & Get(const PropertyKey &key) const {
- PropertySet::const_iterator x = find(key);
-#ifdef DAI_DEBUG
+ const PropertyValue & Get(const PropertyKey &key) const {
+ PropertySet::const_iterator x = find(key);
if( x == this->end() )
- std::cerr << "PropertySet::Get cannot find property " << key << std::endl;
-#endif
- assert( x != this->end() );
- return x->second;
+ DAI_THROWE(NOT_FOUND,"PropertySet::Get cannot find property '" + key + "'");
+ return x->second;
}
/// Sets a property
/// Converts a property from ValueType to string (if necessary)
template<typename ValueType>
- PropertySet & setAsString(const PropertyKey &key, ValueType &val) {
+ PropertySet & setAsString(const PropertyKey &key, ValueType &val) {
try {
return Set( key, boost::lexical_cast<std::string>(val) );
} catch( boost::bad_lexical_cast & ) {
/// Shorthand for (temporarily) adding properties, e.g. PropertySet p()("method","BP")("verbose",1)("tol",1e-9)
PropertySet operator()(const PropertyKey &key, const PropertyValue &val) const { PropertySet copy = *this; return copy.Set(key,val); }
-
+
/// Check if a property with the given key exists
bool hasKey(const PropertyKey &key) const { PropertySet::const_iterator x = find(key); return (x != this->end()); }
/// Construct Region from a VarSet and a counting number
Region(const VarSet & x, double c) : VarSet(x), _c(c) {}
-
+
/// Provide read access to counting number
const double & c() const { return _c; }
/// Provide full access to counting number
/// Constructs FRegion from a Factor and a counting number
FRegion( const Factor & x, double c ) : Factor(x), _c(c) {}
-
+
/// Provide read access to counting number
const double & c() const { return _c; }
/// Provide full access to counting number
/// Constructs a RegionGraph from a FactorGraph, a vector of outer regions, a vector of inner regions and a vector of edges
RegionGraph( const FactorGraph &fg, const std::vector<Region> &ors, const std::vector<Region> &irs, const std::vector<std::pair<size_t,size_t> > &edges );
-
+
/// Constructs a RegionGraph from a FactorGraph and a vector of outer VarSets (CVM style)
RegionGraph( const FactorGraph &fg, const std::vector<VarSet> &cl );
/// Set the content of the I'th factor and make a backup of its old content if backup == true
virtual void setFactor( size_t I, const Factor &newFactor, bool backup = false ) {
- FactorGraph::setFactor( I, newFactor, backup );
- RecomputeOR( I );
+ FactorGraph::setFactor( I, newFactor, backup );
+ RecomputeOR( I );
}
/// Set the contents of all factors as specified by facs and make a backup of the old contents if backup == true
VarSet ns;
for( std::map<size_t, Factor>::const_iterator fac = facs.begin(); fac != facs.end(); fac++ )
ns |= fac->second.vars();
- RecomputeORs( ns );
+ RecomputeORs( ns );
}
SmallSet() : _elements() {}
/// Construct a set with one element
- SmallSet( const T &t ) : _elements() {
+ SmallSet( const T &t ) : _elements() {
_elements.push_back( t );
}
/// Construct a set with two elements
- SmallSet( const T &t1, const T &t2 ) {
+ SmallSet( const T &t1, const T &t2 ) {
if( t1 < t2 ) {
_elements.push_back( t1 );
_elements.push_back( t2 );
return res;
}
- /// Set-intersection operator: returns all elements in *this that are also contained in x
+ /// Set-intersection operator: returns all elements in *this that are also contained in x
SmallSet operator& ( const SmallSet& x ) const {
SmallSet res;
std::set_intersection( _elements.begin(), _elements.end(), x._elements.begin(), x._elements.end(), inserter( res._elements, res._elements.begin() ) );
return res;
}
-
+
/// Erases from *this all elements in x
SmallSet& operator/= ( const SmallSet& x ) {
return (*this = (*this / x));
}
/// Erases one element
- SmallSet& operator/= ( const T &t ) {
+ SmallSet& operator/= ( const T &t ) {
typename std::vector<T>::iterator pos = lower_bound( _elements.begin(), _elements.end(), t );
if( pos != _elements.end() )
if( *pos == t ) // found element, delete it
- _elements.erase( pos );
- return *this;
+ _elements.erase( pos );
+ return *this;
}
/// Adds to *this all elements in x
}
/// Erases from *this all elements not in x
- SmallSet& operator&= ( const SmallSet& x ) {
+ SmallSet& operator&= ( const SmallSet& x ) {
return (*this = (*this & x));
}
/// Returns true if *this is a subset of x
- bool operator<< ( const SmallSet& x ) const {
+ bool operator<< ( const SmallSet& x ) const {
return std::includes( x._elements.begin(), x._elements.end(), _elements.begin(), _elements.end() );
}
/// Returns true if x is a subset of *this
- bool operator>> ( const SmallSet& x ) const {
+ bool operator>> ( const SmallSet& x ) const {
return std::includes( _elements.begin(), _elements.end(), x._elements.begin(), x._elements.end() );
}
/// Returns true if *this and x have elements in common
- bool intersects( const SmallSet& x ) const {
- return( (*this & x).size() > 0 );
+ bool intersects( const SmallSet& x ) const {
+ return( (*this & x).size() > 0 );
}
/// Returns true if *this contains the element t
- bool contains( const T &t ) const {
+ bool contains( const T &t ) const {
return std::binary_search( _elements.begin(), _elements.end(), t );
}
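The SmallSet operators above all rely on one invariant: the elements live in a sorted `std::vector`, so intersection and subset tests reduce to the linear-time sorted-range algorithms from `<algorithm>`. A minimal sketch of that design, assuming sorted inputs:

```cpp
#include <algorithm>
#include <cassert>
#include <iterator>
#include <vector>

// Sketch of the SmallSet idea: keep elements in a sorted std::vector so the
// sorted-range set algorithms apply directly (both inputs must be sorted).
std::vector<int> setIntersection(const std::vector<int>& a,
                                 const std::vector<int>& b) {
    std::vector<int> res;
    std::set_intersection(a.begin(), a.end(), b.begin(), b.end(),
                          std::inserter(res, res.begin()));
    return res;
}

// True if every element of a also occurs in b (the operator<< semantics).
bool isSubsetOf(const std::vector<int>& a, const std::vector<int>& b) {
    return std::includes(b.begin(), b.end(), a.begin(), a.end());
}
```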
typedef typename std::vector<T>::const_reverse_iterator const_reverse_iterator;
/// Reverse iterator over the elements
typedef typename std::vector<T>::reverse_iterator reverse_iterator;
-
+
/// Returns iterator that points to the first element
iterator begin() { return _elements.begin(); }
/// Returns constant iterator that points to the first element
DEdgeVec _RTree;
std::vector<size_t> _a; // _Qa[alpha] <-> superTree.Qa[_a[alpha]]
std::vector<size_t> _b; // _Qb[beta] <-> superTree.Qb[_b[beta]]
- // _Qb[beta] <-> _RTree[beta]
+ // _Qb[beta] <-> _RTree[beta]
const Factor * _I;
VarSet _ns;
VarSet _nsrem;
double _logZ;
-
-
+
+
public:
TreeEPSubTree() : _Qa(), _Qb(), _RTree(), _a(), _b(), _I(NULL), _ns(), _nsrem(), _logZ(0.0) {}
TreeEPSubTree( const TreeEPSubTree &x) : _Qa(x._Qa), _Qb(x._Qb), _RTree(x._RTree), _a(x._a), _b(x._b), _I(x._I), _ns(x._ns), _nsrem(x._nsrem), _logZ(x._logZ) {}
/// @name Additional interface specific for TreeEP
- //@{
+ //@{
//@}
private:
#ifdef DAI_DEBUG
/// \brief "Print variable". Prints the text of an expression, followed by its value (only if DAI_DEBUG is defined)
/**
- * Useful debugging macro to see what your code is doing.
+ * Useful debugging macro to see what your code is doing.
* Example: \code DAI_PV(3+4) \endcode
* Output: \code 3+4= 7 \endcode
*/
/// Writes a std::vector to a std::ostream
-template<class T>
+template<class T>
std::ostream& operator << (std::ostream& os, const std::vector<T> & x) {
os << "(";
for( typename std::vector<T>::const_iterator it = x.begin(); it != x.end(); it++ )
}
/// Writes a std::set to a std::ostream
-template<class T>
+template<class T>
std::ostream& operator << (std::ostream& os, const std::set<T> & x) {
os << "{";
for( typename std::set<T>::const_iterator it = x.begin(); it != x.end(); it++ )
std::vector<double>::iterator _maxpos;
public:
/// Constructor
- Diffs(long maxsize, double def) : std::vector<double>(), _maxsize(maxsize), _def(def) {
- this->reserve(_maxsize);
- _pos = begin();
- _maxpos = begin();
+ Diffs(long maxsize, double def) : std::vector<double>(), _maxsize(maxsize), _def(def) {
+ this->reserve(_maxsize);
+ _pos = begin();
+ _maxpos = begin();
}
/// Returns maximum difference encountered
- double maxDiff() {
+ double maxDiff() {
if( size() < _maxsize )
return _def;
else
- return( *_maxpos );
+ return( *_maxpos );
}
/// Register new difference x
void push(double x) {
if( _pos == end() )
_pos = begin();
if( _maxpos == _pos ) {
- *_pos++ = x;
+ *_pos++ = x;
_maxpos = max_element(begin(),end());
} else {
if( x > *_maxpos )
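The Diffs hunks above maintain a fixed-capacity buffer of the most recent differences whose maximum decides convergence, returning a sentinel default until enough history has accumulated. A simplified sketch of that mechanism (without the cached max-position optimisation of the original):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch of the Diffs idea: a ring buffer holding the last `cap` registered
// differences; maxDiff() reports the sentinel default until the buffer is
// full, then the maximum of the retained values.
class RecentMax {
    std::vector<double> buf;
    size_t cap, next;
    double def;
public:
    RecentMax(size_t capacity, double sentinel)
        : cap(capacity), next(0), def(sentinel) {}
    void push(double x) {
        if (buf.size() < cap)
            buf.push_back(x);
        else {
            buf[next] = x;                  // overwrite the oldest entry
            next = (next + 1) % cap;
        }
    }
    double maxDiff() const {
        if (buf.size() < cap)
            return def;                     // not enough history yet
        return *std::max_element(buf.begin(), buf.end());
    }
};
```

The original avoids the `max_element` scan on most pushes by remembering where the maximum sits and only rescanning when that entry is overwritten.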
/// Represents a discrete random variable.
-/** A Var stores the \a label of the variable (an integer-valued unique ID)
+/** A Var stores the \a label of the variable (an integer-valued unique ID)
* and the number of possible values (\a states) of that variable. Two
- * Var objects with the same label are assumed to be identical (i.e., it
+ * Var objects with the same label are assumed to be identical (i.e., it
* is assumed that their states are also the same).
*
* In this manual, we use the following notational conventions. The discrete
- * random variable with label \f$l\f$ is denoted as \f$x_l\f$, and the number
+ * random variable with label \f$l\f$ is denoted as \f$x_l\f$, and the number
* of possible values of this variable as \f$S_l\f$; this is represented in
* code by the object Var(\f$l\f$,\f$S_l\f$). The set of possible values of
* variable \f$x_l\f$ is denoted \f$X_l := \{0,1,\dots,S_l-1\}\f$.
/// Number of possible values
size_t _states;
-
+
public:
/// Default constructor
Var() : _label(-1), _states(0) {}
/// Larger-than operator (compares only labels)
bool operator > ( const Var& n ) const { return( _label > n._label ); }
/// Smaller-than-or-equal-to operator (only compares labels)
- bool operator <= ( const Var& n ) const {
+ bool operator <= ( const Var& n ) const {
#ifdef DAI_DEBUG
if( _label == n._label )
assert( _states == n._states );
#endif
- return( _label <= n._label );
+ return( _label <= n._label );
}
/// Larger-than-or-equal-to operator (only compares labels)
- bool operator >= ( const Var& n ) const {
+ bool operator >= ( const Var& n ) const {
#ifdef DAI_DEBUG
if( _label == n._label )
assert( _states == n._states );
#endif
- return( _label >= n._label );
+ return( _label >= n._label );
}
/// Not-equal-to operator (only compares labels)
- bool operator != ( const Var& n ) const {
+ bool operator != ( const Var& n ) const {
#ifdef DAI_DEBUG
if( _label == n._label )
assert( _states == n._states );
#endif
- return( _label != n._label );
+ return( _label != n._label );
}
/// Equal-to operator (only compares labels)
- bool operator == ( const Var& n ) const {
+ bool operator == ( const Var& n ) const {
#ifdef DAI_DEBUG
if( _label == n._label )
assert( _states == n._states );
-#endif
- return( _label == n._label );
+#endif
+ return( _label == n._label );
}
/// Writes a Var to an output stream
/// Represents a set of variables.
/** \note A VarSet is implemented using a SmallSet<Var> instead
* of the more natural std::set<Var> because of efficiency reasons.
- * That is, internally, the variables in the set are sorted according
- * to their labels: the set of variables \f$\{x_l\}_{l\in L}\f$ is
- * represented as a vector \f$(x_{l(0)},x_{l(1)},\dots,x_{l(|L|-1)})\f$
- * where \f$l(0) < l(1) < \dots < l(|L|-1)\f$
+ * That is, internally, the variables in the set are sorted according
+ * to their labels: the set of variables \f$\{x_l\}_{l\in L}\f$ is
+ * represented as a vector \f$(x_{l(0)},x_{l(1)},\dots,x_{l(|L|-1)})\f$
+ * where \f$l(0) < l(1) < \dots < l(|L|-1)\f$
* and \f$L = \{l(0),l(1),\dots,l(|L|-1)\}\f$.
*/
class VarSet : public SmallSet<Var> {
/** The number of states of the Cartesian product of the variables in this VarSet
* is simply the product of the number of states of each variable in this VarSet.
* If *this corresponds with the set \f$\{x_l\}_{l\in L}\f$,
- * where variable \f$x_l\f$ has label \f$l\f$, and denoting by \f$S_l\f$ the
- * number of possible values ("states") of variable \f$x_l\f$, the number of
+ * where variable \f$x_l\f$ has label \f$l\f$, and denoting by \f$S_l\f$ the
+ * number of possible values ("states") of variable \f$x_l\f$, the number of
* joint configurations of the variables in \f$\{x_l\}_{l\in L}\f$ is given by \f$\prod_{l\in L} S_l\f$.
*/
size_t nrStates() {
VarSet( const Var &n ) : SmallSet<Var>(n) {}
/// Construct a VarSet with two elements
- VarSet( const Var &n1, const Var &n2 ) : SmallSet<Var>(n1,n2) {}
+ VarSet( const Var &n1, const Var &n2 ) : SmallSet<Var>(n1,n2) {}
/// Construct a VarSet from a range.
/** \tparam VarIterator Iterates over instances of type Var.
* \return The linear index in the Cartesian product of the variables in *this
* corresponding with the joint assignment specified by \a states, where it is
* assumed that \a states[\a m]==0 for all \a m in *this which are not in \a states.
- *
+ *
* The linear index is calculated as follows. The variables in *this are
* ordered according to their label (in ascending order); say *this corresponds with
* the set \f$\{x_{l(0)},x_{l(1)},\dots,x_{l(n-1)}\}\f$ with \f$l(0) < l(1) < \dots < l(n-1)\f$,
* \f}
*
* \note If *this corresponds with \f$\{x_l\}_{l\in L}\f$, and \a states specifies a state
- * for each variable \f$x_l\f$ for \f$l\in L\f$, calcState(const std::map<Var,size_t> &) induces a mapping
+ * for each variable \f$x_l\f$ for \f$l\in L\f$, calcState(const std::map<Var,size_t> &) induces a mapping
* \f$\sigma : \prod_{l\in L} X_l \to \{0,1,\dots,\prod_{l\in L} S_l-1\}\f$ that
- * maps a joint state to a linear index; this is the inverse of the mapping
+ * maps a joint state to a linear index; this is the inverse of the mapping
* \f$\sigma^{-1}\f$ induced by calcStates(size_t).
*/
size_t calcState( const std::map<Var, size_t> &states ) {
* The variables in *this are ordered according to their label (in ascending order); say *this corresponds with
* the set \f$\{x_{l(0)},x_{l(1)},\dots,x_{l(n-1)}\}\f$ with \f$l(0) < l(1) < \dots < l(n-1)\f$,
* where variable \f$x_l\f$ has label \a l. Denote by \f$S_l\f$ the number of possible values
- * ("states") of variable \f$x_l\f$ with label \a l.
+ * ("states") of variable \f$x_l\f$ with label \a l.
* The mapping \a s returned by this function is defined as:
* \f{eqnarray*}
* s(x_{l(i)}) = \left\lfloor\frac{S \mbox { mod } \prod_{j=0}^{i} S_{l(j)}}{\prod_{j=0}^{i-1} S_{l(j)}}\right\rfloor \qquad \mbox{for all $i=0,\dots,n-1$}.
* \f}
* where \f$S\f$ denotes the value of \a linearState.
*
- * \note If *this corresponds with \f$\{x_l\}_{l\in L}\f$, calcStates(size_t) induces a mapping
+ * \note If *this corresponds with \f$\{x_l\}_{l\in L}\f$, calcStates(size_t) induces a mapping
* \f$\sigma^{-1} : \{0,1,\dots,\prod_{l\in L} S_l-1\} \to \prod_{l\in L} X_l\f$ that
- * maps a linear index to a joint state; this is the inverse of the mapping \f$\sigma\f$
+ * maps a linear index to a joint state; this is the inverse of the mapping \f$\sigma\f$
* induced by calcState(const std::map<Var,size_t> &).
*/
std::map<Var, size_t> calcStates( size_t linearState ) {
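The calcState/calcStates documentation above specifies a mixed-radix bijection between joint assignments and linear indices, with variables ordered by label and radices given by their state counts. A plain-vector sketch of both directions (illustrative names, not the libDAI signatures):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Mixed-radix sketch of calcState/calcStates: dims[l] is the number of states
// of the l'th variable in label order. Earlier variables vary fastest, i.e.
// dims[0] is the least significant radix.
size_t jointToLinear(const std::vector<size_t>& dims,
                     const std::vector<size_t>& states) {
    assert(dims.size() == states.size());
    size_t li = 0, prod = 1;
    for (size_t l = 0; l < dims.size(); ++l) {
        li += states[l] * prod;
        prod *= dims[l];
    }
    return li;
}

std::vector<size_t> linearToJoint(const std::vector<size_t>& dims, size_t li) {
    std::vector<size_t> states(dims.size());
    for (size_t l = 0; l < dims.size(); ++l) {
        states[l] = li % dims[l];   // peel off the l'th digit
        li /= dims[l];
    }
    return states;
}
```

The two functions are mutual inverses, matching the \f$\sigma\f$ / \f$\sigma^{-1}\f$ relationship the comment describes.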
public:
size_t n1; ///< First node index
size_t n2; ///< Second node index
-
+
/// Default constructor
DEdge() {}
public:
size_t n1; ///< First node index
size_t n2; ///< Second node index
-
+
/// Default constructor
UEdge() {}
// order, such that for all (i1, j1), (i2, j2) in result,
// if j1 == i2 then (i1, j1) comes before (i2, j2) in result.
// We do this by reordering the contents of result, effectively
- // growing the tree starting at the root. At each step,
+ // growing the tree starting at the root. At each step,
// result[0..N-1] are the edges already added to the tree,
// whereas the other elements of result still have to be added.
// The elements of nodes are the vertices that still have to
% N = dai_potstrength (psi, i, j)
-%
+%
% INPUT: psi = structure with a Member field and a P field, like a CPTAB.
% i = label of a variable in psi.
% j = label of another variable in psi.
qr(/\*.*PROPERTIES));
# Actual delimiters to use for generated code blocks
my ($gen_code_start, $gen_code_end) =
- ("/* {{{ GENERATED CODE: DO NOT EDIT. Created by \n".
- " $0 $header_file $source_file \n*/\n",
+ ("/* {{{ GENERATED CODE: DO NOT EDIT. Created by\n".
+ " $0 $header_file $source_file\n*/\n",
"/* }}} END OF GENERATED CODE */\n");
# Strings to hold text of files
push @vars, [$type, $name, $default, $cmt];
}
}
-
+
my ($stext) = "";
my ($text) = <<EOF;
struct Properties {
$source_buffer = process_file { return 0; } $source_file;
$source_buffer =~ s/\n+$//s;
-$header_buffer = process_file {
+$header_buffer = process_file {
if (/$props_start_pat/) {
# when we see something resembling properties, record it, and when we
# get to the end, process and emit the generated code
if( name == ExactInf::Name )
return new ExactInf (fg, opts);
#ifdef DAI_WITH_BP
- if( name == BP::Name )
+ if( name == BP::Name )
return new BP (fg, opts);
#endif
#ifdef DAI_WITH_MF
- if( name == MF::Name )
+ if( name == MF::Name )
return new MF (fg, opts);
#endif
#ifdef DAI_WITH_HAK
- if( name == HAK::Name )
+ if( name == HAK::Name )
return new HAK (fg, opts);
#endif
#ifdef DAI_WITH_LC
void BBP::RegenerateInds() {
// initialise _indices
// typedef std::vector<size_t> _ind_t;
- // std::vector<std::vector<_ind_t> > _indices;
+ // std::vector<std::vector<_ind_t> > _indices;
_indices.resize( _fg->nrVars() );
for( size_t i = 0; i < _fg->nrVars(); i++ ) {
_indices[i].clear();
if( I != J ) {
Prob prod( _fg->var(i).states(), 1.0 );
foreach( const Neighbor &K, _fg->nbV(i) )
- if( K.node != I && K.node != J.node )
+ if( K.node != I && K.node != J.node )
prod *= _bp_dual.msgM( i, K.iter );
_R[I][i.iter][J.iter] = prod;
}
um[x_I] *= _adj_m_unnorm_jI[ind[x_I]];
um *= 1 - props.damping;
_adj_psi_F[I] += um;
-
+
/* THE FOLLOWING WOULD BE SLIGHTLY SLOWER:
_adj_psi_F[I] += (Factor( _fg->factor(I).vars(), U(I, _j) ) * Factor( _fg->var(j), _adj_m_unnorm[j][_I] )).p() * (1.0 - props.damping);
*/
void BBP::getMsgMags( Real &s, Real &new_s ) {
- s = 0.0;
+ s = 0.0;
new_s = 0.0;
size_t e = 0;
for( size_t i = 0; i < _fg->nrVars(); i++ )
new_s += _new_adj_n[i][I.iter].sumAbs();
e++;
}
- s /= e;
+ s /= e;
new_s /= e;
}
do {
_iters++;
mag = getTotalMsgM();
- if( mag < tol )
+ if( mag < tol )
break;
for( size_t i = 0; i < _fg->nrVars(); i++ )
// psi_1_prb.normalize();
size_t I = bp_prb->fg().nbV(i)[0]; // use first factor in list of neighbors of i
bp_prb->fg().factor(I) *= Factor( bp_prb->fg().var(i), psi_1_prb );
-
+
// call 'init' on the perturbed variables
bp_prb->init( bp_prb->fg().var(i) );
-
+
// run copy to convergence
bp_prb->run();
// use to estimate adjoint for i
adj_est.push_back( (cf_prb - cf0) / h );
-
+
// free cloned InfAlg
delete bp_prb;
}
// add it to list of adjoints
adj_n_est.push_back((cf_prb-cf0)/h);
}
-
+
vector<double> adj_m_est;
// for each value xi
for(size_t xi=0; xi<bp_dual.var(i).states(); xi++) {
void initBBPCostFnAdj( BBP &bbp, const InfAlg &ia, bbp_cfn_t cfn_type, const vector<size_t> *stateP ) {
const FactorGraph &fg = ia.fg();
-
+
switch( (size_t)cfn_type ) {
case bbp_cfn_t::CFN_BETHE_ENT: {
vector<Prob> b1_adj;
double b = ia.beliefV(i)[state[i]];
switch( (size_t)cfn_type ) {
case bbp_cfn_t::CFN_GIBBS_B:
- delta[state[i]] = 1.0;
+ delta[state[i]] = 1.0;
break;
case bbp_cfn_t::CFN_GIBBS_B2:
- delta[state[i]] = b;
+ delta[state[i]] = b;
break;
case bbp_cfn_t::CFN_GIBBS_EXP:
- delta[state[i]] = exp(b);
+ delta[state[i]] = exp(b);
break;
- default:
+ default:
DAI_THROW(UNKNOWN_ENUM_VALUE);
}
b1_adj.push_back( delta );
double b = ia.beliefF(I)[x_I];
switch( (size_t)cfn_type ) {
case bbp_cfn_t::CFN_GIBBS_B_FACTOR:
- delta[x_I] = 1.0;
+ delta[x_I] = 1.0;
break;
case bbp_cfn_t::CFN_GIBBS_B2_FACTOR:
- delta[x_I] = b;
+ delta[x_I] = b;
break;
case bbp_cfn_t::CFN_GIBBS_EXP_FACTOR:
delta[x_I] = exp( b );
break;
- default:
+ default:
DAI_THROW(UNKNOWN_ENUM_VALUE);
}
b2_adj.push_back( delta );
}
bbp.init( bbp.getZeroAdjV(fg), b2_adj );
break;
- } default:
+ } default:
DAI_THROW(UNKNOWN_ENUM_VALUE);
}
}
}
}
break;
- } default:
+ } default:
DAI_THROWE(UNKNOWN_ENUM_VALUE, "Unknown cost function " + std::string(cfn_type));
}
return cf;
} // end of namespace dai
-/* {{{ GENERATED CODE: DO NOT EDIT. Created by
- ./scripts/regenerate-properties include/dai/bbp.h src/bbp.cpp
+/* {{{ GENERATED CODE: DO NOT EDIT. Created by
+ ./scripts/regenerate-properties include/dai/bbp.h src/bbp.cpp
*/
namespace dai {
}
if( foundCycle )
break;
- }
+ }
}
levels.push_back( newLevel );
nr_1 += newLevel.ind1.size();
assert( opts.hasKey("maxiter") );
assert( opts.hasKey("logdomain") );
assert( opts.hasKey("updates") );
-
+
props.tol = opts.getStringAs<double>("tol");
props.maxiter = opts.getStringAs<size_t>("maxiter");
props.logdomain = opts.getStringAs<bool>("logdomain");
_edge2lut.reserve( nrVars() );
for( size_t i = 0; i < nrVars(); ++i ) {
_edges.push_back( vector<EdgeProp>() );
- _edges[i].reserve( nbV(i).size() );
+ _edges[i].reserve( nbV(i).size() );
if( props.updates == Properties::UpdateType::SEQMAX ) {
_edge2lut.push_back( vector<LutType::iterator>() );
_edge2lut[i].reserve( nbV(i).size() );
message( i, I.iter ).fill( c );
newMessage( i, I.iter ).fill( c );
if( props.updates == Properties::UpdateType::SEQMAX )
- updateResidual( i, I.iter, 0.0 );
+ updateResidual( i, I.iter, 0.0 );
}
}
}
Factor Fprod( factor(I) );
Prob &prod = Fprod.p();
- if( props.logdomain )
+ if( props.logdomain )
prod.takeLog();
// Calculate product of incoming messages and factor I
foreach( const Neighbor &j, nbF(I) )
if( j != i ) { // for all j in I \ i
// prod_j will be the product of messages coming into j
- Prob prod_j( var(j).states(), props.logdomain ? 0.0 : 1.0 );
+ Prob prod_j( var(j).states(), props.logdomain ? 0.0 : 1.0 );
foreach( const Neighbor &J, nbV(j) )
- if( J != I ) { // for all J in nb(j) \ I
+ if( J != I ) { // for all J in nb(j) \ I
if( props.logdomain )
prod_j += message( j, J.iter );
else
Prob marg;
if( !DAI_BP_FAST ) {
/* UNOPTIMIZED (SIMPLE TO READ, BUT SLOW) VERSION */
- if( props.inference == Properties::InfType::SUMPROD )
+ if( props.inference == Properties::InfType::SUMPROD )
marg = Fprod.marginal( var(i) ).p();
else
marg = Fprod.maxMarginal( var(i) ).p();
marg = Prob( var(i).states(), 0.0 );
// ind is the precalculated IndexFor(i,I) i.e. to x_I == k corresponds x_i == ind[k]
const ind_t ind = index(i,_I);
- if( props.inference == Properties::InfType::SUMPROD )
+ if( props.inference == Properties::InfType::SUMPROD )
for( size_t r = 0; r < prod.size(); ++r )
marg[ind[r]] += prod[r];
else
for( size_t r = 0; r < prod.size(); ++r )
- if( prod[r] > marg[ind[r]] )
+ if( prod[r] > marg[ind[r]] )
marg[ind[r]] = prod[r];
marg.normalize();
}
double tic = toc();
Diffs diffs(nrVars(), 1.0);
-
+
vector<Edge> update_seq;
vector<Factor> old_beliefs;
}
}
} else if( props.updates == Properties::UpdateType::PARALL ) {
- // Parallel updates
+ // Parallel updates
for( size_t i = 0; i < nrVars(); ++i )
foreach( const Neighbor &I, nbV(i) )
calcNewMessage( i, I.iter );
// Sequential updates
if( props.updates == Properties::UpdateType::SEQRND )
random_shuffle( update_seq.begin(), update_seq.end() );
-
+
foreach( const Edge &e, update_seq ) {
calcNewMessage( e.first, e.second );
updateMessage( e.first, e.second );
void BP::calcBeliefV( size_t i, Prob &p ) const {
- p = Prob( var(i).states(), props.logdomain ? 0.0 : 1.0 );
+ p = Prob( var(i).states(), props.logdomain ? 0.0 : 1.0 );
foreach( const Neighbor &I, nbV(i) )
if( props.logdomain )
p += newMessage( i, I.iter );
Factor Fprod( factor( I ) );
Prob &prod = Fprod.p();
- if( props.logdomain )
+ if( props.logdomain )
prod.takeLog();
foreach( const Neighbor &j, nbF(I) ) {
// prod_j will be the product of messages coming into j
Prob prod_j( var(j).states(), props.logdomain ? 0.0 : 1.0 );
foreach( const Neighbor &J, nbV(j) )
- if( J != I ) { // for all J in nb(j) \ I
+ if( J != I ) { // for all J in nb(j) \ I
if( props.logdomain )
prod_j += newMessage( j, J.iter );
else
}
-string BP::identify() const {
+string BP::identify() const {
return string(Name) + printProperties();
}
void BP::updateResidual( size_t i, size_t _I, double r ) {
- EdgeProp* pEdge = &_edges[i][_I];
- pEdge->residual = r;
-
- // rearrange look-up table (delete and reinsert new key)
- _lut.erase( _edge2lut[i][_I] );
- _edge2lut[i][_I] = _lut.insert( make_pair( r, make_pair(i, _I) ) );
+ EdgeProp* pEdge = &_edges[i][_I];
+ pEdge->residual = r;
+
+ // rearrange look-up table (delete and reinsert new key)
+ _lut.erase( _edge2lut[i][_I] );
+ _edge2lut[i][_I] = _lut.insert( make_pair( r, make_pair(i, _I) ) );
}
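The erase-and-reinsert pattern in `BP::updateResidual` is what makes maximum-residual BP fast (the "reversed ordered set instead of linear search" change from the ChangeLog). A minimal sketch of the same idea, using plain `std::multimap` and our own class names rather than libDAI's `_lut`/`_edge2lut` members:

```cpp
#include <cstddef>
#include <map>
#include <utility>
#include <vector>

// Each edge (i,_I) stores an iterator into an ordered multimap keyed by its
// residual. Updating a residual (erase + reinsert) and finding the edge with
// the largest residual are then both O(log n) instead of a linear scan.
struct ResidualTable {
    typedef std::multimap<double, std::pair<std::size_t, std::size_t> > lut_t;
    lut_t lut;
    std::vector<std::vector<lut_t::iterator> > edge2lut;

    // register nrNbs edges per variable, all with residual 0.0
    void init( std::size_t nrVars, std::size_t nrNbs ) {
        edge2lut.resize( nrVars );
        for( std::size_t i = 0; i < nrVars; ++i )
            for( std::size_t I = 0; I < nrNbs; ++I )
                edge2lut[i].push_back(
                    lut.insert( std::make_pair( 0.0, std::make_pair( i, I ) ) ) );
    }

    // mirror of updateResidual: delete the old key, reinsert the new one
    void update( std::size_t i, std::size_t I, double r ) {
        lut.erase( edge2lut[i][I] );
        edge2lut[i][I] = lut.insert( std::make_pair( r, std::make_pair( i, I ) ) );
    }

    // edge with maximum residual = last element of the ordered multimap
    std::pair<std::size_t, std::size_t> argmax() const {
        return lut.rbegin()->second;
    }
};
```

`std::multimap` iterators stay valid under insertion and erasure of other elements, which is what makes storing them per edge safe.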
if( visitedVars[i] )
continue;
visitedVars[i] = true;
-
+
// Maximise with respect to variable i
Prob prod;
calcBeliefV( i, prod );
maximum[i] = max_element( prod.begin(), prod.end() ) - prod.begin();
-
+
foreach( const Neighbor &I, nbV(i) )
- if( !visitedFactors[I] )
+ if( !visitedFactors[I] )
scheduledFactors.push(I);
while( !scheduledFactors.empty() ){
if( visitedFactors[I] )
continue;
visitedFactors[I] = true;
-
+
// Evaluate if some neighboring variables still need to be fixed; if not, we're done
bool allDetermined = true;
- foreach( const Neighbor &j, nbF(I) )
+ foreach( const Neighbor &j, nbF(I) )
if( !visitedVars[j.node] ) {
allDetermined = false;
break;
}
if( allDetermined )
continue;
-
+
// Calculate product of incoming messages on factor I
Prob prod2;
calcBeliefF( I, prod2 );
maxProb = prod2[s];
}
}
-
+
// Decode the argmax
foreach( const Neighbor &j, nbF(I) ) {
if( !visitedVars[j.node] ) {
visitedVars[j.node] = true;
maximum[j.node] = maxState( var(j.node) );
foreach( const Neighbor &J, nbV(j) )
- if( !visitedFactors[J] )
+ if( !visitedFactors[J] )
scheduledFactors.push(J);
}
}
IndexFor ind( fg().var(i), fg().factor(I).vars() );
for( size_t x = 0; ind >= 0; x++, ++ind )
marg[ind] += prod[x];
-
+
_msgs.Zm[i][_I] = marg.normalize();
_msgs.m[i][_I] = marg;
}
void CBP::setBeliefs( const std::vector<Factor> &bs, double logZ ) {
size_t i = 0;
- _beliefsV.clear();
+ _beliefsV.clear();
_beliefsV.reserve( nrVars() );
- _beliefsF.clear();
+ _beliefsF.clear();
_beliefsF.reserve( nrFactors() );
- for( i = 0; i < nrVars(); i++ )
+ for( i = 0; i < nrVars(); i++ )
_beliefsV.push_back( bs[i] );
- for( ; i < nrVars() + nrFactors(); i++ )
+ for( ; i < nrVars() + nrFactors(); i++ )
_beliefsF.push_back( bs[i] );
_logZ = logZ;
}
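`CBP::setBeliefs` assumes a fixed layout of the combined belief vector: the `nrVars()` variable beliefs come first, followed by the `nrFactors()` factor beliefs. A sketch of that split with plain `int` stand-ins for libDAI `Factor` objects (helper name is ours):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Split a combined belief vector bs into variable beliefs (first nrVars
// entries) and factor beliefs (remaining nrFactors entries), mirroring the
// two loops in CBP::setBeliefs.
void splitBeliefs( const std::vector<int> &bs,
                   std::size_t nrVars, std::size_t nrFactors,
                   std::vector<int> &beliefsV, std::vector<int> &beliefsF ) {
    assert( bs.size() == nrVars + nrFactors );
    beliefsV.assign( bs.begin(), bs.begin() + nrVars );
    beliefsF.assign( bs.begin() + nrVars, bs.end() );
}
```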
double CBP::run() {
size_t seed = props.rand_seed;
- if( seed > 0)
+ if( seed > 0)
rnd_seed( seed );
InfAlg *bp = getInfAlg();
choose_count++;
if( props.clamp_outfile.length() > 0 )
*_clamp_ofstream << choose_count << "\t" << clamped_vars_list.size() << "\t" << i << "\t" << xis[0] << endl;
-
+
if( clampingVar )
- foreach( size_t xi, xis )
+ foreach( size_t xi, xis )
assert(/*0<=xi &&*/ xi < var(i).states() );
else
- foreach( size_t xI, xis )
+ foreach( size_t xI, xis )
assert(/*0<=xI &&*/ xI < factor(i).states() );
// - otherwise, clamp and recurse, saving marginal estimates for each
// clamp setting. Afterwards, combine estimates.
vector<size_t> cmp_xis = complement( xis, clampingVar ? var(i).states() : factor(i).states() );
/// \todo could do this more efficiently with a nesting version of saveProbs/undoProbs
- Real lz;
+ Real lz;
vector<Factor> b;
InfAlg *bp_c = bp->clone();
if( clampingVar ) {
lz = bp_c->logZ();
b = bp_c->beliefs();
- Real cmp_lz;
+ Real cmp_lz;
vector<Factor> cmp_b;
InfAlg *cmp_bp_c = bp->clone();
if( clampingVar ) {
double p = unSoftMax( lz, cmp_lz );
Real bp__d = 0.0;
-
+
if( Recursion() == Properties::RecurseType::REC_BDIFF && recTol() > 0 ) {
vector<Factor> combined_b( mixBeliefs( p, b, cmp_b ) );
Real new_lz = logSumExp( lz,cmp_lz );
win_xk = bp->beliefV(k).p().argmax().first;
}
}
- assert( win_k >= 0 );
+ assert( win_k >= 0 );
assert( win_xk >= 0 );
- i = win_k;
+ i = win_k;
xis.resize( 1, win_xk );
DAI_IFVERB(2, endl<<"CHOOSE_MAXENT chose variable "<<i<<" state "<<xis[0]<<endl);
if( bp->beliefV(i).p()[xis[0]] < tiny ) {
win_xk = bp->beliefF(k).p().argmax().first;
}
}
- assert( win_k >= 0 );
+ assert( win_k >= 0 );
assert( win_xk >= 0 );
- i = win_k;
+ i = win_k;
xis.resize( 1, win_xk );
DAI_IFVERB(2, endl<<"CHOOSE_MAXENT chose factor "<<i<<" state "<<xis[0]<<endl);
if( bp->beliefF(i).p()[xis[0]] < tiny ) {
int win_k = -1, win_xk = -1;
for( size_t k = 0; k < nrVars(); k++ ) {
for( size_t xk = 0; xk < var(k).states(); xk++ ) {
- if( bp->beliefV(k)[xk] < tiny )
+ if( bp->beliefV(k)[xk] < tiny )
continue;
InfAlg *bp1 = bp->clone();
bp1->clamp( k, xk );
delete bp1;
}
}
- assert( win_k >= 0 );
+ assert( win_k >= 0 );
assert( win_xk >= 0 );
- i = win_k;
+ i = win_k;
xis.resize( 1, win_xk );
} else if( ChooseMethod() == Properties::ChooseMethodType::CHOOSE_BBP ) {
- Real mvo;
- if( !maxVarOut )
+ Real mvo;
+ if( !maxVarOut )
maxVarOut = &mvo;
bool clampingVar = (Clamping() == Properties::ClampType::CLAMP_VAR);
pair<size_t, size_t> cv = bbpFindClampVar( *bp, clampingVar, props.bbp_props, BBP_cost_function(), &mvo );
<< i << " state " << xi \
<< " (p=" << (clampingVar?bp->beliefV(i)[xi]:bp->beliefF(i)[xi]) \
<< ", entropy = " << (clampingVar?bp->beliefV(i):bp->beliefF(i)).entropy() \
- << ", maxVar = "<< mvo << ")"
+ << ", maxVar = "<< mvo << ")"
Prob b = ( clampingVar ? bp->beliefV(i).p() : bp->beliefF(i).p());
if( b[xi] < tiny ) {
cerr << "Warning, at level " << clamped_vars_list.size() << ", bbpFindClampVar found unlikely " << VAR_INFO << endl;
BBP bbp( &in_bp, bbp_props );
initBBPCostFnAdj( bbp, in_bp, cfn, NULL );
bbp.run();
-
+
// find and return the (variable,state) with the largest adj_psi_V
size_t argmax_var = 0;
size_t argmax_var_state = 0;
assert(/*0 <= argmax_var_state &&*/
argmax_var_state < in_bp.fg().factor(argmax_var).states() );
}
- if( maxVarOut )
+ if( maxVarOut )
*maxVarOut = max_var;
return make_pair( argmax_var, argmax_var_state );
}
} // end of namespace dai
-/* {{{ GENERATED CODE: DO NOT EDIT. Created by
- ./scripts/regenerate-properties include/dai/cbp.h src/cbp.cpp
+/* {{{ GENERATED CODE: DO NOT EDIT. Created by
+ ./scripts/regenerate-properties include/dai/cbp.h src/cbp.cpp
*/
namespace dai {
set<size_t> varindices;
for( size_t i = 0; i < vars.size(); ++i )
varindices.insert( i );
-
+
// Do variable elimination
while( !varindices.empty() ) {
set<size_t>::const_iterator lowest = varindices.end();
cl.eraseNonMaximal();
ClusterGraph result;
-
+
// Do variable elimination
for( vector<Var>::const_iterator n = ElimSeq.begin(); n != ElimSeq.end(); n++ ) {
size_t i = cl.findVar( *n );
*/
Factor calcMarginal( const InfAlg &obj, const VarSet &ns, bool reInit ) {
Factor Pns (ns);
-
+
InfAlg *clamped = obj.clone();
if( !reInit )
clamped->init();
// set clamping Factors to delta functions
for( VarSet::const_iterator n = ns.begin(); n != ns.end(); n++ )
clamped->clamp( varindices[*n], s(*n) );
-
+
// run DAIAlg, calc logZ, store in Pns
if( reInit )
clamped->init();
Pns[s] = 0;
else
Pns[s] = exp(logZ - logZ0); // subtract logZ0 to avoid very large numbers
-
+
// restore clamped factors
clamped->restoreFactors( ns );
}
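The `exp(logZ - logZ0)` line above is a standard stabilization: `exp(logZ)` alone can overflow a `double`, but the ratio stays moderate and the common factor `exp(logZ0)` cancels under normalization. A self-contained illustration with hypothetical log-partition values (not libDAI output):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Turn per-state log partition sums into a normalized distribution, shifting
// by a reference value logZ0 first so the exponentials stay small. The shift
// cancels exactly in the final division.
std::vector<double> statesFromLogZ( const std::vector<double> &logZ,
                                    double logZ0 ) {
    std::vector<double> p( logZ.size() );
    double sum = 0.0;
    for( std::size_t s = 0; s < logZ.size(); ++s ) {
        p[s] = std::exp( logZ[s] - logZ0 );   // safe: exponent is small
        sum += p[s];
    }
    for( std::size_t s = 0; s < p.size(); ++s )
        p[s] /= sum;                          // exp(logZ0) cancels here
    return p;
}
```

With `logZ0 = 800`, computing `exp(800)` directly would overflow, while the shifted version is exact.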
else
Z_xj = exp(logZ - logZ0); // subtract logZ0 to avoid very large numbers
- for( size_t k = 0; k < N; k++ )
+ for( size_t k = 0; k < N; k++ )
if( k != j ) {
Factor b_k = clamped->belief(vns[k]);
- for( size_t k_val = 0; k_val < vns[k].states(); k_val++ )
+ for( size_t k_val = 0; k_val < vns[k].states(); k_val++ )
if( vns[j].label() < vns[k].label() )
pairbeliefs[j * N + k][j_val + (k_val * vns[j].states())] = Z_xj * b_k[k_val];
else
clamped->restoreFactors( ns );
}
}
-
+
delete clamped;
// Calculate result by taking the geometric average
*/
Factor calcMarginal2ndO( const InfAlg & obj, const VarSet& ns, bool reInit ) {
// returns a probability distribution whose 1st order interactions
- // are unspecified, whose 2nd order interactions approximate those of
+ // are unspecified, whose 2nd order interactions approximate those of
// the marginal on ns, and whose higher order interactions are absent.
vector<Factor> pairbeliefs = calcPairBeliefs( obj, ns, reInit );
Factor Pns (ns);
for( size_t ij = 0; ij < pairbeliefs.size(); ij++ )
Pns *= pairbeliefs[ij];
-
+
return( Pns.normalized() );
}
Factor pairbelief( VarSet(*nj, *nk) );
// clamp Vars j and k to their possible values
- for( size_t j_val = 0; j_val < nj->states(); j_val++ )
+ for( size_t j_val = 0; j_val < nj->states(); j_val++ )
for( size_t k_val = 0; k_val < nk->states(); k_val++ ) {
// save unclamped factors connected to ns
clamped->backupFactors( ns );
// restore clamped factors
clamped->restoreFactors( ns );
}
-
+
result.push_back( pairbelief.normalized() );
}
}
-
+
delete clamped;
assert( result.size() == (ns.size() * (ns.size() - 1) / 2) );
}
-CondProbEstimation::CondProbEstimation( size_t target_dimension, const Prob &pseudocounts )
- : _target_dim(target_dimension), _stats(pseudocounts), _initial_stats(pseudocounts)
+CondProbEstimation::CondProbEstimation( size_t target_dimension, const Prob &pseudocounts )
+ : _target_dim(target_dimension), _stats(pseudocounts), _initial_stats(pseudocounts)
{
assert( !(_stats.size() % _target_dim) );
}
labels.push_back( varorder[i].label() );
outVS |= varorder[i];
}
-
+
// Construct the sigma array for the permutation object
std::vector<size_t> sigma;
sigma.reserve( dims.size() );
for( VarSet::iterator set_iterator = outVS.begin(); sigma.size() < dims.size(); ++set_iterator )
sigma.push_back( find(labels.begin(), labels.end(), set_iterator->label()) - labels.begin() );
-
+
return Permute( dims, sigma );
}
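The sigma construction above maps the canonical (sorted) variable order back to a user-specified order. The same idea with plain integer labels standing in for libDAI `Var` objects (function name is ours):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// For each label in canonical (sorted) order, sigma records that label's
// position in the user-specified order -- exactly what the find() loop over
// the VarSet does above.
std::vector<std::size_t> makeSigma( const std::vector<int> &userOrder ) {
    std::vector<int> sorted( userOrder );
    std::sort( sorted.begin(), sorted.end() );   // canonical VarSet order
    std::vector<std::size_t> sigma;
    sigma.reserve( userOrder.size() );
    for( std::size_t k = 0; k < sorted.size(); ++k )
        sigma.push_back(
            std::find( userOrder.begin(), userOrder.end(), sorted[k] )
            - userOrder.begin() );
    return sigma;
}
```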
SharedParameters::SharedParameters( std::istream &is, const FactorGraph &fg_varlookup )
- : _varsets(), _perms(), _varorders(), _estimation(NULL), _deleteEstimation(true)
+ : _varsets(), _perms(), _varorders(), _estimation(NULL), _deleteEstimation(true)
{
// Read the desired parameter estimation method from the stream
std::string est_method;
labelparse >> label;
VarSet::const_iterator vsi = vs.begin();
for( ; vsi != vs.end(); ++vsi )
- if( vsi->label() == label )
+ if( vsi->label() == label )
break;
if( vsi == vs.end() )
DAI_THROW(INVALID_EMALG_FILE);
}
-SharedParameters::SharedParameters( const SharedParameters &sp )
+SharedParameters::SharedParameters( const SharedParameters &sp )
: _varsets(sp._varsets), _perms(sp._perms), _varorders(sp._varorders), _estimation(sp._estimation), _deleteEstimation(sp._deleteEstimation)
{
// If sp owns its _estimation object, we should clone it instead
for( std::map< FactorIndex, Permute >::iterator i = _perms.begin(); i != _perms.end(); ++i ) {
Permute &perm = i->second;
VarSet &vs = _varsets[i->first];
-
+
Factor b = alg.belief(vs);
Prob p( b.states(), 0.0 );
for( size_t entry = 0; entry < b.states(); ++entry )
for( std::map<FactorIndex, Permute>::iterator i = _perms.begin(); i != _perms.end(); ++i ) {
Permute &perm = i->second;
VarSet &vs = _varsets[i->first];
-
+
Factor f( vs, 0.0 );
for( size_t entry = 0; entry < f.states(); ++entry )
f[perm.convert_linear_index(entry)] = p[entry];
_msteps.reserve(num_msteps);
for( size_t i = 0; i < num_msteps; ++i )
_msteps.push_back( MaximizationStep( msteps_file, estep.fg() ) );
-}
+}
void EMAlg::setTermConditions( const PropertySet &p ) {
for( std::map<Var, size_t>::const_iterator i = _obs.begin(); i != _obs.end(); ++i )
alg.clamp( alg.fg().findVar(i->first), i->second );
}
-
+
void Evidence::addEvidenceTabFile( std::istream &is, FactorGraph &fg ) {
std::map<std::string, Var> varMap;
void Evidence::addEvidenceTabFile( std::istream &is, std::map<std::string, Var> &varMap ) {
std::string line;
getline( is, line );
-
+
// Parse header
std::vector<std::string> header_fields;
tokenizeString( line, header_fields );
std::vector<std::string>::const_iterator p_field = header_fields.begin();
- if( p_field == header_fields.end() )
+ if( p_field == header_fields.end() )
DAI_THROW(INVALID_EVIDENCE_FILE);
std::vector<Var> vars;
DAI_THROW(INVALID_EVIDENCE_FILE);
vars.push_back( elem->second );
}
-
+
// Read samples
while( getline(is, line) ) {
std::vector<std::string> fields;
tokenizeString( line, fields );
- if( fields.size() != vars.size() )
+ if( fields.size() != vars.size() )
DAI_THROW(INVALID_EVIDENCE_FILE);
-
+
Observation sampleData;
for( size_t i = 0; i < vars.size(); ++i ) {
if( fields[i].size() > 0 ) { // skip if missing observation
void ExactInf::setProperties( const PropertySet &opts ) {
assert( opts.hasKey("verbose") );
-
+
props.verbose = opts.getStringAs<size_t>("verbose");
}
}
-string ExactInf::identify() const {
+string ExactInf::identify() const {
return string(Name) + printProperties();
}
"Invalid Expectation-Maximization file",
"Unrecognized parameter estimation method",
"Requested object not found"
- };
+ };
}
void FactorGraph::constructGraph( size_t nrEdges ) {
// create a mapping for indices
hash_map<size_t, size_t> hashmap;
-
+
for( size_t i = 0; i < vars().size(); i++ )
hashmap[var(i).label()] = i;
-
+
// create edge list
vector<Edge> edges;
edges.reserve( nrEdges );
vector<Factor> facs;
size_t nr_Factors;
string line;
-
+
while( (is.peek()) == '#' )
getline(is,line);
is >> nr_Factors;
I_vars |= Var(labels[mi], dims[mi]);
}
facs.push_back( Factor( I_vars, 0.0 ) );
-
+
// calculate permutation sigma (internally, members are sorted)
vector<size_t> sigma(nr_members,0);
VarSet::iterator j = I_vars.begin();
// calculate multindices
Permute permindex( dims, sigma );
-
+
// read values
size_t nr_nonzeros;
while( (is.peek()) == '#' )
getline(is,line);
is >> nr_nonzeros;
- if( verbose >= 3 )
+ if( verbose >= 3 )
cerr << " nonzeroes: " << nr_nonzeros << endl;
for( size_t k = 0; k < nr_nonzeros; k++ ) {
size_t li;
VarSet FactorGraph::Delta( const VarSet &ns ) const {
VarSet result;
- for( VarSet::const_iterator n = ns.begin(); n != ns.end(); n++ )
+ for( VarSet::const_iterator n = ns.begin(); n != ns.end(); n++ )
result |= Delta(findVar(*n));
return result;
}
vector<VarSet> FactorGraph::Cliques() const {
vector<VarSet> result;
-
+
for( size_t I = 0; I < nrFactors(); I++ ) {
bool maximal = true;
for( size_t J = 0; (J < nrFactors()) && maximal; J++ )
if( (factor(J).vars() >> factor(I).vars()) && (factor(J).vars() != factor(I).vars()) )
maximal = false;
-
+
if( maximal )
result.push_back( factor(I).vars() );
}
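The maximality test in `FactorGraph::Cliques` keeps a factor's variable set only if no other factor's set strictly contains it. A sketch with integer-label sets in place of `VarSet` (the `>>` superset operator becomes `std::includes`; names are ours):

```cpp
#include <algorithm>
#include <cstddef>
#include <set>
#include <vector>

// A cluster cl[I] is maximal iff no other cluster cl[J] is a strict superset
// of it; equal clusters do not disqualify each other, matching the
// vars() != vars() check above.
bool isMaximal( std::size_t I, const std::vector<std::set<int> > &cl ) {
    for( std::size_t J = 0; J < cl.size(); ++J )
        if( J != I && cl[J] != cl[I] &&
            std::includes( cl[J].begin(), cl[J].end(),
                           cl[I].begin(), cl[I].end() ) )
            return false;
    return true;
}
```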
mask[x] = 1.0;
map<size_t, Factor> newFacs;
- foreach( const BipartiteGraph::Neighbor &I, nbV(i) )
+ foreach( const BipartiteGraph::Neighbor &I, nbV(i) )
newFacs[I] = factor(I) * mask;
setFactors( newFacs, backup );
}
map<size_t, Factor> newFacs;
- foreach( const BipartiteGraph::Neighbor &I, nbV(i) )
+ foreach( const BipartiteGraph::Neighbor &I, nbV(i) )
newFacs[I] = factor(I) * mask_n;
setFactors( newFacs, backup );
}
size_t st = factor(I).states();
Factor newF( factor(I).vars(), 0.0 );
- foreach( size_t i, is ) {
- assert( i <= st );
+ foreach( size_t i, is ) {
+ assert( i < st );
newF[i] = factor(I)[i];
}
_var_counts.reserve( nrVars() );
for( size_t i = 0; i < nrVars(); i++ )
_var_counts.push_back( _count_t( var(i).states(), 0 ) );
-
+
_factor_counts.clear();
_factor_counts.reserve( nrFactors() );
for( size_t I = 0; I < nrFactors(); I++ )
cerr << endl;
double tic = toc();
-
+
randomizeState();
for( size_t iter = 0; iter < props.iters; iter++ ) {
cerr << "counts for variable " << var(i) << ": " << Prob( _var_counts[i].begin(), _var_counts[i].end() ) << endl;
}
}
-
+
if( props.verbose >= 3 )
cerr << Name << "::run: ran " << props.iters << " passes (" << toc() - tic << " clocks)." << endl;
assert( opts.hasKey("verbose") );
assert( opts.hasKey("doubleloop") );
assert( opts.hasKey("clusters") );
-
+
props.tol = opts.getStringAs<double>("tol");
props.maxiter = opts.getStringAs<size_t>("maxiter");
props.verbose = opts.getStringAs<size_t>("verbose");
_Qb.reserve(nrIRs());
for( size_t beta = 0; beta < nrIRs(); beta++ )
_Qb.push_back( Factor( IR(beta) ) );
-
+
// Create messages
_muab.clear();
_muab.reserve( nrORs() );
cl = fg.Cliques();
} else if( props.clusters == Properties::ClustersType::DELTA ) {
for( size_t i = 0; i < fg.nrVars(); i++ )
- cl.push_back(fg.Delta(i));
+ cl.push_back(fg.Delta(i));
} else if( props.clusters == Properties::ClustersType::LOOP ) {
cl = fg.Cliques();
set<VarSet> scl;
}
-string HAK::identify() const {
+string HAK::identify() const {
return string(Name) + printProperties();
}
*
* In some cases, the muab's can have very large entries because the muba's have very
* small entries. This may cause NANs later on (e.g., multiplying large quantities may
- * result in +inf; normalization then tries to calculate inf / inf which is NAN).
+ * result in +inf; normalization then tries to calculate inf / inf which is NAN).
* A fix of this problem would consist in normalizing the messages muab.
* However, it is not obvious whether this is a real solution, because it has a
* negative performance impact and the NaNs seem to be a symptom of a fundamental
* numerical instability.
*/
- muab(alpha,_beta).normalize();
+ muab(alpha,_beta).normalize();
}
Factor Qb_new;
size_t _beta = alpha.dual;
muba(alpha,_beta) = _Qb[beta] / muab(alpha,_beta);
- /* TODO: INVESTIGATE WHETHER THIS HACK (INVENTED BY KEES) TO PREVENT NANS MAKES SENSE
+ /* TODO: INVESTIGATE WHETHER THIS HACK (INVENTED BY KEES) TO PREVENT NANS MAKES SENSE
*
* muba(beta,*alpha).makePositive(1e-100);
*
void JTree::setProperties( const PropertySet &opts ) {
assert( opts.hasKey("verbose") );
assert( opts.hasKey("updates") );
-
+
props.verbose = opts.getStringAs<size_t>("verbose");
props.updates = opts.getStringAs<Properties::UpdateType>("updates");
}
JTree::JTree( const FactorGraph &fg, const PropertySet &opts, bool automatic ) : DAIAlgRG(fg), _mes(), _logZ(), RTree(), Qa(), Qb(), props() {
setProperties( opts );
- if( !isConnected() )
- DAI_THROW(FACTORGRAPH_NOT_CONNECTED);
+ if( !isConnected() )
+ DAI_THROW(FACTORGRAPH_NOT_CONNECTED);
if( automatic ) {
// Create ClusterGraph which contains factors as clusters
void JTree::GenerateJT( const std::vector<VarSet> &Cliques ) {
- // Construct a weighted graph (each edge is weighted with the cardinality
+ // Construct a weighted graph (each edge is weighted with the cardinality
// of the intersection of the nodes, where the nodes are the elements of
// Cliques).
WeightedGraph<int> JuncGraph;
for( size_t i = 0; i < Cliques.size(); i++ )
for( size_t j = i+1; j < Cliques.size(); j++ ) {
size_t w = (Cliques[i] & Cliques[j]).size();
- if( w )
+ if( w )
JuncGraph[UEdge(i,j)] = w;
}
-
+
// Construct maximal spanning tree using Prim's algorithm
RTree = MaxSpanningTreePrims( JuncGraph );
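The junction-graph weight `(Cliques[i] & Cliques[j]).size()` is just the cardinality of the clique intersection. A standalone equivalent with sorted integer-label sets standing in for `VarSet` (helper name is ours):

```cpp
#include <algorithm>
#include <cstddef>
#include <iterator>
#include <set>
#include <vector>

// Size of the intersection of two cliques; a maximum spanning tree over
// these weights yields a junction tree satisfying the running intersection
// property.
std::size_t intersectionSize( const std::set<int> &a, const std::set<int> &b ) {
    std::vector<int> tmp;
    std::set_intersection( a.begin(), a.end(), b.begin(), b.end(),
                           std::back_inserter( tmp ) );
    return tmp.size();
}
```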
Qb.clear();
Qb.reserve( nrIRs() );
- for( size_t beta = 0; beta < nrIRs(); beta++ )
+ for( size_t beta = 0; beta < nrIRs(); beta++ )
Qb.push_back( Factor( IR(beta), 1.0 ) );
_mes.clear();
// IR(i) = separator OR(RTree[i].n1) && OR(RTree[i].n2)
Factor new_Qb = Qa[RTree[i].n2].marginal( IR( i ), false );
_logZ += log(new_Qb.normalize());
- Qa[RTree[i].n1] *= new_Qb / Qb[i];
+ Qa[RTree[i].n1] *= new_Qb / Qb[i];
Qb[i] = new_Qb;
}
if( RTree.empty() )
// Make outer region RTree[i].n2 consistent with outer region RTree[i].n1
// IR(i) = separator OR(RTree[i].n1) && OR(RTree[i].n2)
Factor new_Qb = Qa[RTree[i].n1].marginal( IR( i ) );
- Qa[RTree[i].n2] *= new_Qb / Qb[i];
+ Qa[RTree[i].n2] *= new_Qb / Qb[i];
Qb[i] = new_Qb;
}
size_t i = nbIR(e)[1].node; // = RTree[e].n2
size_t j = nbIR(e)[0].node; // = RTree[e].n1
size_t _e = nbIR(e)[0].dual;
-
+
Factor piet = OR(i);
foreach( const Neighbor &k, nbOR(i) )
- if( k != e )
+ if( k != e )
piet *= message( i, k.iter );
message( j, _e ) = piet.marginal( IR(e), false );
_logZ += log( message(j,_e).normalize() );
size_t i = nbIR(e)[0].node; // = RTree[e].n1
size_t j = nbIR(e)[1].node; // = RTree[e].n2
size_t _e = nbIR(e)[1].dual;
-
+
Factor piet = OR(i);
foreach( const Neighbor &k, nbOR(i) )
if( k != e )
}
// Only for logZ (and for belief)...
- for( size_t beta = 0; beta < nrIRs(); beta++ )
+ for( size_t beta = 0; beta < nrIRs(); beta++ )
Qb[beta] = Qa[nbIR(beta)[0].node].marginal( IR(beta) );
}
for( DEdgeVec::const_iterator e = RTree.begin(); e != RTree.end(); e++ )
oldTree.insert( UEdge(e->n1, e->n2) );
DEdgeVec newTree = GrowRootedTree( oldTree, maxalpha );
-
+
// identify subtree that contains variables of ns which are not in the new root
VarSet nsrem = ns / OR(maxalpha).vars();
set<DEdge> subTree;
// Find remaining variables (which are not in the new root)
VarSet nsrem = ns / OR(T.front().n1).vars();
Factor Pns (ns, 0.0);
-
+
// Save Qa and Qb on the subtree
map<size_t,Factor> Qa_old;
map<size_t,Factor> Qb_old;
if( !Qb_old.count( beta ) )
Qb_old[beta] = Qb[beta];
}
-
+
// For all states of nsrem
for( State s(nsrem); s.valid(); s++ ) {
// CollectEvidence
if( Qa[T[i].n2].vars() >> *n ) {
Factor piet( *n, 0.0 );
piet[s(*n)] = 1.0;
- Qa[T[i].n2] *= piet;
+ Qa[T[i].n2] *= piet;
}
Factor new_Qb = Qa[T[i].n2].marginal( IR( b[i] ), false );
logZ += log(new_Qb.normalize());
- Qa[T[i].n1] *= new_Qb / Qb[b[i]];
+ Qa[T[i].n1] *= new_Qb / Qb[b[i]];
Qb[b[i]] = new_Qb;
}
logZ += log(Qa[T[0].n1].normalize());
assert( opts.hasKey("verbose") );
assert( opts.hasKey("cavity") );
assert( opts.hasKey("updates") );
-
+
props.tol = opts.getStringAs<double>("tol");
props.maxiter = opts.getStringAs<size_t>("maxiter");
props.verbose = opts.getStringAs<size_t>("verbose");
// create pancakes
_pancakes.resize( nrVars() );
-
+
// create cavitydists
for( size_t i=0; i < nrVars(); i++ )
_cavitydists.push_back(Factor( delta(i) ));
}
-string LC::identify() const {
+string LC::identify() const {
return string(Name) + printProperties();
}
long LC::SetCavityDists( std::vector<Factor> &Q ) {
- if( props.verbose >= 1 )
+ if( props.verbose >= 1 )
cerr << Name << "::SetCavityDists: Setting initial cavity distributions" << endl;
if( Q.size() != nrVars() )
return -1;
for( size_t i = 0; i < nrVars(); i++ ) {
_pancakes[i] = _cavitydists[i];
-
+
foreach( const Neighbor &I, nbV(i) ) {
_pancakes[i] *= factor(I);
if( props.updates == Properties::UpdateType::SEQRND )
_pancakes[i] *= _phis[i][I.iter];
}
-
+
_pancakes[i].normalize();
CalcBelief(i);
// Sequential updates
if( props.updates == Properties::UpdateType::SEQRND )
random_shuffle( update_seq.begin(), update_seq.end() );
-
+
for( size_t t=0; t < nredges; t++ ) {
size_t i = update_seq[t].first;
size_t _I = update_seq[t].second;
/*=================================================================*
- * *
+ * *
* This is a MEX-file for MATLAB. *
- * *
+ * *
* [logZ,q,md,qv,qf] = dai(psi,method,opts); *
* or *
* [logZ,q,md,qv,qf,qmap] = dai(psi,method,opts); *
- * *
+ * *
*=================================================================*/
#define NR_OUT_OPT 3
-void mexFunction( int nlhs, mxArray *plhs[], int nrhs, const mxArray*prhs[] )
-{
+void mexFunction( int nlhs, mxArray *plhs[], int nrhs, const mxArray*prhs[] ) {
size_t buflen;
-
+
// Check for proper number of arguments
if( ((nrhs < NR_IN) || (nrhs > NR_IN + NR_IN_OPT)) || ((nlhs < NR_OUT) || (nlhs > NR_OUT + NR_OUT_OPT)) ) {
mexErrMsgTxt("Usage: [logZ,q,md,qv,qf,qmap] = dai(psi,method,opts)\n\n"
" qv = linear cell array containing all variable beliefs.\n"
" qf = linear cell array containing all factor beliefs.\n"
" qmap = (V,1) array containing the MAP labeling (only for BP).\n");
- }
-
+ }
+
char *method;
char *opts;
ss << opts;
PropertySet props;
ss >> props;
-
+
// Construct InfAlg object, init and run
InfAlg *obj = newInfAlg( method, fg, props );
obj->init();
// Hand over results to MATLAB
LOGZ_OUT = mxCreateDoubleMatrix(1,1,mxREAL);
*(mxGetPr(LOGZ_OUT)) = logZ;
-
+
Q_OUT = Factors2mx(obj->beliefs());
-
+
MD_OUT = mxCreateDoubleMatrix(1,1,mxREAL);
*(mxGetPr(MD_OUT)) = maxdiff;
qmap_p[n] = map_state[n];
}
delete obj;
-
+
return;
}
/*=================================================================*
- * *
+ * *
* This is a MEX-file for MATLAB. *
- * *
+ * *
* N = dai_potstrength(psi,i,j); *
- * *
+ * *
*=================================================================*/
#define NR_OUT 1
-void mexFunction( int nlhs, mxArray *plhs[], int nrhs, const mxArray*prhs[] )
-{
+void mexFunction( int nlhs, mxArray *plhs[], int nrhs, const mxArray*prhs[] ) {
long ilabel, jlabel;
// Check for proper number of arguments
- if ((nrhs != NR_IN) || (nlhs != NR_OUT)) {
+ if ((nrhs != NR_IN) || (nlhs != NR_OUT)) {
mexErrMsgTxt("Usage: N = dai_potstrength(psi,i,j);\n\n"
"\n"
"INPUT: psi = structure with a Member field and a P field, like a CPTAB.\n"
" j = label of another variable in psi.\n"
"\n"
"OUTPUT: N = strength of psi in direction i->j.\n");
- }
-
+ }
+
// Get input parameters
Factor psi = mx2Factor(PSI_IN);
ilabel = (long)*mxGetPr(I_IN);
// Calculate N(psi,i,j);
double N = psi.strength( i, j );
-
+
// Hand over result to MATLAB
N_OUT = mxCreateDoubleMatrix(1,1,mxREAL);
*(mxGetPr(N_OUT)) = N;
-
+
return;
}
/*=================================================================*
- * *
+ * *
* This is a MEX-file for MATLAB. *
- * *
+ * *
* [psi] = dai_readfg(filename); *
- * *
+ * *
*=================================================================*/
#define NR_OUT 1
-void mexFunction( int nlhs, mxArray *plhs[], int nrhs, const mxArray*prhs[] )
-{
+void mexFunction( int nlhs, mxArray *plhs[], int nrhs, const mxArray*prhs[] ) {
char *filename;
-
+
// Check for proper number of arguments
- if ((nrhs != NR_IN) || (nlhs != NR_OUT)) {
+ if ((nrhs != NR_IN) || (nlhs != NR_OUT)) {
mexErrMsgTxt("Usage: [psi] = dai_readfg(filename);\n\n"
"\n"
"INPUT: filename = filename of a .fg file\n"
"OUTPUT: psi = linear cell array containing the factors\n"
" (psi{i} is a structure with a Member field\n"
" and a P field, like a CPTAB).\n");
- }
-
+ }
+
// Get input parameters
size_t buflen;
buflen = mxGetN( FILENAME_IN ) + 1;
filename = (char *)mxCalloc( buflen, sizeof(char) );
mxGetString( FILENAME_IN, filename, buflen );
-
+
// Read factorgraph
FactorGraph fg;
vector<Factor> psi;
for( size_t I = 0; I < fg.nrFactors(); I++ )
psi.push_back(fg.factor(I));
-
+
// Hand over results to MATLAB
PSI_OUT = Factors2mx(psi);
-
+
return;
}
/*=================================================================*
- * *
+ * *
* This is a MEX-file for MATLAB. *
- * *
+ * *
* dai_writefg(psi, filename); *
- * *
+ * *
*=================================================================*/
#define NR_OUT 0
-void mexFunction( int nlhs, mxArray * /*plhs*/[], int nrhs, const mxArray*prhs[] )
-{
+void mexFunction( int nlhs, mxArray * /*plhs*/[], int nrhs, const mxArray*prhs[] ) {
char *filename;
-
// Check for proper number of arguments
if ((nrhs != NR_IN) || (nlhs != NR_OUT)) {
mexErrMsgTxt("Usage: dai_writefg(psi,filename);\n\n"
" (psi{i} should be a structure with a Member field\n"
" and a P field, like a CPTAB).\n"
" filename = filename of a .fg file\n");
- }
-
+ }
+
// Get input parameters
vector<Factor> factors = mx2Factors(PSI_IN,0);
-
+
size_t buflen;
buflen = mxGetN( FILENAME_IN ) + 1;
filename = (char *)mxCalloc( buflen, sizeof(char) );
mxGetString( FILENAME_IN, filename, buflen );
-
+
// Construct factorgraph
FactorGraph fg(factors);
size_t nr = Ps.size();
mxArray *psi = mxCreateCellMatrix(nr,1);
-
+
const char *fieldnames[2];
fieldnames[0] = "Member";
fieldnames[1] = "P";
-
+
size_t I_ind = 0;
for( vector<Factor>::const_iterator I = Ps.begin(); I != Ps.end(); I++, I_ind++ ) {
mxArray *Bi = mxCreateStructMatrix(1,1,2,fieldnames);
mxSetField(Bi,0,"Member",BiMember);
mxSetField(Bi,0,"P",BiP);
-
+
mxSetCell(psi, I_ind, Bi);
}
return( psi );
int n1 = mxGetM(psi);
int n2 = mxGetN(psi);
- if( n2 != 1 && n1 != 1 )
- mexErrMsgTxt("psi should be a Nx1 or 1xN cell matrix.");
+ if( n2 != 1 && n1 != 1 )
+ mexErrMsgTxt("psi should be an Nx1 or 1xN cell matrix.");
size_t nr_f = n1;
if( n1 == 1 )
nr_f = n2;
if( verbose >= 3 )
cerr << "reading factor " << cellind << ": " << endl;
mxArray *cell = mxGetCell(psi, cellind);
- mxArray *mx_member = mxGetField(cell, 0, "Member");
+ mxArray *mx_member = mxGetField(cell, 0, "Member");
size_t nr_mem = mxGetN(mx_member);
double *members = mxGetPr(mx_member);
const mwSize *dims = mxGetDimensions(mxGetField(cell,0,"P"));
/* Convert CPTAB-like struct to Factor */
Factor mx2Factor(const mxArray *psi) {
- mxArray *mx_member = mxGetField(psi, 0, "Member");
+ mxArray *mx_member = mxGetField(psi, 0, "Member");
size_t nr_mem = mxGetN(mx_member);
double *members = mxGetPr(mx_member);
const mwSize *dims = mxGetDimensions(mxGetField(psi,0,"P"));
}
-string MF::identify() const {
+string MF::identify() const {
return string(Name) + printProperties();
}
piet *= henk;
piet = piet.marginal(var(i), false);
piet = piet.exp();
- jan *= piet;
+ jan *= piet;
}
jan.normalize();
Real MF::logZ() const {
Real s = 0.0;
-
+
for(size_t i=0; i < nrVars(); i++ )
s -= beliefV(i).entropy();
for(size_t I=0; I < nrFactors(); I++ ) {
assert( opts.hasKey("verbose") );
assert( opts.hasKey("updates") );
assert( opts.hasKey("inits") );
-
+
props.tol = opts.getStringAs<double>("tol");
props.verbose = opts.getStringAs<size_t>("verbose");
props.updates = opts.getStringAs<Properties::UpdateType>("updates");
// init N, con, nb, tJ, theta
void MR::init(size_t Nin, double *_w, double *_th) {
size_t i,j;
-
+
N = Nin;
con.resize(N);
con[i]++;
}
}
-
+
theta.resize(N);
for(i=0; i<N; i++)
theta[i] = _th[i];
for(k=0; k<N; k++){
if(k!=cavity) {
for(size_t i=0; i<con[k]; i++)
- thbJsite[i] = tJ[k][i];
+ thbJsite[i] = tJ[k][i];
for(l=0; l<con[k]; l++){
- xinter = 1.;
+ xinter = 1.;
rinter = 0.;
if(k==s2) rinter += 1.;
for(j=0; j<con[k]; j++)
// compute the observables (i.e. magnetizations and responses)******
for(size_t i=0; i<concav; i++){
- rinter = 0.;
+ rinter = 0.;
xinter = 1.;
if(i!=i2)
- for(j=0; j<con[nb[cavity][i]]; j++){
+ for(j=0; j<con[nb[cavity][i]]; j++){
variab2 = tanh(xfield[kmax*nb[nb[cavity][i]][j]+kindex[nb[cavity][i]][j]]);
variab1 = tJ[nb[cavity][i]][j]*variab2;
rinter += tJ[nb[cavity][i]][j]*rfield[kmax*nb[nb[cavity][i]][j]+kindex[nb[cavity][i]][j]]*(1-variab2*variab2)/(1-variab1*variab1);
// A is a subset of nb[i]
//
// calculate T{(i)}_A as defined in Rizzo&Montanari e-print (2.17)
-
+
sub_nb _nbi_min_A(con[i]);
_nbi_min_A.set();
_nbi_min_A &= ~A;
// A is a subset of nb[i]
//
// calculate the product of all tJ[i][_j] for _j in A
-
+
sub_nb::size_type _j = A.find_first();
if( _j == sub_nb::npos )
return 1.0;
//
// calculate the moment of variables in A from M and cors, neglecting higher order cumulants,
// defined as the sum over all partitions of A into subsets of cardinality two at most of the
- // product of the cumulants (either first order, i.e. M, or second order, i.e. cors) of the
+ // product of the cumulants (either first order, i.e. M, or second order, i.e. cors) of the
// entries of the partitions
-
+
sub_nb::size_type _j = A.find_first();
if( _j == sub_nb::npos )
return 1.0;
}
} while (!B.none());
}
-
-void MR::solvemcav() {
+
+void MR::solvemcav() {
double sum_even, sum_odd;
double maxdev;
size_t maxruns = 1000;
}
}
-
-void MR::solveM() {
+
+void MR::solveM() {
for(size_t i=0; i<N; i++) {
if( props.updates == Properties::UpdateType::FULL ) {
// find indices in nb[i]
}
-string MR::identify() const {
+string MR::identify() const {
return string(Name) + printProperties();
}
supported = false;
break;
}
-
+
if( !supported )
return;
-
+
// check whether all interactions are pairwise or single
for( size_t I = 0; I < fg.nrFactors(); I++ )
if( fg.factor(I).vars().size() > 2 ) {
double *w = new double[Nin*Nin];
double *th = new double[Nin];
-
+
for( size_t i = 0; i < Nin; i++ ) {
th[i] = 0.0;
for( size_t j = 0; j < Nin; j++ )
VarSet::const_iterator jit = psi.vars().begin();
size_t j = fg.findVar( *(++jit) );
- w[i*Nin+j] += 0.25 * log(psi[3] * psi[0] / (psi[2] * psi[1]));
- w[j*Nin+i] += 0.25 * log(psi[3] * psi[0] / (psi[2] * psi[1]));
+ w[i*Nin+j] += 0.25 * log(psi[3] * psi[0] / (psi[2] * psi[1]));
+ w[j*Nin+i] += 0.25 * log(psi[3] * psi[0] / (psi[2] * psi[1]));
th[i] += 0.25 * log(psi[3] / psi[2] * psi[1] / psi[0]);
th[j] += 0.25 * log(psi[3] / psi[1] * psi[2] / psi[0]);
}
}
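The `0.25 * log(psi[3] * psi[0] / (psi[2] * psi[1]))` conversion recovers the Ising coupling from a pairwise binary factor. A toy sanity check of that formula (our own helper, hypothetical values, not MR's actual data flow):

```cpp
#include <cmath>

// For a binary factor psi(x_i,x_j) = exp(J * s_i * s_j) with spins
// s = 2x - 1, indexed linearly as psi[x_i + 2*x_j], the log-odds formula
// below recovers the coupling J exactly.
double couplingFromFactor( const double psi[4] ) {
    return 0.25 * std::log( psi[3] * psi[0] / ( psi[2] * psi[1] ) );
}
```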
-
+
init(Nin, w, th);
delete [] th;
assert( alpha != nrORs() );
}
RecomputeORs();
-
+
// create bipartite graph
G.construct( nrORs(), nrIRs(), edges.begin(), edges.end() );
// Retain only maximal clusters
ClusterGraph cg( cl );
cg.eraseNonMaximal();
-
+
// Create outer regions, giving them counting number 1.0
ORs.reserve( cg.size() );
foreach( const VarSet &ns, cg.clusters )
assert( alpha != nrORs() );
}
RecomputeORs();
-
+
// Create inner regions - first pass
set<VarSet> betas;
for( size_t alpha = 0; alpha < cg.clusters.size(); alpha++ )
IRs.reserve( betas.size() );
for( set<VarSet>::const_iterator beta = betas.begin(); beta != betas.end(); beta++ )
IRs.push_back( Region(*beta,0.0) );
-
+
// Create edges
vector<pair<size_t,size_t> > edges;
for( size_t beta = 0; beta < nrIRs(); beta++ ) {
edges.push_back( pair<size_t,size_t>(alpha,beta) );
}
}
-
+
// create bipartite graph
G.construct( nrORs(), nrIRs(), edges.begin(), edges.end() );
void RegionGraph::Calc_Counting_Numbers() {
// Calculates counting numbers of inner regions based upon counting numbers of outer regions
-
+
vector<vector<size_t> > ancestors(nrIRs());
boost::dynamic_bitset<> assigned(nrIRs());
for( size_t beta = 0; beta < nrIRs(); beta++ ) {
bool RegionGraph::Check_Counting_Numbers() {
// Checks whether the counting numbers satisfy the fundamental relation
-
+
bool all_valid = true;
for( vector<Var>::const_iterator n = vars().begin(); n != vars().end(); n++ ) {
double c_n = 0.0;
os << "Edges" << endl;
for( size_t alpha = 0; alpha < rg.nrORs(); alpha++ )
foreach( const RegionGraph::Neighbor &beta, rg.nbOR(alpha) )
- os << alpha << "->" << beta << endl;
+ os << alpha << "->" << beta << endl;
return(os);
}
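The "fundamental relation" that Check_Counting_Numbers verifies is that, for every variable, the counting numbers of all regions containing it sum to one. A minimal sketch with a hypothetical Region struct (not libDAI's Region class):

```cpp
#include <cassert>
#include <cmath>
#include <set>
#include <vector>

// Hypothetical region: a set of variable labels plus a counting number c.
struct Region {
    std::set<int> vars;
    double c;
};

// For every variable n, the counting numbers of all regions (outer and
// inner) containing n must sum to one.
bool checkCountingNumbers( const std::vector<Region> &regions, int nrVars ) {
    for( int n = 0; n < nrVars; n++ ) {
        double c_n = 0.0;
        for( size_t r = 0; r < regions.size(); r++ )
            if( regions[r].vars.count( n ) )
                c_n += regions[r].c;
        if( std::fabs( c_n - 1.0 ) > 1e-9 )
            return false;
    }
    return true;
}
```

For instance, in a Bethe-style construction with outer regions {0,1} and {1,2} (each c = 1), the shared variable 1 needs an inner region {1} with c = 1 - 2 = -1 for the relation to hold.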
assert( opts.hasKey("maxiter") );
assert( opts.hasKey("verbose") );
assert( opts.hasKey("type") );
-
+
props.tol = opts.getStringAs<double>("tol");
props.maxiter = opts.getStringAs<size_t>("maxiter");
props.verbose = opts.getStringAs<size_t>("verbose");
if( newalpha1 == _a.size() ) {
_Qa.push_back( Factor( jt_Qa[alpha1].vars(), 1.0 ) );
_a.push_back( alpha1 ); // save old index in index conversion table
- }
+ }
size_t newalpha2 = find(_a.begin(), _a.end(), alpha2) - _a.begin();
if( newalpha2 == _a.size() ) {
_Qa.push_back( Factor( jt_Qa[alpha2].vars(), 1.0 ) );
_a.push_back( alpha2 ); // save old index in index conversion table
}
-
+
_RTree.push_back( DEdge( newalpha1, newalpha2 ) );
_Qb.push_back( Factor( jt_Qb[beta].vars(), 1.0 ) );
_b.push_back( beta );
}
-void TreeEP::TreeEPSubTree::init() {
+void TreeEP::TreeEPSubTree::init() {
for( size_t alpha = 0; alpha < _Qa.size(); alpha++ )
_Qa[alpha].fill( 1.0 );
for( size_t beta = 0; beta < _Qb.size(); beta++ )
Qa[_a[alpha]].fill( 0.0 );
for( size_t beta = 0; beta < _Qb.size(); beta++ )
Qb[_b[beta]].fill( 0.0 );
-
+
// For all states of _nsrem
for( State s(_nsrem); s.valid(); s++ ) {
// Multiply root with slice of I
_Qa[_RTree[i].n2] *= delta;
}
Factor new_Qb = _Qa[_RTree[i].n2].marginal( _Qb[i].vars(), false );
- _Qa[_RTree[i].n1] *= new_Qb / _Qb[i];
+ _Qa[_RTree[i].n1] *= new_Qb / _Qb[i];
_Qb[i] = new_Qb;
}
// DistributeEvidence
for( size_t i = 0; i < _RTree.size(); i++ ) {
Factor new_Qb = _Qa[_RTree[i].n1].marginal( _Qb[i].vars(), false );
- _Qa[_RTree[i].n2] *= new_Qb / _Qb[i];
+ _Qa[_RTree[i].n2] *= new_Qb / _Qb[i];
_Qb[i] = new_Qb;
}
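The two loops above are the junction-tree collect/distribute pattern: each absorption computes new_Qb as the sender's marginal on the separator, multiplies the receiver by new_Qb / Qb, and stores new_Qb as the new separator belief. A toy version for two cliques Qroot(x0,x1) and Qleaf(x1,x2) over binary variables, tables indexed first-variable-fastest (an assumption mirroring, but not reproducing, libDAI's TFactor):

```cpp
#include <cassert>
#include <vector>

std::vector<double> margFirst( const std::vector<double> &t ) {   // keep fast index
    std::vector<double> r(2);
    r[0] = t[0] + t[2];
    r[1] = t[1] + t[3];
    return r;
}
std::vector<double> margSecond( const std::vector<double> &t ) {  // keep slow index
    std::vector<double> r(2);
    r[0] = t[0] + t[1];
    r[1] = t[2] + t[3];
    return r;
}
// CollectEvidence step: Qleaf sends its marginal on x1 toward the root.
void collect( std::vector<double> &Qroot, const std::vector<double> &Qleaf,
              std::vector<double> &Qb ) {
    std::vector<double> newQb = margFirst( Qleaf );   // x1 is fast in Qleaf
    for( int x1 = 0; x1 < 2; x1++ )
        for( int x0 = 0; x0 < 2; x0++ )
            Qroot[x0 + 2*x1] *= newQb[x1] / Qb[x1];
    Qb = newQb;
}
// DistributeEvidence step: the root sends its marginal on x1 back.
void distribute( const std::vector<double> &Qroot, std::vector<double> &Qleaf,
                 std::vector<double> &Qb ) {
    std::vector<double> newQb = margSecond( Qroot );  // x1 is slow in Qroot
    for( int x2 = 0; x2 < 2; x2++ )
        for( int x1 = 0; x1 < 2; x1++ )
            Qleaf[x1 + 2*x2] *= newQb[x1] / Qb[x1];
    Qb = newQb;
}
```

After one collect followed by one distribute (starting from Qb = 1), both clique tables hold the correct unnormalized joint marginals of the product Qroot*Qleaf.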
// mutual information between the nodes
// ALT: construct weighted graph with as weights an upper bound on the
// effective interaction strength between pairs of nodes
-
+
WeightedGraph<double> wg;
for( size_t i = 0; i < nrVars(); ++i ) {
Var v_i = var(i);
vector<VarSet> Cliques;
for( size_t i = 0; i < tree.size(); i++ )
Cliques.push_back( VarSet( var(tree[i].n1), var(tree[i].n2) ) );
-
- // Construct a weighted graph (each edge is weighted with the cardinality
+
+ // Construct a weighted graph (each edge is weighted with the cardinality
// of the intersection of the nodes, where the nodes are the elements of
// Cliques).
WeightedGraph<int> JuncGraph;
if( w )
JuncGraph[UEdge(i,j)] = w;
}
-
+
// Construct maximal spanning tree using Prim's algorithm
RTree = MaxSpanningTreePrims( JuncGraph );
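MaxSpanningTreePrims is libDAI's own routine; the underlying idea is ordinary Prim's algorithm with the comparison reversed, always growing the tree along the heaviest available edge. A hypothetical dense-matrix version (not libDAI's WeightedGraph API):

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Prim's algorithm for a *maximum* spanning tree on a symmetric
// adjacency matrix w; returns the tree as (parent, child) edge pairs.
std::vector<std::pair<int,int> >
maxSpanningTreePrim( const std::vector<std::vector<double> > &w ) {
    int n = (int)w.size();
    std::vector<bool> inTree(n, false);
    std::vector<double> best(n, -1e300);   // heaviest known edge into node
    std::vector<int> parent(n, -1);
    std::vector<std::pair<int,int> > edges;
    best[0] = 0.0;                         // start the tree at node 0
    for( int it = 0; it < n; it++ ) {
        int u = -1;
        for( int v = 0; v < n; v++ )       // pick the heaviest reachable node
            if( !inTree[v] && (u == -1 || best[v] > best[u]) )
                u = v;
        inTree[u] = true;
        if( parent[u] != -1 )
            edges.push_back( std::make_pair( parent[u], u ) );
        for( int v = 0; v < n; v++ )
            if( !inTree[v] && w[u][v] > best[v] ) {  // maximize, not minimize
                best[v] = w[u][v];
                parent[v] = u;
            }
    }
    return edges;
}
```

Only the two `>` comparisons differ from the standard minimum-spanning-tree formulation; the O(n^2) structure is unchanged.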
// Check counting numbers
Check_Counting_Numbers();
-
+
// Create messages and beliefs
Qa.clear();
Qa.reserve( nrORs() );
Qb.clear();
Qb.reserve( nrIRs() );
- for( size_t beta = 0; beta < nrIRs(); beta++ )
+ for( size_t beta = 0; beta < nrIRs(); beta++ )
Qb.push_back( Factor( IR(beta), 1.0 ) );
// DIFF with JTree::GenerateJT: no messages
-
+
// DIFF with JTree::GenerateJT:
// Create factor approximations
_Q.clear();
}
-string TreeEP::identify() const {
+string TreeEP::identify() const {
return string(Name) + printProperties();
}
// been reached or until the maximum belief difference is smaller than tolerance
for( _iters=0; _iters < props.maxiter && diffs.maxDiff() > props.tol; _iters++ ) {
for( size_t I = 0; I < nrFactors(); I++ )
- if( offtree(I) ) {
+ if( offtree(I) ) {
_Q[I].InvertAndMultiply( Qa, Qb );
_Q[I].HUGIN_with_I( Qa, Qb );
_Q[I].InvertAndMultiply( Qa, Qb );
for( size_t I = 0; I < nrFactors(); I++ )
if( offtree(I) )
s += (_Q.find(I))->second.logZ( Qa, Qb );
-
+
return s;
}
/// Uniform distribution with values between 0 and 1 (0 inclusive, 1 exclusive).
boost::uniform_real<> _uni_dist(0,1);
-/// Global uniform random random number
+/// Global uniform random number generator
boost::variate_generator<_rnd_gen_type&, boost::uniform_real<> > _uni_rnd(_rnd_gen, _uni_dist);
/// Normal distribution with mean 0 and standard deviation 1.
// Start with the root
treeV.insert( Root );
-
+
// Keep adding edges until done
while( !(Gr.empty()) )
for( Graph::iterator e = Gr.begin(); e != Gr.end(); ) {
// from an almost uniform probability distribution over these graphs
// (which becomes uniform in the limit that d is small and N goes
// to infinity).
-
+
assert( (N * d) % 2 == 0 );
bool ready = false;
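The generation scheme the comment above describes is the pairing ("configuration") model: give every node d stubs, pair the stubs at random, and retry whenever the pairing produces a self-loop or a parallel edge. A hypothetical standalone version (not the libDAI routine, and using C++11 `<random>`):

```cpp
#include <algorithm>
#include <cassert>
#include <random>
#include <set>
#include <utility>
#include <vector>

// Random d-regular simple graph on N nodes via rejection sampling.
// Requires N*d even; may loop long if no simple pairing is likely.
std::vector<std::pair<int,int> >
randomDRegular( int N, int d, std::mt19937 &rng ) {
    std::vector<int> stubs;
    for( int i = 0; i < N; i++ )
        for( int k = 0; k < d; k++ )
            stubs.push_back( i );
    for( ;; ) {                            // retry until the graph is simple
        std::shuffle( stubs.begin(), stubs.end(), rng );
        std::set<std::pair<int,int> > seen;
        std::vector<std::pair<int,int> > edges;
        bool ok = true;
        for( size_t m = 0; ok && m + 1 < stubs.size(); m += 2 ) {
            int a = stubs[m], b = stubs[m+1];
            if( a == b )
                ok = false;                // self-loop: reject this pairing
            else {
                std::pair<int,int> e( std::min(a,b), std::max(a,b) );
                if( !seen.insert( e ).second )
                    ok = false;            // parallel edge: reject
                else
                    edges.push_back( e );
            }
        }
        if( ok )
            return edges;
    }
}
```

Rejecting the whole pairing (rather than re-drawing single stubs) is what keeps the distribution over simple graphs close to uniform.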
// Read FactorGraph from the file specified by the first command line argument
FactorGraph fg;
fg.ReadFromFile(argv[1]);
-
+
// Set some constants
size_t verbose = 0;
double tol = 1.0e-9;
time += toc() - tic;
}
- ~TestDAI() {
+ ~TestDAI() {
if( obj != NULL )
delete obj;
}
string identify() const {
if( obj != NULL )
- return obj->identify();
+ return obj->identify();
else
return "NULL";
}
obj->init();
obj->run();
time += toc() - tic;
-
+
try {
logZ = obj->logZ();
has_logZ = true;
else
throw;
}
-
+
try {
iters = obj->Iterations();
has_iters = true;
else
throw;
}
-
+
q = allBeliefs();
};
}
err.push_back( dist( q[i], x[i], Prob::DISTTV ) );
}
- double maxErr() {
+ double maxErr() {
return( *max_element( err.begin(), err.end() ) );
}
-
- double avgErr() {
+
+ double avgErr() {
return( accumulate( err.begin(), err.end(), 0.0 ) / err.size() );
}
};
// repeat until method name == alias name ('looped'), or
// there is no longer an alias 'method'
}
-
+
// check whether name is valid
size_t n = 0;
for( ; strlen( DAINames[n] ) != 0; n++ )
if( m > 0 ) {
cout.setf( ios_base::scientific );
cout.precision( 3 );
-
+
double me = clipdouble( piet.maxErr(), 1e-9 );
cout << me << "\t";
-
+
double ae = clipdouble( piet.avgErr(), 1e-9 );
cout << ae << "\t";
-
+
if( piet.has_logZ ) {
cout.setf( ios::showpos );
double le = clipdouble( piet.logZ / logZ0 - 1.0, 1e-9 );
./testem 3var.fg 2var_data.tab 3var.em >> $TMPFILE1
./testem ../hoi1.fg hoi1_data.tab hoi1_share_f0_f2.em >> $TMPFILE1
./testem ../hoi1.fg hoi1_data.tab hoi1_share_f0_f1_f2.em >> $TMPFILE1
-diff -s $TMPFILE1 testem.out || exit 1
+diff -s $TMPFILE1 testem.out || exit 1
rm -f $TMPFILE1
int main( int argc, char** argv ) {
if( argc != 4 )
usage("Incorrect number of arguments.");
-
+
FactorGraph fg;
fg.ReadFromFile( argv[1] );
Factor BinaryFactor( const Var &n, double field ) {
assert( n.states() == 2 );
double buf[2];
- buf[0] = exp(-field);
+ buf[0] = exp(-field);
buf[1] = exp(field);
return Factor(n, &buf[0]);
}
void MakeHOIFG( size_t N, size_t M, size_t k, double sigma, FactorGraph &fg ) {
- vector<Var> vars;
- vector<Factor> factors;
+ vector<Var> vars;
+ vector<Factor> factors;
vars.reserve(N);
- for( size_t i = 0; i < N; i++ )
- vars.push_back(Var(i,2));
-
- for( size_t I = 0; I < M; I++ ) {
- VarSet vars;
- while( vars.size() < k ) {
- do {
- size_t newind = (size_t)(N * rnd_uniform());
- Var newvar = Var(newind, 2);
- if( !vars.contains( newvar ) ) {
- vars |= newvar;
- break;
- }
- } while( 1 );
- }
+ for( size_t i = 0; i < N; i++ )
+ vars.push_back(Var(i,2));
+
+ for( size_t I = 0; I < M; I++ ) {
+ VarSet vars;
+ while( vars.size() < k ) {
+ do {
+ size_t newind = (size_t)(N * rnd_uniform());
+ Var newvar = Var(newind, 2);
+ if( !vars.contains( newvar ) ) {
+ vars |= newvar;
+ break;
+ }
+ } while( 1 );
+ }
factors.push_back( RandomFactor( vars, sigma ) );
- }
+ }
fg = FactorGraph( factors.begin(), factors.end(), vars.begin(), vars.end(), factors.size(), vars.size() );
}
void Make3DPotts( size_t n1, size_t n2, size_t n3, size_t states, double beta, FactorGraph &fg ) {
vector<Var> vars;
vector<Factor> factors;
-
+
for( size_t i1 = 0; i1 < n1; i1++ )
for( size_t i2 = 0; i2 < n2; i2++ )
for( size_t i3 = 0; i3 < n3; i3++ ) {
WTh2FG( w, th, fg );
}
-
+
void MakeGridNonbinaryFG( bool periodic, size_t n, size_t states, double beta, FactorGraph &fg ) {
size_t N = n*n;
UEdgeVec g = RandomDRegularGraph( N, d );
for( size_t i = 0; i < g.size(); i++ )
w(g[i].n1, g[i].n2) = rnd_stdnormal() * sigma_w + mean_w;
-
+
for( size_t i = 0; i < N; i++ )
th[i] = rnd_stdnormal() * sigma_th + mean_th;
do {
prod = (prod * x) % p;
n++;
- } while( prod != 1 );
+ } while( prod != 1 );
return n;
}
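The do/while loop above computes the multiplicative order of x modulo p. Extracted as a hypothetical standalone function:

```cpp
#include <cassert>
#include <cstddef>

// Smallest n >= 1 with x^n == 1 (mod p). Assumes gcd(x, p) == 1;
// otherwise prod never reaches 1 and the loop does not terminate.
size_t multOrder( size_t x, size_t p ) {
    size_t prod = x % p;
    size_t n = 1;
    while( prod != 1 ) {
        prod = (prod * x) % p;
        n++;
    }
    return n;
}
```

For example, modulo 7 the powers of 2 are 2, 4, 1, so the order of 2 is 3, while 3 is a primitive root with order 6.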
cout << "interactions with <states> states and inverse temperature <beta>." << endl;
} else
cerr << "Unknown type (should be one of 'full', 'grid', 'grid_torus', 'dreg', 'loop', 'tree', 'hoi', 'ldpc_random', 'ldpc_group', 'ldpc_small', 'potts3d')" << endl;
-
+
if( type == FULL_TYPE || type == GRID_TYPE || type == GRID_TORUS_TYPE || type == DREG_TYPE || type == LOOP_TYPE || type == TREE_TYPE ) {
if( type == GRID_TYPE || type == GRID_TORUS_TYPE || type == LOOP_TYPE ) {
cout << "if <states> > 2: factor entries are exponents of Gaussians with mean 0 and standard deviation beta; otherwise," << endl;
cout << "singleton interactions are Gaussian with mean <mean_th> and standard" << endl;
cout << "deviation <sigma_th>; pairwise interactions are Gaussian with mean" << endl;
cout << "<mean_w> and standard deviation <sigma_w>." << endl;
- }
+ }
}
cout << endl << desc << endl;
return 1;
MakeGridNonbinaryFG( periodic, n, states, beta, fg );
else
MakeGridFG( periodic, n, mean_w, mean_th, sigma_w, sigma_th, fg );
-
+
cout << "# n = " << n << endl;
cout << "# N = " << N << endl;
-
+
if( states > 2 )
cout << "# beta = " << beta << endl;
else {
// Add likelihoods
for( size_t i = 0; i < N; i++ )
- factors.push_back( Factor(Var(i,2), likelihood + output[i]*2) );
+ factors.push_back( Factor(Var(i,2), likelihood + output[i]*2) );
// Construct Factor Graph
fg = FactorGraph( factors );
cout << "LCBP with full cavities needs " << cavsum_lcbp << " BP runs" << endl;
cout << "LCBP with only pairinteractions needs " << cavsum_lcbp2 << " BP runs" << endl;
cout << "Cavity sizes: ";
- for( map<size_t,size_t>::const_iterator it = cavsizes.begin(); it != cavsizes.end(); it++ )
+ for( map<size_t,size_t>::const_iterator it = cavsizes.begin(); it != cavsizes.end(); it++ )
cout << it->first << "(" << it->second << ") ";
cout << endl;
facsizes[Isize] = 1;
}
cout << "Factor sizes: ";
- for( map<size_t,size_t>::const_iterator it = facsizes.begin(); it != facsizes.end(); it++ )
+ for( map<size_t,size_t>::const_iterator it = facsizes.begin(); it != facsizes.end(); it++ )
cout << it->first << "(" << it->second << ") ";
cout << endl;