Contents

 This menu gives access to:

  • Adaptive Questionnaire
     
  • Interactive Inference  

  • Interactive Bayesian Updating
     
  • Batch Labeling
     
  • Batch Inference
     
  • Batch Most Probable Explanation Labeling
     
  • Batch Most Probable Explanation Inference
     
  • Batch Joint Probability
     
  • Batch Likelihood
     
  • Inference Type

The different batch analyses can be performed on a database loaded from a text file or from a JDBC database. Unlike the classic database association, this import can never modify the structure of the network, whatever the database is. However, if a database is already associated with the network, these batch analyses can also be performed on it.
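
As an illustration only, the sketch below shows what a batch labeling run over such a database amounts to: each record of the file provides evidence, the network computes one result per record, and the network itself is only read, never modified. The file format, the column names and the tiny two-node network are invented for this sketch and do not describe the tool's actual formats.

    import csv, io

    # Hypothetical two-node network Rain -> WetGrass, hard-coded for the sketch.
    P_RAIN = 0.2
    P_WET_GIVEN_RAIN = {True: 0.9, False: 0.15}

    def posterior_rain(wet_observed):
        """P(Rain | WetGrass = wet_observed), computed by enumeration."""
        like_rain = P_WET_GIVEN_RAIN[True] if wet_observed else 1 - P_WET_GIVEN_RAIN[True]
        like_dry = P_WET_GIVEN_RAIN[False] if wet_observed else 1 - P_WET_GIVEN_RAIN[False]
        num = P_RAIN * like_rain
        return num / (num + (1 - P_RAIN) * like_dry)

    # Stand-in for the text-file database: one evidence record per line.
    database = io.StringIO("id,WetGrass\n1,yes\n2,no\n3,yes\n")

    # Batch labeling: one label per record, the network structure is untouched.
    for row in csv.DictReader(database):
        p = posterior_rain(row["WetGrass"] == "yes")
        label = "Rain" if p >= 0.5 else "NoRain"
        print(row["id"], label, round(p, 3))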

Two inference methods are proposed: 

  • Exact inference by junction tree (default): This inference method builds a new structure (the junction tree) before performing the first inference. Depending on the size and connectivity of the network, this construction can be costly in time and memory. On the other hand, once the junction tree is built, each inference is performed very quickly. The tree has to be rebuilt after each modification of the network (structure or parameters).
     
  • Approximate inference by Likelihood Weighting: When the construction cost of the junction tree is prohibitive (not enough memory), approximate inference can be used instead. This method relies on the law of large numbers and uses network simulation to approximate the probabilities.

While the cost of exact inference is paid during the junction tree construction (only when switching from Modeling to Validation mode), approximate inference requires very little memory but incurs a time cost at each inference.
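
The contrast between the two methods can be pictured on a toy two-node network (the network, its probability tables and the query are invented for this sketch): exact inference enumerates the joint distribution once, while likelihood weighting draws many weighted samples whose average converges to the exact value by the law of large numbers.

    import random

    # Toy network A -> B, hard-coded for the sketch.
    P_A1 = 0.3                       # P(A = 1)
    P_B1_GIVEN_A = {1: 0.8, 0: 0.1}  # P(B = 1 | A)

    def exact_posterior_a1_given_b1():
        """Exact P(A=1 | B=1) by direct enumeration (the role the junction tree plays)."""
        joint_a1 = P_A1 * P_B1_GIVEN_A[1]
        joint_a0 = (1 - P_A1) * P_B1_GIVEN_A[0]
        return joint_a1 / (joint_a1 + joint_a0)

    def likelihood_weighting_a1_given_b1(n_samples=100_000, seed=0):
        """Approximate P(A=1 | B=1): sample the non-evidence variable A from its
        prior and weight each sample by the probability of the evidence B=1."""
        rng = random.Random(seed)
        weighted_a1 = total_weight = 0.0
        for _ in range(n_samples):
            a = 1 if rng.random() < P_A1 else 0
            w = P_B1_GIVEN_A[a]       # weight = likelihood of the evidence
            total_weight += w
            weighted_a1 += w * a
        return weighted_a1 / total_weight

    print("exact      :", exact_posterior_a1_given_b1())    # ~0.774
    print("approximate:", likelihood_weighting_a1_given_b1())

The more samples are drawn, the closer the approximate value gets to the exact one, which is why the time cost of this method is paid again at each inference.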

Some networks are too complex for exact inference to be performed on them. The junction tree may be too large to fit in memory, and the inference time can be extremely long. In this case, when the user asks to switch to inference mode, a dialog box is displayed proposing several options:

  • Using approximate inference avoids the memory size problem, but the exactness of the computation is lost, as well as some analyses that are designed to work only with exact inference.
     
  • A complexity reduction algorithm allows removing the least important arcs from the network. To do this, it uses the current database, or generates one according to the probability distributions, in order to compute the importance of each arc in the network. The least important arcs are removed until exact inference becomes feasible in memory and time (see the sketch after this list).
     
  • It is possible to go back to Modeling mode in order to modify the network structure by hand until it becomes usable.
     
  • It is possible to continue with exact inference without taking the warning into account.
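
The arc removal option can be pictured as follows. This is a minimal sketch only: the importance measure and the stopping rule actually used by the tool are not specified here, so the sketch ranks arcs by the mutual information of their endpoints, computed from the database records, and simply keeps a fixed number of arcs instead of testing whether exact inference has become feasible.

    import math
    from collections import Counter

    def mutual_information(records, x, y):
        """Empirical mutual information I(X;Y) between two columns of the database,
        used here as a stand-in for the 'importance' of the arc X -> Y."""
        n = len(records)
        px = Counter(r[x] for r in records)
        py = Counter(r[y] for r in records)
        pxy = Counter((r[x], r[y]) for r in records)
        mi = 0.0
        for (vx, vy), c in pxy.items():
            p = c / n
            mi += p * math.log(p / ((px[vx] / n) * (py[vy] / n)))
        return mi

    def reduce_complexity(arcs, records, max_arcs):
        """Drop the least important arcs until at most max_arcs remain
        (the real algorithm stops when exact inference fits in memory and time)."""
        ranked = sorted(arcs, key=lambda a: mutual_information(records, *a), reverse=True)
        return ranked[:max_arcs]

    # Invented database: 100 records over three binary variables.
    records = [{"A": i % 2, "B": (i % 2) ^ (i % 7 == 0), "C": i % 2} for i in range(100)]
    arcs = [("A", "B"), ("A", "C"), ("B", "C")]
    print(reduce_complexity(arcs, records, max_arcs=2))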