Learning | Unsupervised Structural Learning
The particles described in the data set are treated as a representative sample of the joint probability distribution of the domain under study. All variables therefore carry the same weight; each represents one dimension of the hypercube. The objective of BayesiaLab's Unsupervised Structural Learning algorithms is to analyze these particles and find the best representation of the joint probability distribution, as measured by the Minimum Description Length (MDL) score.
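To make the MDL criterion concrete, here is a minimal sketch, in Python, of the classic two-part score on a toy data set: the description length of the model (a penalty of log2(N)/2 bits per free parameter) plus the description length of the data given the model (its negative log-likelihood in bits). The tiny X → Y structure and the data are invented for illustration and are not taken from BayesiaLab.

```python
import math
from collections import Counter

# Hypothetical data for a two-variable domain: pairs of (X, Y) observations.
data = [("a", 0), ("a", 0), ("a", 1), ("b", 1), ("b", 1), ("b", 1)]
N = len(data)

# Free parameters of the structure X -> Y:
# P(X) needs |X|-1 values, P(Y|X) needs |X| * (|Y|-1).
states_x = {x for x, _ in data}
states_y = {y for _, y in data}
k = (len(states_x) - 1) + len(states_x) * (len(states_y) - 1)

# Model description length: log2(N)/2 bits per free parameter.
dl_model = 0.5 * math.log2(N) * k

# Data description length: -log2 likelihood under the
# maximum-likelihood estimates P(x, y) = P(y | x) * P(x).
cx = Counter(x for x, _ in data)
cxy = Counter(data)
dl_data = -sum(
    math.log2((cxy[(x, y)] / cx[x]) * (cx[x] / N)) for x, y in data
)

# The learning algorithms search for the structure minimizing this sum.
mdl = dl_model + dl_data
print(round(mdl, 3))
```

Scoring alternative structures (e.g., X and Y fully independent) the same way and keeping the lowest total is the essence of MDL-based structural search.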
By default, Conditional Probability Distributions (CPDs) are represented as tables. The Parameter Estimation with Trees option offers the ability to use Conditional Probability Trees (CPTr) to represent CPDs compactly by exploiting Contextual Independencies, e.g., when the state of one parent renders the child node independent of the co-parent(s), i.e., the spouse(s).
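The compactness gain can be illustrated with a small sketch. The variables and probabilities below (a hypothetical P(Alarm | Power, Sensor)) are invented for illustration: when Power is "off", the Sensor state is irrelevant, so the two corresponding table rows collapse into a single tree leaf.

```python
# Full tabular CPD: one row per parent-state combination.
full_table = {
    ("on", "clear"): 0.01,
    ("on", "triggered"): 0.95,
    ("off", "clear"): 0.0,      # identical regardless of Sensor...
    ("off", "triggered"): 0.0,  # ...so these two rows are redundant
}

# Tree representation: branch on Power first; expand Sensor only when needed.
# A node is (variable, branches) and a leaf is a probability.
tree = ("Power", {
    "off": 0.0,  # single leaf covers both Sensor states
    "on": ("Sensor", {"clear": 0.01, "triggered": 0.95}),
})

def lookup(node, evidence):
    """Walk the tree from the root until a leaf probability is reached."""
    while not isinstance(node, float):
        var, branches = node
        node = branches[evidence[var]]
    return node

# The tree answers every query the table does, with fewer stored values.
print(lookup(tree, {"Power": "on", "Sensor": "clear"}))
```

Here the tree stores 3 probabilities instead of 4; with more parents and states, the savings from such context-specific collapses grow quickly.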
New Feature: EQ, TabooEQ and SopLEQ
As of version 9.0, all Unsupervised Structural Learning algorithms are compatible with parameter estimation using Conditional Probability Trees.