New Metric: Normalized Entropy
Entropy, H(X), is a key metric in BayesiaLab for measuring the uncertainty associated with the probability distribution of a variable X. The entropy of a variable can also be understood as the sum of the Expected Log-Losses of its states.
It is expressed in bits and defined as follows:
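The original equation is not reproduced here; in standard notation, with p(x) denoting the probability of state x of X, the entropy reads:

$$H(X) = -\sum_{x} p(x)\,\log_2 p(x)$$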
The maximum value of the entropy logarithmically increases with the number of states of the variable:
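The original formula is not reproduced here; the standard statement of this bound is:

$$H_{max}(X) = \log_2(S_X)$$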
where S_X is the number of states of the variable X.
Thus, the entropies of variables that do not have the same number of states cannot be directly compared. This is why version 7.0 now features the Normalized Entropy, a metric that takes into account the maximum value of the entropy and returns a normalized measure of the uncertainty associated with the variable:
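The original definition is not reproduced here; consistent with the description above, the Normalized Entropy divides the entropy by its maximum value (the symbol H_N is ours, not BayesiaLab's):

$$H_N(X) = \frac{H(X)}{\log_2(S_X)}$$

This quantity lies between 0 (no uncertainty) and 1 (maximum uncertainty), regardless of the number of states of X.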
The Normalized Entropy is available in the monitor tooltips, in Information Mode, when hovering over a monitor:
It is also possible to sort the monitors by Normalized Entropy via the Monitor Panel Contextual Menu.
New Metric: Normalized Mutual Information
The Conditional Entropy H(X|Y) measures, in bits, the Expected Log-Loss associated with variable X once we have information on variable Y:
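The original equation is not reproduced here; the standard definition, with p(y) the probability of state y of Y and p(x|y) the conditional probability of state x of X given y, is:

$$H(X|Y) = -\sum_{y} p(y) \sum_{x} p(x|y)\,\log_2 p(x|y)$$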
This conditional entropy is thus key to defining the Mutual Information between X and Y.
The Mutual Information I(X, Y) measures the amount of information gained on variable X (the reduction in the expected Log-Loss) by observing variable Y:
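The original equation is not reproduced here; this reduction in Expected Log-Loss is the difference between the entropy of X and its conditional entropy given Y:

$$I(X, Y) = H(X) - H(X|Y)$$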
The Venn Diagram below illustrates this concept:
Version 7.0 now features a normalized version of the Mutual Information that takes into account the maximum value that the entropy of X can take:
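The original definition is not reproduced here; consistent with the description above, this amounts to dividing the Mutual Information by the maximum entropy of X (the symbol NI is ours):

$$NI(X, Y) = \frac{I(X, Y)}{\log_2(S_X)}$$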
New Metric: Symmetric Normalized Mutual Information
As we can see in the Venn Diagram, the Mutual Information I(X, Y) is identical for the two variables X and Y. However, these variables can have different numbers of states.
Thus, the Symmetric Normalized Mutual Information takes into account the maximum values that the entropies of X and Y can take:
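The original definition is not reproduced here. One plausible reconstruction, assuming a normalization by the geometric mean of the two maximum entropies (an assumption, not confirmed by the text; the symbol SNI is ours), is:

$$SNI(X, Y) = \frac{I(X, Y)}{\sqrt{\log_2(S_X)\,\log_2(S_Y)}}$$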
Updated Metric: Relative Mutual Information
This metric measures the percentage of information gained on X by observing Y:
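The original equation is not reproduced here; the description above corresponds to dividing the Mutual Information by the entropy of X (the symbol RI is ours):

$$RI(X, Y) = \frac{I(X, Y)}{H(X)}$$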
This metric was named Normalized Mutual Information in previous versions of BayesiaLab.
Updated Metric: Symmetric Relative Mutual Information
This normalization is akin to the Pearson Correlation Coefficient: the Mutual Information plays the role of the covariance, and the Entropy that of the variance.
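The original equation is not reproduced here; following the Pearson analogy above, the normalization factor is the geometric mean of the two entropies (the symbol SRI is ours):

$$SRI(X, Y) = \frac{I(X, Y)}{\sqrt{H(X)\,H(Y)}}$$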
A closely related metric, with a slightly different normalization factor, was named Symmetric Normalized Mutual Information in previous versions of BayesiaLab.