Edit Number of Uniform Prior Samples

In the machine learning framework, probabilities are estimated with the Maximum Likelihood Estimation (MLE) method, where the probability of each state is its observed frequency in the data set. This method is purely frequentist. However, it can be made Bayesian by mixing the observed particles (those described in the data set) with virtual particles defined by Dirichlet Priors.

The easiest way to define Dirichlet Priors in BayesiaLab is to define uninformative priors, which specify that everything is possible. This is done by representing our prior knowledge with a Bayesian network in which all nodes are independent and uniformly distributed, and using this network to generate Uniform Prior Samples, which leads to what we call a Smooth Probability Estimation.
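The mixing of observed and virtual particles can be sketched as follows. This is a minimal illustration of the general technique, not BayesiaLab's actual implementation; the function name and signature are hypothetical. Each uniform prior sample contributes 1/k of a virtual count to each of a node's k states, so the estimate shifts from the pure MLE frequency toward the uniform distribution as the number of prior samples grows.

```python
from collections import Counter

def smoothed_probabilities(observations, states, prior_samples):
    """Estimate state probabilities by mixing observed counts with
    virtual counts from uniform Dirichlet priors (hypothetical sketch)."""
    counts = Counter(observations)   # observed particles per state
    n = len(observations)            # total observed particles
    k = len(states)                  # number of states of the node
    # Each uniform prior sample spreads 1/k virtual count over every state
    return {s: (counts[s] + prior_samples / k) / (n + prior_samples)
            for s in states}

data = ["a", "a", "a", "b"]
# Pure MLE (0 prior samples): state "c" gets probability 0
print(smoothed_probabilities(data, ["a", "b", "c"], 0))
# Smoothed estimation (3 prior samples): every state stays possible
print(smoothed_probabilities(data, ["a", "b", "c"], 3))
```

With zero prior samples the estimate is the observed frequency, and an unobserved state gets probability 0; with prior samples added, every state keeps a nonzero probability, which is exactly what the uninformative prior expresses.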

Renamed Feature: Edit Number of Uniform Prior Samples

As of version 9.0, Edit Smoothed Probability Estimation has been renamed Edit Number of Uniform Prior Samples.