
Artificial Intelligence with Bayesian Networks and BayesiaLab

Training Overview

  • Teaching objectives: Advanced knowledge modeling, machine learning and analysis methods with Bayesian networks and BayesiaLab
  • Length: 3 days
  • Required Level: Participants in the Advanced Course are required to have completed the Introductory Course.
  • Teaching methods: Tutorials with practical exercises using BayesiaLab plus plenty of one-on-one coaching
  • Trainer: Dr. Lionel Jouffe, CEO, Bayesia SAS.
  • Training materials: A printed tutorial (approx. 250 slides), plus a memory stick containing numerous exercises and white papers
  • Bayesian Network Software: Bayesia provides all trainees with an unrestricted 60-day license of BayesiaLab Professional Edition, so they can participate in all exercises on their own laptops
  • Cost: Between 2,100 and 2,500 Euros/Trainee, depending on the location of the training session. Discounts are available for groups of trainees from the same company. A special academic discount of 50% is also available for students and teachers of accredited educational institutions.

Here is a link to a Prezi presentation that describes the entire learning journey, from the short introduction to Bayesian networks through the third day of the Advanced course:

The Introductory course gives you a broad view of what you can do with Bayesian networks. In the Advanced course, we study in greater detail topics that are only briefly touched upon in the Introductory course:

  • Expert-Based Modeling with BEKEE
  • Discretization of the Continuous Variables
  • Synthesis of New Variables (Manual Synthesis and Data Clustering)
  • Fine-Tuning of Learning Algorithms
  • Network Quality Evaluation
  • Target Optimization

But more importantly, we cover new topics, such as:

  • Parameter Sensitivity Analysis
  • Function Nodes
  • Influence Diagrams
  • Dynamic Bayesian Networks
  • Bayesian Updating
  • Aggregation of the Discrete States
  • Missing Values Processing
  • Credible/Confidence Intervals Analysis
  • Evidence Analysis
  • Function Optimization
  • Contribution Analysis

Note that there are also many more hands-on exercises than in the Introductory course, given that you are already familiar with all the basic concepts.


Registration is complete upon payment of the fee by bank transfer or credit card. Visit the BayesiaLab Store for the prices corresponding to your type of organization and the number of seats you are interested in.

Training Program

Day 1


Expert-Based Modeling with BEKEE

  • Expert-Based Modeling via Brainstorming
  • Why Expert-Based Modeling?
  • Value of Expert-Based Modeling
  • Structural Modeling: Bottom-Up and Top-Down Approaches
  • Parametric Modeling
  • Cognitive Biases
  • BEKEE: Bayesia Expert Knowledge Elicitation Environment
    • Interactive
    • Batch
    • Segmentation of the Experts
    • Creation of Bayesian Belief Networks based on the Elicited Probabilities
    • Analysis of the Expert Assessments
    • Parameter Sensitivity Analysis
  • Exercise: Interactive Session for Probability Elicitation

Influence Diagrams

  • Utility Nodes
  • Decision Nodes
  • Expected Utility
  • Automatic Policy Optimization
  • Example: Oil Wildcatter
  • Exercises

Function Nodes

  • Motivations
  • Inference Functions
  • Formatting
  • Function Nodes as Parents
  • Exercise

Dynamic Bayesian Networks

  • Hidden Markov Chain
  • Unfolded Temporal Bayesian Networks
  • Dynamic Bayesian Networks
  • Temporal Simulations (Scenarios, Temporal Conditional Dependencies, Temporal Monitoring)
  • Exact and Approximate Inference
  • Unfolding Dynamic Bayesian Networks
  • Exercise: Maintenance of a Fluid Distribution System
  • Network Temporalization
  • Temporal Forecast
  • Exercise: Box & Jenkins
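
As a toy illustration of the inference behind these temporal models, here is a hand-rolled forward recursion for a two-state hidden Markov chain in plain Python. All probabilities are made up, and this is a sketch of the recursion only, not BayesiaLab's inference engine:

```python
# Forward algorithm for a 2-state hidden Markov chain.
# States: 0 = "OK", 1 = "Degraded"; observations: 0 = "no alarm", 1 = "alarm".
# All probabilities below are illustrative, not from any real model.

prior = [0.9, 0.1]                      # P(state at t=0)
trans = [[0.95, 0.05],                  # P(state_t+1 | state_t)
         [0.20, 0.80]]
emit  = [[0.99, 0.01],                  # P(observation | state)
         [0.30, 0.70]]

def forward(observations):
    """Return P(state_T | obs_1..T) via the normalized forward recursion."""
    belief = prior[:]
    for obs in observations:
        # Predict: propagate the belief through the transition model.
        predicted = [sum(belief[i] * trans[i][j] for i in range(2))
                     for j in range(2)]
        # Update: weight by the likelihood of the observation, then normalize.
        unnorm = [predicted[j] * emit[j][obs] for j in range(2)]
        total = sum(unnorm)
        belief = [u / total for u in unnorm]
    return belief

print(forward([1, 1]))  # two consecutive alarms shift belief toward "Degraded"
```

Unfolding a dynamic network over T time slices performs essentially this predict-update cycle T times, just over richer state spaces.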

Day 2

Bayesian Updating

  • Unrolled Networks
  • Compact Networks
    • Hyperparameters
    • Conditional Dependencies
  • Exercise: Bayesian Updating for Equine Anti-Doping
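
Bayesian updating of a parameter can be illustrated with the textbook Beta-Binomial conjugate pair. This is a simplified hand-worked sketch with made-up numbers, not BayesiaLab's updating machinery:

```python
# Conjugate Bayesian updating of a probability parameter:
# a Beta(a, b) prior plus Binomial data (k positives out of n trials)
# yields a Beta(a + k, b + n - k) posterior. Numbers are illustrative only.

def update_beta(a, b, k, n):
    """Return the posterior (a, b) of a Beta prior after k successes in n trials."""
    return a + k, b + (n - k)

def beta_mean(a, b):
    """Mean of a Beta(a, b) distribution."""
    return a / (a + b)

a, b = 1.0, 1.0              # uniform prior over the unknown probability
a, b = update_beta(a, b, k=7, n=10)
print(beta_mean(a, b))       # posterior mean (1 + 7) / (2 + 10) = 8/12
```

The same prior-plus-data logic drives updating in a network; conjugacy just makes this single-parameter case solvable in one line.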

Discretization of Continuous Variables

  • Impact of Discretization
  • Requirements for a Good Discretization
  • Pre- and Post-Discretization
  • Discretization viewed as the Creation of Latent Variables
  • Discretization Methods
    • Manual by Expertise
    • Univariate
      • Equal Frequency
      • (Normalized) Equal Distance
      • Density Approximation
      • K-Means
      • R2-GenOpt
      • R2-GenOpt*
    • Bi-Variate
      • Tree
      • Perturbed Tree
    • Multi-Variate
      • Supervised with Random Forest
      • Unsupervised with Random Forest
      • R2-GenOpt
      • LogLoss-GenOpt
  • Exercise
  • Aggregation Methods for Symbolic Variables
    • Manual by Expertise
    • Semi-Automatic
    • Bi-Variate with Tree
  • Exercise
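
Among the univariate methods above, equal frequency is simple enough to sketch by hand. The following simplified Python illustration ignores the tie-handling that a real implementation needs:

```python
# Equal-frequency discretization: choose cut points so each bin holds
# (roughly) the same number of data points. Simplified sketch; real
# implementations must treat ties and repeated values more carefully.

def equal_frequency_cuts(values, bins):
    """Return bins - 1 interior cut points taken from the sorted sample."""
    ordered = sorted(values)
    n = len(ordered)
    return [ordered[(i * n) // bins] for i in range(1, bins)]

def discretize(value, cuts):
    """Map a continuous value to the index of its bin."""
    return sum(value >= c for c in cuts)

data = [4.1, 9.7, 0.3, 5.5, 2.2, 7.8, 1.1, 6.4, 3.9, 8.6]
cuts = equal_frequency_cuts(data, bins=2)        # a median split
print(cuts, [discretize(v, cuts) for v in data])  # 5 points per bin
```

Equal distance would instead place cuts at even intervals of the value range, which is why skewed data usually favors the equal-frequency family.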

Missing Values Processing

  • Types of Missingness:
    • Missing Completely at Random (MCAR)
    • Missing at Random (MAR)
    • Not Missing at Random (NMAR)
    • Filtered/Censored/Skipped
  • Types of Methods
    • Static
      • Filtering
      • A Priori Replacement
      • Entropy Based and Standard Static Imputation
    • Dynamic
      • Dynamic Imputation
      • Entropy Based Dynamic Imputation
      • Structural Expectation-Maximization
      • Approximate Dynamic Imputation with Static Imputation

  • Missing Values Imputation (Standard, Entropy-Based, Most Probable Explanation)
  • Exercise
  • Filtered/Censored/Skipped Values
  • Example: Survey Analysis
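
The simplest static method above, a priori replacement, just fills each gap with the marginal mode of the observed values. Here is a minimal sketch; dynamic imputation would instead condition on the other variables in the network:

```python
# Static "a priori replacement" imputation: fill each missing value
# (None) of a discrete variable with the mode of its observed values.
# Dynamic methods would condition on the other variables instead.
from collections import Counter

def impute_mode(column):
    """Replace every None in the column with the most frequent observed value."""
    observed = [v for v in column if v is not None]
    mode = Counter(observed).most_common(1)[0][0]
    return [mode if v is None else v for v in column]

smoker = ["no", "yes", None, "no", None, "no"]
print(impute_mode(smoker))   # -> ['no', 'yes', 'no', 'no', 'no', 'no']
```

This is only safe under MCAR-like assumptions; under MAR or NMAR missingness it biases the marginals, which is what motivates the dynamic and EM-based methods above.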

Day 3

Synthesis of New Variables

  • Manual Synthesis
  • Binarization
  • Clustering
    • K-Means
    • Bayesian Clustering
    • Hierarchical Bayesian Clustering
  • Exercises

Fine-Tuning of Learning Algorithms

  • Minimum Description Length (MDL) Score
  • Parameter Estimation with Trees
  • Structural Coefficient
  • Stratification
  • Smooth Probability Estimation
  • Exercise: CarStarts
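
The MDL score trades off structure cost against data fit. For a single discrete variable it can be sketched as two terms: bits to encode the parameters plus bits to encode the data under them. This is a simplified illustration; BayesiaLab's exact definition differs:

```python
# Simplified MDL score for a lone discrete variable: bits to encode the
# parameters plus bits to encode the data under them. Illustrative only.
import math
from collections import Counter

def mdl_score(data):
    """Two-part description length of a discrete sample, in bits."""
    n = len(data)
    counts = Counter(data)
    k = len(counts)
    # Parameter cost: (k - 1) free probabilities at ~log2(n)/2 bits each.
    structure_bits = (k - 1) * math.log2(n) / 2
    # Data cost: negative log-likelihood under the empirical distribution.
    data_bits = -sum(c * math.log2(c / n) for c in counts.values())
    return structure_bits + data_bits

print(mdl_score(["a"] * 90 + ["b"] * 10))   # skewed data compresses well
print(mdl_score(["a"] * 50 + ["b"] * 50))   # uniform data costs more bits
```

For networks, the same trade-off applies per node, and the Structural Coefficient listed above rescales the structure term to make learning more or less conservative.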

Network Quality Evaluation and Analysis

  • Credible/Confidence Interval Analysis
  • Evidence Analysis
    • Most Probable Explanation
    • Joint Probability of Evidence
    • Log-Loss
    • Information Gain
    • Bayes Factor
  • Performance Analysis
    • Supervised
    • Unsupervised
      • Compression
      • Multi-Target
  • Outlier Detection
  • Path Analysis
  • Exercises
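
Two of the evidence scores above reduce to one-liners once a joint probability is available: the log-loss of an evidence set E is -log2 P(E), and the Bayes factor compares how well two hypotheses predict the same evidence. A toy illustration with made-up numbers:

```python
# Evidence scores from joint probabilities (illustrative numbers only):
# the log-loss of evidence E is -log2 P(E); the Bayes factor for
# hypotheses H1 vs H0 is P(E | H1) / P(E | H0).
import math

def log_loss(p_evidence):
    """Surprise of the evidence in bits."""
    return -math.log2(p_evidence)

def bayes_factor(p_e_given_h1, p_e_given_h0):
    """How much more strongly H1 predicts the evidence than H0."""
    return p_e_given_h1 / p_e_given_h0

print(log_loss(0.25))            # rare evidence -> 2 bits of surprise
print(bayes_factor(0.60, 0.15))  # evidence favors H1 by a factor of 4
```

In practice the joint probabilities come from inference in the learned network; high log-loss observations are exactly the outliers flagged by the detection tools above.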

Target and Function Optimization

  • Genetic Algorithm
  • Objective Function
    • States/Mean
    • Function value
    • Maximization/Minimization
    • Target Value
    • Resources
    • Joint Probability/Support
  • Search Methods
    • Hard Evidence
    • Numerical Evidence
    • Direct Effects
  • Exercise: Marketing Mix Optimization
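
The genetic search above can be caricatured in a few lines: a population of candidate evidence assignments is scored by an objective function, and the fittest are recombined and mutated. Everything below, including the objective, is a made-up toy, not BayesiaLab's optimizer:

```python
# Toy genetic algorithm over binary "evidence" vectors. The objective
# below is a stand-in for an expected target value; all choices here
# (selection, crossover, mutation rate) are illustrative.
import random

random.seed(0)

def objective(candidate):
    # Made-up score: reward 1s in the first half, 0s in the second half.
    half = len(candidate) // 2
    return sum(candidate[:half]) + sum(1 - g for g in candidate[half:])

def evolve(n_genes=8, pop_size=20, generations=30):
    pop = [[random.randint(0, 1) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective, reverse=True)
        parents = pop[:pop_size // 2]            # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_genes)   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:            # occasional mutation
                i = random.randrange(n_genes)
                child[i] = 1 - child[i]
            children.append(child)
        pop = parents + children                 # elitist replacement
    return max(pop, key=objective)

best = evolve()
print(best, objective(best))
```

In a real target optimization, scoring a candidate means setting its evidence in the network and reading off the target's posterior, which is why the choice of hard, numerical, or direct-effect evidence matters.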

Contribution Analysis

  • Direct Effects
  • Type I Contribution
  • Type II Contribution
  • Base Mean
  • Normalization
  • Stacked Curves
  • Synergies