Artificial Intelligence with Bayesian Networks and BayesiaLab

Training Overview

  • Teaching objectives: Comprehensive understanding of the Bayesian network paradigm plus practical skills for real-world research applications
  • Length: 3 days
  • Required Level: The course is taught at a beginner level, so no prior knowledge of Bayesian networks is necessary. However, undergraduate-level familiarity with probability theory and statistics is recommended.
  • Teaching methods: Tutorials with practical exercises using BayesiaLab plus plenty of one-on-one coaching
  • Trainer: Dr. Lionel Jouffe, CEO, Bayesia SAS.
  • Training materials: A printed tutorial (approx. 300 slides), plus a memory stick containing numerous exercises and white papers
  • Bayesian Network Software: Bayesia provides all trainees with an unrestricted 90-day license of BayesiaLab Professional Edition, so they can participate in all exercises on their own laptops
  • Cost: Between €2,100 and €2,500 per trainee, depending on the location of the training session. Discounts are available for groups of trainees from the same company. A special academic discount of 50% is also available for students and teachers at accredited educational institutions.

Here is a link to a Prezi presentation that describes the entire learning journey, from the short introduction to Bayesian networks to the 3rd day of the Advanced course: https://prezi.com/view/VeZ1oBtGKQtLZHwwkVhn/


Registration

Registration is complete upon payment of the fee by bank transfer or credit card. Visit the BayesiaLab Store for the prices corresponding to your type of organization and the number of seats you are interested in.

Training Program

Day 1: Theoretical Introduction


  • Bayesian Networks: Artificial Intelligence for Decision Support under Uncertainty
  • Probabilistic Expert System
  • The Modeling World
  • Bayesian Networks and Cognitive Science
  • Unstructured and Structured Particles Describing the Domain
  • Expert Based Modeling and/or Machine Learning
  • Predictive (association) versus Explicative (causation) Models
  • Application Examples: Medical Expert Systems, Stock Market Analysis, Microarray Analysis, Consumer Segmentation, Drivers Analysis and Product Optimization

  • Cognitive Science: How our Probabilistic Brain uses Priors in the Interpretation of Images
  • Interpreting Results of Medical Tests
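
As a taste of the Bayes' theorem reasoning behind this topic, here is a minimal worked example of interpreting a positive test result; the numbers (1% prevalence, 90% sensitivity, 95% specificity) are purely illustrative and not taken from the course materials.

\[
P(D \mid +) \;=\; \frac{P(+ \mid D)\,P(D)}{P(+ \mid D)\,P(D) + P(+ \mid \neg D)\,P(\neg D)}
\;=\; \frac{0.90 \times 0.01}{0.90 \times 0.01 + 0.05 \times 0.99} \;\approx\; 0.15
\]

Despite the positive result, the posterior probability of disease is only about 15%, because the low prior probability (the base rate) dominates; the same effect is at the heart of the cab example below.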

  • Kahneman & Tversky’s Yellow Cab/White Cab example
  • The Monty Hall Problem: Solving a Vexing Puzzle with a Bayesian Network
  • Simpson’s Paradox - Observational Inference vs Causal Inference


  • Probabilistic Axioms
  • Perception of the Particles
  • Joint Probability Distribution (JPD)
  • Probabilistic Expert System for Decision Support: Types of Requests
  • Leveraging Independence Properties
  • Product/Chain Rule for Compact Representation of JPD
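
To make the compactness argument concrete, here is a minimal sketch with three hypothetical binary variables A, B, and C, where C is conditionally independent of A given B:

\[
P(A, B, C) \;=\; P(A)\,P(B \mid A)\,P(C \mid A, B) \;=\; P(A)\,P(B \mid A)\,P(C \mid B)
\]

The full JPD requires \(2^3 - 1 = 7\) independent parameters, whereas the factored form requires only \(1 + 2 + 2 = 5\); with many variables and sparse dependencies, the savings grow exponentially.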


  • Qualitative Part: Directed Acyclic Graph
  • Graph Terminology
  • Graphical Properties
  • D-Separation
  • Markov Blanket
  • Quantitative Part: Marginal and Conditional Probability Distributions
  • Exact and Approximate Inference in Bayesian Networks
  • Example of Probabilistic Inference: Alarm System
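
For reference, a common textbook version of the alarm example (the one used in class may differ in detail) has five binary nodes, Burglary (B), Earthquake (E), Alarm (A), JohnCalls (J), and MaryCalls (M), with arcs B → A, E → A, A → J, and A → M. The graph's independence properties reduce the chain rule to:

\[
P(B, E, A, J, M) \;=\; P(B)\,P(E)\,P(A \mid B, E)\,P(J \mid A)\,P(M \mid A)
\]

Inference then answers queries such as \(P(B \mid J = \text{true}, M = \text{true})\) by summing this product over the unobserved variables, either exactly or approximately.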


  • Expert-Based Modeling via Brainstorming
  • Why Expert-Based Modeling?
  • Value of Expert-Based Modeling
  • Structural Modeling: Bottom-Up and Top-Down Approaches
  • Parametric Modeling
  • Cognitive Biases
  • BEKEE: Bayesia Expert Knowledge Elicitation Environment

Day 2: Machine Learning - Part 1


  • Maximum Likelihood Estimation
  • Bayesian Parameter Estimation with Dirichlet Priors
  • Smooth Probability Estimation (Laplacian Correction)
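
As a generic sketch of these three estimators (notation is illustrative, not BayesiaLab-specific): for a discrete node with K states, let \(n_k\) be the number of observations of state k among N records (counted per parent configuration for conditional distributions). Then

\[
\hat{\theta}_k^{\,\text{MLE}} = \frac{n_k}{N}, \qquad
\hat{\theta}_k^{\,\text{Dirichlet}} = \frac{n_k + \alpha_k}{N + \sum_{j} \alpha_j}, \qquad
\hat{\theta}_k^{\,\text{Laplace}} = \frac{n_k + 1}{N + K}
\]

The Laplacian correction is the special case of a Dirichlet prior with all hyperparameters \(\alpha_k = 1\); it keeps unobserved states from being assigned zero probability.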


  • Information is a Measurable Quantity: Log-Loss
  • Expected Log-Loss
  • Entropy
  • Conditional Entropy
  • Mutual Information
  • Symmetric Relative Mutual Information
  • Kullback-Leibler Divergence
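
For reference, the standard definitions of these quantities for discrete variables X and Y, with base-2 logarithms so that the units are bits:

\[
\begin{aligned}
\text{Log-loss of } x &: \; -\log_2 P(x) \\
H(X) &= -\sum_x P(x)\,\log_2 P(x) \quad (\text{the expected log-loss}) \\
H(X \mid Y) &= -\sum_{x,y} P(x,y)\,\log_2 P(x \mid y) \\
I(X;Y) &= H(X) - H(X \mid Y) = \sum_{x,y} P(x,y)\,\log_2 \frac{P(x,y)}{P(x)\,P(y)} \\
D_{\mathrm{KL}}(P \,\|\, Q) &= \sum_x P(x)\,\log_2 \frac{P(x)}{Q(x)}
\end{aligned}
\]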


  • Entropy Optimization
  • Minimum Description Length (MDL) Score (sketched after this list)
  • Structural Coefficient
  • Minimum Size of Data Set
  • Search Spaces
  • Search Strategies
  • Learning Algorithms
    • Maximum Weight Spanning Tree
    • Taboo Search
    • EQ
    • TabooEQ
    • SopLEQ
    • Taboo Order
  • Data Perturbation
  • Example: Exploring the relationships in body dimensions
    • Data Import (Typing, Discretization)
    • Definition of Classes
    • Exclusion of a Node
    • Heuristic Search Algorithms
    • Data Perturbation (Learning, Bootstrap)
    • Choice of the Structural Coefficient
    • Console
    • Symmetric Layout
    • Analysis of the Model (Arc Force, Node Force, Pearson Coefficient)
    • Dictionary of Node Positions
    • Association of an Image in the Background
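
The MDL score that guides these searches can be sketched generically as a two-part description length for a candidate network B given a data set D; the exact formulation used by BayesiaLab, including how the structural coefficient \(\alpha\) enters, may differ in detail:

\[
\mathrm{MDL}(B, D) \;=\; \alpha \,\mathrm{DL}(B) \;+\; \mathrm{DL}(D \mid B)
\]

Here \(\mathrm{DL}(B)\) is the number of bits needed to describe the graph and its probability tables, and \(\mathrm{DL}(D \mid B) = -\log_2 P(D \mid B)\) is the number of bits needed to encode the data with the model; lower scores are better, and increasing the structural coefficient penalizes complexity more, yielding sparser networks.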


  • Learning Algorithms
    • Naive (see the formula sketch after this list)
    • Augmented Naive
    • Manual Augmented Naive
    • Tree-Augmented Naive
    • Sons & Spouses
    • Markov Blanket
    • Augmented Markov Blanket
    • Minimal Augmented Markov Blanket
  • Variable Selection with the Markov Blanket
  • Example: Predictions based on body dimensions
    • Data Import (Data Type, Supervised Discretization)
    • Heuristic Search Algorithms
    • Target Evaluation (In-Sample, Out-of-Sample: K-Fold, Test Set)
    • Smoothed Probability Estimation
    • Analysis of the Model (Monitors, Mapping, Target Report, Target Posterior Probabilities, Target Interpretation Tree)
    • Evidence Scenario File
    • Automatic Evidence-Setting
    • Adaptive Questionnaire
    • Batch Labeling
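
As a reminder of what the Naive structure encodes (generic notation, not tied to BayesiaLab's implementation): with a target C and predictor variables \(X_1, \dots, X_n\), each predictor depends only on the target, so the posterior used for prediction factorizes as

\[
P(C \mid x_1, \dots, x_n) \;\propto\; P(C)\,\prod_{i=1}^{n} P(x_i \mid C)
\]

The augmented variants relax this independence assumption by learning additional arcs among the predictors, while the Markov Blanket variants restrict the model to the target's parents, children, and the children's other parents.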

Day 3: Machine Learning - Part 2

  • Algorithms
  • Example: S&P 500 Analysis
    • Variable Clustering
      • Changing the number of Clusters
      • Dynamic Dendrogram
      • Dynamic Mapping
      • Manual Modification of Clusters
      • Manual Creation of Clusters
    • Semi-Supervised Learning
    • Search Tool (Nodes, Arcs, Monitors, Actions)
    • Sticky Notes


  • Synthesis of a Latent Variable
  • Expectation-Maximization Algorithm (sketched after this list)
  • Ordered Numerical Values
  • Cluster Purity
  • Cluster Mapping
  • Log-Loss and Entropy of the Data
  • Contingency Table Fit
  • Hypercube Cells Per State
  • Example: Segmentation of men based on body dimensions
    • Data Clustering (Equal frequency discretization, Meta-Clustering)
    • Quality Metrics (Purity, Log-Loss, Contingency Table Fit)
    • Posterior Mean Analysis (Mean, Delta-means, Radar charts)
    • Mapping
    • Cluster Interpretation with Target Dynamic Profile
    • Cluster Interpretation with Target Optimization Tree
    • Projection of the Cluster on other Variables
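
Data Clustering relies on the Expectation-Maximization scheme listed above to learn a latent cluster variable C from the observed variables. A generic sketch (notation is illustrative, not BayesiaLab's): starting from initial parameters \(\theta^{(0)}\), iterate

\[
\text{E-step: } \gamma_{ic} = P\!\left(C = c \mid x^{(i)}, \theta^{(t)}\right), \qquad
\text{M-step: } \theta^{(t+1)} = \arg\max_{\theta} \sum_i \sum_c \gamma_{ic} \,\log P\!\left(x^{(i)}, C = c \mid \theta\right)
\]

Each iteration cannot decrease the likelihood of the data, and the procedure converges to a local optimum; the quality metrics above (purity, log-loss, contingency table fit) are then used to compare candidate clusterings.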


  • PSEM Workflow
    • Unsupervised Structural Learning
    • Variable Clustering
    • Multiple Clustering for Creating a Factor Variable (via Data Clustering) per Cluster of Manifest Variables
    • Unsupervised Learning for Representing the Relationships between the Factors and the Target Variables
  • Example: The French Market of Perfumes
    • Cross-validation of the Clusters of Variables
    • Displayed Classes
    • Total Effects
    • Direct Effects
    • Direct Effect Contributions
    • Tornado Analysis
    • Taboo, EQ, TabooEQ, and Arc Constraints
    • Multi-Quadrants
    • Export Variations
    • Target Optimization with Dynamic Profile
    • Target Optimization with Tree