
### Training Bayesian Networks: Several Scenarios

- Scenario 1: Network structure known and all variables observable: compute only the *CPT entries*.
- Scenario 2: Network structure known, some variables hidden: use a *gradient descent* (greedy hill-climbing) method, i.e., search for a solution along the steepest descent of a criterion function.
  - Weights are initialized to random probability values.
  - At each iteration, the search moves toward what appears to be the best solution at the moment, without backtracking.
  - Weights are updated at each iteration and converge to a local optimum.
- Scenario 3: Network structure unknown, all variables observable: search through the model space to *reconstruct the network topology*.
- Scenario 4: Structure unknown, all variables hidden: no good algorithms are known for this purpose.
- D. Heckerman. A Tutorial on Learning with Bayesian Networks. In *Learning in Graphical Models*, M. Jordan, ed. MIT Press, 1999.
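As a concrete illustration of Scenario 2, here is a minimal sketch of gradient-based parameter learning with a hidden variable. It assumes a tiny hypothetical network with one hidden binary node H and one observed binary node V, so the CPT entries are p = P(H=1), a = P(V=1|H=1), b = P(V=1|H=0), and the observed marginal is P(V=1) = p·a + (1−p)·b. The data, learning rate, and iteration count are illustrative, not from the slide:

```python
import random

random.seed(0)

# Hypothetical observed data: V=1 in 70 of 100 samples (H is never observed).
data = [1] * 70 + [0] * 30

# Weights (CPT entries) initialized to random probability values.
p, a, b = (random.uniform(0.2, 0.8) for _ in range(3))

def clip(x):
    # Keep parameters strictly inside (0, 1) so logs/divisions stay defined.
    return min(max(x, 1e-6), 1 - 1e-6)

lr = 0.05
for _ in range(2000):            # greedy updates, no backtracking
    pv1 = p * a + (1 - p) * b    # P(V=1) under current parameters
    gp = ga = gb = 0.0
    for v in data:               # gradient of the log-likelihood
        if v == 1:
            gp += (a - b) / pv1
            ga += p / pv1
            gb += (1 - p) / pv1
        else:
            gp += (b - a) / (1 - pv1)
            ga += -p / (1 - pv1)
            gb += -(1 - p) / (1 - pv1)
    n = len(data)
    p = clip(p + lr * gp / n)
    a = clip(a + lr * ga / n)
    b = clip(b + lr * gb / n)

# The fitted marginal p*a + (1-p)*b should approach the empirical 0.7;
# which (p, a, b) it lands on depends on the random start (a local optimum).
print(p * a + (1 - p) * b)
```

Note that the maximum-likelihood solution here is not unique: any (p, a, b) with p·a + (1−p)·b = 0.7 fits equally well, which is exactly why the slide stresses convergence only to a *local* optimum.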
