580.691/491 Learning, Estimation, and Control

Textbook: Biological Learning and Control, Shadmehr and Mussa-Ivaldi

The course introduces modern techniques for the mathematical analysis of biomedical data. Topics include maximum likelihood, estimation theory via the Kalman filter, state-space models, Bayesian estimation, classification of labeled data, dimensionality reduction, clustering, expectation maximization, and dynamic programming via the Bellman equation.


  • Regression and maximum likelihood
  • Estimation theory
    • State estimation of dynamical systems (sections 4.6 and 4.7). Optimal parameter estimation, parameter uncertainty, state noise and measurement noise, adjusting learning rates to minimize model uncertainty. Derivation of the Kalman filter algorithm (see the code sketch after this group of topics).
    • Kalman filter and signal-dependent noise (sections 4.9 and 4.10).
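
A minimal Python/numpy sketch of the Kalman filter recursion referenced above, for a linear-Gaussian state-space model; the function name, matrices, and example numbers are illustrative assumptions, not taken from the text.

import numpy as np

def kalman_filter(y, A, C, Q, R, x0, P0):
    """Filter the observation sequence y for the model
    x_t = A x_{t-1} + w_t, w_t ~ N(0, Q);  y_t = C x_t + v_t, v_t ~ N(0, R)."""
    x, P = x0, P0
    means, covs = [], []
    for yt in y:
        # Predict: propagate the state estimate and its uncertainty forward.
        x_pred = A @ x
        P_pred = A @ P @ A.T + Q
        # Update: the Kalman gain trades off prediction against measurement
        # according to their relative uncertainties.
        S = C @ P_pred @ C.T + R                # innovation covariance
        K = P_pred @ C.T @ np.linalg.inv(S)     # Kalman gain
        x = x_pred + K @ (yt - C @ x_pred)
        P = (np.eye(len(x)) - K @ C) @ P_pred
        means.append(x)
        covs.append(P)
    return means, covs

# Example: track a scalar random walk observed in noise.
A = C = np.array([[1.0]])
Q, R = np.array([[0.01]]), np.array([[1.0]])
means, covs = kalman_filter([np.array([0.9]), np.array([1.3])],
                            A, C, Q, R, x0=np.array([0.0]), P0=np.array([[1.0]]))

The gain K makes the learning-rate trade-off explicit: large measurement noise R shrinks the update, while large state noise Q inflates the predicted uncertainty and increases it.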

  • Bayesian estimation
    • Bayesian estimation and its relationship to the Kalman filter (section 5.1). Factorization of the joint distribution of Gaussian variables.
      • Homework: posterior distribution with two observed data points; maximizing the posterior directly.
    • Causal inference (sections 5.2 and 5.3). The problem of deciding between two generative models.
    • Examples from biological learning: Kamin blocking, backward blocking, and comparisons between Kalman learning and LMS.
    • MAP (maximum a posteriori) estimation; see the sketch below.
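
A small Python sketch of the Gaussian posterior and its MAP estimate, in the spirit of the homework item above (two observed data points); the function and the numbers are illustrative assumptions, not course code.

def gaussian_posterior(mu0, var0, ys, var_y):
    """Posterior mean and variance of an unknown mean mu with prior N(mu0, var0),
    given observations ys drawn from N(mu, var_y)."""
    post_precision = 1.0 / var0 + len(ys) / var_y            # precisions add
    post_var = 1.0 / post_precision
    post_mean = post_var * (mu0 / var0 + sum(ys) / var_y)    # precision-weighted average
    return post_mean, post_var

# Two observed data points, in the spirit of the homework exercise.
mu_map, var_post = gaussian_posterior(mu0=0.0, var0=1.0, ys=[0.9, 1.3], var_y=0.5)

Because prior and likelihood are both Gaussian, the posterior mean coincides with the MAP estimate: a precision-weighted average of prior and data, the same trade-off the Kalman gain implements.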

  • Classification with labeled data
    • Fisher linear discriminant, posterior probabilities. Classification using posterior probabilities with explicit models of densities, confidence and error bounds of the Bayes classifier, Chernoff error bounds.
    • Linear and quadratic decision boundaries. Equal-variance Gaussian densities (linear discriminant analysis), unequal-variance Gaussian densities (quadratic discriminant analysis), kernel estimates of density.
    • Logistic regression. Iteratively reweighted least squares (IRLS) estimation; see the sketch below.
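
A short Python sketch of logistic regression fit by iteratively reweighted least squares (Newton's method on the log-likelihood), as referenced above; the function name, the small ridge term, and the toy data are illustrative choices, not the course's implementation.

import numpy as np

def fit_logistic_irls(X, y, n_iter=25, tol=1e-8):
    """Fit logistic regression weights by IRLS.
    X: (n, d) design matrix (include a column of ones for an intercept);
    y: (n,) labels in {0, 1}."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))                 # predicted class probabilities
        W = p * (1.0 - p)                                # IRLS weights (Bernoulli variances)
        H = X.T @ (X * W[:, None]) + 1e-8 * np.eye(d)    # weighted normal equations (+ tiny ridge)
        g = X.T @ (y - p)                                # gradient of the log-likelihood
        step = np.linalg.solve(H, g)
        w = w + step
        if np.linalg.norm(step) < tol:
            break
    return w

# Toy usage: 1-D inputs with an intercept column and overlapping classes.
X = np.array([[1.0, -2.0], [1.0, -1.0], [1.0, 0.0], [1.0, 0.5], [1.0, 1.0], [1.0, 2.0]])
y = np.array([0.0, 1.0, 0.0, 0.0, 1.0, 1.0])
w = fit_logistic_irls(X, y)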