Syllabus (tentative)
- Introduction: What is machine learning? Supervised, unsupervised, and reinforcement learning
- Types of predictors, bias-variance tradeoff, overfitting, curse of dimensionality
- Families of predictors for regression and classification: linear and logistic regression, nearest neighbor classifiers, regression and decision trees, neural networks, boosted predictors
- Training predictors: gradient descent, stochastic gradient descent, and smoothness
- Regularization: Lasso and sparsity, Support Vector Machines and the kernel trick, overparametrization and double descent
- Neural networks: two-layer, multi-layer, convolutional
- Deep neural networks, autoencoders, generative models, flow models, and theoretical views: the neural tangent kernel, neural networks as Gaussian processes
- Attention, transformers, large language models
- Elements of unsupervised learning: clustering, mixture distributions, the K-means and EM algorithms, density estimation, Principal Component Analysis
- Combining predictors
- Adversarial learning
- Privacy, fairness, explainability and interpretability
Topics in [] are optional, time permitting;
new topics are highlighted in blue.