Selected Talks

Versions of these talks or slides have been given at a number of places, both in academia and in various industries. The slides were designed to be presented orally, so they may be of limited use on their own. I may add some comments on their contents at some point.

Optimization and Probability in Machine Learning · PDF

An experimental talk, in some sense about variational inference in probabilistic models.

Machine Learning Basics · PDF

A brief introduction to machine learning with no math, targeted at senior non-quantitative financial managers and practitioners.

Structure and Regularization · PDF

This tutorial surveyed several topics related to using the \(\ell_1\) norm to obtain a sparse or structured solution to a given problem. The first topic is the \(\ell_1\) norm and \(\ell_1\) regularization itself: several interpretations, some basic theoretical results, and some practical heuristics that are often useful. The second topic is algorithms for solving problems involving \(\ell_1\) norms and related functions. These are called proximal methods and rely on the concept of the proximal operator of a convex function; they generalize familiar algorithms, with the projected gradient method as a special case. The final topic is examples and variations on the basic theme of promoting sparsity via the \(\ell_1\) norm. The examples include basis pursuit, sparse coding, the elastic net, the group lasso, the structured or overlapping group lasso, multi-task learning, the fused lasso, the graphical lasso, low-rank matrix completion, matrix decomposition, and robust and sparse PCA, among others. The point was that many examples can be covered because they simply reuse the same building blocks in various ways.
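As a minimal illustration of these building blocks (my own sketch, not taken from the slides): the proximal operator of \(\lambda \|x\|_1\) has a closed form, elementwise soft thresholding, and plugging it into a gradient step yields the proximal gradient method (ISTA) for the lasso, \(\text{minimize } \tfrac{1}{2}\|Ax-b\|_2^2 + \lambda \|x\|_1\). The function names below are hypothetical.

```python
def soft_threshold(v, lam):
    """Proximal operator of lam*||.||_1, applied elementwise:
    shrink each entry of v toward zero by lam."""
    return [max(abs(vi) - lam, 0.0) * (1.0 if vi >= 0 else -1.0) for vi in v]

def matvec(A, x):
    # Plain-Python matrix-vector product, to keep the sketch dependency-free.
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def ista(A, b, lam, step, iters=100):
    """Proximal gradient method: a gradient step on the smooth
    least-squares term, followed by the prox of the l1 term."""
    x = [0.0] * len(A[0])
    At = transpose(A)
    for _ in range(iters):
        r = [ri - bi for ri, bi in zip(matvec(A, x), b)]  # residual Ax - b
        g = matvec(At, r)                                 # gradient A^T (Ax - b)
        x = soft_threshold([xi - step * gi for xi, gi in zip(x, g)],
                           step * lam)
    return x

# With A = I the lasso solution is exactly soft_threshold(b, lam),
# so the iteration should recover [2.0, 0.0] here.
x = ista([[1.0, 0.0], [0.0, 1.0]], [3.0, 0.5], lam=1.0, step=1.0)
print(x)  # converges to [2.0, 0.0]
```

If the prox were replaced by projection onto a convex set, the same loop would be the projected gradient method, which is the sense in which proximal methods generalize it.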

A Tour of Proximal Algorithms · PDF

A tutorial on proximal algorithms; essentially a subset of the material from “Proximal Algorithms” in slide form.


ORIE 5260: Machine Learning for Finance

ORIE 5260 was a masters/intro Ph.D.-level introduction to machine learning taught at Cornell Tech in 2018. The course was officially part of the masters program in computational finance and financial data science offered by the Department of Operations Research and Industrial Engineering (ORIE). It is a good introduction to machine learning at the graduate level, with a focus on mathematical foundations, classical machine learning, and modern statistics (no deep learning). There are no implementation or programming exercises because the students were concurrently enrolled in a separate projects course. Much of the structure and several of the exercises are inspired by or borrowed from Andrew Ng’s machine learning course and Stephen Boyd’s convex optimization course, though some of those materials are also rewritten and adapted.

CS 228T: Advanced Topics in Probabilistic Graphical Models

CS 228T was an advanced Ph.D.-level course taught at Stanford under Daphne Koller in 2011. The course had been restructured the previous year, and this version included many revised and new course materials. The course, a second course in graphical models, covered advanced Markov chain Monte Carlo methods, variational inference, structured prediction models, nonparametric Bayes, and other topics. The course is no longer offered, but the materials may still be of interest.

EE 364A: Convex Optimization

EE 364A is the classic Stanford convex optimization course created by Stephen Boyd and Lieven Vandenberghe, based on their now-standard textbook. I taught this course one summer and guest-lectured for a couple of weeks in the regular version.

EE 364B: Advanced Topics in Convex Optimization

EE 364B is the follow-up to EE 364A. I was one of the senior teaching assistants for this course at least once, and helped write the material on proximal operators, monotone operators and splitting methods, and ADMM.