Optimal Transport and Machine Learning
Optimal transport (OT) provides a powerful and flexible way to compare probability measures of all shapes: absolutely continuous, degenerate, or discrete. This of course includes point clouds, histograms of features and, more generally, datasets, parametric densities, and generative models.
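As a toy illustration of comparing discrete measures (not part of the talk itself): for two equal-size point clouds on the real line with uniform weights, the Wasserstein-1 distance reduces to a closed form obtained by sorting both samples and averaging the pairwise absolute differences.

```python
import numpy as np

def wasserstein_1d(x, y):
    """Wasserstein-1 distance between two uniform empirical measures on R.

    For equal sample sizes, the optimal transport plan matches sorted
    samples in order, so the distance is the mean absolute difference
    of the sorted sequences.
    """
    x = np.sort(np.asarray(x, dtype=float))
    y = np.sort(np.asarray(y, dtype=float))
    assert len(x) == len(y), "equal sample sizes assumed for simplicity"
    return np.mean(np.abs(x - y))

# Two discrete measures, each supported on three points with weight 1/3.
print(wasserstein_1d([0.0, 1.0, 2.0], [1.0, 2.0, 3.0]))  # → 1.0
```

In higher dimensions no such sorting formula exists, which is where the computational machinery mentioned below comes in.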
OT theory has its roots in the seminal work of Gaspard Monge ("Mémoire sur la théorie des déblais et des remblais", 1781), who formulated the problem of déblais and remblais (excavation and embankment) as finding a measure-preserving map that minimizes the total transportation cost.
Two centuries later, the theory led to the Nobel Prize in Economics shared by Kantorovich and Koopmans in 1975, as well as Figalli's Fields Medal in 2018.
Recent advances in computational aspects have introduced OT to the machine learning community. Nowadays its applications cover numerous challenging scenarios, such as supervised learning (e.g. distance- or kernel-based classifiers, metric learning with the OT geometry, domain adaptation), unsupervised learning (e.g. generalizations of k-means-type clustering, Wasserstein barycenters), estimation of generative models (e.g. Wasserstein GANs), computer vision, graphics, and many others. In this talk we introduce the basic concepts of OT theory and discuss some OT-related methods used in ML.