Dimitri P. Bertsekas - Optimization Society Prize

https://zhuanlan.zhihu.com/p/47453144

Deep Learning Optimizers

Recent Optimization Algorithms for Deep Learning

Adaptive Learning of the Optimal Mini-Batch Size of SGD, arXiv 2020

Adaptively grows the mini-batch size during training instead of fixing it in advance.
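As a rough illustration of the idea (not the paper's exact rule), one common adaptive-batch heuristic is the "norm test": double the batch whenever the variance of the mini-batch gradient estimate dominates its squared norm. A minimal numpy sketch on a toy least-squares problem, with all names and constants chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares problem: minimize the mean of 0.5*(a_i @ w - b_i)^2.
n, d = 2000, 5
A = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
b = A @ w_true + 0.1 * rng.normal(size=n)

def per_sample_grads(w, idx):
    """Per-sample gradients for the mini-batch given by index array idx."""
    r = A[idx] @ w - b[idx]          # residuals, shape (|idx|,)
    return r[:, None] * A[idx]       # shape (|idx|, d)

w = np.zeros(d)
batch, lr = 8, 0.05
for step in range(300):
    idx = rng.choice(n, size=batch, replace=False)
    g = per_sample_grads(w, idx)
    g_mean = g.mean(axis=0)
    # Norm-test heuristic: if the variance of the batch-mean gradient
    # dominates its squared norm, the estimate is too noisy -- grow the batch.
    var_of_mean = g.var(axis=0).sum() / batch
    if var_of_mean > np.dot(g_mean, g_mean) and batch < 256:
        batch *= 2
    w -= lr * g_mean

loss = 0.5 * np.mean((A @ w - b) ** 2)
```

The batch stays small while gradients are large and informative, and grows as the iterate approaches the optimum and the signal-to-noise ratio of the mini-batch gradient drops.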

ADMM

Differentiable Linearized ADMM, ICML19
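For background on what gets unrolled in work like this, here is plain (not linearized) ADMM for the lasso in numpy; the splitting, step size rho, and problem sizes are illustrative choices, not the paper's setup:

```python
import numpy as np

# ADMM for the lasso:
#   minimize 0.5*||A x - b||^2 + lam*||z||_1   subject to  x = z
rng = np.random.default_rng(1)
m, n = 50, 20
A = rng.normal(size=(m, n))
x_true = np.zeros(n)
x_true[:3] = [1.0, -2.0, 0.5]            # sparse ground truth
b = A @ x_true + 0.01 * rng.normal(size=m)

lam, rho = 0.1, 1.0
x = z = u = np.zeros(n)
AtA, Atb = A.T @ A, A.T @ b
L = np.linalg.cholesky(AtA + rho * np.eye(n))  # factor once, reuse every iteration

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

for _ in range(200):
    # x-update: ridge-like solve (AtA + rho I) x = Atb + rho (z - u)
    x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
    # z-update: soft-thresholding
    z = soft_threshold(x + u, lam / rho)
    # scaled dual update
    u = u + x - z
```

Each iteration is a linear solve, an elementwise shrinkage, and a dual step; linearized/unrolled variants turn a fixed number of these iterations into network layers with learnable parameters.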

Dive into Neural Networks

Gradient Descent Finds Global Minima of Deep Neural Networks, ICML19

Deep Declarative Network

https://anucvml.github.io/ddn-cvprw2020/#invited-speakers

OptNet: Differentiable Optimization as a Layer in Neural Networks, ICML17

GitHub

Meta-learning with differentiable closed-form solvers, ICLR19

Meta-Learning with Differentiable Convex Optimization, CVPR19
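The common ingredient in these two papers is a base learner that is solved exactly inside the meta-learning loop, so it can be differentiated through. A numpy sketch of the simplest such base learner, ridge regression in closed form (shapes and names are illustrative, not either paper's exact setup):

```python
import numpy as np

# Few-shot base learner in closed form (ridge regression):
#   W = (X^T X + lam I)^{-1} X^T Y
rng = np.random.default_rng(3)
k, d, cls = 10, 16, 5            # support examples, embedding dim, classes
X = rng.normal(size=(k, d))                       # support embeddings
Y = np.eye(cls)[rng.integers(0, cls, size=k)]     # one-hot support labels
lam = 1.0

# Direct form: solve a d x d system.
W_direct = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

# Woodbury form: when k < d (the few-shot regime), solve a k x k
# system instead -- (X^T X + lam I)^{-1} X^T == X^T (X X^T + lam I)^{-1}.
W_woodbury = X.T @ np.linalg.solve(X @ X.T + lam * np.eye(k), Y)

logits = X @ W_direct            # predictions on the support set
```

Because the solution is an explicit function of the embeddings X, gradients flow through the solve (or its Woodbury form) back into the feature extractor; the Woodbury identity is what makes this cheap when the support set is small.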

Learning End-to-end Video Classification with Rank-Pooling, ICML16

MISC

How Many Samples is a Good Initial Point Worth?

Optimizing the threshold over regions of the landscape, the paper shows that, for initial points not too close to the ground truth, a linear improvement in the quality of the initial guess yields a constant-factor improvement in sample complexity.