Speaker: Kai
Title: Application and Theory of Differentiable Optimization in Machine Learning
Date: 01 Apr 2021 17:00-18:00 EST
Abstract: Differentiating through optimization problems is not a new direction; it is closely tied to the sensitivity analysis of optimal solutions. However, the use of differentiable optimization layers in deep learning is relatively new and has brought important shifts to the area. In particular, differentiable optimization layers open the door to integrating the optimization problems we are solving into the training pipeline of a machine learning model. This allows us to compose machine learning models with an optimization task while keeping the whole training process end-to-end. By integrating the optimization problem into the learning pipeline, we can leverage more problem structure and achieve better overall performance. This talk covers the theoretical foundations of integrating learning and planning problems under different scenarios, including scalability concerns in large-scale optimization problems, Nash equilibria in multi-agent systems as differentiable layers, and sequential decision-making problems such as reinforcement learning. Lastly, an open direction on the impact of differentiable optimization layers on generalization bounds is discussed.
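The core idea of the abstract, treating an optimizer's solution as a differentiable function of its parameters, can be sketched in a few lines. The example below is a hypothetical minimal illustration (not code from the talk): the "layer" solves a parameterized quadratic program x*(theta) = argmin_x 0.5 x^T Q x - theta^T x, and the backward pass uses the implicit function theorem on the optimality condition Q x* - theta = 0, which gives the Jacobian dx*/dtheta = Q^{-1}, so gradients of a downstream loss flow through the solver end-to-end.

```python
import numpy as np

def solve(Q, theta):
    """Forward pass: return x*(theta) = argmin_x 0.5 x^T Q x - theta^T x,
    i.e. the root of the optimality condition Q x - theta = 0."""
    return np.linalg.solve(Q, theta)

def backward(Q, grad_x):
    """Backward pass: map dL/dx* to dL/dtheta.
    Since dx*/dtheta = Q^{-1}, we have dL/dtheta = Q^{-T} dL/dx*."""
    return np.linalg.solve(Q.T, grad_x)

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
Q = A @ A.T + 3.0 * np.eye(3)        # positive definite, so the argmin is unique
theta = rng.standard_normal(3)

x_star = solve(Q, theta)
loss = 0.5 * np.sum(x_star ** 2)     # downstream loss L(x*); dL/dx* = x*
grad_theta = backward(Q, x_star)     # implicit gradient dL/dtheta

# Sanity check: the implicit gradient matches finite differences.
eps = 1e-6
fd = np.zeros(3)
for i in range(3):
    t = theta.copy()
    t[i] += eps
    fd[i] = (0.5 * np.sum(solve(Q, t) ** 2) - loss) / eps
assert np.allclose(grad_theta, fd, atol=1e-4)
```

Libraries such as cvxpylayers automate exactly this pattern for general convex programs, so the solver can be dropped into a deep network like any other layer.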