This is a short course on distributed convex optimization over networks, with specific attention to asynchronous implementations that are robust to lossy communication. The course is self-contained: no prior knowledge of distributed optimization is expected, and it aims to expose participants to the most popular ideas and algorithms proposed in this area in recent decades.

The course will start with real-world applications that motivate the need for distributed optimization. It will then introduce the consensus algorithm, its properties, and its application to distributed optimization. Next, it will propose an alternative approach to consensus based on non-expansive operator theory, which is particularly effective for asynchronous implementations with unreliable communication. The course will also present perspectives and an outlook on recent advances in Federated Learning and time-varying distributed optimization. Finally, the course will include a hands-on Matlab laboratory in which some of the presented algorithms are implemented and compared (Python implementations are also available upon request).

The class is supported by slides and detailed PDF notes. Only basic knowledge of linear algebra, discrete-time linear dynamical systems, and convex optimization (Lagrangians, primal-dual problems, optimality gap) is required.
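As a small taste of the consensus idea covered in the course, the following Python sketch runs the standard average-consensus iteration x(k+1) = W x(k) on a ring of four agents. The specific topology and weight matrix are illustrative choices, not course material; any doubly stochastic W with a connected graph drives all agents to the average of their initial values.

```python
import numpy as np

# Illustrative example: average consensus on a 4-agent ring.
# W is doubly stochastic, so the iteration x <- W x converges
# to the vector whose entries all equal mean(x0).
W = np.array([
    [0.5,  0.25, 0.0,  0.25],
    [0.25, 0.5,  0.25, 0.0 ],
    [0.0,  0.25, 0.5,  0.25],
    [0.25, 0.0,  0.25, 0.5 ],
])

x = np.array([1.0, 3.0, 5.0, 7.0])  # initial local values held by the agents
target = x.mean()                   # the consensus value (here 4.0)

for _ in range(100):
    x = W @ x  # each agent averages its value with its neighbours' values

print(np.allclose(x, target))  # True: all agents agree on the average
```

The convergence rate is governed by the second-largest eigenvalue modulus of W (here 0.5), so a hundred iterations are far more than enough for numerical agreement.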
The short course will take place in building ETZ, room E81, within walking distance of the main conference venue (building HG).
For more information, see https://necsys22.control.ee.ethz.ch/short-course