

HYCON2 Workshop on Distributed Optimization in Large Networks and its Applications

July 16, 2013, ETH Zurich, Switzerland, held jointly with the ECC’13 conference

Context and motivations

The International Workshop on Distributed Optimization in Large Networks and its Applications is partially supported by the FP7 framework project HYCON2.

The objective of this workshop is to provide a self-contained overview of the growing body of literature on distributed optimization from a control perspective.

 

Although several invited sessions and special events have appeared at international conferences, this is one of the first workshops to address the problem of distributed optimization from a control perspective.

 

More specifically, the proliferation of relatively inexpensive devices capable of communicating, computing, sensing, interacting with the environment, and storing information promises an unprecedented number of novel applications based on the cooperation of these devices toward a common goal.

 

These applications include swarm robotics, wireless sensor networks, smart energy grids, smart traffic networks, and smart camera networks.

 

These applications also pose new challenges, among which distributed and asynchronous optimization is one of the major ones. In fact, distributed optimization has been attracting ever-growing attention in recent years, since many problems in large-scale networks can be cast as convex optimization problems.

Intended Audience

The workshop is intended to provide a wide and diverse audience interested in distributed optimization in large-scale networks with an overview of the state of the art from a control point of view.

 

In particular, the first part of the workshop is devoted to tutorial seminars, making it especially suitable for Ph.D. students and young researchers who wish to enter this area of research and are not necessarily experts, since the most relevant mathematical tools and references will be provided.

 

However, it is also relevant for practitioners and researchers in distributed optimization, since the second part of the workshop will present some recent advances in this area and some industrial applications of these tools.

Registration

The registration fee is 50 CHF for all participants.

 

For any questions about registration, please refer to the ECC13 workshop webpage.

 

For questions about the technical program, please contact Prof. Luca Schenato (e-mail: schenato@dei.unipd.it) or Prof. Mikael Johansson (e-mail: mikaelj@kth.se).

Organizers

Luca Schenato

Associate Professor
Department of Information Engineering
University of Padova
Via Gradenigo 6/b, 35131 Padova, Italy
tel. +39 049 827 7925
email: schenato@dei.unipd.it

 

 
Mikael Johansson

Full Professor
Electrical Engineering Department
KTH Royal Institute of Technology
Osquldas vag 10, Stockholm, Sweden
Tel: +46-8-7907436
e-mail: mikaelj@ee.kth.se

Organization and program

The workshop comprises three parts:

  1. Tutorial/survey: in the morning there will be two lectures with the goal of providing a general overview of the most popular strategies adopted for distributed optimization in large-scale networks. In particular, Prof. Angelia Nedić will provide an overview of first-order methods in multi-agent networks and illustrate some of the open problems. Later, Prof. Mikael Johansson will give a lecture on decomposition techniques, which exploit problem structure to divide a large optimization problem into many small ones that are coordinated towards the global optimum.
  2. Advanced topics: in the afternoon, there will be three lectures on advanced topics in distributed optimization. In the first lecture, Prof. Joao Xavier will propose ADMM-based and fast-gradient-based approaches requiring fewer communication steps than currently available distributed algorithms. In the second lecture, Prof. Luca Schenato will propose a consensus-based extension of the celebrated Newton-Raphson algorithm for distributed optimization of smooth convex functions, and he will provide comparisons with the state-of-the-art algorithms in this context. Finally, in the last lecture, Prof. Jason Marden will present a game-theoretic approach to distributed optimization, which allows the design of local control laws for the individual agents to ensure that the emergent global behavior is desirable with respect to a given system-level objective.
  3. Applications: in the last part of the workshop, Prof. Vladimir Havlena will illustrate industrial applications of distributed model predictive control, in particular to the Barcelona water distribution network.

Schedule

9:00-10:30 Angelia Nedić (UIUC) "First-order methods for distributed in-network optimization" (SLIDES)

 

10:30-11:00 Coffee Break

 

11:00-12:30 Mikael Johansson (KTH) "Decomposition techniques for distributed optimization: a tutorial overview" (SLIDES)

 

12:30-14:00 Lunch

 

14:00-14:40 Joao Xavier (Univ. Lisbon) "ADMM and fast gradient methods for distributed optimization" (SLIDES)

 

14:40-15:20 Luca Schenato (Univ. Padova) "Newton-Raphson consensus for distributed convex optimization" (SLIDES)

 

15:20-15:50 Coffee Break

 

15:50-16:30 Ion Necoara (Politehnica Univ. of Bucharest) "Coordinate descent methods for huge-scale problems" (SLIDES)

 

16:30-17:10 Vladimir Havlena (Honeywell) "Industrial applications of large-scale and distributed optimization" (SLIDES)

Extra material (paper pre-prints) (.zip)

Speakers

First-order methods for distributed in-network optimization

Angelia Nedić

University of Illinois at Urbana-Champaign, USA

Abstract:

The advances in wired and wireless technology have necessitated the development of theory, models, and tools to cope with the new challenges posed by large-scale optimization problems over networks. We will discuss the current state of these algorithms, as developed recently for distributed multi-agent network systems, and their performance properties.

The emphasis will be on the interplay between the optimization methods and the network's capability to propagate information. The network effects will be discussed for both synchronous and asynchronous methods, including the network connectivity structure, noisy communication links, and other network properties. Some challenging open problems will also be presented.

Biography:

Angelia Nedić received her B.S. degree from the University of Montenegro (1987) and her M.S. degree from the University of Belgrade (1990), both in Mathematics. She received her Ph.D. degrees from Moscow State University (1994) in Mathematics and Mathematical Physics, and from the Massachusetts Institute of Technology (2002) in Electrical Engineering and Computer Science.

She worked at BAE Systems Advanced Information Technology from 2002 to 2006. She is currently an Associate Professor in the Department of Industrial and Enterprise Systems Engineering at the University of Illinois at Urbana-Champaign, USA, which she joined in fall 2006. She is the recipient of the 2008 NSF CAREER Award in Operations Research for her work in distributed multi-agent optimization.

Her general interest is in optimization, including fundamental theory, models, algorithms, and applications. Her current research focuses on large-scale convex optimization, distributed multi-agent optimization, and stochastic approximations in optimization and variational inequalities, with applications in signal processing, machine learning, and control.
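As a rough, self-contained illustration of the kind of first-order multi-agent methods surveyed in this talk, the sketch below runs distributed gradient descent on a toy problem with quadratic local costs over a ring network. The cost functions, the mixing matrix, and the constant step size are illustrative assumptions, not material from the talk.

import numpy as np

# Minimal sketch: distributed gradient descent for min_x sum_i f_i(x),
# with illustrative quadratic local costs f_i(x) = 0.5*a_i*(x - b_i)^2.
# W is a doubly stochastic mixing matrix matching the communication graph.

np.random.seed(0)
n = 5                                   # number of agents
a = np.random.uniform(1, 2, n)          # local curvatures (assumed data)
b = np.random.uniform(-1, 1, n)         # local minimizers (assumed data)

# Ring graph with uniform weights (doubly stochastic).
W = np.zeros((n, n))
for i in range(n):
    for j in ((i - 1) % n, (i + 1) % n):
        W[i, j] = 1.0 / 3.0
    W[i, i] = 1.0 - W[i].sum()

x = np.zeros(n)                         # each agent's estimate of the optimum
alpha = 0.05                            # constant step size; diminishing steps give exact convergence
for k in range(500):
    grad = a * (x - b)                  # local gradients evaluated at the local estimates
    x = W @ x - alpha * grad            # consensus mixing plus local gradient step

print("agents' estimates:", x)
print("true minimizer   :", np.sum(a * b) / np.sum(a))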

Decomposition techniques for distributed optimization: a tutorial overview

Mikael Johansson

KTH, Sweden

Abstract:

In mathematical programming, decomposition techniques refer to methods that exploit problem structure to divide a large optimization problem into many small ones that are coordinated towards the global optimum. This talk will demonstrate how decomposition techniques can be used to exploit the spatial structure inherent in networked decision-making problems, and to create a foundation for structured analysis and design of associated distributed algorithms for network-wide optimization.

We will review the three basic paradigms: primal, dual, and primal/dual decomposition, and discuss solution methods that are suitable for distributed implementation. Convergence conditions, convergence rate results, and parameters that minimize the convergence factors will be presented. Finally, we demonstrate the power and versatility of the framework by developing novel solutions to classical problems such as consensus, Internet congestion control, and resource allocation. Somewhat surprisingly, in some cases these decomposition-based approaches improve on the best algorithms presented in the literature.

Biography:

Mikael Johansson is a professor in Automatic Control at KTH, specializing in distributed optimization and resource allocation in network systems. He held visiting positions at Stanford University (1999-2002) and UC Berkeley (2010-2011). He has published two books and over one hundred papers, several of which are ISI highly cited. His work has received recognition in terms of best paper awards from conferences such as the ACM Internet Measurement Conference and IEEE ICC. He has served on the editorial board of Automatica and on the program committees of top conferences such as IEEE CDC, IEEE Infocom, and ACM SenSys. He has played a leading role in several national and international research projects in control and communications.
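As a minimal illustration of the dual-decomposition paradigm mentioned in the abstract above, the sketch below applies dual (price) coordination to a toy resource-allocation problem with quadratic local costs. The problem data and the step size are illustrative assumptions, and the scheme is a textbook variant rather than material from the talk.

import numpy as np

# Minimal dual-decomposition sketch (illustrative):
# minimize sum_i 0.5*q_i*x_i^2 - p_i*x_i   subject to  sum_i x_i = c.
# Each agent solves its own subproblem for a given price lam; a coordinator
# (or a consensus scheme) updates the price by dual gradient ascent.

np.random.seed(1)
n = 4
q = np.random.uniform(1, 3, n)          # local quadratic coefficients (assumed data)
p = np.random.uniform(0, 1, n)          # local linear coefficients (assumed data)
c = 2.0                                 # total resource to be allocated

lam = 0.0                               # dual variable (price) for the coupling constraint
step = 0.2
for k in range(200):
    # Local step: x_i = argmin_x 0.5*q_i*x^2 - p_i*x + lam*x (closed form here)
    x = (p - lam) / q
    # Price update: move lam along the constraint violation
    lam += step * (x.sum() - c)

x = (p - lam) / q                       # final local responses to the converged price
print("allocation:", x, " sum:", x.sum())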

ADMM and fast gradient methods for distributed optimization

Joao Xavier

University of Lisbon, Portugal

Abstract:

We present distributed optimization algorithms for minimizing the sum of convex functions, each one being the local cost function of an agent in a connected network. This finds applications in distributed learning, consensus, spectrum sensing for cognitive radio networks, resource allocation, etc.

 

We propose both ADMM-based and fast-gradient-based approaches requiring fewer communication steps than currently available distributed algorithms for the same problem class and solution accuracy.

 

The convergence rate for the various methods is established theoretically and tied to the network structure. Numerical simulations are provided to illustrate the gains that can be achieved across several applications and network topologies.
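For concreteness, here is a minimal sketch of a standard global-consensus ADMM iteration for the problem class above (minimizing a sum of convex local costs), with quadratic local costs so that the local updates have a closed form. This is a textbook variant for illustration only, not the specific algorithms proposed in the talk.

import numpy as np

# Minimal global-consensus ADMM sketch (illustrative):
# minimize sum_i f_i(x) with f_i(x) = 0.5*a_i*(x - b_i)^2.
# Each agent keeps a local copy x_i and a dual variable u_i; z is the consensus
# variable (here a plain average, which in a network would itself be computed
# by a consensus or gossip scheme).

np.random.seed(2)
n = 6
a = np.random.uniform(1, 2, n)          # assumed local curvatures
b = np.random.uniform(-1, 1, n)         # assumed local minimizers
rho = 1.0                               # ADMM penalty parameter

x = np.zeros(n)
u = np.zeros(n)
z = 0.0
for k in range(100):
    # Local x-updates (closed form for quadratic f_i)
    x = (a * b + rho * (z - u)) / (a + rho)
    # Consensus variable: average of x_i + u_i
    z = np.mean(x + u)
    # Dual updates
    u = u + x - z

print("consensus value:", z)
print("true minimizer :", np.sum(a * b) / np.sum(a))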

 

Biography:

Joao Xavier received the PhD degree in 2002 from Instituto Superior Tecnico (IST), Technical University of Lisbon, Portugal.

 

He is currently an assistant professor in the Department of Electrical and Computer Engineering at IST.

 

He is also a researcher at the Institute for Systems and Robotics (ISR), Lisbon. His current research interests lie in the area of statistical signal processing and optimization for distributed systems.

Newton-Raphson consensus for distributed convex optimization

Luca Schenato

University of Padova, Italy

Abstract:

We address the problem of distributed unconstrained minimization of separable smooth convex cost functions, where the global cost is given by the sum of several local and private costs, each associated with a specific agent of a given communication network. We propose a new technique called Newton-Raphson Consensus, which is based on linear consensus algorithms.

Besides having low computational, communication, and memory requirements, the technique has the beneficial properties of requiring very little coordination and of naturally supporting time-varying topologies. We show that under suitable assumptions the proposed algorithm guarantees exponential convergence to the global minimum. Finally, we provide some numerical applications and comparisons with alternative strategies available in the literature.

Biography:

Luca Schenato is currently Associate Professor at the Department of Information Engineering of the University of Padova. He received the Laurea degree in Electrical Engineering from the University of Padova in 1999, the Management of Technology Certificate from the Haas School of Business at UC Berkeley, USA, in 2003, and the Ph.D. from the Electrical Engineering and Computer Science Department of UC Berkeley in 2003.

In 1997 he was a visiting student at the Department of Computing Science of the University of Aberdeen, UK, and in 2004 he held a post-doctoral position at the EECS Department of UC Berkeley. From 2004 to 2007 he was the recipient of the "Returning Brains (Rientro dei Cervelli)" professorship sponsored by the Italian Ministry of Education, and he served as Adjunct Professor at the Department of Information Engineering of the University of Padova. He won the Eli Jury Award from the EECS Department of UC Berkeley in 2006 "for outstanding achievement in the area of Systems, Communications, Control, or Signal Processing".

His expertise includes distributed control, estimation and optimization; networked control systems; wireless sensor networks; consensus algorithms; and biomimetic locomotion. He also serves as Associate Editor for the IEEE Transactions on Automatic Control.
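As a rough, scalar illustration of the consensus-based Newton-Raphson idea described in the abstract above, the sketch below lets each agent track, via consensus, the network averages of f_i''(x_i)*x_i - f_i'(x_i) and f_i''(x_i), and then move toward their ratio. The quadratic local costs, the ring topology, the initialization, and the gain are illustrative simplifications, not the exact algorithm or assumptions from the talk.

import numpy as np

# Simplified scalar sketch of a consensus-based Newton-Raphson iteration.
# Illustrative local costs: f_i(x) = 0.5*a_i*(x - b_i)^2.

np.random.seed(3)
n = 5
a = np.random.uniform(1, 2, n)          # f_i''(x) = a_i
b = np.random.uniform(-1, 1, n)

def grad(x):  return a * (x - b)        # local first derivatives
def hess(x):  return a                  # local second derivatives (constant here)

# Doubly stochastic mixing matrix for a ring graph.
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0
    W[i, i] = 1.0 / 3.0

x = np.zeros(n)
g_old = hess(x) * x - grad(x)           # g_i = f_i''(x_i)*x_i - f_i'(x_i)
h_old = hess(x)                         # h_i = f_i''(x_i)
y, z = g_old.copy(), h_old.copy()       # consensus variables tracking the averages of g_i and h_i
eps = 0.2                               # small gain for the local Newton-Raphson step

for k in range(300):
    x = (1 - eps) * x + eps * (y / z)   # move toward the network-wide Newton-Raphson estimate
    g_new = hess(x) * x - grad(x)
    h_new = hess(x)
    y = W @ (y + g_new - g_old)         # dynamic average consensus on the numerator
    z = W @ (z + h_new - h_old)         # dynamic average consensus on the denominator
    g_old, h_old = g_new, h_new

print("agents' estimates:", x)
print("true minimizer   :", np.sum(a * b) / np.sum(a))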

Coordinate descent methods for huge-scale problems

Ion Necoara

Politehnica University of Bucharest, Romania

Abstract:

In this talk we present randomized block coordinate descent methods for minimizing convex optimization problems with linearly coupled constraints over networks, and show that they obtain in expectation an ϵ-accurate solution in at most O(N/ϵ) iterations, where N is the number of nodes in the network. However, the computational complexity per iteration of these methods is much lower than that of a method based on full gradient information, and each iteration can be computed in a distributed way.

We focus on how to choose the probabilities that make these randomized algorithms converge as fast as possible, which leads to solving sparse SDPs. An analysis of the rate of convergence in probability is also provided. For strongly convex functions we show that these distributed algorithms converge linearly. We also discuss the extension of these algorithms to composite convex optimization and show that similar results hold. Finally, we provide some numerical applications and comparisons with alternative strategies from the literature.

Biography:

Ion Necoara received the B.Sc. degree in mathematics and the M.Sc. degree in optimization and statistics from the University of Bucharest, in 2000 and 2002, respectively. After graduating he was a Ph.D. student at the Delft Center for Systems and Control, Delft University of Technology, The Netherlands. From August to November 2005 he was an invited researcher at the University of Cambridge, UK, and from 2006 to 2009 he held a postdoctoral fellowship at the Katholieke Universiteit Leuven, Belgium.

Since 2009 he has been a staff member of the Faculty of Automatic Control and Computers, University Politehnica of Bucharest, where he is now an Associate Professor of Numerical Methods and Optimization. His main research interests cover the development of new optimization algorithms, with a focus on structure exploitation, and the application of optimization techniques to the design of new advanced controllers for complex systems.
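To make the problem class above concrete, the sketch below runs a randomized pairwise coordinate-descent method on a toy separable problem with a single linear coupling constraint. The quadratic costs, the uniform pair selection, and the step size 1/(L_i + L_j) are illustrative choices; they do not reproduce the specific probability design or complexity analysis from the talk.

import numpy as np

# Minimal sketch of randomized pairwise coordinate descent for
#   minimize sum_i 0.5*q_i*(x_i - b_i)^2   subject to  sum_i x_i = c.
# Each step picks a random pair (i, j) and moves mass between them,
# so the coupling constraint stays satisfied at every iteration.

rng = np.random.default_rng(4)
N = 8
q = rng.uniform(1, 3, N)                # local curvatures (= coordinate-wise Lipschitz constants)
b = rng.uniform(-1, 1, N)               # local targets (assumed data)
c = 1.0

x = np.full(N, c / N)                   # feasible starting point: sum(x) == c
for k in range(5000):
    i, j = rng.choice(N, size=2, replace=False)
    gi, gj = q[i] * (x[i] - b[i]), q[j] * (x[j] - b[j])
    t = (gi - gj) / (q[i] + q[j])       # exact minimizer along the direction -e_i + e_j
    x[i] -= t
    x[j] += t

lam = (b.sum() - c) / np.sum(1.0 / q)   # multiplier of the coupling constraint at the optimum
x_star = b - lam / q
print("objective:", 0.5 * np.sum(q * (x - b) ** 2),
      " optimal:", 0.5 * np.sum(q * (x_star - b) ** 2),
      " sum(x):", x.sum())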

Industrial applications of large-scale and distributed optimization

Vladimir Havlena

Honeywell Prague Laboratory, Czech Republic

Abstract:

Model-predictive control (MPC) is an important paradigm in modern process control systems.

 

Plant-wide coordination can easily involve problems with thousands of inputs and outputs, and the optimization problems that the MPC algorithm needs to solve at each sampling instant become truly large-scale. In this talk, we will describe our experience with algorithms for large-scale and distributed optimization in industrial MPC applications.

 

We will describe novel coordination techniques for distributed optimization developed at the Honeywell Prague Laboratory, and show how they can be applied to large-scale control problems such as control of the transport layer of the Barcelona water distribution network, leading to more than a 100-fold computational speed-up, with a significant reduction in local problem recalculations (to about 25%) and in the number of iterations (to about 10%).

 

Biography:

Vladimir Havlena is a Senior Fellow at the Honeywell ACS Advanced Technology Laboratory Prague (HPL) and a full Professor at the Czech Technical University in Prague.

 

At HPL, Dr. Havlena is the technical lead of the Process Control and Optimization group. His team has developed, for example, a novel advanced process control application for power generation plants and a predictive control solution for supercharged diesel engines.

 

This work included theoretical contributions extending MPC functionality, the development of software prototypes and engineering tools, and pilot applications. He holds a number of patents and several awards for innovation in the process control area.