The purpose of this course is to introduce students to techniques for distributed optimization in networks, including the case where agents have different and conflicting utilities.
Teacher: Giovanni Neglia
Frank Kelly and Elena Yudovina, Stochastic Networks, freely available here,
Srinivas Shakkottai and R. Srikant, Network Optimization and Control, available here,
David Easley and Jon Kleinberg, Networks, Crowds, and Markets, a preprint is available here,
J. R. Marden and J.S. Shamma, Game Theory and Distributed Control, Handbook of Game Theory Vol. 4, edited by Peyton Young and Shmuel Zamir, Elsevier Science, 2013, a preprint is available here,
Evaluation: 20% classwork (a 10-minute test at every lesson; only the 5 best marks will be considered), 20% homework due at week 4, 60% final exam. Marks are here (please send me an email with the pseudonym you want to be referred to by).
You can freely use the slides below for your presentations, but please inform me and acknowledge the source in your presentation. Any comment is welcome.
First lesson (December 20, 2017): introduction to the course [pptx], an electrical example (Kelly&Yudovina, section 4.1). Scribe notes from previous year [pdf]. Crash course on Lagrange multipliers (A gentle introduction to the interpretation of Lagrange multipliers).
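To complement the crash course on Lagrange multipliers, here is a toy sketch (not taken from the lecture notes; the objective and constraint are made up) showing the standard interpretation of the multiplier as the sensitivity of the optimal value to the constraint level: minimize x² + y² subject to x + y = c.

```python
# Toy Lagrange multiplier example (assumed, for illustration only):
# minimize f(x, y) = x^2 + y^2 subject to x + y = c.
# Stationarity of the Lagrangian L = x^2 + y^2 - lam * (x + y - c)
# gives x = y = lam / 2, and the constraint then fixes lam = c.

def solve(c):
    lam = c               # multiplier from the stationarity conditions
    x = y = lam / 2.0     # optimal point
    return x, y, lam

def optimal_value(c):
    x, y, _ = solve(c)
    return x**2 + y**2

# The multiplier equals the sensitivity d(optimal value)/dc:
c, eps = 1.0, 1e-6
sensitivity = (optimal_value(c + eps) - optimal_value(c)) / eps
x, y, lam = solve(c)
print(x, y, lam, round(sensitivity, 4))  # 0.5 0.5 1.0 1.0
```

The finite-difference check confirms that perturbing the constraint by eps changes the optimal value by roughly lam * eps, which is the "shadow price" interpretation discussed in the crash course.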
Second lesson (January 10, 2018): crash course on convex optimization. Road traffic models (Kelly&Yudovina, section 4.2). Scribe notes from previous year [pdf1], [pdf2].
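As a warm-up for the road traffic models, the following Pigou-style two-link sketch (in the spirit of Kelly&Yudovina sec. 4.2, but the concrete delay functions are made up) contrasts the Wardrop equilibrium with the social optimum: one unit of traffic splits between a constant-delay road, d1(x) = 1, and a congestible road, d2(x) = x.

```python
# Pigou-style two-link traffic example (illustrative; not from the book).
# One unit of traffic splits between road 1 with delay d1(x) = 1 and
# road 2 with delay d2(x) = x.  At a Wardrop equilibrium every used road
# has minimal delay; the social optimum minimizes total delay.

def total_delay(x2):
    x1 = 1.0 - x2
    return x1 * 1.0 + x2 * x2   # x1*d1(x1) + x2*d2(x2)

# Wardrop equilibrium: d2(x2) = x2 <= 1 = d1, so all traffic takes road 2.
eq_x2 = 1.0

# Social optimum by a tiny grid search (total_delay is convex in x2).
opt_x2 = min((i / 1000.0 for i in range(1001)), key=total_delay)

print(total_delay(eq_x2))   # 1.0  (equilibrium cost)
print(total_delay(opt_x2))  # 0.75 (optimal cost, at x2 = 0.5)
```

The gap between the two costs (ratio 4/3 here) is the kind of inefficiency of selfish routing that the traffic models quantify.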
Third lesson (January 17, 2018): Network Utility Maximization (NUM), dynamics of the distributed approach to capacity sharing (Kelly&Yudovina, section 7.3), what is TCP doing (Kelly&Yudovina, sections 7.4 and 7.5). Scribe notes from previous year [pdf].
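The distributed capacity-sharing dynamics can be sketched as a dual-decomposition loop in the spirit of Kelly&Yudovina sec. 7.3 (the weights, capacity, and step size below are made up): users r maximize w_r·log(x_r) over a single link of capacity C; given a link price p each user selfishly picks x_r = w_r / p, and the link adjusts p by gradient ascent on the dual.

```python
# Minimal dual-decomposition sketch for single-link NUM (assumed values).
# Fixed point: p = sum(w)/C and x_r = w_r * C / sum(w) (proportional fairness).

w = [1.0, 2.0, 3.0]   # users' utility weights (assumed)
C = 6.0               # link capacity (assumed)
p, step = 2.0, 0.05   # initial price and step size (assumed)

for _ in range(2000):
    rates = [wr / p for wr in w]                 # each user's best response
    p = max(1e-6, p + step * (sum(rates) - C))   # price reacts to congestion

# Here sum(w)/C = 1, so p -> 1 and x_r -> w_r.
print([round(x, 3) for x in rates], round(p, 3))
```

Note that neither the users nor the link need global information: users see only the price, and the link sees only its total load, which is exactly the decentralization the lesson emphasizes.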
Fourth lesson (January 24, 2018): convergence of rate control dynamics for NUM, distributed optimization for machine learning (ML). For the first part you can refer to the scribe notes from previous year [pdf], up to section 3. For the second part: this paper by Léon Bottou gives a light introduction to gradient methods and explains the success of stochastic gradient for large-scale ML; a formal description of the methods and their properties is in chapter 1 of Bertsekas' book Nonlinear Programming; this set of slides from Smola's course on Scalable ML describes the distributed implementation of gradient methods; a discussion of distributed ML requirements and the limits of current dataflow systems (Spark, Hadoop, etc.) is in a survey by Xing et al.
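The stochastic gradient idea discussed by Bottou can be sketched in a few lines (the data and step size below are made up): fit y = a·x by minimizing the average of (a·x_i − y_i)², sampling one example per step instead of computing the full gradient.

```python
# Bare-bones stochastic gradient descent sketch (illustrative, assumed data).
# The data are noiseless (y = 2x exactly), so a small constant step size
# suffices here; with noisy gradients the theory uses diminishing steps.

import random

random.seed(0)
data = [(x, 2.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]  # true slope a* = 2

a = 0.0
for _ in range(5000):
    x, y = random.choice(data)        # one sample -> cheap, noisy gradient
    grad = 2.0 * (a * x - y) * x      # gradient of the sampled squared loss
    a -= 0.1 * grad                   # constant step size (assumed)

print(round(a, 2))  # 2.0
```

Each step costs O(1) regardless of the dataset size, which is the key reason stochastic gradient scales to large ML problems.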
Fifth lesson (January 31, 2018): how to optimize in the presence of selfish rational agents: notions on games (Easley&Kleinberg, sections 6.1-6.4) and single-item auctions (Easley&Kleinberg, sections 9.1-9.5, 9.7.A and 9.7.B), matching markets (Easley&Kleinberg, sections 15.1, 15.2). Slides: introduction on game theory, auctions part 1. Scribe notes from previous year [pdf].
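The key property of the single-item second-price (Vickrey) auction of Easley&Kleinberg ch. 9 can be checked with a small sketch (the numeric bids and values below are made up): the highest bidder wins and pays the second-highest bid, which makes truthful bidding a dominant strategy.

```python
# Second-price (Vickrey) auction sketch (illustrative; values are assumed).

def second_price(bids):
    """bids: dict bidder -> bid. Returns (winner, price paid)."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    return ranked[0], bids[ranked[1]]   # winner pays the second-highest bid

def utility(value, my_bid, others):
    bids = dict(others)
    bids["me"] = my_bid
    winner, price = second_price(bids)
    return value - price if winner == "me" else 0.0

others = {"a": 4.0, "b": 7.0}
value = 10.0
truthful = utility(value, value, others)   # bid my true value
print(truthful)  # 3.0: win and pay the second-highest bid, 7.0

# Deviating never helps: shading the bid can only lose the item, and
# overbidding can only win it at a price above its value.
assert all(utility(value, b, others) <= truthful for b in [5.0, 8.0, 12.0])
```

Since the price paid does not depend on the winner's own bid, reporting the true value is optimal whatever the others do, which is the truthfulness argument from the lesson.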
Sixth lesson (February 7, 2018): Google ads and VCG mechanism (Easley&Kleinberg, sections 15.3-15.8). Generalities on the use of game theory for engineering (Marden&Shamma, sec. 1), how to solve optimization problems through potential games, best response dynamics and potential games (Easley&Kleinberg, section 8.3 A). Slides [pdf].
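Best-response dynamics in potential games (Easley&Kleinberg, sec. 8.3.A) can be sketched with a tiny congestion game (the game itself is made up for illustration): two players each pick one of two machines, and a player's cost is the load of its machine. Rosenthal's potential decreases at every improving move, so best response must stop at a Nash equilibrium.

```python
# Best-response dynamics in a 2-player, 2-machine congestion game
# (a standard potential game; this instance is assumed for illustration).

def cost(player, profile):
    # cost = number of players on the machine chosen by `player`
    return sum(1 for p in profile if p == profile[player])

profile = [0, 0]           # both players start on machine 0
moved = True
while moved:               # iterate until no player can strictly improve
    moved = False
    for i in range(len(profile)):
        for m in (0, 1):   # try every machine, keep a strictly better one
            trial = profile[:]
            trial[i] = m
            if cost(i, trial) < cost(i, profile):
                profile = trial
                moved = True

print(profile)  # [1, 0]: the players end up on different machines
```

The loop is guaranteed to terminate precisely because each accepted move strictly decreases the potential, which can happen only finitely many times.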
Seventh lesson (February 14, 2018): the marginal contribution protocol, price of anarchy for cost/welfare sharing games, some learning algorithms: fictitious play, simple experimentation dynamics, log-linear learning (Marden&Shamma, sections 2.1-2.3 and 3.1-3.2).
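Log-linear learning (Marden&Shamma, sec. 3) can be sketched on a 2x2 common-interest coordination game (the payoffs and temperature below are made up): at each step one random player revises, playing action a with probability proportional to exp(payoff(a)/T). For small T, play concentrates on the potential maximizer, here the efficient profile (0, 0).

```python
# Log-linear learning sketch on an assumed 2x2 coordination game.
# Both players get the same payoff, so the game is an exact potential
# game with potential equal to the common payoff.

import math, random

random.seed(1)
payoff = {(0, 0): 2.0, (1, 1): 1.0, (0, 1): 0.0, (1, 0): 0.0}
T = 0.5                      # moderate temperature (assumed)

profile = [1, 1]             # start in the inefficient equilibrium
hits, steps = 0, 20000
for _ in range(steps):
    i = random.randrange(2)              # pick a revising player
    other = profile[1 - i]
    weights = []
    for a in (0, 1):                     # weight each action by exp(payoff/T)
        pair = (a, other) if i == 0 else (other, a)
        weights.append(math.exp(payoff[pair] / T))
    profile[i] = random.choices((0, 1), weights=weights)[0]
    if tuple(profile) == (0, 0):
        hits += 1

print(round(hits / steps, 2))  # fraction of time at the efficient profile
```

Unlike pure best response, which would stay trapped at the inefficient equilibrium (1, 1), the occasional "mistakes" let the dynamics escape it, and the stationary distribution places most mass on the potential maximizer.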
First (and only) assignment, deadline January 31, 2018 at 9am, [pdf]. A paper version should be handed to the professor before the lesson; if this is not possible, an electronic version should be sent by email by the same deadline.
This document is an example of a possible exam, both in its organization and in its specific questions, which are taken from previous years' exams. Students are encouraged to solve these exercises.