There are 13 tutorials, presented in 2 parallel tracks. Tutorials start at 9 a.m. and end at 5:15 p.m. on both Monday and Tuesday.
Monday, October 3
Tuesday, October 4
Abstract: How do properties of wireless multi-hop networks scale with their size? With the advent of ad hoc and sensor networks, this question has given birth to an intense research activity in networking. Traditional methods from stochastic performance analysis are not adequate to solve this problem: new tools are needed. In this tutorial, we review some of the recent results on connectivity, transport capacity and throughput scaling in large-scale wireless networks. A particular emphasis is put on techniques from percolation theory, which played a key role in solving some of these questions.
Part 1: An introduction to bond percolation
Part 2: Application to scaling laws of wireless multi-hop networks
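As a concrete illustration of Part 1 (a sketch of our own, not taken from the tutorial material), bond percolation on an n-by-n square lattice can be simulated with a union-find structure; the function below, with all names and parameter choices ours, tests whether a cluster of open bonds spans the lattice from left to right:

```python
import random

class DSU:
    """Union-find with path halving."""
    def __init__(self, n):
        self.p = list(range(n))
    def find(self, x):
        while self.p[x] != x:
            self.p[x] = self.p[self.p[x]]
            x = self.p[x]
        return x
    def union(self, a, b):
        self.p[self.find(a)] = self.find(b)

def spans(n, p, seed=0):
    """True if an open-bond cluster connects the left and right edges
    of an n-by-n square lattice with bond-open probability p."""
    rng = random.Random(seed)
    left, right = n * n, n * n + 1        # two virtual edge nodes
    dsu = DSU(n * n + 2)
    for r in range(n):
        dsu.union(r * n, left)            # first column touches the left edge
        dsu.union(r * n + n - 1, right)   # last column touches the right edge
        for c in range(n):
            v = r * n + c
            if c + 1 < n and rng.random() < p:   # horizontal bond
                dsu.union(v, v + 1)
            if r + 1 < n and rng.random() < p:   # vertical bond
                dsu.union(v, v + n)
    return dsu.find(left) == dsu.find(right)
```

On the square lattice the bond percolation threshold is p_c = 1/2: for p well above 1/2 a spanning cluster appears with high probability, while well below 1/2 it is vanishingly rare. This sharp dichotomy is what the scaling-law applications of Part 2 exploit.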
Speaker's biography: Patrick Thiran received the electrical engineering degree from the Université Catholique de Louvain, Louvain-la-Neuve, Belgium, the M.S. degree in electrical engineering from the University of California at Berkeley, and the Ph.D. degree from EPFL, Lausanne, Switzerland. He became a Professor at EPFL in 1998, and was on leave at Sprintlabs, Burlingame, CA, in 2000-2001. His research interests are in communication networks, performance analysis, dynamical systems, and stochastic models.
Abstract: We will lecture on some recent work applying stochastic-geometry models to communication networks. The outline of the talk is:
Abstract: The simulation of mobility models such as the random waypoint often causes subtle problems, for example:
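One classic example of such a problem (our own sketch, not necessarily among the cases covered in the tutorial) is that when trip speeds are drawn uniformly from a range with a small minimum, the long-run time-average speed is far below the arithmetic mean of the speed distribution, because slow trips dominate the time axis:

```python
import math
import random

def rwp_time_average_speed(n_trips=20000, vmin=0.1, vmax=1.0, seed=0):
    """Time-average speed of a random-waypoint node on the unit square:
    each trip picks a uniform waypoint and a uniform speed in [vmin, vmax]."""
    rng = random.Random(seed)
    x, y = rng.random(), rng.random()
    dist = time = 0.0
    for _ in range(n_trips):
        nx, ny = rng.random(), rng.random()
        d = math.hypot(nx - x, ny - y)
        v = rng.uniform(vmin, vmax)
        dist += d
        time += d / v          # slow trips contribute long stretches of time
        x, y = nx, ny
    return dist / time

# The time average converges to (vmax - vmin) / ln(vmax / vmin),
# a harmonic-style mean, not the arithmetic mean (vmin + vmax) / 2.
```

For vmin = 0.1 and vmax = 1.0 the time-average speed is about 0.39, versus an arithmetic mean of 0.55; letting vmin tend to 0 drives the time-average speed to 0, the well-known speed-decay pathology of naive random-waypoint simulations.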
Brief description of material:
Speaker's biography: Jean-Yves Le Boudec contributed to the research effort on ISDN and ATM technology for more than 7 years at Nortel (Ottawa, Canada) and IBM (Zürich Research Lab). He developed performance models (product form networks, models for loss estimation) and architectural concepts (LAN emulation, ATM control point). He joined EPFL in 1994 where he is teaching and conducting research in the architecture and performance of communication networks. He is co-author of the book "Network Calculus". He was awarded (with Milan Vojnović) the Infocom best paper award in 2005 for his work on the simulation of mobility models.
Speaker: Leandros Tassiulas (University of Maryland, USA)
Abstract: Wireless technology advances over the last few years have led to sophisticated physical layer designs that may interact with the access and network layers in multiple modes. Link quality information is passed up from the physical layer to be used in access and network layer decisions. At the same time, several parameters belonging naturally to the physical layer, such as channel coding rate, signal constellation selection, power level adjustment, frequency selection and beam steering in multiple-antenna systems, are at the disposal of the access layer, which may control them at various time scales. That interaction is particularly useful for fully exploiting the volatile, error-prone mobile channel and for establishing reliable broadband wireless links in the interference-limited radio medium. High performance, on the other hand, is necessary for wireless networks to offer the QoS guarantees required to support the traffic mixture running over the broadband integrated network infrastructure, of which the wireless component will be the natural extension. The design challenge of wireless systems necessitates novel models that capture the interactions described above, as well as analytical tools for the design of resource allocation algorithms running at the various layers of the system. That is the focus of this tutorial, which will include more specifically the following topics:
Speaker's biography: Leandros Tassiulas has been Professor in the Department of Computer Engineering and Telecommunications at the University of Thessaly, Greece, since 2002, and Research Professor at the University of Maryland, College Park. His research activity over the last fifteen years has been directed towards the development of communication and information processing networks that facilitate access to and exchange of information among multiple entities. Current research and teaching topics include wireless mobile communications, ad-hoc networks, smart antennas, sensor networks, and high-speed networked environments. He was Assistant Professor at Polytechnic University, NY, 1991-1995, Associate Professor at the University of Maryland, College Park, until 2002 (on leave 2000-2002), and Professor of Computer Science at the University of Ioannina, Greece, 1999-2002. He obtained the Diploma in Electrical Engineering from the University of Thessaloniki, Greece, in 1987, and the M.S. and Ph.D. degrees in Electrical Engineering from the University of Maryland, College Park, in 1989 and 1991 respectively. He has been Associate Editor for Communication Networks for IEEE Transactions on Information Theory and an editor for IEEE/ACM Transactions on Networking. His research activity has received several recognitions, including a National Science Foundation (NSF) Research Initiation Award in 1992, an NSF CAREER Award in 1995, an Office of Naval Research Young Investigator Award in 1997, a Bodossaki Foundation award in 1999, and the INFOCOM 1994 best paper award.
Speaker: Philippe Jacquet (INRIA, France)
Speaker: Vishal Misra (Columbia University, USA)
Speaker: Christoph Lindemann (University of Dortmund, Germany)
Abstract: A mobile ad hoc network (MANET) is a collection of mobile nodes that dynamically self-organize in a wireless network without using any pre-existing infrastructure. In a MANET, the applications are typically "peer-to-peer" (P2P) rather than "client-server". Moreover, a MANET is often built to support a specific application, so the networking is application-driven. The tutorial will start with an overview of the two main scenarios: person-to-person communication and car-to-car networking. Then, a thorough discussion of the two building blocks of a P2P system, i.e., distributed lookup services and reliable data transport, will be given. The tutorial shows that P2P systems designed for the wired Internet cannot be deployed in a MANET in a straightforward way, and presents epidemic data dissemination as an effective approach to cope with the shortcomings.
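The appeal of epidemic data dissemination comes from its speed: under random push gossip, an item reaches all n nodes in roughly log n rounds. A toy sketch (our own, not the tutorial's specific protocol) over a fully connected node set:

```python
import random

def push_gossip_rounds(n=100, seed=0, max_rounds=1000):
    """Rounds until a data item started at node 0 reaches all n nodes,
    when every informed node pushes to one uniformly random node per round."""
    rng = random.Random(seed)
    informed = {0}
    rounds = 0
    while len(informed) < n and rounds < max_rounds:
        targets = {rng.randrange(n) for _ in informed}
        informed |= targets
        rounds += 1
    return rounds
```

Since the informed set can at most double per round, at least ceil(log2 n) rounds are needed; in expectation the total is on the order of log2 n + ln n. In a MANET the contact graph is restricted by radio range and mobility, which is exactly what makes the analysis in the tutorial's setting harder than this idealized case.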
Speaker's biography: Christoph Lindemann is an Associate Professor in the Department of Computer Science of the University of Dortmund, where he leads the mobile computing systems group. Prior to his appointment at Dortmund, he held positions at the Fraunhofer Institute for Computer Systems and Software Technology in Berlin and at the IBM Almaden Research Center, San Jose, CA. In the fall semester 2003/04 he spent his sabbatical at the computer science department of the University of Wisconsin as a visiting professor. His current research interests cover mobile ad hoc networks and peer-to-peer computing, as well as performance evaluation as an umbrella topic. Christoph Lindemann is a member of the IFIP working group 7.3 and a senior member of the IEEE. He is a member of the editorial board of the international journal Performance Evaluation. He is serving as General Co-chair of the 11th International Conference on Mobile Computing and Networking (ACM Mobicom 2005).
Speaker: Thomas Bonald and Alexandre Proutière (France Telecom R&D, France)
Intended audience: Students, researchers and engineers working on IP and wireless networks.
Abstract: Teletraffic theory was born in 1917, when Erlang published his famous formula giving the call blocking probability as a function of the traffic intensity and the link capacity. The Erlang formula has since been used to dimension all telephone networks, including recent cellular networks. It may be argued that its enduring success is essentially due to the following insensitivity property that makes it so simple: the blocking probability does not depend on the distribution of call duration.
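The formula itself is cheap to compute; as an illustration (a sketch of our own, not part of the tutorial material), the standard numerically stable recursion over the number of circuits is B(0) = 1, B(k) = aB(k-1) / (k + aB(k-1)):

```python
def erlang_b(a, n):
    """Erlang blocking probability for offered traffic a (in erlangs)
    on n circuits, computed by the stable recursion over circuit count."""
    b = 1.0                      # B(0) = 1: zero circuits block everything
    for k in range(1, n + 1):
        b = a * b / (k + a * b)
    return b
```

Insensitivity shows up directly in the arguments: only the offered traffic a (arrival rate times mean holding time) enters, never the holding-time distribution itself. For instance, erlang_b(1.0, 1) evaluates to 0.5 and erlang_b(2.0, 2) to 0.4.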
The objective of this tutorial is to show that such simple and insensitive results exist for the Internet provided traffic is modeled at flow level. Specifically, traffic is represented as a random superposition of sessions, where each session is composed of a succession of data flows separated by an interval of inactivity referred to as a think-time. We review a number of flow-level models for elastic traffic where the distribution of the number of ongoing flows does not depend on any traffic characteristics (number of flows per session, flow size distribution, possibly correlated flow sizes and think-time durations, etc.) except for the traffic intensity. These include:
Speakers' biography: Both instructors are members of Jim Roberts' traffic modelling group at France Telecom and of François Baccelli's network theory group at Ecole Normale Supérieure. Their research interests include queueing theory, stochastic models of communication networks, and the performance evaluation and design of control mechanisms for fixed and wireless networks. In 2004 they received the Best Paper Award of the joint Sigmetrics/Performance conference.
Thomas Bonald graduated from Ecole Polytechnique in 1994 and qualified as an engineer at Ecole Nationale Supérieure des Télécommunications in 1996. He has a PhD in applied mathematics from Ecole Polytechnique, his graduate research being performed at INRIA in the area of network flow control. He was a member of the program committee of a number of conferences, including Globecom, Infocom, Sigmetrics and Performance.
Alexandre Proutière graduated from Ecole Normale Supérieure in 1996 and qualified as an engineer at Ecole Nationale Supérieure des Télécommunications in 1998. He has a PhD in applied mathematics from Ecole Polytechnique, his graduate research being performed at France Telecom in the area of quality of service in integrated networks. He was a member of the program committee of a number of conferences, including Globecom and Infocom, and is the program committee chair of RAWNET 2005.
T2.B - Fluid approach to the control of processing networks: Continuous linear programs, infinite virtual buffers, and maximum pressure policies
Speaker: Gideon Weiss (U. Haifa, Israel)
Abstract: A fundamental problem in operations research is the control of many items whose evolution requires the use of shared resources over time. Manufacturing, vehicle traffic, communication networks, multiproject scheduling, and supply chain management, are among the systems in which this is crucial. This can be formulated as online control of a discrete stochastic processing network over a finite time horizon. We propose to solve it via a fluid approach:
Processing networks were introduced by Harrison. Items are classified into classes k∈K; activities j∈J, which process, route, assemble or transport items, require resources i∈I. Qk(t) denotes the number of items of class k at time t, and uj(t) is the rate at which resources are allocated to activity j at time t. Infinite virtual buffers k∈K∞⊆K are buffers in which items are always available, and Qk(t) is then measured relative to a nominal input αkt. The fluid problem approximates Qk(t) by a continuous deterministic fluid level qk(t):
min ∫₀ᵀ cᵀq(t) dt
s.t. ∫₀ᵗ R u(s) ds + q(t) = q(0),
A u(t) ≤ 1, q(t) ≥ 0, u(t) ≥ 0, t ∈ [0,T]
Maximum pressure policies were introduced by Dai and Lin. At time t the maximum pressure allocation is u(t) = arg maxᵤ Q(t)ᵀR u, where the maximization is over the available extreme allocation vectors u. If the network satisfies the structural EAA assumption, then u(t) will minimize the derivative of the Lyapunov function q(t)ᵀq(t) and stabilize the network whenever the utilization satisfies ρ ≤ 1. This stability continues to hold for infinite virtual buffers, where stability implies that the buffer levels track the nominal inputs. Our fluid tracking policy tracks the fluid in each subinterval by classifying all non-empty fluid buffers as infinite virtual buffers and using maximum pressure. We prove that this policy is asymptotically optimal.
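The maximum pressure rule can be made concrete on a toy network (an illustration of our own; it assumes A = I, so the extreme allocations are simply the 0/1 vectors, and enumerates them):

```python
from itertools import product

def max_pressure(Q, R, J):
    """Maximum pressure allocation for queue lengths Q: among 0/1
    allocation vectors u (A = I assumed, so these are the extreme points),
    maximize the pressure Q^T R u.
    R[k][j] = net drain rate of class k per unit of activity j."""
    best_u, best_p = None, float("-inf")
    for u in product((0, 1), repeat=J):
        drain = [sum(R[k][j] * u[j] for j in range(J)) for k in range(len(Q))]
        pressure = sum(Q[k] * drain[k] for k in range(len(Q)))
        if pressure > best_p:
            best_u, best_p = u, pressure
    return best_u, best_p
```

For a two-class tandem with unit service rates, activity 1 drains class 1 and feeds class 2 (R = [[1, 0], [-1, 1]]); with Q = [3, 1] the rule activates both servers, since serving class 1 is worthwhile despite pushing work into the shorter downstream queue.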
Speaker's biography: Gideon Weiss is Professor at the Department of Statistics, University of Haifa, Israel. He studied Mathematics at the Hebrew University of Jerusalem, Operations Research at the Technion, Haifa, and completed his Ph.D. in Statistics under the supervision of Professor Sir David Cox at Imperial College, London. He has previously been on the faculty of the Technion Israel, Georgia Tech Atlanta, and Tel Aviv University Israel, and has visited Stanford University, MIT, The University of Birmingham in England, and The University of California at Berkeley. His research has been funded consecutively for the past 20 years by the NSF, BSF (Binational US Israel Science Foundation), GIF (German Israel Fund) and ISF (Israel Science Foundation). His current research interests are in optimization, working on continuous linear programming, in applied probability working on fluid approximation of processing networks, and in combining these to approximate optimal control of complex discrete stochastic systems. He is the author of some 60 scientific papers, and serves as associate editor of Operations Research, Queueing Systems, and Journal of Scheduling.
Abstract: In this tutorial we discuss planning issues in call centers. A call center is a collection of resources (typically agents and ICT equipment) capable of handling customer contacts by telephone. If the call center handles not only telephone contacts but also contacts by fax, email, and so forth, then it is usually called a customer contact center. The goal of call center planning is to schedule call center representatives, or agents, in such a way that a certain minimal service level (SL) is obtained at minimal cost. We focus on waiting time as the main SL metric. Planning is done on several time scales: there are agent hiring strategies at the scale of months, workforce management at the scale of weeks, and intraday performance control at the scale of minutes and hours. We focus on workforce management and intraday performance. Planning is strongly related to the structure of the call center. We also discuss how decisions concerning the structure influence planning.
In the first part of this tutorial we focus on the current practice of workforce management (WFM). Central to WFM are performance models, with the Erlang C model the most often used. We discuss the whole WFM cycle, consisting of forecasting, determination of staffing levels, and agent scheduling. In the second part of this tutorial we discuss the weak parts of the current practice. They have to do with random variations that are not well modeled (such as variations in arrival rate and in agent absence) and with human behavior that is not captured by the Erlang C model. These issues have important consequences for the design of call centers. In the third and final part we discuss contact centers with different call types and different ways (such as the internet) to communicate with the customer. These so-called multi-skill and multi-channel call centers pose challenges at all planning levels. We discuss them, with a focus on short-term performance models.
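The staffing-level step can be sketched in a few lines (assumptions ours: Poisson arrivals, exponential services, and the textbook service-level form P(wait ≤ AWT) = 1 − C(s, a)·e^(−(s−a)μ·AWT), with C(s, a) the Erlang C waiting probability):

```python
import math

def erlang_c(s, a):
    """Erlang C probability of waiting in an M/M/s queue
    with offered load a = lambda/mu (requires a < s)."""
    b = 1.0
    for k in range(1, s + 1):            # Erlang B recursion
        b = a * b / (k + a * b)
    rho = a / s
    return b / (1.0 - rho * (1.0 - b))

def min_agents(a, mu, awt, target):
    """Smallest agent count s with P(wait <= awt) >= target,
    e.g. the classic '80% answered within 20 seconds' rule."""
    s = int(a) + 1                       # need s > a for stability
    while True:
        sl = 1.0 - erlang_c(s, a) * math.exp(-(s * mu - a * mu) * awt)
        if sl >= target:
            return s, sl
        s += 1
```

For example, min_agents(10.0, 1.0, 1.0/3.0, 0.8) searches upward from 11 agents for an offered load of 10 erlangs. The second part of the tutorial is precisely about what this clean model leaves out: forecast error, agent absence, and human behavior.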
Speaker's biography: Ger Koole is a professor in the Department of Mathematics at the Vrije Universiteit Amsterdam. He graduated in 1992 from Leiden University, and held postdoc positions at CWI (Amsterdam) and INRIA (Sophia Antipolis). He is interested in the theory and applications of controlled queueing systems. In recent years he has worked on service operations management, focusing on call centers and health care. He is the author of more than 50 research papers, and is editor of a special issue of Management Science on call centers (to appear in 2006). His email address is
Speaker: Moshe Haviv (Department of Statistics, The Hebrew University of Jerusalem, Israel)
Intended Audience: Anybody interested in centralized and decentralized decision making in queueing systems.
Abstract: The decisions of customers in a queueing system interact. If more customers join the queue, one's waiting time may increase, possibly to a level at which one prefers not to join at all. But who decides who will join and who will not? Likewise, if too many packets are routed along one path, an individual packet might be better assigned to some other path. But who selects this path, the individual himself or a central planner? Their objectives may not be the same. As another decision problem, suppose acquiring the information on which line or path is shorter comes at some cost. What should one do? On the one hand, one has an incentive to pay for this information. On the other hand, this incentive decreases as the fraction of those who acquire the information grows, since the more informed customers will tend to make the two lines more even anyhow. The tutorial will deal with these and similar decision models.
Many decision problems in queues can be modeled as non-cooperative games. In such games, players (in our case, customers, service providers or routers) have to take actions. While selecting their actions they are not fully informed of the actions taken by others; indeed, in many situations, decision making is done simultaneously. Of course, each player would like to optimize his own gain (utility), but one's optimal action is a function of what others are doing. Here the solution concept of Nash equilibrium is introduced, replacing the concept of an optimal solution, which applies when a single decision maker is assumed. Specifically, a Nash equilibrium is a set of strategies, one for each player (usually referred to as a strategy profile), such that, assuming all obey this recipe, each individual does best by following his share of the profile as well.
Most of the tutorial will be devoted to customers as decision makers who take actions as time progresses. Examples are whether or not to join the queue, which line to join, whether to pay a premium in order to get higher priority, or whether to renege from the queue once waiting conditions deteriorate. We will also consider service providers as decision makers. Here the assumption is that they select some parameters while trying to maximize their own or society's revenues. Examples are selecting entry fees, designing a set of priority premiums, or selecting service rates (i.e., bandwidth allocation). The service providers analyze their revenues by considering the Nash equilibria of the games between customers that result from any of their decisions.
The way the tutorial will approach the various decision models is by considering the information that customers possess when they select their actions. Specifically, we first look into models in which customers do not know the actual level of congestion when they make up their mind (the unobservable case), while in the second case we assume that they do know (the observable case). A classic example is when customers have to decide whether to queue or not to queue in a first-come first-served model where they are not informed of the actual queue length upon their arrival (the unobservable case) and when they are informed of the queue length (the observable case). Another example is when joining customers have the option to pay a premium in order to belong to a higher priority class of customers. One option of decision making here is when they do not know how many customers are in the system at decision instants (let alone how many belong to which class) while in the other they observe the number of customers belonging to each class. A few other examples will be given in the tutorial.
Any of the above games can be incorporated into the decision making faced by the service operator. For example, he can calculate, for any entry fee, the resulting effective arrival rate (based on the above analysis of the game between customers), and hence can compute his revenue for each selected fee. In turn, the optimal fee can be found. Of course, in the unobservable case a one-for-all price has to be charged, but in the observable case prices can vary with the queue length. Another possible question is whether the service provider should reveal the queue length to the customers so as to maximize profits (assuming, of course, that this is at his discretion).
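For the observable join-or-balk case, the classic equilibrium (Naor's threshold rule) is one line of arithmetic. The sketch below, with parameter names ours, assumes an M/M/1 FCFS queue, a service reward `reward`, a waiting cost `cost` per unit time, and service rate `mu`:

```python
def naor_threshold(reward, cost, mu):
    """Observable M/M/1 join-or-balk (Naor's model): a customer who sees
    n in the system joins iff the expected sojourn cost (n+1)*cost/mu
    does not exceed the reward, i.e. iff n <= threshold - 1."""
    return int(reward * mu / cost)

def joins(n_seen, reward, cost, mu):
    """Equilibrium decision of a customer who observes n_seen in system."""
    return (n_seen + 1) * cost / mu <= reward
```

With reward 10, unit cost and unit service rate, the threshold is 10: a customer joins when seeing at most 9 in the system and balks otherwise. The operator's problem described above then reduces to choosing a fee that shifts this threshold toward the revenue-maximizing (or socially optimal) value.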
The talk is based on a 2003 book co-authored with Refael Hassin bearing the same title "To Queue or not to Queue: Equilibrium Behavior in Queueing Systems", and published by Kluwer's International Series.
Speaker's biography: Moshe Haviv is currently a Professor at the Department of Statistics in the Hebrew University of Jerusalem, with a Ph.D. from Yale (1983). He has also previously lectured at the University of British Columbia and the University of Sydney. In general, his main research interest is in queueing systems, and in particular in the modelling and analysis of decision making in congested systems as non-cooperative games. Specifically, two years ago he published (with Refael Hassin) a book entitled "To Queue or not to Queue: Equilibrium Behavior in Queueing Systems" (Kluwer International Series). It is the first book written on this topic; besides its original content, it summarizes the state of the art on the interaction in decision making among customers, on the one hand, and between customers and service providers, on the other. He has also made research contributions on Markov decision processes, computational schemes for large Markov chains, and singularly perturbed Markov chains. He has published more than forty research papers. The full list can be seen in
Speaker: Milan Vojnović and Ayalvadi Ganesh (Microsoft Research, UK)
Abstract: Worms and viruses pose a significant threat to computer systems in a highly networked environment where most home and corporate machines are connected to the Internet. In recent years, there have been several instances of worms spreading rapidly over the Internet and managing to infect most vulnerable hosts within 24 hours. The time between patch release and the release of a worm exploiting that vulnerability has been shrinking, and it may soon happen that a worm appears before a patch is available (a so-called zero-day worm). It is widely accepted that a containment system for zero-day worms must be automatic, as human-assisted response is too slow for effective containment. We need new techniques and analysis for fast-response systems of automatic containment.
In this tutorial, we first describe how worms differ from viruses and study some of their commonly used spreading strategies as well as potentially new ones. We then go on to discuss mathematical models of worm spread, which are in fact derived from classical models of biological epidemics that have been studied extensively for decades. Starting with a simple model of uniform scanning, we go on to consider progressively more complicated models that incorporate local preference in scanning and latencies and network effects. The aim is to give insights into the main qualitative features that are common to epidemic spread, irrespective of details of the worm scanning strategy.
We then go on to consider countermeasures. The primary class of countermeasures of interest involves automated worm detection and patch generation. The effectiveness of such a scheme depends on how quickly the worm can be detected and the patch generated, and then how quickly the patch can be disseminated to end systems. We discuss all three aspects of the response, with particular emphasis on the last one, namely the speed of patch dissemination. In order to compete with the rapid epidemic-style spread of the worm, the patch also needs to rely on exponential growth. This could be achieved either by using epidemic spread again, or by relying on multicast trees or structured or unstructured overlays. We describe how the simple epidemic models studied earlier can be modified to incorporate the effect of countermeasures. The resulting analysis enables us to quantify how the fraction of hosts that eventually become infected depends on the various parameters describing the worm and the countermeasure. In particular, it helps determine how fast patch dissemination needs to be in order for reactive patching to succeed in containing the epidemic.
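The qualitative effect of a countermeasure can be seen already in a deterministic SI-type model with a patching term (a toy Euler integration of our own; the per-host patching rate δ is an assumed parameter, not one from the tutorial):

```python
def si_with_patching(beta=1.0, delta=0.0, i0=0.01, t_end=20.0, dt=0.01):
    """Euler integration of a uniform-scanning worm epidemic with patching,
    in fractions of the host population:
        s' = -beta*i*s - delta*s      (infection plus patching of susceptibles)
        i' =  beta*i*s                (infection)
    Returns the infected fraction at time t_end."""
    s, i = 1.0 - i0, i0
    t = 0.0
    while t < t_end:
        ds = (-beta * i * s - delta * s) * dt
        di = (beta * i * s) * dt
        s += ds
        i += di
        t += dt
    return i

# With delta = 0 the model reduces to logistic growth and nearly every
# vulnerable host is eventually infected; a positive patching rate
# removes susceptibles and caps the final infected fraction.
```

Comparing runs with δ = 0 and δ > 0 quantifies exactly the trade-off discussed above: the faster patches are disseminated relative to the worm's scan rate β, the smaller the eventual infected fraction.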
Speaker's biography: Ayalvadi Ganesh graduated from the Indian Institute of Technology, Madras, in 1988, and received his MS and PhD in Electrical Engineering from Cornell University in 1991 and 1995 respectively. He joined the Networking group at Microsoft Research in March 1999. His research interests include network performance and security, resource allocation, and P2P and wireless networks. His research draws on the mathematical fields of queueing theory, large deviations and random graphs.
Speaker: Merouane Debbah (Institut Eurécom, France)
Abstract: The asymptotic behaviour of the eigenvalues of large random matrices has been extensively studied since the fifties. One of the first related results was the work of Eugene Wigner in 1955, who remarked that the eigenvalue distribution of a standard Gaussian Hermitian matrix converges to a deterministic probability distribution, called the semi-circular law, as the dimensions of the matrix increase. Since that time, the study of the eigenvalue distribution of random matrices has triggered numerous works, in the theoretical physics as well as the probability theory community. However, as far as communications systems are concerned, until 1996 exhaustive simulations were thought to be the only hope of getting some hints on how communications systems with many users/antennas behave, even with respect to very basic performance measures such as bit error probability or capacity. All this changed in 1997, when large system analysis based on random matrix theory was discovered as an appropriate tool to regain intuitive insight into communication systems, even when their performance measures happen to depend on thousands of parameters. The results have led to very active research in many fields such as CDMA, MC-CDMA, MIMO systems, multi-cell systems, ad-hoc networks and channel modelling.
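Wigner's observation is easy to reproduce numerically (a sketch of ours using NumPy, not part of the tutorial material): sample a Gaussian symmetric matrix, scale it by 1/√n, and the eigenvalues crowd into the semicircle support [-2, 2]:

```python
import numpy as np

def wigner_eigenvalues(n=500, seed=0):
    """Eigenvalues of a scaled n-by-n Gaussian symmetric (Wigner) matrix;
    for large n their empirical distribution approaches the semi-circular
    law supported on [-2, 2]."""
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((n, n))
    h = (a + a.T) / np.sqrt(2.0 * n)   # symmetric; off-diagonal variance 1/n
    return np.linalg.eigvalsh(h)
```

Already at n = 500 the empirical mean is near 0 and essentially all eigenvalues fall inside [-2, 2], up to edge fluctuations of order n^(-2/3); this is the kind of deterministic limiting behaviour that makes large-system analysis of CDMA and MIMO capacities tractable.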
This talk is open to a large audience and does not require any prerequisites. The instructor will introduce, in simple cases, the standard tools of random matrix theory, show their usefulness in communication systems, and provide some applications in the context of wireless communications.
Speaker's biography: Merouane Debbah is presently an Assistant Professor with the Department of Mobile Communications of the Institut Eurécom. He entered the Ecole Normale Supérieure de Cachan (France) in 1996, and received the M.Sc. and Ph.D. degrees in 1999 and 2002 respectively. From 1999 to 2002, he worked for Motorola Labs on Wireless Local Area Networks (IEEE 802.11a) and prospective 4G systems (OFDM and MC-CDMA). In October 2002, he was appointed Senior Researcher at the Vienna Research Center for Telecommunications (ftw.), Vienna, Austria, working on MIMO wireless channel modeling issues. M. Debbah has published about 35 papers and several patents, all in the area of wireless networks. His research interests are in information theory and wireless communications. For more information, please go to: