Our research
activities are organized into the following five main themes.

Security in infrastructure-less and constrained networks

The
Internet was not designed to operate in a completely open and hostile
environment. It was designed by researchers that trust each other and
security was not an issue. The situation is quite different today and the
Internet community has drastically expanded. The Internet is now composed of
more than 300 million computers worldwide, and the trust relationship has
disappeared. One of the reasons for the Internet's success is that it provides
ubiquitous inter-connectivity. This is also one of its main weaknesses, since
it makes it possible to launch attacks and exploit vulnerabilities on a large
scale. The Internet is vulnerable to many different attacks: for example,
distributed Denial-of-Service (DDoS) attacks, epidemic attacks (viruses/worms),
spam/phishing and intrusions. The Internet is not only insecure but
it also infringes users’ privacy. These breaches are due not only to the
Internet protocols themselves but also to the new applications being deployed
(VoIP, RFID, etc.). A
lot of research is required to improve Internet security and privacy. For
example, more research work is required to understand, model, quantify and
hopefully eliminate (or at least mitigate) existing attacks. Furthermore,
more and more small devices (RFIDs or sensors) are being connected to the
Internet. Current security/cryptographic solutions are too expensive and
current trust models are not appropriate. New protocols and solutions are
required: security and privacy must be considered in the Internet
architecture as an essential component. The whole Internet architecture must
be reconsidered with security and privacy in mind. Our
current activities in this domain focus on security in wireless, ad hoc and
sensor networks, mainly the design of new key exchange protocols and of
secure routing protocols. We also work on location privacy techniques,
cryptographic authentication protocols and opportunistic encryption. Rapid
advances in microelectronics are making it possible to mass-produce tiny
inexpensive devices, such as processors, RFIDs, sensors, and actuators. These
devices are already, or soon will be, deployed in many different settings for
a variety of purposes, which typically involve tracking (e.g., of hospital
patients, military/rescue personnel, wildlife/livestock and inventory in
stores/warehouses) or monitoring (e.g., of seismic activity, border/perimeter
control, atmospheric or oceanic conditions). In fact, it is widely believed
that, in the future, sensors will permeate the environment and will be truly
ubiquitous in clothing, cars, tickets, food packaging and other goods.
Simultaneously, ad hoc networks are gaining more and more interest in the
research community. An ad hoc network is a “spontaneous” network of wireless
devices/users that does not rely on any fixed infrastructure. In such a
network, each node is also a router, i.e., it routes/forwards packets for
other nodes. Ad
hoc networks can be categorized into two main groups: Mobile Ad Hoc Networks
(MANETs) and Wireless Sensor Networks (WSNs). MANETs are used to provide a
communication infrastructure to end-users when a fixed infrastructure is
unavailable. MANETs are typically used in emergency/rescue situations, e.g.,
following an earthquake, when the infrastructure is destroyed. They can also be
used to provide relatively cheap and flexible wireless access to network
backbones. In contrast to MANETs, WSNs are not meant to provide a
communication infrastructure to end-users, but rather to reach a collective
conclusion regarding the environment. A WSN is typically composed of a base
station (sink) and many small sensors. Communication is often one-way, i.e.,
only from the sensors to the base station. Even though MANETs and WSNs are
closely related, they have quite different characteristics. WSNs are usually
much larger than MANETs, by at least an order of magnitude. Also, WSNs act
under severe technological constraints: they have severely limited
computation and communication abilities. Furthermore, their power (battery)
resources are limited, i.e. if a node runs out of battery power, it essentially
becomes permanently non-operational. These new highly networked environments
create many new exciting security and privacy challenges. Our goals are to
understand and tackle some of them. We
are also interested in the particular case of RFID tag security. An RFID
(Radio-Frequency IDentification) tag is a small circuit attached to a small
antenna, capable of transmitting data over a distance of several meters to a
reader device in response to a query. Most RFID tags are passive,
meaning that they are battery-less, and obtain their power from the query
signal. They are already attached to almost anything: clothing, foods, access
cards and so on. Unfortunately, the ubiquity of RFID tags poses many security
threats: denial of service, tag impersonation, malicious traceability, and
information leakage. We focus in this work on the latter threat, which arises
when tags send sensitive information that could be eavesdropped by an
adversary. In a library, for example, the information openly
communicated by a tagged book could be its title or author, which may not
please some readers. More worryingly, tagged pharmaceutical products, as
advocated by the US Food and Drug Administration, could reveal a person’s
pathology: an employer or an insurer could find out which
medicines a person is taking and thus infer his or her state of health. Large
scale applications like the next generation of passports are also subject to
such an issue. Eavesdropping can be avoided by establishing a secure
channel between the tag and the reader. This requires the establishment of a
session secret key, which is not always an easy task given the devices’ very
limited capacities. This difficulty is reinforced by the fact that tags and
readers do not share a master key in most applications. In the
future, implementing a key establishment protocol may become a mandatory
feature. For example, Californian Bill 682 requires such a technical measure
to be implemented in deployed ID-cards. More generally, large-scale RFID
deployment creates many new exciting security and privacy challenges, and our
goals are again to understand and tackle some of them.

New dissemination paradigms

The
future Internet will be even more heterogeneous and should provide a scalable
support for seamless information dissemination, whatever the underlying
support. A lot of work has already been done on the efficient support of
group communications on the Internet, both at routing, transport and
application levels. These works gave birth to content broadcasting services
(e.g. in DVB-H networks) as well as some content dissemination peer-to-peer
systems (e.g. BitTorrent). Mastering scalable communications requires dealing
with a wide range of networking components and techniques, like reliable
multicast, FEC codes, multicast routing and alternative group communication
techniques, audio and video coding, announcement and control protocols.
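As a toy illustration of the FEC idea mentioned above (this is only a minimal sketch, not the AL-FEC codes developed in this work), a single XOR parity packet lets a receiver recover any one lost packet in a block:

```python
# Hedged sketch: a toy XOR-based erasure code. One parity packet protects
# a block of equal-sized source packets against the loss of any single one.

def xor_packets(packets):
    """Bitwise-XOR a list of equal-length byte strings together."""
    out = bytearray(len(packets[0]))
    for pkt in packets:
        for i, byte in enumerate(pkt):
            out[i] ^= byte
    return bytes(out)

def encode_block(source):
    """Append one XOR parity packet to the block of source packets."""
    return source + [xor_packets(source)]

def recover(received):
    """Rebuild the single missing packet (marked None) by XOR-ing the rest."""
    return xor_packets([p for p in received if p is not None])

block = encode_block([b"abcd", b"efgh", b"ijkl"])
block[1] = None                     # simulate the loss of one packet
assert recover(block) == b"efgh"    # the lost packet is reconstructed
```

Practical AL-FEC schemes (e.g. Reed-Solomon or LDPC-based codes) tolerate multiple losses per block; the XOR code above is just the simplest single-loss special case.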
Our
goal in this domain is to design and implement such components to ensure
efficient and scalable group communications. To realize this goal, we
investigate several key services and building blocks: first, efficient
application-level Forward Error Correction (AL-FEC) codes, which are needed to
improve the transmission reliability and application efficiency; secondly the
security services (e.g. content integrity, source authentication,
confidentiality) whose importance will become more and more acute especially
in heterogeneous networking/broadcasting environments; and finally scalable
session-level control tools that will be required to control at a high
abstraction level the operational aspects of the underlying dissemination
systems. On
the other hand, peer-to-peer technology is widespread and highly
studied. However, the dynamics of a peer-to-peer network are still not fully
understood. Indeed, we observe significant differences in service capacity
among the different peer-to-peer protocols. These differences are due to
small protocol specificities. It is of major importance to understand why and
how these specificities impact the dynamics of a peer-to-peer network. Our
goal, with this activity, is to gain a deep understanding of these dynamics
in order to propose improvements for the next generation of peer-to-peer
protocols.

Wireless Networking

The
tremendous success of the wireless access technologies and their great
diversity has further increased the heterogeneity of the Internet. The
miniaturization of electronic components gave birth to a large number of new
applications such as RFID, wireless sensors/nanosensors for medical
applications, and all kinds of wireless sensors that, for example, help
forecast or mitigate natural disasters. Each of these new applications has
particular needs and requires specific optimizations (e.g., battery life,
power control to limit interference, optimal multihop routing). These new
miniaturized circuits and applications have ushered in a new era of ambient
networks, where heterogeneity is more and more present. All
these new applications have very different characteristics and multiple
standards, but all share the same goal: to communicate. It
is therefore important to address the management and control of wireless
networks, including support for auto-configuration and self-organization under policy
and security constraints; creation of survivable systems in the face of the
challenges of the wireless environment; issues in wireless networks from a
systems perspective such as the interactions of protocol layers and different
access networks including cross-layer optimizations and feedback/control
mechanisms; and realistic and affordable means for carrying out
representative, repeatable, and verifiable experiments to validate research
on wireless networks including open tools and simulation models, as well as
experimental facilities to access realistic environments and map experimental
results to simulation models. We
also work on how to efficiently support audio and video applications in
heterogeneous wired and wireless environments. Here we focus on congestion
control for multicast layered video transmission, scalable protocols for
large scale virtual environments and on performance improvements and quality
of service support for wireless LANs. We also consider the impact of new
transmission media on TCP performance. Our goal is to provide
each end-user with the best possible quality, taking into account the user's
varying capacities and the characteristics of multimedia flows, and to propose
adaptations to TCP so that it fully profits from the available resources in a
heterogeneous environment.

Understanding the Internet behavior

One
topic in this area is to develop mathematically rigorous models to study and
analyze the dynamics and properties of large-scale networks. One of the goals
is to understand the fundamental performance limits of networks and to design
algorithms that allow us to approach these limits. Another topic is to
address the fundamental methodological barriers that make it hard to
reproduce experiments or to validate simulations in real-world systems. The
goal here is to understand network behaviors for varying time-scales, a range
of spatial topologies, and a range of protocol interactions. One of the major
challenges with the future Internet will be how to monitor it in a scalable
and distributed way. This requires designing intelligent sampling methods
that provide good network coverage while reducing overhead. Another challenge
will be the characterization of traffic sources by network operators and the
detection of anomalies: network operators in the future
will have to provide attack-free Internet connectivity to their end-users
and prevent malicious users from exploiting their premises. A third
challenge is to understand the issues related to transport and peer-to-peer
protocols dynamics on a very large scale with the current Internet, and to
propose efficient solutions for the future Internet. We briefly describe
below our main activities in this domain. An
important objective in this domain is a better monitoring of the Internet and
a better control of its resources. On the one hand, we focus on new measurement
techniques that scale with the fast increase in Internet traffic. Among
others, we use the results of measurements to infer the topology of the
Internet and to localize its distributed resources. The inference of Internet
topology and the localization of its resources are building blocks that
serve for the optimization of distributed applications and group communications.
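As a small hypothetical example of how such localization information can be exploited (the mirror names and delay values below are invented for illustration), a client of a replicated web service could pick the replica with the lowest median measured delay:

```python
# Illustrative sketch, not the project's actual techniques: selecting the
# closest replica of a web server from a set of delay (RTT) measurements.

def closest_replica(rtt_samples):
    """rtt_samples maps a server name to a list of measured RTTs (ms).
    Return the server with the lowest median RTT; the median makes the
    choice robust to occasional outlier measurements."""
    def median(values):
        ordered = sorted(values)
        mid = len(ordered) // 2
        if len(ordered) % 2:
            return ordered[mid]
        return (ordered[mid - 1] + ordered[mid]) / 2
    return min(rtt_samples, key=lambda server: median(rtt_samples[server]))

# Hypothetical measurements for three mirrors of the same content:
samples = {
    "mirror-eu": [32.0, 30.5, 250.0],   # one outlier spike
    "mirror-us": [95.0, 97.2, 94.1],
    "mirror-asia": [180.3, 182.0, 179.9],
}
print(closest_replica(samples))  # prints mirror-eu (lowest median RTT)
```

Using the mean instead of the median would let the single 250 ms outlier distort the choice, which is why robust statistics are common in measurement-based selection.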
We cite in particular replicated web servers, peer-to-peer protocols and
overlay routing technologies. On
the other hand, we focus on solutions that optimize the utilization of
network resources. Our solutions are usually based on mathematical modeling
of the underlying problem and an optimization using analytical and numerical
tools. This optimization is meant to provide insights on how to tune
protocols and dimension networks. Examples of activities in this direction
include the optimization of routing and its mapping to the underlying
layers, the dimensioning of wireless mesh networks, the clustering of network
entities for the purpose of traffic collection and monitoring, etc.

Experimental environment for future Internet architecture

It
is important to have an experimental environment that increases the quality
and quantity of experimental research outcomes in networking, and to
accelerate the transition of these outcomes into products and services. These
experimental platforms should be designed to support both research and
deployment, effectively filling the gap between small-scale experiments “in
the lab”, and mature technology that is ready for commercial deployment. In
terms of experimental platforms, the well-known PlanetLab testbed is gaining
ground as a secure, highly manageable, cost-effective world-wide platform,
especially well suited for experiments around New Generation Internet
paradigms like overlay networks. The current trends in this field, as
illustrated by its nascent successor GENI, are to address the
following new challenges. Firstly, a more modular design will make it possible
to achieve federation, i.e. a model where reasonably independent Management Authorities
can handle their respective sub-part of the platform, while preserving the
integrity of the whole. Secondly,
there is a consensus on the necessity to support various access and physical
technologies, such as the whole range of wireless or optical links. It is
also important to develop realistic simulators taking into account the
tremendous growth in wireless networking, so as to include the many variants of
IEEE 802.11 networking, emerging IEEE standards such as WiMax (802.16), and
cellular data services (GPRS, CDMA). While simulation is not the only tool
used for data networking research, it is extremely useful because it often
allows research questions and prototypes to be explored at
orders of magnitude less cost and time than would be required to experiment
with real implementations and networks.
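The event-driven core of such network simulators can be reduced to a few lines. The following sketch shows the mechanism only; the scenario (a packet crossing two hops with millisecond link delays) and all names are invented for illustration and do not reflect any particular simulator's API:

```python
# Minimal discrete-event simulation core: a priority queue of timestamped
# events, processed in chronological order.
import heapq

class Simulator:
    def __init__(self):
        self.now = 0          # current simulated time (ms in this example)
        self._queue = []      # heap of (time, sequence, callback, args)
        self._seq = 0         # tie-breaker for events at the same time

    def schedule(self, delay, callback, *args):
        """Schedule callback(*args) to fire `delay` time units from now."""
        heapq.heappush(self._queue, (self.now + delay, self._seq, callback, args))
        self._seq += 1

    def run(self):
        """Process events in time order until the queue is empty."""
        while self._queue:
            self.now, _, callback, args = heapq.heappop(self._queue)
            callback(*args)

# Invented scenario: a packet sent over two hops with 5 ms and 12 ms delays.
sim = Simulator()
log = []

def arrive(hop):
    log.append((sim.now, hop))
    if hop == "router":
        sim.schedule(12, arrive, "destination")

sim.schedule(5, arrive, "router")
sim.run()
# log is now [(5, "router"), (17, "destination")]
```

Real simulators add detailed models of propagation, interference, and protocol stacks on top of exactly this kind of event loop, which is what makes them so much cheaper to run than physical testbeds.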