Olivier Dalle's

Important: job(s) available in Mascotte!
Open position, starting Sep 2011:

  • postdoc (12 months) on Composability and reuse in component-based simulation, within the OSA project.

See details here

SIMUTools 2016
9th EAI Intl. Conf. on Simulation Tools and Techniques
22–24 August, 2016 — Prague, Czech Republic

Deadline extended: May 1st, 2016

1.  What I Do

I have been a Maître de Conférences (equivalent to associate professor) at the University of Nice - Sophia Antipolis since September 2000.

1.1  About My Research Affiliations

In 2012, I joined the SCALE team (formerly OASIS). Before that, I was a member of the MASCOTTE research group from 2000 to 2012. SCALE is part of the COMRED group of the I3S Laboratory, which is itself a joint research unit (UMR 6070) of the University of Nice Sophia Antipolis (UNS) and CNRS.

1.2  Teaching activities

I teach networks, operating systems, programming, and simulation within the Computing Science department ("Informatique" in French; see this quote from Dijkstra about this terminology issue).

Since 2014, I have been in charge of the 2nd year of the Computer Science curriculum of the Faculty of Sciences of the University of Nice.

In 2009–2010, I was co-head of the Cryptography, Systems, Security & Networking (CSSR) track of the 2nd year of the Master in Computer Science. This new curriculum merges two previously existing curricula, Cryptography & Security (CS) and Systems, Security & Networking (SSR), both created in 2008.

From 2003 to 2006, I was in charge of the 1st year of the Comp. Science Master’s Degree. (Fabrice Huet succeeded me in this role.)

New opportunity for foreign students: in 2009, our Computer Science Master started offering a new full-time, research-oriented curriculum taught in English: Ubiquitous Networking and Computing.

Students who also speak French may be interested in the Cryptography, Systems, Security & Networking (CSSR) curriculum, in which some lectures are given in English and others in French (depending on the student selection).

↑ Contents

2.  Current Research

My current research activities focus on the simulation of telecommunication networks, and in particular on component-based modeling techniques. In this scope, I am or have been involved in the following projects:

2.1  Funded Projects

The INFRA-SONGS ANR Project (2012–2015)

The SONGS Project is a follow-up to the USS-SIMGRID ANR Project (see also here). The goal of the SONGS project is to extend the applicability of the SimGrid simulation framework from Grids and Peer-to-Peer systems to Clouds and High Performance Computing systems. Each type of large-scale computing system will be addressed through a set of use cases and led by researchers recognized as experts in the area. Any sound study of such systems through simulation relies on the following pillars of simulation methodology: an efficient simulation kernel; sound and validated models; simulation analysis tools; simulation campaign management. The WP8 page.

The EA DISSIMINET (Associated Team) (2011–2013)

Since January 2011, the MASCOTTE project-team has been an associate team with the ARS Laboratory at Carleton University, Ottawa, ON (Canada). This Franco-Canadian team will advance research on the definition of new algorithms and techniques for component-based simulation using a web-services-based approach. On one hand, the use of web services is expected to solve the critical issues that pave the way toward the simulation of systems of unprecedented complexity, especially (but not exclusively) in studies involving large networks such as peer-to-peer networks. Web-service-oriented approaches have numerous advantages, such as allowing the reuse of existing simulators, allowing non-computer experts to merge their respective knowledge, or the seamless integration of complementary services (e.g. on-line storage and repositories, weather forecast, traffic, etc.). One important expected outcome of this approach is to significantly improve the simulation methodology in network studies, especially by enforcing the seamless reproducibility and traceability of simulation results. On the other hand, a net-centric approach to simulation based on web services comes at the cost of added complexity and requires new practices, at both the technical and methodological levels. The results of this common research will be integrated into both teams’ discrete-event distributed simulators: the CD++ simulator at Carleton University and the simulation middleware developed in the MASCOTTE EPI, called OSA, whose development is supported by an INRIA ADT (Development Action) named OSA starting in December 2011.

The OSA project (Supported by INRIA since 2005, currently by an ADT funding, 2011-2012)

OSA stands for Open Simulation Architecture. This is a development project for a new discrete-event simulation platform. The original elements of this new platform are:

  1. the integration, in the same tool, of a large number of the Modeling & Simulation concerns (modeling, development, instrumentation, …)
  2. the extensive use of Component-Based Software Engineering (CBSE) techniques, and more particularly the Fractal component model (for example, in order to ease the reuse and replacement of parts of the platform AND of the models; cf. this paper)
  3. the use of Aspect-Oriented Programming (AOP) techniques in order to separate concerns
  4. an open (Open Source) and modular architecture, easy to use (automatic dependency management based on a Maven repository), inspired by AND based on Eclipse
  5. a collaborative development model (forge, wiki …)

OSA v0.6 is available on the INRIA forge with a demo of Peer-to-peer storage simulation.

1 Software Engineer position available to work on this project starting Sept 2010 (1 yr, renewable). Details about how to apply will soon be published here.

2.2  Other projects

Armada (Since 2015)

A new-generation NAS storage system that uses a peer-to-peer backup infrastructure to reduce costs and improve reliability.

Binding Layers (Since Dec 2011)

Binding Layers is a new Component Architecture Model.

A software Component Architecture Model (CAM) describes a set of operating rules and mechanisms for building complex applications using a structured assembly of software components. Compared to a component model, e.g. J2EE, Spring, SCA or Fractal, a CAM does NOT specify the component model itself, but builds instead on top of existing Component Models (CMs). As a result, an important property sought in BL-CAM is genericity: BL-CAM is meant to be compliant with many Component Models.

Various approaches have been proposed so far to specify the structure of complex applications based on components, but the most popular are certainly the following:

  • Flat structures: all components lie in a common container and interact directly with each other according to their dependencies;
  • Hierarchical structures: components can be grouped into bigger units, which can in turn be used to form even bigger units, and so on.

Both approaches have their pros and cons: flat structures avoid the complexity of hierarchy and therefore usually offer better performance, but at the cost of lesser reusability and control; on the contrary, hierarchical structures offer great means for reusing parts of an application, and the hierarchy provides a de facto means for building complex control and fine-tuned non-functional services. However, despite their popularity, both approaches fail to provide good means for the separation of concerns at the architectural level.

Binding Layers is an attempt to solve this issue by following a third, different approach. Like flat structures, BL does not suffer from a many-level hierarchy performance cost, and yet, like hierarchical structures, it allows for sophisticated grouping strategies. For this purpose, BL relies extensively on two original features: component sharing and layering by extension.

Component sharing means that a single component instance can be found in many component assemblies. Therefore, assuming that component assemblies are formed according to some common concern, component sharing allows a component to be directly part of a concern, rather than having to reach for it, e.g. through a complex path in the component hierarchy. A usual idiom found in other component models is to shorten this path by placing non-functional concerns in, or beside, each component (e.g. in the membrane of Fractal components). However, this approach creates an artificial dichotomy among components, each of which ends up belonging to one of two dimensions: functional or non-functional. On the contrary, thanks to component sharing, Binding Layers seamlessly and uniformly supports an arbitrary number of dimensions (functional as well as non-functional).

Component groups formed in each dimension are called layers. Each layer has a flat structure. However, reuse is made easy: first, because the number of layers is not limited, each layer, typically in charge of one concern, can be reused independently to build new applications (e.g. a persistence layer can be reused in many applications). In addition, Binding Layers offers an extension mechanism, somewhat similar to the inheritance mechanism found in OO languages, that allows incremental specialization of a given layer.

Status: work-in-progress.

See this presentation (PDF, 836 KiB) for more details.

2.3  Olivier’s SandBox

You will find on this page links to some ongoing projects, drafts, experiments.

2.4  Recent and Upcoming Visitors

  • Gabriel Wainer, Carleton University, Ottawa, Canada (July 2012)
  • Joe Peters, SFU, Vancouver, Canada (June 2012)
  • Rassul Ayani, KTH, Stockholm, Sweden (February–March 2012)
  • Gabriel Wainer, Carleton University, Ottawa, Canada (June-July 2011)

2.5  Recent and Upcoming Talks

  • “Some questions about the relations between activity and time representations”, presented at the ACTIMS Workshop in Zurich, Jan 16–18 2014.
  • Binding Layers Level 0: An abstract multi-purpose component layer, Sophia Antipolis, SCADA meeting, Nov 28 2013.
    See project description above.
    (NB: This is the latest version of a talk first given at Carleton University, Ottawa, on Oct 13, 2013.)
  • “Using TM for high-performance Discrete-Event Simulation on multi-core architectures”. Presentation at the EuroTM’2013 Workshop on Transactional Memory, Prague, April 14th 2013.
    Abstract: I recently started to investigate how TM could be used to optimize the performance of a discrete-event simulation (DES) engine on a multi-core architecture. A DES engine needs to process events in chronological order. For this purpose, it needs an efficient data structure, typically abstracted as a heap or priority queue. Therefore, my goal is to design an optimized heap-like data structure supporting concurrent multi-threaded access patterns, such that multiple events can be processed in parallel by multiple threads. In DES, traditional parallelization techniques fall into two categories: either conservative or optimistic. In the conservative approach, events are dequeued and processed in strict chronological order, which requires a synchronization protocol between the concurrent logical processes (LPs) to ensure consistency. In the optimistic approach, LPs are free to proceed and possibly violate the chronological order, but if such a violation happens, a roll-back mechanism is used to return to the last consistent state (which requires a snapshot). The solution I am currently investigating is based on a software emulation library for C++ called TBoost.STM. This library offers various transaction semantics, among which one, called invalidate-on-commit, allows a transaction to be invalidated by the process that “suffers” the violation rather than the one that originates it. In our case, assuming that a transaction is associated with the dequeuing and processing of an event, a transaction is deemed successful when it completes without any earlier event having been inserted into the heap and with no earlier event still pending. This is where building a solution based on invalidate-on-commit and transaction composition seems promising: indeed, it seems easier to discover chronological violations when new events are inserted. In that case, all transactions that were mistakenly started too early can be invalidated.
    This library also provides a way of composing transactions, which could also prove helpful. For example, an aggressively optimistic strategy could dequeue new events before the full completion of earlier events, in which case composition could be used to make the completion of later events depend on that of earlier ones. I am still at an early stage of this work, for which I have just started experiments and performance evaluations.
  • Using Computer Simulations for Producing Scientific Results: Are We There Yet?
    Keynote presentation given at WNS3 2013, the 2013 Workshop on NS3, Cannes, France, March 5 2013
    Abstract: A rigorous scientific methodology has to follow a number of supposedly well-known principles. These principles date as far back as ancient Greece, where philosophers like Aristotle began to establish them; later notable contributions include principles set forth by Descartes and, more recently, Karl Popper. All disciplines of modern Science manage to comply with those principles with considerable rigor. All … except maybe when it comes to computer-based Science.
    Computer-based Science should not be confused with the Computer Science discipline (a large part of which is not computer-based); it designates the corpus of scientific results obtained, in all disciplines, by means of computers, using in-silico experiments, and in particular computer simulations. Issues and flaws in computer-based Science have been regularly pointed out in the scientific community during the last decade.
    In this talk, after a brief historical perspective, I will review some of these major issues and flaws, such as the reproducibility of results or the reusability and traceability of scientific software material and data. Finally, I will discuss a number of ideas and techniques that are currently being investigated, or could serve as part of candidate solutions to those issues and flaws.
  • On Reproducibility and Traceability of Simulation Experiments (PDF) presented at WinterSim in Berlin, Dec. 2012.
    Abstract: Reproducibility of experiments is the pillar of a rigorous scientific approach. However, simulation-based experiments often fail to meet this fundamental requirement. In this paper, we first revisit the definition of reproducibility in the context of simulation. Then, we give a comprehensive review of issues that make this highly desirable feature so difficult to obtain. Given that experimental (in-silico) science is only one of the many applications of simulation, our analysis also explores the needs and benefits of providing the simulation reproducibility property for other kinds of applications. Coming back to scientific applications, we give a few examples of solutions proposed for solving the above issues. Finally, going one step beyond reproducibility, we also discuss in our conclusion the notion of traceability and its potential use in order to improve the simulation methodology.
  • My D.E.S. is Going to Be Better Than Yours (PDF) at SFU Seminar, in Surrey Campus, Surrey, BC, May 28 2012.
    Abstract: Although provocative, this claim is often made by those who are considering the perilous project of writing their own discrete event simulator. In this talk, I will first review the pros and cons of writing a new simulator to demonstrate that there is no clear choice between writing a new simulator or reusing an existing one. Assuming that the decision to write a new simulator is eventually made, I will present a number of technical issues and some techniques that I have been using, in the last few years, to solve them. Most of these techniques involve advanced software engineering techniques and concepts, including Software Reuse, Aspect Oriented Programming, Separation of Concerns, Component Frameworks, and Architecture Description Languages.
    Then, I will introduce the Open Simulation Architecture (OSA) and its philosophy. OSA is a research project that I have been leading during the last few years, whose goal is to experiment with the various techniques described above in order to improve the simulation methodology. In response to the provocative title of this talk, I will show how OSA aims at offering a new simulator by attempting to integrate and reuse the best parts of other simulators. Finally, I will focus on the particular “layered” design used in OSA. While this concept sounds familiar, this layering is actually a unique feature that allows all of the previously mentioned concepts to fit together and serves the overall modeling and simulation methodology surprisingly well.
  • Some desired features for the DEVS ADL (PDF) at the DEVS/TMS Workshop, Boston, April 6th, 2011.
  • Invited presentation at the USS-SIMGRID workshop in Cargese (Corsica, FR), April 2010. (Some Methodology Issues and Methodology Experiments in the OSA Project - PDF slides)
  • Invited presentation at the ARS/SCS seminar at Carleton University (Ottawa, CA), August 2010 (Same slides as Cargese above).

↑ Contents

3.  Open Positions

Internship subjects available

No internship position available at the moment for working with Olivier.

Non-local students must send a resume and a motivation letter to apply.
Local students may apply using the standard procedure.

Using transactional memory for implementing a multithreaded simulation engine

This subject is offered as a 1st-year Master project for a group of 2–4 students OR as a Master 2 subject.

Heap structures are critical to the performance of many applications. One good example is Discrete-Event Simulation (DES), in which the events that represent the history of the system are stored in such a structure in arbitrary order during the simulation, but have to be processed in strictly increasing order (of occurrence time) to follow the chronology. In order to speed up and/or scale up the execution of such DES, various algorithms have been proposed for distributing simulations over multiple computers while maintaining global synchronization, using message passing. The advent of many-core architectures opens new perspectives for the parallelization of such algorithms on multiple cores, using multiple threads and shared memory.
The goal of this internship/project is to implement and evaluate the performance of such a multi-threaded heap algorithm based on Transactional Memory. Transactional Memory is a recent technique proposed to replace the use of locks and mutual exclusion in concurrent algorithms running on a shared-memory architecture. As its name suggests, it borrows the idea of transactions from the database world: when a concurrent action is needed, a transaction is initiated in memory; if the action completes without conflict (with other threads), the transaction is committed and the new memory state is kept; if the action generates a conflict, some of the conflicting transactions have to be rolled back (i.e. the new memory state is dropped) and restarted. This technique was proposed a few years ago, but it was initially only available by means of software emulation. Major actors such as Intel and IBM have recently started to build or announce hardware support for Transactional Memory in their latest products (e.g. the IBM BlueGene/Q supercomputer already has it [1], and the next generation of Intel processors has been announced with an instruction-set extension for the support of TM [2]).
Work to do
The work to do is to investigate the use of such a TM software emulation library to implement a multi-threaded heap data structure. The library we chose is TBoost.STM [3,4], a proposed extension to the Boost C++ library. The work to be done during the TER is the following:
  • implement or retrieve various multi-threaded heap data structures in C++:
    • without transactional memory, using algorithms found in the literature
    • with transactional memory, using custom-designed algorithms
  • run performance comparisons between the various implementations
    • Build a performance benchmark
    • Run experiments on a multi-core computer
Required skills: C/C++ programming, experience with POSIX threads and concurrent programming (locks, semaphores).
  • [1] Peter Bright. “IBM’s new transactional memory: make-or-break time for multithreaded revolution.” ARS Technica, Aug 31 2011. see here
  • [2] Peter Bright. “Transactional memory going mainstream with Intel Haswell.” ARS Technica, Feb 2012. see here
  • [3] Justin E. Gottschlich, Jeremy G. Siek, Paul J. Rogers, and Manish Vachharajani. “Toward Simplified Parallel Support in C++.” In Proceedings of the Fourth International Conference on Boost Libraries (BoostCon), May 2009. see here
  • [4] The TBoost.STM Library: see here

↑ Contents

4.  Old Stuff

I was a Ph.D. student in the Sloop project-team (former name of the Mascotte project-team) from December 1994 to December 1998; my advisors were Michel Syska and Jean-Claude Bermond. Then I held a postdoctoral fellowship with CNES (the French space agency) from January 1999 to August 2000.

The USS-SIMGRID ANR Project (2010–2011)

Starting in September 2010, I took over Tasks 2.3 and 6.2 of WP6 of the USS-SIMGRID ANR Project, following the departure of my colleague F. Lefessant (INRIA Saclay). Our contribution to this project is twofold:

  1. Application Workload Characterization. The goal of this task is to capture the workload through a set of defined events. Some of them (such as send and receive) are shared among all applications, but they are very low-level. Higher-level events have to be specific to the application. For example, in a P2P DHT, join or lookup are classic events, while submit is often seen in a batch-scheduler context. This task has two goals. First, we aim at implementing an instrumentation tool able to capture the low-level events of any application (using system-level solutions such as LD_PRELOAD) and record them according to a generic event format. Then, we want to provide a solution to generate the simulation code corresponding to a given event log. We do not plan on providing a tool to capture high-level traces, since they are too application-specific for us to devise a generic tool.
  2. Peer-to-peer backups: Simulation environments for large-scale distributed applications, such as peer-to-peer video-on-demand systems or generic peer-to-peer storage systems, are generally limited to the estimation of metrics such as the number of messages exchanged between the different peers, and do not consider timing issues. In the particular case of peer-to-peer backup, being able to estimate the time needed to load or store a file chunk is crucial. We expect such a tool to provide a better understanding of the behavior of a working backup system, and in particular to compute some parameters that impact the performance of the system and are hard to guess from standard simulations, in which network characteristics are not sufficiently taken into account (sizes of volumes, sizes of chunks, failure detection delays).

4.1  The Spiderman project (Supported by INRIA, since 2008)

Spiderman is a project initiated by my PhD student Juan-Carlos Maureira and our colleague Diego Dujovne (INRIA Planete project-team). It is a new system designed to provide network connectivity to in-motion communicating devices inside buses, trains or subways moving at high speed. The system is made of two parts. The mobile part, called the Spiderman Device, is installed in the mobile vehicles and provides a standard WiFi connection service to end users. The static part is made of multiple identical devices, called Wireless Switches, which are installed all along the path and provide the connection with the fixed network infrastructure. The connection between the mobile and fixed parts is maintained using a custom-made two-radio IEEE 802.11 hand-over procedure, implemented within the Spiderman Device. This hand-over procedure is designed to ensure a continuous connection at the data-link layer for vehicles moving at speeds up to 150 km/h and possibly higher. The system is currently under testing.

4.2  The SPREADS ANR Project (2007–2010)

SPREADS is the acronym for Safe P2p-based REliable Architecture for Data Storage. It is a common research project between UbiStorage, I3S/INRIA/Mascotte, Eurecom/NSTeam, LIP6/INRIA/REGAL and LACL/SC. The project started in Dec. 2007 with funding from the French National Research Agency (ANR) and additional sponsorship from the SCS competitiveness cluster. Many other people are working with me on this exciting project in the Mascotte team (at the time of writing, we are no fewer than 2 associate professors, 1 research associate, 1 postdoc, 1 engineer, and 3 PhD students!)

4.3  The BROCCOLI INRIA ARC Project (2008–2009)

The goal of the BROCCOLI ARC project is to design a platform for describing, deploying, executing, observing, administrating, and reconfiguring large-scale component-based software architectures, in particular for building discrete-event simulation applications. In addition to the Mascotte project-team (Judicael Ribault, Fabrice Peix and myself), this project involves two other research groups.

4.4  The OSERA ANR project (2005–2008)

OSERA is a project funded by the ANR that aims at studying ambient networks in urban areas. I am working on this project with two other members of the Mascotte team, Hervé Rivano and David Coudert. Hervé and David mainly focus on the optimization and algorithmic aspects, while I focus on the simulation and discrete-event modeling ones. For this purpose, I initiated the design and development of a new open component-based simulation platform called OSA. I work on this platform with the help of Cyrine Mrabet, an associate engineer in our team, and Judicael Ribault (MEng. student in CS engineering).

4.5  ASIMUT CNES project (1999–2004)

ASIMUT is a telecommunication network simulator. During my post-doc at the French Space Agency center in Toulouse, I participated in the design effort of a new simulation environment for (satellite) telecommunication networks, called ASIMUT. In short, ASIMUT innovates in the field of network simulation because it relies on a new hierarchical, component-based modeling concept. ASIMUT is a complete environment that provides support for network architecture design, simulation campaigns, experiment planning, data analysis and, to some extent, the development of model components (in C++).

4.6  “Exotic” File Systems for Unix/Linux

I started to focus on this topic during my PhD with the Multi-Point Communication File System (MPCFS). MPCFS is a kernel extension that allows Unix users to exchange data between Unix systems by simply reading or writing these data to/from special files. What is interesting in this approach is that once the MPCFS extension (a kernel module) is plugged into the operating system, users can benefit from its multi-point communication ability without any special tools or libraries. Sending data across the network is as easy as writing to a file, or redirecting the standard output of a process to a named pipe.

Unfortunately, the first prototype of MPCFS for Linux (1998) was too big and buggy to be of real use. Since the idea still seemed appealing, I decided to restart the project from the beginning (2001) with a much more modular approach: develop the file-system-based API on one hand, and the multi-point communication protocols on the other. The first prototype of the API part was released in 2002 by Olivier Francoise. The protocol part is still under study…

If you want to learn more, you may have a look at the slides (PDF file, 682 KB) of the talk I gave at Sun Labs Europe (Grenoble, France) in November 2002…

… or at this newer version of the slides I used for the talk I gave at the SolutionsLinux conference in February 2003 (OpenOffice SXI or HTML formats).

4.7  Communications and dynamic load balancing for parallel and distributed architectures

I worked on this topic during my Ph.D. thesis, especially on networks of workstations:

  • In order to model the behavior (the performance level as a function of the workload level) of the several kinds of workstations available in a local area network, I developed the LoadBuilder environment, a distributed platform designed for the definition and management of distributed experiments. When complete, this platform should help in designing efficient information policies for multi-criteria dynamic load-balancing algorithms.
  • With the help of a few engineering trainees from the neighbouring School of Computer Engineering (ESSI), I initiated a project whose goal was to include into a UNIX kernel (Linux) all the functionalities allowing distributed parallel applications to communicate transparently through the file system. This project resulted in the development of a virtual file system driver for Linux: MPCFS.
  • I was involved in setting up the first cluster of workstations installed at the INRIA Sophia Antipolis Research Center. I was especially interested in the performance evaluation of its communication network, and I was also in charge of installing and evaluating the performance of a small Myrinet network over these workstations.
  • I regularly participated in the GRAPPES working group meetings, in which French teams interested in the various aspects of cluster computing presented their work and ongoing research.

↑ Contents

5.  Students

Current Students

Former Students

I was happy (and lucky :-) to supervise the following Ph.D. students:

  • Julian Monteiro, 2007–2010 (co-advisor with S. Perennes)
Modeling and Analysis of Reliable Peer-to-Peer Storage Systems
  • Juan-Carlos Maureira, 2008–2011 (co-advisor with JC Bermond)
  • Judicael Ribault, 2008–2011
Reuse and Scalability in Modeling and Simulation Software Engineering
  • Damián Vicino (co-tutelle with Carleton University; co-advisors: G. Wainer and F. Baude), 2012–2015
Improved Time Representation in Discrete-Event Simulation

Recently, I also supervised the following student internships:

  • Mario Taddei, Master 2 IFI (Ubinet), Research Internship (6 months, 2015)
  • Thanh Phuong PHAM, Master 2 IFI (Ubinet), Research Internship (6 months, 2012)
  • Inza Bamba, Master 2 IFI (Ubinet), M.Sc. Research Internship (6 months, 2010)
  • Alaedin Moussa, Polytech’Marseille 2nd year, Research Initiation Internship (2 months, 2010)

↑ Contents

6.  Other Research Activities

6.1  Conference & Workshop organizations

6.2  Conference Programme Committee Memberships

Some even older activities

6.3  Stays abroad

6.4  Miscellaneous tasks & memberships

  • Expert reviewer for Ministry of Higher Education & Research (MESR) CIR (Crédit Impôt Recherche) applications (2010–)
  • Member of the Comité de Sélection for a permanent faculty position at Univ. of Provence (Marseille) (2010)
  • Expert reviewer for ANR projects and other similar submissions (2008)
  • Member of the VerSim workgroup, where French-speaking researchers discuss theoretical aspects of simulation (I organized the last meeting in Sophia Antipolis, June 6th, 2006)
  • I have been a member of the Commission de Spécialistes, 27e section, of U. Nice (the computer science scientific committee) since 2001.
  • I am also a member of several committees in both of my research labs: the Commission Développements Logiciels (Software Developments Committee) of INRIA Sophia Antipolis, the Commission Informatique (Computing technical committee) of I3S, …
  • I am member of various societies (ACM SIGSIM, ICST, IEEE CS, SCS, …). (Unless I forgot to renew membership…)

↑ Contents

7.  Recent Bibliography (Full biblio…)

  1. Damián Vicino, Olivier Dalle and Gabriel Wainer (2016) An Advanced Data Type with Irrational Numbers to Implement Time in DEVS Simulators. In Proceedings of the Intl. Symposium on Theory of Modeling and Simulation (TMS/DEVS). Pasadena, CA, April 3–6, pages 1–8. SCS. To appear. (BibTeX)
  2. Damián Vicino, Olivier Dalle and Gabriel Wainer (2015) Using Finite Forkable DEVS for Decision-Making Based on Time Measured with Uncertainty. In Proceedings of the 8th EAI International Conference on Simulation Tools and Techniques. Athens, Aug 24–27, 10 p. (BibTeX)
  3. Damián Vicino, Daniella Niyonkuru, Gabriel Wainer and Olivier Dalle (2015) Sequential PDEVS Architecture. In Proceedings of the 2015 TMS/DEVS Conference. Apr, pages 906–913. (PDF) (BibTeX)
  4. Damián Vicino, Chung-Horng Lung, Gabriel Wainer and Olivier Dalle (2015) Investigation on software-defined networks’ reactive routing against BitTorrent. IET Networks, 4, pp. 249–254. (URL) (BibTeX)
  5. Olivier Dalle, Damián Vicino and Gabriel Wainer (2014) A data type for discretized time representation in DEVS. In SIMUTOOLS - 7th International Conference on Simulation Tools and Techniques. Lisbon, Portugal, Mar. (Kalyan Perumalla and Roland Ewald, Eds.). ICST. (URL) (PDF) (BibTeX)
  6. Damián Vicino, Chung-Horng Lung, Gabriel Wainer and Olivier Dalle (2014) Evaluating the impact of Software-Defined Networks’ Reactive Routing on BitTorrent performance. In FNC - 9th International Conference on Future Networks and Communications. Niagara Falls, Canada. (Elhadi M. Shakshuki, Ed.). Elsevier. (URL) (BibTeX)
  7. Olivier Dalle (2014) Reuse-centric simulation software architectures. In Modeling and Simulation-Based Systems Engineering Handbook, pages 263–292. CRC Press. (URL) (BibTeX)
  8. Damián Vicino, Gabriel Wainer and Olivier Dalle (2013) Using DEVS models to define fluid based uTP model. ACM SIGSIM PADS - Intl. Workshop on Principles of Advanced and Distributed Simulation - Poster Presentation. (BibTeX)
  9. Olivier Dalle and Emilio P. Mancini (2013) NetStep: a micro-stepped distributed network simulation framework (short paper). In SIMUTools - 6th International ICST Conference on Simulation Tools and Techniques. Cannes, France, Mar. (Wentong Cai and Kurt Vanmechelen, Eds.). ICST. (URL) (PDF) (BibTeX)