List of seminars
June 14, 2010: Doug DeCarlo, "Visual Explanations"
Apr. 20, 2010: Bruno Galerne, "Image texture synthesis using spot noise and phase randomization"
Apr. 7, 2010: Roland Fleming, "Visual Estimation of 3D Shape"
Dec. 3, 2009: Sylvain Marchand, "Advances in Spectral Modeling of Musical Sound"
Nov. 5, 2009: Olga Sorkine, "Modeling and editing shapes: from local structure to high-level features"
Oct. 20, 2009: Adrien Bousseau, "Expressive image manipulations for a variety of visual representations"
Mar. 12, 2009: Mikio Shinya, "A Simplified Plane-Parallel Model for Fast Multi-Scattering Simulation"
Mar. 11, 2009: Ricardo Marroquim, "Image Reconstruction for Point Based Rendering"
Feb. 3, 2009: Anatole Lécuyer, "Using hands and brain to interact in 3D virtual environments with haptic and brain-computer interfaces"
Oct. 13, 2008: Wojciech Jarosz, "Efficient Monte Carlo Rendering Techniques"
Oct. 8, 2008: Ares Lagae, "Accelerating Ray Tracing using Constrained Tetrahedralizations & Isotropic Stochastic Procedural Textures by Example"
Sept. 16, 2008: Michiel van de Panne, "Human Motion Synthesis for Graphics and Robotics"
Apr. 24, 2008: Mathieu Lagrange, "Sound Synthesis for Virtual Reality: A modal approach for the audio rendering of complex interactions"
Mar. 27, 2008: Martin Hachet, "3D User Interfaces - from immersive environments to mobile devices"
Nov. 26, 2007: Kaleigh Smith, "Local Enhancement and the Cornsweet Effect"
June 11, 2007: Andrew Nealen, "Interfaces and Algorithms for the Creation and Modification of Surface Meshes"
May 31, 2006: Adrien Bousseau, "Interactive watercolor rendering with temporal coherence and abstraction"
May 23, 2006: Sylvain Paris, "Bilateral filtering and computational photography"
Feb. 3, 2006: Katerina Mania, "Fidelity Metrics for Immersive Simulations based on Spatial Cognition"
Nov. 16, 2005: Carsten Dachsbacher, "Reflective Shadow Maps and Beyond"
Sept. 28, 2005: Marcus Magnor, "Video-based rendering"
June 13, 2005: Eugene Fiume, "The next 40 years of computer graphics"
Mar. 29, 2005: Isabelle Viaud-Delmon, "Virtual reality in behavioral neuroscience: from experimental paradigm to object of study"
Feb. 1, 2005: Pat Hanrahan, informal seminar presenting ideas on current research
Dec. 3, 2004: Sylvain Lefebvre, "Surface texturing models for image synthesis"
Sept. 28, 2004: Nathan Litke, "A variational approach to optimal surface parametrization"
June 4, 2004: Jim Hanan, "Modelling of processes in dynamic environments: From Cells to Ecosystems"
Feb. 6, 2004: Kari Pulli, "Mobile 3D Graphics APIs"
Feb. 5, 2004: Michael Gleicher, "Animation by Example"
Feb. 2, 2004: Ken Perlin, "Recent Graphics Research at NYU"
May 6, 2003: Holger Regenbrecht, "Mixed Reality Research and Applications at DaimlerChrysler"
Feb. 7, 2003: Ken Perlin, "Virtual actors that can act"
Nov. 15, 2002: Ronen Barzel, "Choreographing dynamics"
June 14, 2002: Victor Ostromoukhov, "Color in technology, psychology and plastic arts"
Apr. 8, 2002: Sébastien Roy, "3D Vision: Towards automatic scene reconstruction"
Dec. 11, 2001: Oliver Deussen, "Modeling and rendering of complex botanical scenes"
Nov. 16, 2001: Simon Gibson, "Recovering Geometric and Illumination Data from Image Sequences"

 

Doug DeCarlo
"Visual explanations"

14 June 2010

Abstract

Human perceptual processes organize visual input to make the structure of the world explicit. Successful techniques for automatic depiction, meanwhile, create images whose structure clearly matches the visual information to be conveyed. We discuss how analyzing these structures and realizing them in formal representations can allow computer graphics to engage with perceptual science, to mutual benefit. We call these representations visual explanations: their job is to account for patterns in two dimensions as evidence of a visual world. I will situate this discussion using some of our recent work on the abstract depiction of 2D shapes. Our approach works by organizing the shape into parts using a new synthesis of holistic features of the part shape, local features of the shape boundary, and global aspects of shape organization.

Doug DeCarlo

Doug DeCarlo received BS degrees in computer science and computer engineering from Carnegie Mellon in 1991, and his PhD in computer science from the University of Pennsylvania in 1998 working with Dimitris Metaxas. He is currently an associate professor in the Department of Computer Science with a joint appointment in the Center for Cognitive Science at Rutgers University.
Bruno Galerne
"Image texture synthesis using spot noise and phase randomization"

20 april 2010

Abstract

We explore the mathematical and algorithmic properties of two sample-based micro-texture models: random phase noise (RPN) and asymptotic discrete spot noise (ADSN). These models make it possible to synthesize random phase textures. A mathematical analysis shows that RPN and ADSN are different stochastic processes. Nevertheless, numerous experiments suggest that the textures obtained by these algorithms from identical samples are perceptually similar. In addition to this theoretical study, we propose solutions to three obstacles that prevented the use of RPN or ADSN to emulate micro-textures. First, the RPN and ADSN algorithms are extended to color images. Second, a preprocessing step is proposed to avoid artifacts due to the non-periodicity of real-world texture samples. Finally, the method is extended to synthesize textures of arbitrary size from a given sample. Joint work with Yann Gousseau (LTCI, Télécom ParisTech) and Jean-Michel Morel (CMLA, ENS Cachan).
References:
- Preprint: Random Phase Textures: Theory and Synthesis, B. Galerne, Y. Gousseau and J.-M. Morel, submitted.
- Online demo: http://www.ipol.im/pub/algo/ggm_random_phase_texture_synthesis/
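The core RPN operation is easy to sketch: keep the Fourier magnitude of the sample and replace its phase with uniform random phase. A minimal grayscale sketch (the function name, and the trick of taking the phase of the FFT of white noise to obtain a Hermitian-symmetric random phase, are my illustration, not the authors' code):

```python
import numpy as np

def random_phase_noise(sample, seed=None):
    """Random phase noise (RPN) sketch for a grayscale texture sample:
    keep the Fourier magnitude, randomize the Fourier phase."""
    rng = np.random.default_rng(seed)
    magnitude = np.abs(np.fft.fft2(sample))
    # The phase of the FFT of real white noise is uniformly distributed and
    # has the Hermitian symmetry needed for a real-valued inverse transform.
    phase = np.angle(np.fft.fft2(rng.standard_normal(sample.shape)))
    return np.fft.ifft2(magnitude * np.exp(1j * phase)).real
```

Because only the phase changes, the synthesized texture has exactly the power spectrum, and hence the second-order statistics, of the sample.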

Bruno Galerne

Bruno Galerne is a third-year Ph.D. student at CMLA, ENS Cachan and LTCI, Télécom ParisTech. His advisors are Yann Gousseau (Télécom ParisTech) and Jean-Michel Morel (ENS Cachan). His Ph.D. work deals with image models and texture synthesis algorithms involving germ-grain random fields (shot noise, colored dead leaves, ...). He graduated from the mathematics department of ENS Cachan and obtained a master's degree in mathematics and image processing (Master MVA "Mathématiques, Vision, Apprentissage"), also at ENS Cachan.

Roland Fleming
"Visual Estimation of 3D Shape"

7 April 2010

Abstract

How does the brain estimate 3D shape? This question presents visual neuroscience with something of an explanatory gap. On one side we have the known response properties of cells early in the visual processing hierarchy (spatial frequency and orientation tuning, etc.). These are relatively well understood, but don't give much insight into how the brain could actually reconstruct the shape of surfaces from the image. On the other side, we have computational theories of shape from shading, shape from texture, and so on. These can guide our understanding of the ambiguities involved in inferring shape, but tell us little about how the solution might actually be carried out in the brain. In this talk I will attempt to bridge this explanatory gap by showing how local image statistics can be 'put to good use' in the estimation of 3D shape. I'll show how populations of cells tuned to different spatial frequencies and image orientations can extract image information that is directly related to 3D shape properties, and that this information is surprisingly stable across a wide range of viewing conditions. I'll argue that these measurements can serve as a unifying front-end for cues that are traditionally thought to be quite different from one another. Finally, through a series of illusions and psychophysical experiments, I'll show that these image statistics correctly predict both successes and failures of human 3D shape perception across a range of conditions. If you are willing to stare for a while, I'll also use this knowledge to make random noise look like a 3D shape by adapting your orientation detectors.

Roland Fleming

Dr. Roland Fleming is the joint project leader of the Perception, Graphics and Computer Vision group at the MPI and specializes in the perception of materials, illumination and 3D shape. He has made several seminal contributions to the interaction between perception and computer graphics, including papers at ACM SIGGRAPH and other prestigious venues. He has conducted some of the first and most highly cited research on the perception of surface reflectance properties and translucency. Insights from this work led to a method for Image Based Material Editing, in which the material properties of an object in a photograph can be radically altered (e.g. turning porcelain into glass). He has also developed a novel theory of human 3D shape estimation, and used psychophysical techniques to propose methods for displaying images on high dynamic range displays. Since 2009, Dr. Fleming has been joint Editor-in-Chief of ACM Transactions on Applied Perception, an interdisciplinary journal dedicated to using perception to advance computer graphics and other fields.

Sylvain Marchand
"Advances in Spectral Modeling of Musical Sound"

3 Dec. 2009

Abstract

Spectral models attempt to parameterize sound as it appears at the basilar membrane of the ear, so representations and transformations in these models are closely linked to perception. Among these models, sinusoidal modeling deals with partials: pseudo-sinusoidal tracks whose frequencies and amplitudes evolve slowly and continuously over time. It generalizes additive (modal) synthesis and is also related to the physical structure of sounds. Sinusoidal modeling is extremely useful for many applications such as musical sound transformation (time scaling, pitch shifting, re-spatialization, etc.), coding (compression), and classification. Apart from the extension to the non-stationary case, one recent research direction in sinusoidal modeling is the modeling of the parameters of the partials themselves. By re-analyzing the evolution of the model parameters, we obtain (level-2) parameters of a hierarchical model well suited for time scaling while preserving musical modulations such as vibrato and tremolo. Moreover, this re-analysis of the spectral parameters turns out to be extremely useful for difficult problems such as lossless compression and source separation. An impressive application is "active listening", which enables the user to interact with the sound while it is played: the musical parameters (loudness, pitch, timbre, duration, spatial location) of the sound entities (sources) present in the musical mix can be changed interactively.
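The sinusoidal model described above can be sketched directly: each partial is an oscillator whose instantaneous phase integrates a slowly varying frequency envelope. A minimal resynthesis sketch (the function name and the per-sample integration are my own illustration, not the speaker's code):

```python
import numpy as np

def synthesize_partials(freqs, amps, sr=44100):
    """Additive resynthesis sketch: sum of partials with per-sample
    frequency (Hz) and amplitude envelopes, each of shape
    (n_partials, n_samples)."""
    freqs = np.atleast_2d(np.asarray(freqs, dtype=float))
    amps = np.atleast_2d(np.asarray(amps, dtype=float))
    # Instantaneous phase is the running integral of instantaneous frequency.
    phases = 2 * np.pi * np.cumsum(freqs, axis=1) / sr
    return (amps * np.sin(phases)).sum(axis=0)

# A 1-second 440 Hz partial with vibrato (±5 Hz at 6 Hz) and tremolo (4 Hz).
t = np.arange(44100) / 44100
f = 440 + 5 * np.sin(2 * np.pi * 6 * t)
a = 0.8 + 0.2 * np.sin(2 * np.pi * 4 * t)
tone = synthesize_partials(f, a)
```

Time scaling while preserving vibrato then amounts to stretching the envelopes `f` and `a` at the level of their own (level-2) parameters rather than stretching the waveform.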

Sylvain Marchand

Sylvain Marchand has been an associate professor in the Image and Sound research team of the LaBRI (Computer Science Laboratory), University of Bordeaux 1, since 2001. He is also a member of the "Studio de Création et de Recherche en Informatique et Musique Électroacoustique" (SCRIME), leader of the French ANR DReaM project, and a member of the scientific committee of the international DAFx (Digital Audio Effects) conference. He is also an associate editor of the IEEE Transactions on Audio, Speech, and Language Processing. Dr. Marchand is particularly involved in musical sound analysis, transformation, and synthesis. He focuses on spectral representations, taking perception into account. Among his main research topics are sinusoidal models, analysis/synthesis of deterministic and stochastic sounds, sound localization/spatialization ("3D sound"), separation of the sound entities (sources) present in polyphonic music, and "active listening" (enabling the user to interact with the musical sound while it is played).
Olga Sorkine
"Modeling and editing shapes: from local structure to high-level features"

Nov. 5, 2009

Abstract

Understanding and modeling how shapes deform is essential for tasks in geometric modeling and computer animation. Advances in 3D scanning technology provide us with a rich variety of highly detailed realistic 3D shapes, yet these usually come as unstructured discrete models (meshes or point clouds), to which the classical representations and modeling tools from CAGD are not easily applicable. In this talk, I will describe geometric algorithms, coupled with suitable user interface metaphors, to model and edit 3D shapes in an efficient and intuitive manner. I will first discuss low-level deformation methods, targeted to preserve the local surface details as the shape deforms by optimizing the surface with respect to certain differential quantities. Following the broad understanding of this low-level behavior, I will present recent work on shape modeling that evolves towards more high-level, semantic capturing of the edited object by focusing on the nature and interplay between global features of the shape.
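As a toy illustration of deforming a shape while preserving its local details by optimizing differential quantities (in the spirit of Laplacian editing, though not Prof. Sorkine's exact formulation), one can ask a polyline to keep its differential coordinates in a least-squares sense while satisfying handle constraints:

```python
import numpy as np

def laplacian_edit(points, handles, w=10.0):
    """Detail-preserving deformation sketch: solve a least-squares system
    that keeps each vertex's differential coordinate (vertex minus the
    average of its neighbors) while softly meeting handle constraints.
    points: (n, 2) polyline; handles: dict {vertex index: target (x, y)}."""
    n = len(points)
    L = np.eye(n)
    for i in range(n):
        nbrs = [j for j in (i - 1, i + 1) if 0 <= j < n]
        for j in nbrs:
            L[i, j] = -1.0 / len(nbrs)
    delta = L @ points                       # differential coordinates
    rows, rhs = [L], [delta]
    for i, pos in handles.items():           # soft positional constraints
        row = np.zeros(n)
        row[i] = w
        rows.append(row[None, :])
        rhs.append(w * np.asarray(pos, dtype=float)[None, :])
    A, b = np.vstack(rows), np.vstack(rhs)
    new_points, *_ = np.linalg.lstsq(A, b, rcond=None)
    return new_points
```

Moving one handle then bends the whole curve smoothly, because every vertex tries to keep its local detail vector rather than its absolute position.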

Olga Sorkine

Olga Sorkine is currently an Assistant Professor of Computer Science at the Courant Institute of Mathematical Sciences, New York University. She earned her BSc in Mathematics and Computer Science and PhD in Computer Science from Tel Aviv University (2000, 2006). Following her studies, she received the Alexander von Humboldt Foundation Fellowship and spent two years as a postdoc at the Technical University of Berlin. Olga is interested in theoretical foundations and practical algorithms for digital content creation tasks, such as shape representation and editing, artistic modeling techniques, computer animation and digital image manipulation. She also works on fundamental problems in digital geometry processing, including parameterization of discrete surfaces and compression of geometric data. She received the Young Researcher Award from the Eurographics Association in 2008.
Adrien Bousseau
"Expressive image manipulations for a variety of visual representations"

20 October 2009

Abstract

Visual communication greatly benefits from the large variety of appearances that an image can take. By neglecting spurious details, simplified images focus the observer's attention on the essential message. Stylized images, which depart from reality, can suggest subjective or imaginary information. More subtle variations, such as a change of lighting in a photograph, can also have a dramatic effect on the interpretation of the transmitted message.

The goal of this thesis is to allow users to manipulate visual content and create images that correspond to their communication intent. We propose a number of manipulations that modify, simplify or stylize images in order to improve their expressive power.

We first present two methods to remove details in photographs and videos. The resulting simplification enhances the relevant structures of an image. We then introduce a novel vector primitive, called Diffusion Curves, that facilitates the creation of smooth color gradients and blur in vector graphics. The images created with diffusion curves contain complex image features that are hard to obtain with existing vector primitives. In the second part of this manuscript we propose two algorithms for the creation of stylized animations from 3D scenes and videos. The two methods produce animations with the 2D appearance of traditional media such as watercolor. Finally, we describe an approach to decompose a photograph into its illumination and reflectance components. We make this ill-posed problem tractable by propagating sparse user indications. This decomposition allows users to modify the lighting or materials in the depicted scene.

The various image manipulations proposed in this dissertation facilitate the creation of a variety of visual representations, as illustrated by our results.

Adrien Bousseau

Adrien Bousseau just graduated from Grenoble University where he did his PhD under the supervision of Joëlle Thollot and François X. Sillion. During his PhD he also spent 6 months in Seattle as an intern at Adobe's Advanced Technology Labs under the supervision of David Salesin, and 3 months in Cambridge at MIT CSAIL under the supervision of Frédo Durand and Sylvain Paris.
His work deals with non-photorealistic rendering (NPR) and, more generally, image synthesis and image processing.

Mikio Shinya
"A Simplified Plane-Parallel Model for Fast Multi-Scattering Simulation"

12 March 2009

Abstract

Fast computation of multiple reflection and transmission among complex objects is very important in photo-realistic rendering. In this talk, the plane-parallel scattering theory is briefly introduced and its rendering applications are shown. We then present a simplified plane-parallel model that has very simple analytic solutions. This allows efficient evaluation of multiple scattering. A geometric compensation method is also introduced to cope with the infinite plane condition required by the plane-parallel model. Some early results of tree rendering are also shown.

Mikio Shinya

Mikio Shinya is a Professor of computer science at Toho University. He graduated from Waseda University, Tokyo, with Bachelor's and Master's degrees in Applied Physics in 1979 and 1981, respectively, and received his Ph.D. in 1991. After graduating, Shinya worked at Nippon Telegraph and Telephone, where he started a computer graphics group at the NTT laboratories. From 1988 to 1989, he was a Visiting Scientist at the University of Toronto, Canada. His current research topics include computational models of global illumination and multiple scattering, and medical applications of computer graphics.

Ricardo Marroquim
"Image Reconstruction for Point Based Rendering"

11 March 2009

Abstract

Image-based methods have proved to render scenes more efficiently than geometry-based approaches, mainly due to one of their most important advantages: their complexity is bounded by the image resolution rather than by the number of primitives. Furthermore, due to their parallel and discrete nature, they are highly suitable for GPU implementations. During the last few years, point-based graphics has emerged as a promising complement to other representations, and with the continuous increase of scene complexity, solutions for directly processing and rendering large point clouds are in demand. In this seminar I will present an approach for efficiently rendering large point models using image reconstruction techniques.

 

Anatole Lécuyer
"Using hands and brain to interact in 3D virtual environments with haptic and brain-computer interfaces"

3 February 2009

Abstract

In this presentation we will describe novel techniques dedicated to real-time interaction with 3D virtual environments. We will focus on the use of two advanced types of interaction devices in virtual reality: haptic interfaces (tactile and force feedback, stimulating the skin and the body) and brain-computer interfaces (enabling control via brain activity alone, using acquisition machines such as electroencephalography). We will first detail our recent developments in haptic and visuo-haptic rendering, notably "Spatialized Haptic Rendering", which displays contact position information using vibration patterns, and "Pseudo-Haptic Feedback", which provides haptic sensations or "haptic illusions" without a haptic device by using visual feedback. We will then detail the results obtained within the Open-ViBE project (www.irisa.fr/bunraku/OpenViBE) in the field of Brain-Computer Interfaces (BCI), notably high-level interaction techniques based on BCI to navigate virtual worlds or select virtual objects "by thought". We will briefly describe the OpenViBE platform: free and open-source software for the design, testing and use of brain-computer interfaces.

Anatole Lécuyer

Anatole Lécuyer received his Ph.D. in Computer Science in 2001 from University of Paris XI, France, and since 2002 he has been a senior researcher at INRIA, the French National Institute for Research in Computer Science and Control (www.inria.fr), in the BUNRAKU research team in Rennes, France. His main research interests include: Virtual Reality (VR), 3D interaction, haptic feedback, pseudo-haptic feedback and brain-computer interfaces. He is the coordinator of the Open-ViBE project on Brain-Computer Interfaces and VR (www.irisa.fr/bunraku/OpenViBE), the former leader of the Working Group on Haptic Interaction of the INTUITION European Network of Excellence on VR (www.intuition-eunetwork.net), and the INRIA local representative of several national and European projects on VR (ANR PACMAN, EU FET-OPEN STREP NIW, etc.). He is an expert in VR for national public bodies and a member of international program committees of VR- and haptics-related conferences (World Haptics, Eurohaptics, ACM VRST, etc.). He is currently an associate editor of the ACM Transactions on Applied Perception, secretary of the French Association for Virtual Reality (www.afrv.fr), and secretary of the IEEE Technical Committee on Haptics (www.worldhaptics.org). Contact him at anatole.lecuyer@irisa.fr.

Wojciech Jarosz
"Efficient Monte Carlo Rendering Techniques"

13 October 2008

Abstract

The overarching goal of physically-based rendering research is constructing efficient, robust and flexible algorithms for simulating the behavior of the natural world. In this talk I will discuss two areas of my research in developing more efficient Monte Carlo rendering techniques. First, I will discuss two novel techniques for simulating light transport in scattering media. Volumetric radiance caching and beam radiance estimation are general, robust, complementary, and they provide orders of magnitude speedup over previous approaches. I will also present two improved sampling techniques for stochastic ray tracing. Multidimensional adaptive sampling and wavelet importance sampling distribute sample rays more intelligently during rendering, providing significant noise reduction and faster render times. Finally, I will conclude with some possible avenues for future work.
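As background for why sample distribution matters, here is a generic importance-sampled Monte Carlo estimator (a textbook illustration, not Dr. Jarosz's algorithms): drawing samples from a density that resembles the integrand reduces variance for the same sample count.

```python
import numpy as np

def mc_estimate(f, pdf, sampler, n, seed=0):
    """Monte Carlo estimate of an integral as the mean of f(X)/pdf(X),
    where X ~ pdf. Any pdf that is positive wherever f is nonzero works;
    a pdf proportional to f would give zero variance."""
    rng = np.random.default_rng(seed)
    x = sampler(rng, n)
    return np.mean(f(x) / pdf(x))

# Integrate f(x) = x^2 over [0, 1] (exact value 1/3).
f = lambda x: x ** 2
# Uniform sampling.
est_uniform = mc_estimate(f, lambda x: np.ones_like(x),
                          lambda rng, n: rng.random(n), 100_000)
# Importance sampling with pdf p(x) = 2x, sampled by inverting its CDF.
est_is = mc_estimate(f, lambda x: 2 * x,
                     lambda rng, n: np.sqrt(rng.random(n)), 100_000)
```

Both estimates converge to 1/3, but the importance-sampled one has roughly a quarter of the variance here, because x^2/(2x) = x/2 fluctuates far less than x^2.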

Wojciech Jarosz

Wojciech Jarosz is a Post-doctoral researcher at the University of California, San Diego. His main research focus is on Monte Carlo rendering techniques, including advanced sampling, production-quality global illumination, and participating media. His current list of publications includes three SIGGRAPH papers on these topics. He received his B.S. in computer science from the University of Illinois at Urbana-Champaign in 2003 and his M.S. and Ph.D. in computer science from the University of California, San Diego in 2006 and 2008.

Ares Lagae
"Accelerating Ray Tracing using Constrained Tetrahedralizations & Isotropic Stochastic Procedural Textures by Example"

8 October 2008

Abstract

We will be presenting two research projects in this talk:

1. Accelerating Ray Tracing using Constrained Tetrahedralizations:
In this paper we introduce the constrained tetrahedralization as a new acceleration structure for ray tracing. A constrained tetrahedralization of a scene is a tetrahedralization that respects the faces of the scene geometry. The closest intersection of a ray with a scene is found by traversing this tetrahedralization along the ray, one tetrahedron at a time. We show that constrained tetrahedralizations are a viable alternative to current acceleration structures, and that they have a number of unique properties that set them apart from other acceleration structures: constrained tetrahedralizations are not hierarchical yet adaptive; the complexity of traversing them is a function of local geometric complexity rather than global geometric complexity; constrained tetrahedralizations support deforming geometry without any effort; and they have the potential to unify several data structures currently used in global illumination.

2. Isotropic Stochastic Procedural Textures by Example:
Procedural textures have significant advantages over image textures. Procedural textures are compact, are resolution and size independent, often remove the need for a texture parameterization, can easily be parameterized and edited, and allow high quality anti-aliasing. However, creating procedural textures is more difficult than creating image textures. Creating procedural textures typically involves some sort of programming language or an interactive visual interface, while image textures can be created by simply taking a digital photograph. In this paper we present a method for creating procedural textures by example, designed for isotropic stochastic textures. From a single uncalibrated photograph of a texture we compute a small set of parameters that defines a procedural texture similar to the texture in the photograph. Our method allows us to replace image textures with similar procedural textures, combining the advantages of procedural textures and image textures. Our method for creating isotropic stochastic procedural textures by example therefore has the potential to dramatically improve the texturing and modeling process.

Ares Lagae

Ares Lagae is a Postdoctoral Fellow of the Research Foundation - Flanders (FWO). He is doing research at the Computer Graphics Research Group of the Katholieke Universiteit Leuven in Belgium. His research interests include tile-based methods in computer graphics, ray-tracing, rendering and computer graphics in general. He received a BS and MS degree in Informatics from the Katholieke Universiteit Leuven in 2000 and 2002. He received a PhD degree in Computer Science from the Katholieke Universiteit Leuven in 2007, funded by a PhD fellowship of the Research Foundation - Flanders (FWO).

Mathieu Lagrange
"Sound Synthesis for Virtual Reality: A modal approach for the audio rendering of complex interactions"

24 April 2008

Abstract

Audition is a modality that complements vision and allows the user to be better immersed in a virtual environment. The widespread use of physics engines, which describe the environment and the interactions between the elements that compose the scene, allows us to consider a modal synthesis approach for the audio rendering of those interactions. This type of approach has mostly been limited to the synthesis of simple interactions such as impacts, where the objects are in contact for a short period of time. Yet the modal model is physically valid for a much broader range of interactions. We will study a specific type of complex interaction, rolling, and present a model of this type of sustained-excitation sound rooted in the source/filter approach. This new model is flexible and compact and allows us to efficiently synthesize sounds of complex interactions in a scalable way.
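The modal model underlying this approach is easy to sketch for the simple impact case: an excitation deposits energy into a bank of modes, each of which rings as an exponentially damped sinusoid. A minimal sketch (the mode parameters below are invented for illustration):

```python
import numpy as np

def modal_impact(modes, dur=0.5, sr=44100):
    """Modal synthesis sketch of an impact sound: each mode is a tuple
    (frequency in Hz, damping in 1/s, linear gain) and contributes one
    exponentially damped sinusoid."""
    t = np.arange(int(dur * sr)) / sr
    out = np.zeros_like(t)
    for freq, damping, gain in modes:
        out += gain * np.exp(-damping * t) * np.sin(2 * np.pi * freq * t)
    return out

# A small metallic-sounding object: a few inharmonic, lightly damped modes.
clink = modal_impact([(523.0, 6.0, 1.0),
                      (1408.0, 9.0, 0.5),
                      (2690.0, 14.0, 0.25)])
```

For sustained interactions such as rolling, the source/filter view described in the talk replaces the single impact by a continuous excitation signal driving the same modal resonators.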

Mathieu Lagrange

Mathieu Lagrange obtained his M.Sc. and Ph.D. in Computer Science in 2001 and 2004, respectively, within the Image and Sound team of the "Laboratoire Bordelais de Recherche en Informatique" (LaBRI), University of Bordeaux 1, France. After a post-doctoral fellowship in the Computer Science Department of the University of Victoria (Canada), Dr. Lagrange now manages the Audio/Haptic axis of the Enactive Project within the Music Technology Area of McGill University (Canada). His expertise covers numerous aspects of the analysis and synthesis of audio signals for the purposes of coding, indexing and human/computer interaction.

Martin Hachet
"3D User Interfaces - from immersive environments to mobile devices"

27 March 2008

Abstract

The quest for efficient interactive 3D applications motivates numerous developments in the scope of computer graphics. It also feeds challenging research questions in the scope of interaction. Indeed, interacting with 3D environments remains a difficult task and adapted 3D user interfaces (3DUI) are still to be designed. In this talk, I will present some examples of 3DUI we have developed in our lab to improve the user performance in 3D interactive tasks. First, I will present hardware and software solutions for immersive environments. Then, I will show some results in the scope of mobile technologies. Finally, I will discuss a new gesture-based approach that can be used with numerous emerging platforms. Such a technique, which operates from large displays to small screens, enhances the user mobility.

Martin Hachet

Martin Hachet is a research scientist at INRIA Bordeaux – Sud-Ouest. He is a member of the Iparla project-team, which focuses on computer graphics and 3D interaction for mobile users. His main research activity concerns 3D user interfaces, from immersive environments to mobile devices. Martin Hachet has served on program committees for conferences in computer graphics (Eurographics 08), human-computer interaction (IEEE 3DUI 07-08), and virtual reality (IPT/EGVE 07, VRIC 08). This year, he is program Co-Chair for ACM VRST 2008, which will be held in Bordeaux.
URL: http://www.labri.fr/~hachet

Kaleigh Smith
"Local Enhancement and the Cornsweet Effect"
26 November 2007

Abstract

In this talk, I will present recent work inspired by a perceptual illusion called the Cornsweet effect. Part of my work has been to explore the illusion's connection to local enhancement techniques (namely unsharp masking) and to consider its application to image processing and rendering. First, I will show how local enhancement can be used to solve part of the colour-to-greyscale problem for images and video. Then, I will present ongoing research on the Cornsweet effect in 3D: why its impact is strongest when reinforced by a 3D scenario, and how introducing it in scene space can be used to increase contrast in renderings.
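Unsharp masking, the local enhancement technique mentioned above, amounts to adding back a scaled difference between an image and a blurred copy of itself; the overshoot and undershoot this creates on either side of an edge are what connect it to the Cornsweet effect. A minimal sketch with a separable box blur (the blur choice and parameters are my illustration):

```python
import numpy as np

def unsharp_mask(img, radius=3, amount=0.5):
    """Unsharp masking sketch: out = img + amount * (img - blur(img)).
    Uses a separable box blur of width 2*radius+1 for simplicity."""
    k = 2 * radius + 1
    kernel = np.ones(k) / k
    blurred = img.astype(float)
    for axis in (0, 1):  # blur rows, then columns
        blurred = np.apply_along_axis(
            lambda m: np.convolve(m, kernel, mode="same"), axis, blurred)
    return img + amount * (img - blurred)
```

On a step edge the result dips below the dark level on one side and rises above the bright level on the other, the same luminance profile the Cornsweet illusion exploits.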


Kaleigh Smith


Kaleigh Smith is currently a Computer Graphics Ph.D. candidate at the Max Planck Institute for Informatics (MPI) in Saarbrücken, Germany, under the supervision of Karol Myszkowski. She recently spent 6 months on a research exchange in Grenoble, France with Joëlle Thollot and the ARTIS group. Kaleigh received her Masters degree in Computer Science at McGill University in Montreal, Canada, and began her PhD work in graphics there with Allison Klein. Her main research interests are visual: perception, rendering, artistic techniques, animation and computer imagery. She is motivated by experiences, media and art in the real world.

URL: http://www.cs.mcgill.ca/~kaleigh/

 

Andrew Nealen
"Interfaces and Algorithms for the Creation and Modification of Surface Meshes"

11 June 2007

Abstract

For the simple creation of surface meshes, we present an interface for designing freeform surfaces with a collection of 3D curves. The user first creates a rough 3D model by using a sketching interface. Unlike previous sketching systems, the user-drawn strokes stay on the model surface and serve as handles for controlling the geometry. These curves can be added, removed, and deformed easily, as if working with a 2D line drawing. For a given set of curves, the system automatically constructs a smooth surface embedding by applying functional optimization. Our system provides real-time algorithms for both control curve deformation and the subsequent surface optimization.

For further surface modification, we present a silhouette over-sketching interface, which automates the processes of determining both the deformation handle, as well as the region to be deformed. The user sketches a stroke that is the suggested position of part of a silhouette of the displayed surface. The system then segments all image-space silhouettes of the projected surface, identifies among all silhouette segments the best matching part, derives vertices in the surface mesh corresponding to the silhouette part, selects a sub-region of the mesh to be modified, and feeds appropriately modified vertex positions together with the sub-mesh into a mesh deformation tool.

Overall, these algorithms have been designed to enable interactive creation and modification of the surface, yielding a surface modeling and editing system that strives to come close to the experience of sketching 3D models on paper.


Andrew Nealen


PhD Student
Computer Graphics Laboratory
TU Berlin, Germany

URL: http://www.nealen.com/prof.htm

 

Adrien Bousseau
"Interactive watercolor rendering with temporal coherence and abstraction"
Top
31 May 2006

Abstract
:

This paper presents an interactive watercolor rendering technique that recreates the specific visual effects of wash (lavis) watercolor. Our method allows the user to easily process images and 3D models and is organized in two steps: an abstraction step that recreates the uniform color regions of watercolor, and an effect step that filters the resulting abstracted image to obtain watercolor-like images. In the case of 3D environments, we also propose methods to produce temporally coherent animations that maintain a uniform pigment distribution while avoiding the shower-door effect.
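The abstraction step can be illustrated with a deliberately crude stand-in: quantizing each channel into a few flat tones already produces the uniform color regions the abstract mentions. This is only a hedged sketch; the actual system applies more elaborate smoothing before flattening, and the `quantize` helper and sample values below are hypothetical.

```python
def quantize(channel, levels=4):
    """Toy 'abstraction' step: snap each pixel value in [0, 1] to one of
    `levels` evenly spaced tones, producing flat colour regions
    reminiscent of watercolor washes."""
    out = []
    for row in channel:
        out.append([round(v * (levels - 1)) / (levels - 1) for v in row])
    return out

# A tiny 2x3 grayscale "image" with values in [0, 1]:
image = [[0.05, 0.40, 0.62], [0.10, 0.49, 0.95]]
flat = quantize(image, levels=4)
```

With four levels, every pixel lands on one of the tones 0, 1/3, 2/3, 1, so smooth gradients collapse into uniform patches.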


Adrien Bousseau



Adrien Bousseau is currently a student in the Image, Vision, Robotics (IVR) Master's program in Grenoble, doing his internship under the supervision of Joëlle Thollot. He works on non-photorealistic rendering, and more specifically on watercolor rendering for animation.
He comes from a technical background (IUT in imaging at Le Puy-en-Velay, then IUP in Mathematics and Computer Science at La Rochelle), complemented by more theoretical training (ENSIMAG and the IVR Master's).

He did an internship at the L3I laboratory in La Rochelle on the modeling of textured human shapes, and an internship in the Sigmedia group at Trinity College Dublin on video analysis for the diagnosis of dyslexia.

Sylvain Paris
"Filtre bilateral et photographie algorithmique"
Top
23 May 2006

Abstract
:

After a brief summary of my thesis work, I will present my recent results in image processing and computational photography.

I will begin by describing the bilateral filter, which underlies many techniques for manipulating digital photographs and videos. By reformulating this filter in a higher-dimensional space, I will show that a visually faithful approximation of the exact result can be computed extremely quickly. In a second part, I will use this technique to manipulate the appearance of digital photographs, automatically transferring the visual qualities of an artist's picture to an amateur photograph.

This work was published this year at the European Conference on Computer Vision and at the ACM SIGGRAPH conference, in collaboration with Soonmin Bae and Frédo Durand of MIT.
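The bilateral filter at the heart of this talk can be sketched in its brute-force form; the higher-dimensional reformulation mentioned above accelerates exactly this computation. A minimal 1-D sketch (function name, parameters and signal are illustrative, not the talk's implementation):

```python
import math

def bilateral_1d(signal, sigma_s=2.0, sigma_r=0.2):
    """Brute-force bilateral filter on a 1-D signal: each sample becomes
    a weighted average of all samples, where the weight falls off both
    with spatial distance (sigma_s) and with difference in value
    (sigma_r). Edges are preserved because samples across a large jump
    receive a negligible range weight."""
    out = []
    n = len(signal)
    for i in range(n):
        num = den = 0.0
        for j in range(n):
            w = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)
                         - ((signal[i] - signal[j]) ** 2) / (2 * sigma_r ** 2))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

# A noisy step edge: filtering flattens the noise but keeps the jump.
step = [0.0, 0.05, -0.02, 0.01, 1.0, 0.97, 1.03, 1.0]
filtered = bilateral_1d(step)
```

The quadratic cost of this double loop is what makes the exact filter slow on images; the higher-dimensional approximation trades it for a coarse convolution that is visually indistinguishable.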


Sylvain Paris


Sylvain Paris is a graduate of the École polytechnique and completed the DEA in Algorithmics in Paris. He prepared his Ph.D. with François Sillion at INRIA Rhône-Alpes in Grenoble. During this period, he also collaborated with Long Quan at the Hong Kong University of Science and Technology, where he stayed for six months. Since November 2004, he has been a postdoctoral researcher at the Massachusetts Institute of Technology, where he works in collaboration with Frédo Durand. His research interests lie at the interface of computer vision, image processing, and computer graphics. His work focuses in particular on 3D reconstruction from images, three-dimensional hair capture, and computational photography.

 

Katerina Mania
"
Fidelity Metrics for Immersive Simulations based on Spatial Cognition"
Top
3 February 2006

Abstract

A goal of simulation systems for training is to provide users with appropriate sensory stimulation so that they interact with the virtual world in ways similar to the natural world. Visual fidelity is often a primary goal of computer graphics imagery, which strives to create scenes that are perceptually indistinguishable from an actual scene to a human observer. Interaction fidelity refers to the degree to which the simulator technology (visual and motor) is perceived by a trainee to duplicate the operational equipment and the actual task situation. The research community is challenged to establish functional fidelity metrics for simulations, mainly targeting positive transfer of training to the real world.

In this talk, I will explore the effect of visual and interaction fidelity on spatial cognition, focusing on how humans mentally build spatial representations. I will then discuss ongoing research on the effect of memory schemas on spatial memory, and the application of these results to a real-time selective rendering engine that endeavors to simulate a cognitive process rather than physics. We will conclude with a brief presentation of other projects on the simulation of subjective impressions of illumination and on determining perceptual sensitivity to tracking latency.

Dr Katerina Mania
Department of Informatics
University of Sussex, UK
Falmer, BN1 9QT Brighton, UK

T: +44 1273 678964

URL: http://www.sussex.ac.uk/Users/km3

 


Carsten Dachsbacher
"Reflective Shadow Maps and Beyond "
Top
16 November 2005

Abstract

Indirect illumination is a subtle, yet important aspect for realistic rendering. Due to its global nature the computation of indirect illumination is notoriously slow. On the other hand, approximations for indirect light are usually satisfactory. Reflective Shadow Maps are an efficient means to add one-bounce indirect illumination of diffuse surfaces to dynamic scenes. Recent improvements provide an extension for non-diffuse surfaces and caustics and achieve real-time rendering speed.
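The core idea of Reflective Shadow Maps is that every shadow-map pixel of the light source also stores the world position, surface normal and reflected flux of the point it sees, so each pixel can be reused as a small indirect light. A hedged sketch of the gathering step (helper names and sample values are hypothetical; the unnormalized cosine terms with the 1/r^4 falloff follow the approximation used in the RSM paper):

```python
def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def indirect_light(x, n, pixel_lights):
    """One-bounce diffuse gathering: sum the contribution of every
    shadow-map pixel light (position xp, normal n_p, reflected flux)
    at receiver point x with normal n."""
    e = 0.0
    for xp, n_p, flux in pixel_lights:
        d = sub(x, xp)                        # pixel light -> receiver
        r2 = dot(d, d)
        if r2 == 0.0:
            continue                          # skip degenerate self-lighting
        cos_p = max(0.0, dot(n_p, d))         # emitter must face the receiver
        cos_x = max(0.0, dot(n, sub(xp, x)))  # receiver must face the emitter
        e += flux * cos_p * cos_x / (r2 * r2)
    return e

# One lit floor pixel at the origin bouncing flux upwards:
pixels = [((0.0, 0.0, 0.0), (0.0, 1.0, 0.0), 1.0)]
bounce = indirect_light((1.0, 1.0, 0.0), (0.0, -1.0, 0.0), pixels)
```

In the real technique this sum runs on the GPU over an importance-driven subset of shadow-map pixels, which is what makes it fast enough for dynamic scenes.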

Carsten Dachsbacher:

Carsten Dachsbacher is a Ph.D. student in computer graphics at the University of Erlangen. His research focuses on interactive, hardware-assisted computer graphics; in particular he is working on interactive global illumination techniques, procedural models for rendering photo-realistic terrains and point-based rendering.

 

Marcus Magnor 
"Video-based rendering"
Top
28 September 2005

Abstract

Expectations of computer graphics performance are rising continuously: whether in flight simulators, surgical planning systems, or computer games, ever more realistic rendering results are to be achieved at real-time frame rates. In fact, thanks to progress in graphics hardware as well as rendering algorithms, visual realism is today within reach of off-the-shelf PC graphics boards.

With rapidly advancing rendering capabilities, the modeling process is becoming the limiting factor on the way to realistic rendering. Higher visual realism can be attained only by having more detailed and accurate scene descriptions available. So far, however, modeling 3D geometry and object texture, surface reflectance characteristics and scene illumination, character animation and emotion is a labor-intensive, tedious process. The cost of authentic content creation using conventional approaches increasingly threatens to stall further progress in realistic rendering applications.

In my talk, I will present an alternative modeling approach: "Video-based Rendering" is about how real-world scenes and events may be acquired from the "real thing". Given a handful of synchronized video recordings, complex, time-varying scenes and natural phenomena can be modeled from reality and incorporated into time-critical 3D graphics applications. Photo-realistic rendering quality and truly authentic animations can be obtained. Besides offering a solution for realistic rendering applications in computer graphics, research into video-based modeling and rendering algorithms also leads to tools for video editing and may even pave the way towards new forms of visual media.

Marcus Magnor

Marcus Magnor is head of the Independent Research Group NWG3: Graphics-Optics-Vision  at the Max-Planck-Institut für Informatik in Saarbrücken, Germany. He received his B.A. in 1995 from the  University of Würzburg, Germany, and his M.S. in Physics in 1997 from the University of New Mexico, USA. He then joined  Bernd Girod's Telecommunications research group at the University of Erlangen, Germany, where he received his Ph.D. in Electrical Engineering in 2000. For his post-graduate work, he joined  Stanford University's  Computer Graphics Lab  as Research Associate, before coming to the MPI für Informatik in early 2002.
His research interests in computer graphics include video-based rendering, realistic and interactive visualization, as well as dynamic geometry processing. Beyond graphics, he is working on interdisciplinary research topics such as dynamic scene analysis, multimedia coding and communications, and physics-based modeling.


Eugene Fiume
"The next 40 years of computer graphics"
Top
13 June 2005

Abstract
:

 The year 2003 marked the fortieth anniversary of the inception of computer graphics.  In 1963, Ivan Sutherland demonstrated the potential of interactive computer graphics with his remarkable project called Sketchpad.  In the intervening forty years, our field has made astonishing scientific and technological progress.  However, we are experiencing a realisation of Sutherland's ideas only now.  After reviewing our progress to date, I will explore the inevitable growth over the next forty years of disparate technologies such as embedded systems, computer graphics, human-computer interfaces, artificial intelligence, broadband wireless communication, and intelligent data storage to posit a future for computer graphics that is already changing how we think about computation and visual depiction.  It may well be that the science fiction writers were correct.  Future computer systems will allow people to create convincing virtual worlds of their own making.  Computational visual depiction will soon achieve a state of being able to fool most of the people most of the time.  The problems and opportunities of such a future are important to contemplate now.  Our potential to tell big lies will be just as great as our potential to tell big truths.  How will we and our children distinguish one from the other?


Eugene Fiume


Eugene Fiume is Professor and past Chair of the Department of Computer Science at the University of Toronto, where he also co-directs the Dynamic Graphics Project. Following his B.Math. degree from the University of Waterloo and M.Sc. and Ph.D. degrees from the University of Toronto, he was an NSERC Postdoctoral Fellow and Maitre Assistant at the University of Geneva, Switzerland. He was awarded an NSERC University Research Fellowship in 1987 and returned to the University of Toronto to a faculty position. He was Associate Director of the Computer Systems Research Institute, and was a Visiting Professor at the University of Grenoble, France. He is or was a member of various boards, including the Scientific Advisory Board of GMD, Germany, and the Max-Planck Center for Visual Computing and Communication; the Board of Directors of TrueSpectra, Inc. in Toronto; the Board of Directors of CITO; the Advisory Boards of CastleHill Ventures, PlateSpin, BitFlash, TrueSpectra, OctigaBay Systems and NGRAIN Corporation; and the Executive Advisory Board of the IBM Lab in Toronto.

Eugene has participated in many task forces and reviews of research institutes around the world. He has had a long association with the computer graphics and electronic media industries in Canada and the U.S., notably with Alias|wavefront, where he was Director of Research and Usability Engineering while on leave from the university. He now works with several companies in an advisory capacity on both technological and business issues. He also works with venture capital companies on due diligence and strategy.

Eugene's research interests include most aspects of realistic computer graphics, including computer animation, modelling natural phenomena, and illumination, as well as strong interests in internet based imaging, image repositories, software systems and parallel algorithms. He has written two books and (co-)authored over 90 papers on these topics. Eleven doctoral students and twenty master's students have graduated under his supervision. He has won two teaching awards, as well as Innovation Awards from ITRC for research in computer graphics and Burroughs-Wellcome for biomedical research. He was also the Papers Chair for SIGGRAPH 2001, and is Chair of the SIGGRAPH Awards Committee.

His industrial interests include technology transfer in the Information Technology area, internet-based applications, wireless and multimedia systems, web-based services, large-scale computation, and the interaction of information technology and business.

 

Isabelle Viaud-Delmon
"Virtual reality in behavioral neuroscience:
from experimental paradigm to object of study"
Top
29 March 2005

Abstract

Virtual reality (VR) systems have enabled many research paradigms in behavioral neuroscience in recent years. The ease with which the various sensory cues available to a subject can be manipulated experimentally makes VR a tool of choice for studying multisensory integration in humans and its disorders. Moreover, in clinical psychopathology, exposing patients to virtual environments makes it possible to implement new forms of therapy with many benefits.
However, the use of these systems raises at least two major problems, particularly in clinical and experimental psychopathology. The first is the limited number of sensory modalities engaged by the tool, which is most often restricted to integrating visual and idiothetic cues (the proprioceptive and vestibular information taken as a whole). The second is the "derealizing" character of virtual reality, which relates to the notion of presence. From a psychopathological standpoint, a number of questions therefore arise. Consequently, while VR serves as an experimental apparatus in behavioral neuroscience, it must also become an object of study in its own right.

Isabelle Viaud-Delmon

CNRS UPMC, UMR 7593
Hôpital de la Salpêtrière – Paris


Pat Hanrahan
Informal seminar presenting ideas on current research

Top
1 February 2005


Sylvain Lefebvre
"Modèles d'habillage de surfaces pour la synthèse d'images"
Top

3 December 2004

Abstract

The complexity of objects lies not only in their shape but also in the appearance of their surface. In image synthesis, surface appearance models make it possible to define the properties of a material (color, glossiness, roughness, etc.) and to vary them along surfaces. For example, texture mapping applies an image (the texture) to the geometry of an object.

However, the size of the worlds represented in today's applications keeps growing. Creating detailed textures that capture the richness and complexity of the real world over such large domains has become a long and difficult task. Moreover, texture mapping is not suited to every situation. Textures containing small patterns distributed over a homogeneous background (dead leaves, stones, flowers, ...) waste memory. Animated or dynamic textures (impacts, footprints, water drops, ...) are difficult to represent. The available memory, relatively small compared to the size of the data used, constrains artists, who must resort to shortcuts; blurriness and obvious repetition are commonly observed visual defects. In interactive applications these constraints are even stronger, and interactivity has become a key element of image synthesis, whether in simulators, in video games, or for quickly previewing results before long image computations.

We propose new appearance models which, like texture mapping, make it possible to vary the properties of a material along the surface of an object. Our models address the needs of modern applications: large textured domains, locally applied details, and animated or dynamic textures. They are based on procedural approaches, but also on data structures that overcome the limitations of texture mapping. Most of our models are designed for current graphics processors, on which they are directly implemented.
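A procedural approach in the sense used above evaluates the texture on the fly from a formula instead of storing an image, so arbitrarily large domains cost no memory. As a hedged, generic illustration (value noise with a lattice hash; not one of the talk's actual models):

```python
import math

def value_noise(x, y, seed=0):
    """Tiny procedural texture: an integer hash assigns a pseudo-random
    value in [0, 1] to each lattice point, and bilinear interpolation
    with smoothstep weights yields a continuous pattern that can be
    evaluated at any (x, y) with no stored image."""
    def hash2(i, j):
        h = (i * 374761393 + j * 668265263 + seed * 2147483647) & 0xFFFFFFFF
        h = (h ^ (h >> 13)) * 1274126177 & 0xFFFFFFFF
        return (h & 0xFFFF) / 65535.0

    i, j = math.floor(x), math.floor(y)
    fx, fy = x - i, y - j
    sx = fx * fx * (3 - 2 * fx)   # smoothstep fade curves
    sy = fy * fy * (3 - 2 * fy)
    top = hash2(i, j) * (1 - sx) + hash2(i + 1, j) * sx
    bot = hash2(i, j + 1) * (1 - sx) + hash2(i + 1, j + 1) * sx
    return top * (1 - sy) + bot * sy

t = value_noise(1.3, 2.7)  # deterministic, in [0, 1]
```

Because the value is a pure function of position and seed, such textures are a natural fit for per-pixel evaluation on graphics processors.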

 

Nathan Litke
"A variational approach to optimal surface parametrization"
Top
28 September 2004

Abstract

In this talk I will present a variational approach to the construction of low-distortion parameterizations for surfaces of disc topology. Our approach is based on principles from rational mechanics, whereby the parameterization is described in terms of a minimizer of an energy functional which is well understood from the theory of elasticity. In particular, we use the axioms of isotropy and frame indifference to derive an energy based on a unique set of measures. These capture the usual notions of area, length and angle preservation in a single energy functional, allowing for a trade-off between these measures without sacrificing mathematical guarantees such as well-posedness. This makes it possible to optimize the parameterization for multiple criteria simultaneously. For instance, one may choose the parameterization with the least area distortion amongst all conformal parameterizations. Due to its foundation in mechanics, numerical methods for minimizing the energy based on finite element discretizations are well understood, leading to a straightforward implementation. Throughout this talk I will demonstrate the flexibility of our method with numerous examples.
Nathan Litke

http://www.cs.caltech.edu/~njlitke/

 

Jim Hanan
"Modelling of processes in dynamic environments: From Cells to Ecosystems"
Top
4 June 2004

Abstract

Computational science has a large role to play in helping the biologist deal with the complexities of the systems they study.   This presentation will have a look at how an individual-based simulation and visualisation approach can be used in studying a range of processes from cellular to ecosystem level.  A central theme is the modelling of dynamic processes in dynamic structures.
Jim Hanan:

Dr Jim Hanan
Principal Research Fellow
ARC Centre of Excellence for Integrative Legume Research,
ARC Centre for Bioinformatics, ARC Centre for Complex Systems,
and Advanced Computational Modelling Centre
The University of Queensland
Brisbane, Australia
Education
B.Sc. (Hons) (1977) University of Manitoba, Canada
M.Sc. (1988) University of Regina, Canada
Ph.D. (1992) University of Regina, Canada


Kari Pulli (Nokia Mobile Phones)
"Mobile 3D Graphics APIs"
Top


6 February 2004

Abstract

The last few years have seen dramatic improvements in how much computation power and visual capabilities can be packed into a device small enough to fit in your pocket. Real-time 3D graphics on mobile phones is now a reality, and there are two new standards for a mobile 3D API.

This talk will cover a brief history of mobile 3D graphics and present two new APIs: OpenGL ES for C/C++, an immediate-mode, low-level API subsetting OpenGL 1.3, and Mobile 3D for Java MIDP, an API that supports scene graphs, animation, and a binary file format.

Bio

Kari Pulli is a Principal Scientist at Nokia Mobile Phones where he heads research activities ensuring that mobile devices allow visually interesting communication and entertainment, from the input (cameras) to the output (displays and graphics). He is also a Docent (adjunct faculty) at University of Oulu, where he teaches computer graphics.

Kari has studied in U.Oulu (86-89,91-93), U.Minnesota (89-90), U.Paderborn (90-91), and U.Washington (93-97), receiving the degrees of B.Sc., M.Sc., Lic.Tech, and Ph.D., all in computer science or engineering. He also received an Executive MBA degree at Oulu in 01. During the doctoral studies Kari worked at Microsoft, SGI, and Alias|Wavefront.




Michael Gleicher (University of Wisconsin)
"Animation by Example"
Top

5 February 2004

Abstract

The motion of animated human characters is notoriously difficult to create. Motion synthesis methods must achieve expressiveness, subtlety and realism. The current techniques for creating such quality motions, such as capturing it by observing real performers, can achieve these qualities in short, specific clips of motion. However, while these clips provide examples of what a character can do, a set of clips by itself does not provide sufficient flexibility to animate all of the things we might require of a character. We need methods that are capable of synthesizing new motions that have the qualities of the examples.

In this talk, I will survey our efforts to create high-quality motion for animation in a flexible manner. I will describe four recent projects from our group:
- Motion Graphs - an approach to creating new motions by assembling pieces of existing motions;
- Snap Together Motion - an approach to using Motion Graphs in interactive systems;
- Registration Curves - an approach to creating new motions that are combinations (blends) of existing motions;
- Match Webs - an approach to searching and organizing a large database of motions so that it can be used for synthesis tasks.
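The first of these projects, Motion Graphs, can be caricatured in a few lines: clips become nodes, an edge is added wherever one clip's ending pose is close enough to another clip's starting pose, and new motion is assembled by walking the graph. This is a hedged toy sketch in which a "pose" is a single number (the clip names, the threshold test on pose pairs, and the helpers are all hypothetical; the real method compares whole windows of poses and blends across transitions):

```python
import random

def build_motion_graph(clips, threshold=0.5):
    """Toy motion-graph construction: `clips` maps a clip name to its
    (start_pose, end_pose) pair. Clip b may follow clip a whenever
    a's ending pose is within `threshold` of b's starting pose."""
    edges = {name: [] for name in clips}
    for a, (_, end_a) in clips.items():
        for b, (start_b, _) in clips.items():
            if abs(end_a - start_b) < threshold:
                edges[a].append(b)
    return edges

def synthesize(edges, start, steps, rng):
    """Assemble a new motion by walking the graph at random."""
    path = [start]
    for _ in range(steps):
        options = edges[path[-1]]
        if not options:
            break
        path.append(rng.choice(options))
    return path

# Hypothetical clips with 1-D "poses" at their start and end frames:
clips = {"walk": (0.0, 0.1), "turn": (0.1, 0.9), "run": (0.9, 0.9)}
graph = build_motion_graph(clips, threshold=0.2)
motion = synthesize(graph, "walk", 5, random.Random(0))
```

Every consecutive pair in the synthesized sequence is a valid transition by construction, which is exactly the property that makes graph walks usable for interactive systems such as Snap Together Motion.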

Bio

Michael Gleicher is an Assistant Professor in the Department of Computer Sciences at the University of Wisconsin, Madison. Prof. Gleicher joined the University in 1998 to start a computer graphics group within the department. The overall goal of his research is to create tools that make it easier to create pictures, video, animation, and virtual environments; and to make these visual artifacts more interesting, entertaining, and informative. His current focus is on tools for character animation and for the automatic production of video. Prior to joining the university, Prof. Gleicher was a researcher at The Autodesk Vision Technology Center and at Apple Computer's Advanced Technology Group. He earned his Ph. D. in Computer Science from Carnegie Mellon University, and holds a B.S.E. in Electrical Engineering from Duke University.




Ken Perlin (New York University Media Research Laboratory and Center for Advanced Technology)
"Recent Graphics Research at NYU"
Top


2 February 2004

Abstract

I will be showing a wide variety of research results from the last year. One of them is our recent work in methods for capturing the full eight dimensions of data interaction between light and textured surfaces. Another is in designing interfaces for working with "smart" virtual characters that can convey emotion and take direction at a high level. I will also be talking about techniques for true volumetric display in open air and distributed robotic display devices. In addition, time permitting, I will show a number of small web-based interactive art and technology projects.

Bio

Ken Perlin is a Professor in the Department of Computer Science, and Director of the New York University Media Research Laboratory and Center for Advanced Technology. Ken Perlin's research interests include graphics, animation, and multimedia. In 2002 he received the NYC Mayor's award for excellence in Science and Technology and the Sokol award for outstanding Science faculty at NYU. In 1997 he won an Academy Award for Technical Achievement from the Academy of Motion Picture Arts and Sciences for his noise and turbulence procedural texturing techniques, which are widely used in feature films and television. In 1991 he received a Presidential Young Investigator Award from the National Science Foundation.

Dr. Perlin received his Ph.D. in Computer Science from New York University in 1986, and a B.A. in theoretical mathematics from Harvard University in 1979. He was Head of Software Development at R/GREENBERG Associates in New York, NY from 1984 through 1987. Prior to that, from 1979 to 1984, he was the System Architect for computer generated animation at Mathematical Applications Group, Inc., Elmsford, NY. TRON was the first movie for which his name got onto the credits. He has served on the Board of Directors of the New York chapter of ACM/SIGGRAPH, and currently serves on the Board of Directors of the New York Software Industry Association.


 
Holger Regenbrecht (DaimlerChrysler AG, Research and Technology)
"Mixed Reality Research and Applications at DaimlerChrysler"
Top

6 May 2003

Abstract

The talk gives an overview of ongoing MR research activities for development, production, and service applications in the automotive and aerospace industries at DaimlerChrysler.

We first introduce the organizational structure of research at DaimlerChrysler and the Virtual and Augmented Reality research activities at the Virtual Reality Competence Center (VRCC).

Then, we present selected MR research applications and discuss in more detail some recent results and ongoing MR research at the VRCC. We illustrate our developments with our "MagicMeeting" system, a collaborative AR system for design review scenarios.

Finally, we discuss a prototypical implementation of an Augmented Virtuality based video conferencing system.

Bio

Holger Regenbrecht is a scientist at the DaimlerChrysler Research Center in Ulm, Germany. His research interests include interfaces for virtual and augmented environments, virtual reality aided design, perception of virtual reality, collaboration, and AR/VR in the automotive and aerospace industry. Regenbrecht received a doctoral degree from the Bauhaus University Weimar, Germany.


 
Ken Perlin (New York University Media Research Laboratory and Center for Advanced Technology)
"
Virtual actors that can act"
Top

7 February 2003

Abstract

In a computer game, characters exist mainly to provide choices for a player who advances through the experience, testing himself against the game or against other players. Players do not feel that Lara Croft or Mario actually exist as psychological beings.

In contrast, in a play or movie the audience expects to vicariously experience the emotional choices of characters. Those choices have been carefully created by an author, interpreted by a director, and embodied by actors to promote a willing "suspension of disbelief."

Could this dichotomy be turned into a continuous dialectic? Could there be an interactive narrative artform in which agency is amorphous, floating between audience and character? The question is timely, because many of the enabling technologies to make such a medium are only now emerging, much as the enabling technologies to create cinema began to emerge roughly a century ago.

Such a medium would require the equivalent of at least three elements: story, direction, and acting. Of these, the third constitutes a creative bottleneck, since it is not interesting to create interactive story and direction without actors who can breathe life into them.

I will focus on recent work on creating embodied virtual actors that can take direction, express attitude and emotion, and convincingly portray characters with inner lives. I will also discuss various other related work that you can see at http://mrl.nyu.edu/~perlin/

Bio

Ken Perlin is a Professor in the Department of Computer Science, and Director of the New York University Media Research Laboratory and Center for Advanced Technology. Ken Perlin's research interests include graphics, animation, and multimedia. In 2002 he received the NYC Mayor's award for excellence in Science and Technology and the Sokol award for outstanding Science faculty at NYU. In 1997 he won an Academy Award for Technical Achievement from the Academy of Motion Picture Arts and Sciences for his noise and turbulence procedural texturing techniques, which are widely used in feature films and television. In 1991 he received a Presidential Young Investigator Award from the National Science Foundation.

Dr. Perlin received his Ph.D. in Computer Science from New York University in 1986, and a B.A. in theoretical mathematics from Harvard University in 1979. He was Head of Software Development at R/GREENBERG Associates in New York, NY from 1984 through 1987. Prior to that, from 1979 to 1984, he was the System Architect for computer generated animation at Mathematical Applications Group, Inc., Elmsford, NY. TRON was the first movie for which his name got onto the credits. He has served on the Board of Directors of the New York chapter of ACM/SIGGRAPH, and currently serves on the Board of Directors of the New York Software Industry Association.




Ronen Barzel (Pixar Animation Studio)
"Choreographing dynamics"
Top
15 November 2002

Abstract

Dynamic simulation can generate complex and realistic motion automatically, freeing the modeler or animator from worrying about the details. But what if you care about the details? For Pixar's meticulously choreographed animations, pure dynamics simulation alone doesn't necessarily deliver what the director wants. This talk will discuss three different approaches to provide the controls we need for dynamic (or dynamic-seeming) behavior to satisfy production aesthetics. First, "Faking Dynamics" -- a non-dynamic technique we used to animate the Slinky Dog and other Toy Story models. Next, "Pseudo Dynamics", a partially-dynamic technique we used to animate the rain drops in "A Bug's Life". Finally, "Plausible Motion", a speculative technique that is an area of current research.

 

Bio

Ronen Barzel joined Pixar in 1993 to work on Toy Story in various roles, in particular as a modeler with an emphasis on ropes, cords & the Slinky Dog, and as a member of the lighting team and engineer of lighting methodology and software. He has since worked on R&D of modeling, lighting and animation tools. He has a bachelor's in math/physics and a masters in computer science from Brown University, and a PhD in computer science from Caltech, where he worked on "dynamic constraints" and physically-based modeling. He is the editor-in-chief of the Journal of Graphics Tools. Starting January 2003 he will be visiting at Ecole Polytechnique, teaching a course about CG animation.

Victor Ostromoukhov (Dept.Comp.Sc.& Op.Res. / University of Montreal)
"Color in technology, psychology and plastic arts"
Top

14 June 2002

Abstract

Color plays an important role in many human activities and in our everyday life. Since antiquity, people have tried to explain the complex phenomenon of color vision. In my talk, I will present different facets of color as seen by painters and art critics, by computer graphics people, and by experimental psychologists and neuroscientists. We will review the basic concepts of modern color science, including the latest standards for color appearance models used in imaging technology.

Bio

Victor Ostromoukhov studied mathematics, physics and computer science at Moscow Phys-Tech (MIPT). After graduating in 1980, he spent several years with prominent European and American industrial companies (SG2, Paris; Olivetti, Paris and Milan; Canon Information Systems, Cupertino, CA) as a research scientist and/or computer engineer. He completed his Ph.D. in CS at Swiss Federal Institute of Technology (EPFL, Lausanne, 1995), where he continued to work as a lecturer and senior researcher. Invited professor at University of Washington, Seattle, in 1997. Research scientist at Massachusetts Institute of Technology, Cambridge, MA, in 1999-2000. Associate Professor at University of Montreal, since August 2000. His research interests are mainly in computer graphics, and more specifically in non-photorealistic rendering, texture synthesis, color science, halftoning, and digital art.

 
Sébastien Roy (Laboratoire de Vision 3D / Université de Montréal, Canada)
"3D Vision: To an automatic scene reconstruction"
Top

8 April 2002

Abstract

This seminar will focus on the problem of 3d reconstruction from multiple images taken from arbitrary point of view. Actually stereoscopic algorithms, who need similar point of view, can not be easily generalized with the case of "arbitrary cameras". The reason is that the occlusions, minor problem in stereoscopic, become an apparently insurmountable obstacle when, for example, two cameras are face to face.

We will describe a generalization of the maximum-flow approach, which can globally and efficiently minimize certain labeling functions, to the case of multiple images from arbitrary cameras. It differs from other methods (space carving, plane sweeping...) in that it provides a global, rather than local, solution to the occlusion problem.
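To make the maximum-flow idea concrete, here is a minimal, self-contained sketch of the underlying machinery: an Edmonds-Karp max-flow solver applied to a toy "one-pixel" disparity chain. The graph, node names and costs below are illustrative assumptions for exposition only, not Roy's actual multi-camera formulation; the point is simply that the minimum cut (equal to the maximum flow) severs the cheapest edge, i.e. globally selects the lowest-cost label.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max-flow: repeatedly augment along shortest residual paths.
    `cap` maps node -> {neighbor: capacity}."""
    residual = {u: dict(vs) for u, vs in cap.items()}
    for u in list(residual):                      # make sure reverse edges exist
        for v in list(residual[u]):
            residual.setdefault(v, {}).setdefault(u, 0.0)
    flow = 0.0
    while True:
        parent = {s: None}                        # BFS for an augmenting path
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in residual[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow                           # no path left: flow = min cut
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:                         # push flow along the path
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

# Toy "one-pixel" disparity chain s -> n -> t: edge capacities play the role
# of matching costs for two candidate disparities. The min cut severs the
# cheaper edge, i.e. selects the best disparity for that pixel.
cap = {'s': {'n': 5.0}, 'n': {'t': 2.0}, 't': {}}
print(max_flow(cap, 's', 't'))  # prints 2.0 (the cheaper matching cost)
```

In the full formulation, one such chain exists per pixel, and extra edges between neighboring chains encode smoothness, which is what makes the minimization global rather than per-pixel.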


Oliver Deussen (University of Dresden)
"Modeling and rendering of complex botanical scenes"
Top

11 December 2001

Abstract

Creating scenes with complex vegetation is a challenging task for two reasons: very complex geometry has to be handled and complex light interaction has to be simulated.

The talk will cover several aspects of this problem. First, a modelling method is presented that allows the user to interactively generate plant models from a small set of components. These plants are then combined into complex ecosystems using interactive tools. Efficient, realistic rendering algorithms are presented. In the last part of the talk, some non-photorealistic rendering methods for vegetation are discussed.



Simon Gibson (Advanced Interfaces Group, Department of Computer Science / University of Manchester, UK.)
"Recovering Geometric and Illumination Data from Image Sequences"
Top

16 December 2001

Abstract

Building realistic models of real-world environments is a complex task, involving the recovery of geometric representations of objects and descriptions of surface reflectance and illumination characteristics.

In this talk, I will present several algorithms we are developing to reconstruct such models from sequences of images and video footage of real scenes. First, I will discuss automatic and semi-automatic methods for camera calibration and geometry reconstruction. Following that, I will give details of a novel and flexible approach to estimating surface reflectance and illumination characteristics that uses high dynamic-range images and sets of virtual light sources.
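The high dynamic-range step mentioned above can be sketched very simply. The function below is a hypothetical, simplified illustration assuming a linear camera response (in the spirit of the classic Debevec-Malik weighted merge), not Gibson's actual pipeline: each exposure of a scene point is divided by its exposure time, and the results are averaged with weights that trust mid-range pixel values most.

```python
def radiance_from_exposures(pixels, times):
    """Merge differently exposed measurements of one scene point (pixel
    values in [0, 1]) into a single HDR radiance estimate, assuming a
    linear camera response. Weights favor well-exposed mid-range values."""
    num = den = 0.0
    for z, t in zip(pixels, times):
        w = 1.0 - abs(2.0 * z - 1.0)  # hat weight: 0 at the extremes, 1 at 0.5
        num += w * (z / t)            # each exposure's radiance estimate
        den += w
    return num / den

# A point observed at 0.2 with a 1 s exposure and 0.4 with a 2 s exposure
# is consistent with a true radiance of 0.2 units.
print(round(radiance_from_exposures([0.2, 0.4], [1.0, 2.0]), 6))  # prints 0.2
```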




Copyright 2003, reves | Design by makebelieve.gr