Renato Martins

I am a post-doctoral researcher in the CHORALE group at INRIA, Sophia Antipolis/France. My research is carried out in the context of the project MOBIDEEP ("Technology-aided MOBIlity by semantic DEEP learning"). Previously, I was with the Department of Computer Science of the Universidade Federal de Minas Gerais/Brazil, working in the VeRLab group, with whom I maintain a close research collaboration.

My research interests lie in Computer Vision, Robot Vision and applied Machine Learning, more specifically in the topics of 3D vision, geometric deep learning, human motion analysis, video prediction, and unconventional image understanding and processing (RGB-D, omnidirectional).

CV  /  GitHub  /  Short Bio  /  Google Scholar  /  Email

Recent Papers

Learning to dance: A graph convolutional adversarial network to generate realistic dance motions from audio
Joao P. Ferreira, Thiago M. Coutinho, Thiago L. Gomes, Jose F. Neto, Rafael Azevedo, Renato Martins, Erickson R. Nascimento
Elsevier Computers and Graphics (CAG), 2020
arXiv / project webpage / bibtex / github code

In this project, we design a novel human motion generation method based on graph convolutional networks (GCN) to tackle the problem of automatic dance generation from audio information. Our method uses an adversarial learning scheme conditioned on the input music audio to create natural motions that preserve the key movements of different music styles.
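To give an intuition of the audio-conditioned graph convolution at the core of such a generator, here is a minimal NumPy sketch. All names, shapes, and the toy 5-joint skeleton are illustrative assumptions, not the paper's actual architecture: it shows only how an audio feature can be concatenated to per-joint features before a single GCN layer aggregates them over the skeleton graph.

```python
import numpy as np

def normalize_adjacency(A):
    """Symmetrically normalize adjacency with self-loops: D^-1/2 (A+I) D^-1/2."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def gcn_layer(X, A_norm, W):
    """One GCN layer: aggregate neighbor features, project, apply ReLU."""
    return np.maximum(A_norm @ X @ W, 0.0)

rng = np.random.default_rng(0)

# Toy skeleton: 5 joints in a chain (e.g., a spine), 3-D coordinates per joint.
A = np.zeros((5, 5))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    A[i, j] = A[j, i] = 1.0
A_norm = normalize_adjacency(A)

joints = rng.standard_normal((5, 3))  # joint positions
audio = rng.standard_normal(8)        # audio feature (e.g., a style embedding)

# Condition on audio by concatenating the audio feature to every joint.
X = np.hstack([joints, np.tile(audio, (5, 1))])  # shape (5, 3 + 8)
W = rng.standard_normal((11, 16)) * 0.1
H = gcn_layer(X, A_norm, W)
print(H.shape)  # (5, 16)
```

In a full adversarial scheme, a stack of such layers would generate a motion sequence and a discriminator would judge it jointly with the conditioning audio; this sketch covers only the conditioned graph convolution itself.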

Do As I Do: Transferring Human Motion and Appearance between Monocular Videos with Spatial and Temporal Constraints
Thiago L. Gomes, Renato Martins, Joao Ferreira, Erickson R. Nascimento
IEEE Winter Conference on Applications of Computer Vision (WACV), 2020
arXiv / project webpage / bibtex / github code

In this paper, we propose a unifying formulation for transferring appearance and retargeting human motion from monocular videos. Our method is composed of four main components and synthesizes new videos of people in a context different from the one where they were initially recorded. Unlike recent appearance transfer methods, our approach takes into account body shape, appearance, and motion constraints.
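One way to picture the body-shape constraint in motion retargeting is to keep the source pose's bone directions while imposing the target actor's bone lengths. The short NumPy sketch below illustrates this idea on a toy 2-D kinematic chain; the function, shapes, and lengths are hypothetical examples, not the paper's actual four-component pipeline.

```python
import numpy as np

def retarget_chain(src_joints, tgt_bone_lengths):
    """Rebuild a kinematic chain keeping source bone directions
    but using the target body's bone lengths."""
    out = [src_joints[0]]
    for k in range(1, len(src_joints)):
        bone = src_joints[k] - src_joints[k - 1]
        direction = bone / np.linalg.norm(bone)
        out.append(out[-1] + tgt_bone_lengths[k - 1] * direction)
    return np.array(out)

# Source pose: a 4-joint arm-like chain with unit bones;
# the target actor's bones are 20% longer.
src = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [2.0, 1.0]])
tgt_lengths = np.array([1.2, 1.2, 1.2])
tgt = retarget_chain(src, tgt_lengths)
print(tgt)
```

The retargeted chain reproduces the source pose's joint angles while every bone now measures the target length, which is the kind of shape-aware constraint a purely image-based appearance transfer would ignore.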


Template of this webpage.