Dr. Freddy Lecue

Dr. Freddy Lecue (PhD 2008, Habilitation 2015) is AI Research Director at J.P.Morgan in New York. He is also a research associate at Inria, in the WIMMICS team, Sophia Antipolis, France.

His research area is at the frontier of intelligent systems, i.e., systems that combine learning and reasoning. He has a strong interest in Explainable AI, i.e., AI systems, models and results that can be explained to human and business experts (cf. his recent research and industry presentations). In particular he is interested in: Cognitive Computing, Knowledge Representation and Reasoning, Machine (particularly Deep) Learning, Large-Scale Processing, Software Engineering, Service-Oriented Computing, Information Extraction and Integration, Recommendation Systems, and Cloud and Mobile Computing.

Short bio

Before joining J.P.Morgan, he was Chief Artificial Intelligence (AI) Scientist at CortAIx (Centre of Research & Technology in Artificial Intelligence eXpertise) @Thales in Montreal, Canada, from 2019 to 2022.

Before his leadership role at this new Thales R&T lab dedicated to AI, he was the AI R&D Lead at Accenture Technology Labs, Dublin, Ireland, from 2016 to 2019.

Before joining Accenture in January 2016, he was a research scientist and lead investigator in large-scale reasoning systems at IBM Research - Ireland from 2011 to 2016.

His research has received internal recognition: the Accenture Technology Star award in 2017, an IBM Research Division award in 2015, and an IBM Technical Accomplishment award in 2014. It has also received external recognition: best paper awards from ISWC (International Semantic Web Conference) and ESWC (Extended Semantic Web Conference) in 2014, as well as Semantic Web Challenge awards from ISWC in 2013 and 2012. He has moved AI research assets from prototype (managing 5-7 researchers/engineers) to production (managing 15-20 engineers).

Prior to joining IBM Research, he was a Research Fellow at The University of Manchester from 2008 to 2011 and a Research Engineer at Orange Labs (formerly France Telecom R&D) from 2005 to 2008.

He received his Research Habilitation (HdR - Accreditation to Supervise Research) from the University of Nice (France) in 2015, and a PhD from École des Mines de Saint-Etienne (France) in 2008. His PhD thesis was sponsored by Orange Labs and received an award from the French Association for Artificial Intelligence.

Public Presentations

3rd Conference on Automated Knowledge Base Construction. (Virtual). October 8, 2021

Explaining Deep Neural Networks: The Good, The Bad and The Ugly. (long)

CNRS / FIIA - Confiance AI: Responsibility, Robustness, Transparency. October 7, 2021

Deep Semantic Explanation: Explaining and Manipulating Neural Network Architectures with Knowledge Graphs.

Distinguished seminars on Explainable AI. (Virtual). July 13, 2021

XAI - Explanation in AI: Watch the Semantic Gap!

Stochastic Approaches for Certification of Machine Learning Algorithms Forum. (Virtual). June 11, 2021

On the Role of Domain Knowledge in Explainable Machine Learning.

Knowledge Graph Conference 2021. (Virtual). May 5, 2021

On the Role of Knowledge Graphs in Explainable Machine Learning.

ODSC Data Science Virtual Conference, Boston, USA. (Virtual). April 1, 2021

XAI - What is the best Explanation for your Machine Learning System? Let's review, code and test! | video | code

AAAI Workshop on Explainable AI, Vancouver, Canada. (Virtual). February 8, 2021

XAI Panel Discussion with IBM, Google and Thales. | video [6:29:00 - end]

AAAI Tutorial on Explainable AI, Vancouver, Canada. (Virtual). February 3, 2021

AAAI21 - On Explainable AI: From Theory to Motivation, Industrial Applications and Coding Practices. | video | code

Future Technologies Conference, Amsterdam, The Netherlands. (Virtual). September 2020

Explainable Machine Learning: Mind the Users and their Knowledge. | video

International Conference on Advances in Ambient Computing and Intelligence, Ottawa, Canada. (Virtual). September 13, 2020

Enhancing Language and Vision with Knowledge - The Case of Visual Question Answering. | video

AI Everything, Dubai, UAE. March 11, 2020 (Postponed to a later date)

Toolkits for Explaining your Machine Learning Models.

AI Everything, Dubai, UAE. March 10, 2020 (Postponed to a later date)

Thales Embedded Explainable AI System: Towards the Adoption of AI for Autonomous Train.

Centech-Accelerate Event on TRUE AI - Transparent, Understandable and Explainable AI, Montreal, Canada. March 4th, 2020

XAI - Explanation in AI: From Machine Learning to Knowledge Representation & Reasoning and Beyond.

AAAI Tutorial on Explainable AI, New York, USA. February 8, 2020

Explainable AI: Foundations, Industrial Applications, Practical Challenges, and Lessons Learned.

Seoul Copyright Forum, Seoul, South Korea. November 20th, 2019

On the Impact of Machine Learning on Copyright.

Alberta Machine Intelligence Institute, Edmonton, Canada. November 8th, 2019

XAI - Explanation in AI: From Machine Learning to Knowledge Representation & Reasoning and Beyond.

International Semantic Web Conference, Auckland, New Zealand. October 27th, 2019

On The Role of Knowledge Graphs in Explainable AI.

AI Accelerator Summit, Boston, USA. October 17th, 2019

Thales Embedded Explainable AI: Towards the Adoption of AI in Critical Systems.

Sungkyunkwan University, Seoul, South Korea. August 29th, 2019

Explainable AI - The Story so Far.

Inha University, Seoul, South Korea. August 26th, 2019

Explainable AI - The Story so Far.

World Summit AI Americas, Montreal, Canada. April 10th, 2019

How Thales Uses AI to Accelerate Adoption in Critical Systems.

AAAI Tutorial on Explainable AI, Hawaii, USA. January 27th, 2019

On Explainable AI: From Theory to Motivation, Applications and Limitations.

Recent Projects

Explainable AI (2011 - now) @IBM Research, @Accenture (Dublin), @Inria (Sophia Antipolis), @Thales (Montreal)

The Explainable AI project aims at understanding and explaining how decisions are reached by intelligent systems (e.g., mathematical models, machine learning systems). The project focuses not only on systems that give the right (optimal, cheapest, fastest) answer, but on systems that can explain why and how it is the right answer. Our work aims at explaining decisions to business owners and addresses the issues raised by the General Data Protection Regulation (GDPR) (Regulation (EU) 2016/679). We target the general audience and business owners (any third party who needs to understand machine learning decisions, e.g., models, predictions, recommendations), i.e., simple answers to complex questions. To this end we combine Artificial Intelligence techniques from statistics and logic-based inference models, i.e., learning and reasoning. Real-world applications have focused on explaining (i) financial risks (fraud, travel expenses, project delivery), (ii) flight delays and cancellations in airline companies (work at IBM and Accenture) and (iii) road traffic delays in Dublin (Ireland), Bologna (Italy) and Rio (Brazil) (work at IBM Research). All of these applications have been successfully validated, if not deployed to production, at some Fortune 500 companies. A minimal illustrative code sketch follows the publication list below.

Randy Cogill, Simone Tallevi-Diotallevi, Jer Hayes, Marco Luca Sbodio, Pierpaolo Tommasi (IBM Research)
Alejandro Cabello Jiménez, Eugene Eichelberger, Gemma Gallagher, Christophe Gueret, Peter McCanney, Nicholas McCarthy, Jadran Sirotkovic, Sara van de Moosdijk, Jer Hayes, Jiewen Wu, Irene Zihui (Accenture Labs)
Huajun Chen, Jiaoyan Chen, Jeff Z. Pan (External Collaborators)
Knowledge-based Transfer Learning Explanation. KR 2018: ??-??.
Learning from Ontology Streams with Semantic Concept Drift. IJCAI 2017: 957-963.
Explaining and Predicting Abnormal Expenses at Large Scale using Knowledge Graph based Reasoning. J. Web Sem. 2017.
Personalizing Actions in Context for Risk Management using Semantic Web Technologies. ISWC 2017.
Diagnosing Changes in An Ontology Stream: A DL Reasoning Approach. AAAI 2012.
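
A minimal illustrative sketch of the per-decision explanation idea, not the production system: it trains a small classifier on synthetic expense-like records and attributes a single prediction by measuring how much the score drops when each feature is replaced by a baseline value. The feature names, data, and explain helper are all hypothetical.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["amount", "distance_from_policy", "weekend", "past_flags"]  # hypothetical features
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)  # synthetic labels

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def explain(model, x, baseline, names):
    # Attribute one prediction: drop in score when a feature is swapped for its baseline value.
    base_score = model.predict_proba(x.reshape(1, -1))[0, 1]
    contributions = {}
    for i, name in enumerate(names):
        masked = x.copy()
        masked[i] = baseline[i]
        contributions[name] = base_score - model.predict_proba(masked.reshape(1, -1))[0, 1]
    return base_score, contributions

score, contribs = explain(model, X[0], X.mean(axis=0), feature_names)
print(f"abnormal-expense score: {score:.2f}")
for name, delta in sorted(contribs.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {delta:+.3f}")

In the spirit of the project, a logic-based layer would then map the top-ranked features onto domain vocabulary (e.g., expense policies captured in a knowledge graph) so that the answer reads as a business-level explanation rather than raw feature weights.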

Predictive Reasoning (2011 - now) @IBM Research, @Accenture (Dublin), @Inria (Sophia Antipolis)

The Predictive Reasoning project ingests, combines, and correlates a large volume of heterogeneous real-time data (e.g., traffic data; city data such as events, road works, and weather-related data) through a knowledge-graph-based model. Data mining, machine learning, and knowledge representation and reasoning techniques are combined to obtain scalable and accurate predictions. The system outperforms state-of-the-art predictive analytics technologies by making sense of context, e.g., weather, city events, incidents and road works. One direct application has been traffic delay prediction in Dublin (Ireland), Bologna (Italy) and Rio (Brazil). A minimal illustrative sketch follows the reference below.

Jeff Z. Pan, Jiewen Wu
Predicting Knowledge in an Ontology Stream. IJCAI 2013.
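
A minimal illustrative sketch of the context-aware prediction idea, not the deployed system: it joins synthetic historical traffic data with contextual signals (weather, city events, road works) and fits a regressor to predict delay. The column names and data are hypothetical, and a plain feature table stands in for the knowledge-graph-based model.

import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 1000
traffic = pd.DataFrame({
    "hour": rng.integers(0, 24, n),
    "baseline_delay_min": rng.gamma(2.0, 2.0, n),   # historical delay on the link
})
context = pd.DataFrame({
    "rain_mm": rng.exponential(1.0, n),
    "city_event": rng.integers(0, 2, n),            # 1 if a concert or match is on
    "roadworks": rng.integers(0, 2, n),
})
X = pd.concat([traffic, context], axis=1)
y = (traffic["baseline_delay_min"]
     + 3.0 * context["rain_mm"]
     + 5.0 * context["city_event"]
     + rng.normal(0.0, 1.0, n))                     # synthetic ground-truth delay

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print("R^2 with context features:", round(model.score(X_test, y_test), 3))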

Cognitive Driving (2014 - 2015) @IBM Research

The Cognitive Driving project provides cognitive mobility, enabling a new generation of vehicles to recommend (and justify) personalized routes based on an analysis and interpretation of (i) open data from real-time traffic and various IoT devices (e.g., weather stations, car sensors), (ii) social data from tweet feeds, (iii) driver-related data such as physiological information (e.g., anxiety) from wearables, and (iv) calendar data. The application then suggests personalized routes that fit the driver's ability while ensuring safer and more secure traffic for other vehicles in the city.

Michael Barry, Randy Cogill, Rodrigo Ordóñez, Joe Naoum-Sawaya, Mark Purcell, Martin Stephenson

STAR-CITY (2012 - 2015) @IBM Research

STAR-CITY (Semantic Traffic Analytics and Reasoning for CITY) is a system supporting semantic traffic analytics and reasoning for cities. It fuses (human- and machine-based) sensor data streams of varying formats, velocities and volumes. The system provides insight into historical and real-time traffic conditions, supporting efficient urban planning. STAR-CITY demonstrates how the severity of road traffic congestion can be smoothly analyzed, diagnosed, explored and predicted using knowledge graph technologies. The system has been trialled in Dublin (Ireland), Bologna (Italy), Miami (USA) and Rio (Brazil) across various engagements. A minimal illustrative sketch follows this project's references below.

Simone Tallevi-Diotallevi, Jer Hayes, Robert Tucker, Veli Bicer, Marco Luca Sbodio, Pierpaolo Tommasi
Best In-Use paper award at ISWC (International Semantic Web Conference) in 2014
Best In-Use paper award at ESWC (Extended Semantic Web Conference) in 2014
Semantic Web challenge awards at ISWC (International Semantic Web Conference) in 2013
Smart traffic analytics in the semantic web with STAR-CITY: Scenarios, system and lessons learned in Dublin City. J. Web Sem. 27: 26-33 (2014)
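
A minimal illustrative sketch of the semantic fusion idea, not the STAR-CITY codebase: heterogeneous observations are lifted into a small RDF graph (here with the rdflib library) and queried with SPARQL to surface congested roads, whatever the source of each observation. The ex: vocabulary and all data are hypothetical.

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/traffic#")  # hypothetical vocabulary
g = Graph()
g.bind("ex", EX)

# Lift two heterogeneous observations (a loop sensor and a human report) into one graph.
for i, (road, level, source) in enumerate([("N11", 0.8, "loop_sensor"), ("Quays", 0.3, "bus_report")]):
    obs = EX[f"obs{i}"]
    g.add((obs, RDF.type, EX.TrafficObservation))
    g.add((obs, EX.road, Literal(road)))
    g.add((obs, EX.congestionLevel, Literal(level, datatype=XSD.float)))
    g.add((obs, EX.source, Literal(source)))

# A single query over the fused graph, independent of where each observation came from.
results = g.query("""
    PREFIX ex: <http://example.org/traffic#>
    SELECT ?road ?level WHERE {
        ?o a ex:TrafficObservation ;
           ex:road ?road ;
           ex:congestionLevel ?level .
        FILTER (?level > 0.5)
    }""")
for road, level in results:
    print(f"congested: {road} ({level})")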

Publications