Dr Freddy Lecue (PhD 2008, Habilitation 2015) is AI Research Director at J.P.Morgan in New York. He is also a research associate at Inria, in the WIMMICS team, Sophia Antipolis, France.
His research area is at the frontier of intelligent systems, i.e., systems that combine learning and reasoning. He has a strong interest in Explainable AI, i.e., AI systems, models and results that can be explained to human and business experts (cf. his recent research and industry presentations). In particular he is interested in: Cognitive Computing, Knowledge Representation and Reasoning, Machine (particularly Deep) Learning, Large-Scale Processing, Software Engineering, Service-Oriented Computing, Information Extraction and Integration, Recommender Systems, and Cloud and Mobile Computing.
Before joining J.P.Morgan he was Chief Artificial Intelligence (AI) Scientist at CortAIx (Centre of Research & Technology in Artificial Intelligence eXpertise) at Thales in Montreal, Canada, from 2019 to 2022.
Before his leadership role at Thales's new R&T lab dedicated to AI, he was the AI R&D Lead at Accenture Technology Labs, Dublin, Ireland, from 2016 to 2019.
Before joining Accenture in January 2016, he was a research scientist and lead investigator in large-scale reasoning systems at IBM Research - Ireland from 2011 to 2016.
His research has received internal recognition: the Accenture Technology Star award in 2017, the IBM Research Division award in 2015, and the IBM Technical Accomplishment award in 2014. It has also received external recognition: best paper awards from ISWC (International Semantic Web Conference) and ESWC (Extended Semantic Web Conference) in 2014, as well as Semantic Web Challenge awards from ISWC in 2013 and 2012. He has moved AI research assets from prototype (managing 5-7 researchers/engineers) to production (managing 15-20 engineers).
Prior to joining IBM Research he was Research Fellow at The University of Manchester from 2008 to 2011 and Research Engineer at Orange Labs (formerly France Telecom R&D) from 2005 to 2008.
He received his Research Habilitation (HdR, Accreditation to Supervise Research) from the University of Nice (France) in 2015, and a PhD from École des Mines de Saint-Étienne (France) in 2008. His PhD thesis was sponsored by Orange Labs and received an award from the French Association for Artificial Intelligence.
Explaining Deep Neural Networks: The Good, The Bad and The Ugly. (short)
Explaining Deep Neural Networks: The Good, the Bad and the Ugly, ... and Where Every Little Knowledge Helps. | video
AAAI22 - On Explainable AI: From Theory to Motivation, Industrial Applications, XAI Coding & Engineering Practices. | video | code
Explaining Deep Neural Networks: The Good, The Bad and The Ugly. (long)
Explainable AI: A Focus on Narrative, Machine Learning and Knowledge Graph-based Approaches.
On the Role of Domain Knowledge in Explainable Machine Learning.
On the Role of Knowledge Graphs in Explainable Machine Learning.
XAI - What is the best Explanation for your Machine Learning System? Let's review, code and test! | video | code
XAI Panel Discussion with IBM, Google and Thales. | video [6:29:00 - end]
AAAI21 - On Explainable AI: From Theory to Motivation, Industrial Applications and Coding Practices. | video | code
Explainable Machine Learning: Mind the Users and their Knowledge. | video
Enhancing Language and Vision with Knowledge -The Case of Visual Question Answering. | video
Toolkits for Explaining your Machine Learning Models.
Thales Embedded Explainable AI System: Towards the Adoption of AI for Autonomous Train.
XAI - Explanation in AI: From Machine Learning to Knowledge Representation & Reasoning and Beyond.
Explainable AI: Foundations, Industrial Applications, Practical Challenges, and Lessons Learned.
XAI - Explanation in AI: From Machine Learning to Knowledge Representation & Reasoning and Beyond.
Thales Embedded Explainable AI: Towards the Adoption of AI in Critical Systems.
On the Role of Knowledge Graphs for the Adoption of Machine Learning Systems in Industry.
How Thales Uses AI to Accelerate Adoption in Critical Systems.
On Explainable AI: From Theory to Motivation, Applications and Limitations.
The Explainable AI project aims at understanding and explaining how decisions are reached by intelligent systems (e.g., mathematical models, machine learning systems). The project focuses not only on systems that give the right (optimal, cheapest, fastest) answer, but also on systems that can explain why and how it is the right answer. Our work aims at explaining decisions to business owners and addresses the issues raised by the General Data Protection Regulation (GDPR, Regulation (EU) 2016/679). We target the general audience and business owners (any third party who needs to understand machine learning decisions, e.g., models, predictions, recommendations), i.e., simple answers to complex questions. To this end we combine AI techniques from statistics and logic-based inference models, i.e., learning and reasoning. Real-world applications have focused on the explanation of (i) financial risks (fraud, travel expenses, project delivery), (ii) flight delays and cancellations in airline companies (work at IBM and Accenture) and (iii) road traffic delays in Dublin (Ireland), Bologna (Italy) and Rio (Brazil) (work at IBM Research). All of the above applications have been successfully validated, if not deployed to production, in several Fortune 500 companies.
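To give a flavour of what "simple answers to complex questions" can mean, here is a minimal, purely illustrative sketch of additive feature attribution for a linear scorer. The model, feature names and weights are hypothetical and are not the project's actual techniques, which combine statistical and logic-based inference.

```python
# Illustrative sketch only: attribute a linear model's score to its input
# features relative to a baseline. All feature names and weights below are
# hypothetical, chosen to echo the flight-delay use case.

def contributions(weights, x, baseline):
    # Additive attribution: each feature's share of (score(x) - score(baseline)).
    return {f: w * (x[f] - baseline[f]) for f, w in weights.items()}

weights  = {"delay_history": 0.8, "weather_risk": 0.5, "traffic_load": 0.3}
x        = {"delay_history": 2.0, "weather_risk": 1.0, "traffic_load": 4.0}
baseline = {"delay_history": 0.0, "weather_risk": 0.0, "traffic_load": 1.0}

attr = contributions(weights, x, baseline)
top = max(attr, key=attr.get)  # the single feature that most drove the score
```

A business-facing explanation can then be phrased around `top` ("the prediction is high mainly because of the delay history"), while the full attribution dictionary sums exactly to the score difference, so nothing is hidden from an auditor.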
Randy Cogill, Simone Tallevi-Diotallevi, Jer Hayes, Marco Luca Sbodio, Pierpaolo Tommasi (IBM Research)

The Predictive Reasoning project ingests, combines, and correlates large volumes of heterogeneous real-time data (e.g., traffic data and city data such as events, road works, and weather-related data) through a knowledge graph-based model. Data mining, machine learning, and knowledge representation and reasoning techniques are combined to obtain scalable and accurate predictions. The system outperforms state-of-the-art predictive analytics technologies by making sense of context, e.g., weather, city events, incidents and road works. One direct application has been traffic delay prediction in Dublin (Ireland), Bologna (Italy) and Rio (Brazil).
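The core idea of "making sense of context" can be sketched as follows. This is not the project's implementation (which uses knowledge graphs and learned models); the context labels and delay corrections below are hypothetical, purely to show how contextual signals can adjust a baseline prediction.

```python
# Illustrative sketch only: context-aware traffic delay prediction.
# The contexts and their delay corrections (in minutes) are hypothetical.

CONTEXT_RULES = {
    "heavy_rain": 5.0,   # weather-related slowdown
    "city_event": 8.0,   # e.g., a stadium event increasing load
    "road_works": 6.0,   # lane closures on the route
}

def predict_delay(historical_avg, active_context):
    # Start from the historical average delay and add a correction for each
    # contextual signal currently active on the route.
    return historical_avg + sum(CONTEXT_RULES.get(c, 0.0) for c in active_context)

delay = predict_delay(12.0, ["heavy_rain", "road_works"])
```

Because each correction is tied to a named context, the same structure that improves the prediction also yields an explanation ("12 minutes historically, plus rain and road works").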
Jeff Z. Pan, Jiewen Wu

The Cognitive Driving project provides cognitive mobility, enabling a new generation of vehicles to recommend (and justify) personalized routes based on an analysis and interpretation of (i) open data from real-time traffic and various IoT devices (e.g., weather stations, car sensors), (ii) social data from tweet feeds, (iii) driver-related data such as body information (e.g., anxiety) from wearables, and (iv) calendar data. The application then suggests personalized routes that fit the driver's ability while ensuring safe and secure traffic for other vehicles in the city.
Michael Barry, Randy Cogill, Rodrigo Ordóñez, Joe Naoum-Sawaya, Mark Purcell, Martin Stephenson

STAR-CITY (Semantic Traffic Analytics and Reasoning for CITY) is a system supporting semantic traffic analytics and reasoning for cities. It fuses (human- and machine-based) sensor data streams of varying formats, velocities and volumes. The system provides insight into historical and real-time traffic conditions, supporting efficient urban planning. STAR-CITY demonstrates how the severity of road traffic congestion can be smoothly analyzed, diagnosed, explored and predicted using knowledge graph technologies. The system has been trialled in Dublin (Ireland), Bologna (Italy), Miami (USA) and Rio (Brazil) across various engagements.
Simone Tallevi-Diotallevi, Jer Hayes, Robert Tucker, Veli Bicer, Marco Luca Sbodio, Pierpaolo Tommasi