
Data Dialogue: Trustable AI Systems: Interpretability, transparency and criticism

Thursday, February 22, 2018
11:45 am - 1:00 pm
Kaynar Kabul and Mustafa Kabul
Data Dialogue

Machine learning models have been used successfully in areas such as object recognition, speech perception, language modeling, and automated decision optimization using reinforcement learning. However, increasingly complicated nonlinear models and heavily engineered features limit transparency, slowing the adoption of machine learning in application areas where critical decisions are made. Data scientists who understand how complex models work, what their limitations are, and why they make individual predictions can use predictive models more effectively. In this talk, we will focus on machine learning and visualization techniques that can make complex artificial intelligence systems interpretable, transparent, and trustable. We will show how these techniques apply across the AI life cycle, specifically in the pre-modeling, modeling, and post-modeling stages.
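As a hedged illustration of the post-modeling stage mentioned above (not material from the talk itself), one widely used technique for explaining a trained "black-box" model is permutation feature importance: shuffle each feature in held-out data and measure how much the model's accuracy degrades. A minimal sketch with scikit-learn, using a bundled dataset for self-containment:

```python
# Permutation feature importance: a common post-modeling interpretability
# technique. This is an illustrative sketch, not code from the talk.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# Train an opaque nonlinear model.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature 10 times on the test set and record the mean
# drop in accuracy; larger drops mean the model relies on that feature more.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

# Report the five most influential features.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.3f}")
```

Features whose shuffling barely changes accuracy contribute little to the model's decisions, which makes this a simple, model-agnostic check on what a complex model has actually learned.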

Contact: Ariel Dawn