Dr Luca Longo, a lecturer and researcher in artificial intelligence, presented at TEDxInnopolis 2019 in Russia on December 1st, where the theme was 'Too Big Data'.
During his talk, 'The Next Leap into the Future: Explaining the Unexplainable', Dr Longo said that although Artificial Intelligence (AI) plays an increasing part in our daily lives — from conversational applications to image and facial recognition systems, predictive analytics, autonomous machines and personalised systems — trusting these AI-based systems and understanding their decision making is a growing problem.
Dr Longo asked the audience to imagine AI applications in healthcare: doctors must not only understand the decision-making process of a machine when diagnosing a patient, but must also communicate to the patient why such decisions have been taken. Similarly, automatic decisions made by driverless cars, or even by drones deployed in war, must be transparent and explainable because of the risks involved.
He also explained that, as humans, we must fully understand how machine decisions are made; however, the lack of interpretability and explainability hampers our ability to trust them fully. Explainable Artificial Intelligence (XAI) is the new paradigm that has started revolutionising the way we see and interact with machines, because it will finally empower human intelligence with artificial intelligence.
More information about TEDxInnopolis 2019 is available here.