
Keynote Lectures

Keynote Lecture
Giancarlo Guizzardi, University of Twente, Netherlands

Component-level Explanation and Validation of AI Models
Wojciech Samek, Chair for Machine Learning and Communications, TU Berlin / Head of AI Department, Fraunhofer HHI, Germany

Deep Learning Deep Feelings: Large Models, Larger Emotions
Bjoern Schuller, University of Augsburg / Imperial College London, Germany

 

Keynote Lecture

Giancarlo Guizzardi
University of Twente
Netherlands
 

Brief Bio
Giancarlo Guizzardi is a Full Professor of Software Science and Evolution as well as Chair and Department Head of Semantics, Cybersecurity & Services (SCS) at the University of Twente, The Netherlands. He is also an Affiliated/Guest Professor at the Department of Computer and Systems Sciences (DSV) at Stockholm University, in Sweden. He has been active for nearly three decades in the areas of Formal and Applied Ontology, Conceptual Modelling, Enterprise Computing and Information Systems Engineering, working with a multi-disciplinary approach in Computer Science that aggregates results from Philosophy, Cognitive Science, Logics and Linguistics. He is the main contributor to the Unified Foundational Ontology (UFO) and to the OntoUML modeling language. Over the years, he has delivered keynote speeches in several key international conferences in these fields (e.g., ER, CAiSE, BPM, IEEE ICSC). He is currently an associate editor of a number of journals including Applied Ontology and Data & Knowledge Engineering, a co-editor of the Lecture Notes in Business Information Processing series, and a member of several international journal editorial boards. He is also a member of the Steering Committees of ER, EDOC, and IEEE CBI, and of the Advisory Board of the International Association for Ontology and its Applications (IAOA). Finally, he has recently been inducted as an ER fellow.


Abstract
Available soon.



 

 

Component-level Explanation and Validation of AI Models

Wojciech Samek
Chair for Machine Learning and Communications, TU Berlin / Head of AI Department, Fraunhofer HHI
Germany
 

Brief Bio
Wojciech Samek is a Professor in the EECS Department at the Technical University of Berlin and the Head of the AI Department at the Fraunhofer Heinrich Hertz Institute (HHI) in Berlin, Germany. He earned an M.Sc. from Humboldt University of Berlin in 2010 and a Ph.D. (with honors) from the Technical University of Berlin in 2014. Following his doctorate, he founded the "Machine Learning" Group at Fraunhofer HHI, which became an independent department in 2021. He is a Fellow of BIFOLD – the Berlin Institute for the Foundations of Learning and Data – and of the ELLIS Unit Berlin. He also serves as a member of Germany’s Platform for AI and sits on the boards of AGH University’s AI Center, the Helmholtz Einstein School in Data Science (HEIBRiDS), and the DAAD Konrad Zuse School ELIZA. Dr. Samek's research in explainable AI (XAI) spans method development, theory, and applications, with pioneering contributions such as Layer-wise Relevance Propagation (LRP), advancements in concept-level explainability, the evaluation of explanations, and XAI-driven model and data improvement. He has served as a senior editor for IEEE TNNLS, held associate editor roles for various other journals, and acted as an area chair at NeurIPS, ICML, and NAACL. He has received several best paper awards, including from Pattern Recognition (2020), Digital Signal Processing (2022), and the IEEE Signal Processing Society (2025). Overall, he has co-authored more than 250 peer-reviewed journal and conference papers, with several recognized as ESI Hot Papers (top 0.1%) or Highly Cited Papers (top 1%).


Abstract
Human-designed systems are constructed step by step, with each component serving a clear and well-defined purpose. For instance, the functions of an airplane’s wings and wheels are explicitly understood and independently verifiable. In contrast, modern AI systems are developed holistically through optimization, leaving their internal processes opaque and making verification and trust more difficult. This talk explores how explanation methods can uncover the inner workings of AI, revealing what knowledge models encode, how they use it to make predictions, and where this knowledge originates in the training data. It presents SemanticLens, a novel approach that maps hidden neural network knowledge into the semantically rich space of foundation models like CLIP. This mapping enables effective model debugging, comparison, validation, and alignment with reasoning expectations. The talk concludes by demonstrating how SemanticLens can help in identifying flaws in medical AI models, enhancing robustness and safety, and ultimately bridging the “trust gap” between AI systems and traditional engineering.



 

 

Deep Learning Deep Feelings: Large Models, Larger Emotions

Bjoern Schuller
University of Augsburg / Imperial College London
Germany
www.schuller.one
 

Brief Bio
Björn W. Schuller is a distinguished academic and researcher with extensive expertise in Machine Intelligence and Signal Processing. He earned his diploma, doctoral degree, habilitation, and Adjunct Teaching Professor title in EE/IT from TUM in Munich, where he currently holds a Full Professorship as Chair of Health Informatics. Additionally, he is a Full Professor of Artificial Intelligence and Head of GLAM at Imperial College London. Schuller co-founded audEERING, an Audio Intelligence company, and has numerous affiliations, including roles at the Munich Data Science Institute and the Munich Center for Machine Learning. He has held multiple prestigious professorships globally and served as an independent research leader at the Alan Turing Institute. He is a Fellow of several prominent societies, including the ACM, IEEE, BCS, ELLIS, ISCA, and AAAC. With over 1,500 publications, more than 70,000 citations, and an h-index exceeding 110, he is highly influential in the field of Computer Science. He has held editorial positions, including Field Chief Editor of Frontiers in Digital Health and Editor in Chief of AI Open and of the IEEE Transactions on Affective Computing. Schuller has received over 50 awards, including being named one of 40 extraordinary scientists under 40 by the WEF in 2015. Currently, he is an ACM Distinguished Speaker and an IEEE Signal Processing Society Distinguished Lecturer. His work has been widely recognized in the media, with over 300 public press appearances and contributions to various international outlets including Newsweek, Scientific American, and The Times.


Abstract
As AI systems permeate every corner of modern life, a crucial frontier emerges: enabling machines not just to learn, but to feel—at least enough to understand our states and communicate with us in an empathic manner. This keynote takes you on a journey from the current “Affective Intelligence”, largely empowered by deep learning, to the rising wave of large model exploitation in “Affective Intelligence 2.0”. Blending advances in affective computing with the rapid progress in multimodal foundation models, emotionally aware AI is about to reshape human-machine interaction, digital health, and multimedia, and will be a cornerstone of the AGI to come. Beyond algorithms, this talk explores the power—and responsibility—of creating machines that respond with empathy, adapt to individual emotional states, and navigate the complexities of real-world human affective experience. It will further highlight the potential of affective computing in “Friendly AI” and discuss the potential of emotion as an inspiration in deep learning. With a critical eye on ethical design and societal impact, the talk invites you to imagine an AI future that goes beyond data—into emotion, connection, and care.


