CogSIMA 2019 will feature the following tutorials, which will deepen attendees' knowledge on upcoming techniques supporting situation management, such as Explainable Artificial Intelligence (XAI) and social data analysis:

Tutorial 1: Conversational Explanations – Explainable AI through human-machine conversation

Instructor: Dave Braines, IBM Research UK
Time: Monday 8 April 2019: TBD

Abstract: Explainable AI has significant focus within both the research community and the popular press. The tantalizing potential of artificial intelligence solutions may be undermined if the machine processes which produce these results are black boxes, unable to offer any insight into or explanation of the results, the processing, or the training data on which they are based. The ability to provide explanations can help to build user confidence, rapidly indicate the need for correction or retraining, and provide initial steps towards the mitigation of issues such as adversarial attacks or allegations of bias. In this tutorial we will explore the space of Explainable AI, with a particular focus on the role of the human users within the human-machine hybrid team, and on whether a conversational interaction style is useful for obtaining such explanations quickly and easily. The tutorial is broken down into three broad areas, which are dealt with sequentially:

  1. Explainable AI
    What is it? Why do we need it? Where is the state of the art?
    Starting with the philosophical definition of explanations and the role they serve in human relationships, this will cover the core topic of explainable AI, looking into different techniques for different kinds of AI systems, different fundamental classifications of explanations (such as transparent, post-hoc and explanation by example) and the different roles that these may play with human users in a human-machine hybrid system. Examples of adversarial attacks and the role of explanations in mitigating them will be given, along with the need to defend against bias (whether algorithmic or arising from training data issues).
  2. Human roles in explanations
    Building on the work reported in "Interpretable to whom?" [Tomsett, R., Braines, D., Harborne, D., Preece, A., & Chakraborty, S. (2018). Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems, ICML Workshop on Human Interpretability in Machine Learning (WHI 2018), Jul 2018] this section examines the different roles that a human (or machine) user within the system may be fulfilling, and why the role has an important part to play in determining what kind of explanation may be required. In almost all current AI explanation-related research the role of the user is not a primary consideration, but we assert that the ability to create a meaningful explanation must take this into account. The goals of the users will vary depending on their role, and the explanations that will serve them in achieving these goals will also vary.
  3. Conversational explanations
    Conversational machine agents (such as Alexa, Siri and Google Assistant) are becoming increasingly commonplace, but the typical interactions that these agents fulfil are fairly simple. Conversational interactions can be especially useful in complex or evolving situations where designing a rich and complete user interface in advance may not be possible. In our ongoing research we are investigating the role of conversational interaction in AI explanations, and we will report the findings so far in this section. There will also be a live interactive demo for optional use by the audience during this session.

Intended Audience: The intended audience for this tutorial is researchers in any field where complex algorithms or processes can be used to inform human decision-making. Participants will be taken through a general overview of explanation in both human and machine contexts, and of how the role of the agent has a significant impact on what kind of explanation might be useful. The tutorial will then move on to ongoing research into the role of conversation as a tool for enabling explanations in human-machine hybrid systems, along with an interactive demonstration of an early version of this capability.

Dave Braines

Instructor's biography: Dave Braines is the Chief Technology Officer for Emerging Technology, IBM Research UK, and is a Fellow of the British Computer Society. As a member of the IBM Research division he is an active researcher in the field of Artificial Intelligence, currently focused on Machine Learning, Deep Learning and Network Motif analysis. He has published over 100 conference and journal papers and is currently the industry technical leader for a 10-year research consortium of 17 academic, industry and government organisations from the UK and US. Dave is passionate about human-machine cognitive interfaces and has developed a number of techniques to support deep interactions between human users and machine agents.

Since 2017 Dave has been pursuing a part-time PhD in Artificial Intelligence at Cardiff University, and in his spare time he likes to get outdoors for camping, walking, kayaking, cycling or anything else that gets him away from desks and screens!


Tutorial 2: Social Data Analysis for Intelligence

Instructor: Dr. Valentina Dragos, ONERA – The French Aerospace Lab, France
Time: Monday 8 April 2019: TBD

Abstract: This tutorial investigates several issues of social data analysis for intelligence. Social data is understood as information collected from social media, including the various networks and platforms that show not only what online users publish but also how they share, view or engage with content or other users. The tutorial does not prescribe how to make sense of social media data; rather, it raises questions to be addressed before exploring social media as a resource for intelligence analysis. The tutorial will be organized into seven chapters.

The first chapter introduces intelligence analysis as the application of cognitive methods to weigh data and test hypotheses within a specific socio-cultural context.
The second chapter explores some of the unique features of cyberspace that shape how people behave in this new social realm. The chapter also analyses how the virtual domain of cyberspace is unlike the environmental domains of air, land, maritime and space and how it challenges traditional understanding of concepts such as temporality, conflict, information, border, community, identity or governance.
The next chapter investigates the notions of trust and reliability for artefacts in cyberspace, ranging from information items to sources to more sophisticated structures such as virtual communities. The chapter shows that trust may be diminished in spite of the tremendous volume of information available, and that cyberspace is prone to phenomena that harm data completeness and credibility. Several such phenomena will be considered: opacity and information filtering (echo chambers, filter bubbles), disinformation campaigns (fake news, propaganda, hoaxes, site spoofing), misleading intentions (data leaks), and biased interactions (social bots, smokescreening).
Chapter 4 investigates the nature of social data content, asking whether social data conveys factual and useful pieces of information or rather subjective content in the form of personal opinions, beliefs and impressions. The discussion is based on two illustrations of social data analysis. The first tackles fake news propagation in the aftermath of terrorist attacks; the second addresses the subjective assessment of concepts conveying extreme ideologies online.
Chapter 5 identifies pitfalls in exploring cyberspace, both in isolation and in view of its interconnectedness with the real world. First, cyberspace comes with its own riddles and pairs of opposing concepts with blurred frontiers: free speech and action vs. online hate and cyberbullying; online privacy and personal data vs. fake profiles and identities; transparency vs. anonymity by design. Second, additional pitfalls arise when social data is analyzed in light of real-life events. Specific phenomena induced by white data, and real-life biases induced by silent communities, will be discussed.
Chapter 6 addresses the question of how gathering, processing and analyzing social data impacts the intelligence analysts, given the characteristics of those data.
The last chapter concludes the tutorial by surveying the state of the art in tools and techniques for cyberspace exploration, along with several ongoing research projects, NATO research tracks and initiatives addressing the many facets of social data analysis. While showing that, from a practical standpoint, solutions are still at the level of after-the-fact forensics, the chapter will highlight several initiatives adopted by various bodies to counter illegal content and online hate, and ultimately to make the Internet a safer place.

Intended Audience: This tutorial is intended for students, researchers and practitioners who are interested in cyberspace exploration and social data analysis. Through illustrations based on realistic use cases, participants will learn about the major challenges of gathering, analyzing and interpreting data from social media, and will discover major initiatives undertaken to address some of those challenges and to make cyberspace a more resilient environment.

Valentina Dragos

Instructor's biography: Dr. Valentina Dragos is a research scientist in the Department of Information Modeling and Systems at ONERA, The French Aerospace Lab in Palaiseau, France. Valentina received her Master's and PhD degrees in Computer Science from Paris V University; her research interests include artificial intelligence, with an emphasis on natural language processing, semantic technologies and automated reasoning. Since joining ONERA in 2010, Valentina has contributed to several academic and industrial security-oriented projects, addressing topics such as semantic interoperability for command and control systems, heterogeneous information fusion, exploration of open sources and social data, and integration of symbolic data (HUMINT, OSINT) for situation assessment.

Tutorial 3: TBD

Instructor: TBD
Time: Monday 8 April 2019: TBD

Tutorial 4: TBD

Instructor: TBD
Time: Monday 8 April 2019: TBD


Jan 7: We are pleased to announce that Dr. Benjamin A. Knott (Office of Naval Research Global, Tokyo) will present a keynote address.
Dec 23: [Final Deadline Extension] Due to multiple requests, the paper submission deadline is extended to Jan. 6, 2019
Nov 2: We would like to thank our patrons Lockheed Martin and Charles River Analytics for their continued support!
Oct 22: First tutorial speaker announcements: Dave Braines (CTO Emerging Technology, IBM Research UK) will be giving a tutorial on XAI. Dr. Valentina Dragos (ONERA – The French Aerospace Lab) will instruct a tutorial on social data analysis for intelligence.

Sponsors and Patrons

IEEE
SMC
Lockheed Martin
Charles River Analytics
