Monday, April 8
Monday, April 8 8:00 - 9:00
Breakfast
Monday, April 8 9:00 - 12:00
T1: Tutorial Session 1: Conversational Explanations - Explainable AI through Human-Machine Conversation
Abstract: Explainable AI has received significant attention from both the research community and the popular press. The tantalizing potential of artificial intelligence solutions may be undermined if the machine processes which produce these results are black boxes that are unable to offer any insight or explanation into the results, the processing, or the training data on which they are based. The ability to provide explanations can help to build user confidence, rapidly indicate the need for correction or retraining, and provide initial steps towards the mitigation of issues such as adversarial attacks or allegations of bias. In this tutorial we will explore the space of Explainable AI, with a particular focus on the role of the human users within the human-machine hybrid team, and on whether a conversational interaction style is useful for obtaining such explanations quickly and easily. The tutorial is broken down into three broad areas which are dealt with sequentially:
Explainable AI: What is it? Why do we need it? Where is the state of the art? Starting with the philosophical definition of explanations and the role they serve in human relationships, this part covers the core topic of explainable AI, looking into different techniques for different kinds of AI systems, different fundamental classifications of explanations (such as transparent, post-hoc, and explanation by example), and the different roles that these may play with human users in a human-machine hybrid system. Examples of adversarial attacks and the role of explanations in mitigating them will be given, along with the need to defend against bias (whether algorithmic or arising from training data issues).

Human roles in explanations: Building on the work reported in "Interpretable to Whom?" [Tomsett, R., Braines, D., Harborne, D., Preece, A., & Chakraborty, S. (2018). Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems, ICML Workshop on Human Interpretability in Machine Learning (WHI 2018), Jul 2018], this part examines the different roles that a human (or machine) user within the system may be fulfilling, and why the role has an important part to play in determining what kind of explanation may be required. In almost all current research on AI explanation, the role of the user is not a primary consideration, but we assert that the ability to create a meaningful explanation must take it into account. The goals of the users will vary depending on their role, and the explanations that serve them in achieving those goals will vary accordingly.

Conversational explanations: Conversational machine agents (such as Alexa, Siri and Google) are becoming increasingly commonplace, but the typical interactions that these agents fulfil are fairly simple. Conversational interactions can be especially useful in complex or evolving situations where designing a rich and complete user interface in advance may not be possible. In our ongoing research we are investigating the role of conversational interaction with AI explanations, and we will report the findings so far in this section. There will also be a live interactive demo for optional use by the audience during this session.
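To make the distinction between transparent and post-hoc explanations concrete, here is a minimal sketch (our illustration, not part of the tutorial material) using scikit-learn: a shallow decision tree can be read directly as its own explanation, while a black-box ensemble is explained after the fact via permutation importance. The dataset and feature-name list are standard toy choices made only for illustration.

```python
# Illustrative sketch (not from the tutorial): a "transparent" model whose
# decision rules can be read directly, versus a "post-hoc" explanation
# computed for a black-box model after training.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = load_iris(return_X_y=True)
feature_names = ["sepal length", "sepal width", "petal length", "petal width"]

# Transparent explanation: the fitted tree *is* the explanation.
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=feature_names))

# Post-hoc explanation: probe an opaque model to estimate feature influence.
forest = RandomForestClassifier(n_estimators=100).fit(X, y)
result = permutation_importance(forest, X, y, n_repeats=10, random_state=0)
for name, score in zip(feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")
```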
Intended Audience: The intended audience for this tutorial is researchers in any field where complex algorithms or processes can be used to inform human decision-making. The participants will be taken through a general overview of explanation in both human and machine contexts, and of how the role of the agent has a significant impact on what kind of explanation might be useful. The tutorial will then move into ongoing research on the role of conversation as a tool to enable explanations in human-machine hybrid systems, along with an interactive demonstration of an early version of this capability.
T2: Tutorial Session 2: Energy Constraints in Cognitive Processing - The Role of Constraint Satisfaction in Emergent Awareness
Abstract: Recent insights into brain dynamics and cognitive processing provide important clues for the development of artificially intelligent systems with the capability of situation awareness, flexible operation, and rapid response to unpredictable events in dynamically changing and potentially hostile environments. The focus of this tutorial is to analyze the consequences of constraint satisfaction in developing new AI technologies. Embodiment is a key feature of biological intelligence, which finds its manifestation in embodied robotics and situated intelligence. Energy-awareness can be viewed as the ultimate expression of embodied intelligence; without energy supply from the environment, no intelligence is possible. Energy constraints are often ignored, or play only a secondary role, in typical cutting-edge AI approaches. For example, Deep Learning Convolutional Networks often require huge amounts of data, time, parameters, energy, and computational power, which may not be readily available in various scenarios.
Our approach proposes solutions to several pitfalls observed in cutting-edge AI solutions, such as unsustainable, exponentially growing computational and resource demands; catastrophic deterioration of performance in response to minute changes in input data, whether random or intentional; and susceptibility to malicious, deceptive actions of adversaries. By learning from neuroscience and cognitive science, we outline mathematical and computational models of neurodynamics and their implementations in practical problems. The tutorial covers the following topics:
- Overview of insights from neurobiology and advanced brain imaging on the dynamics of higher cognition and intentionality. Describing the cinematic model of cognition and sequential decision-making. Aspects of embodiment and situated cognition, and consciousness, including technical and philosophical issues.
- Mathematical and computational models of experimentally observed neurodynamics. Describing the Freeman K (Katchalsky) model hierarchy (K0-KIV) of cortical structures, dynamics, and functions.
- Practical implementations of embodied cognition, including multisensory percept formation and the intentional action-perception cycle. Examples of self-organized development of behaviors using reinforcement in the NASA Mars Rover SRR-2K robotics test bed.
- Energy-aware implementation of AI designs motivated by brain metabolism, using computational units coupled with their energy units (metabolic subsystems). Principles of dynamical pattern-based computation through activation sequences in arrays of oscillators (a toy sketch of this idea follows the list).
- Practical illustrations of the energy-aware computing approach, including distributed sensing with limited bandwidth. Comparison with cutting-edge AI results in computer gaming (e.g., Atari), showing that leading deep reinforcement learning results can be reproduced very efficiently. Neuromorphic hardware implications (Loihi, TrueNorth, etc.).
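As a rough, hypothetical illustration of pairing computational units with metabolic (energy) subsystems, the sketch below simulates an array of oscillators whose activations are gated by, and drain, a local energy budget that slowly recharges. The update rules and constants are invented for illustration only and are not the models presented in the tutorial.

```python
import numpy as np

# Hypothetical toy sketch of energy-aware computation (not the tutorial's
# model): each oscillator unit is paired with a local energy store; firing
# drains the store, resting recharges it, so the activation sequence is
# shaped by the available energy budget.
N_UNITS, STEPS = 8, 60
FIRE_COST, RECHARGE, MAX_ENERGY = 0.5, 0.05, 1.0

rng = np.random.default_rng(0)
phase = rng.uniform(0.0, 2 * np.pi, N_UNITS)   # oscillator phases
freq = rng.uniform(0.2, 0.4, N_UNITS)          # natural frequencies (rad/step)
energy = np.full(N_UNITS, MAX_ENERGY)          # metabolic reserves

for t in range(STEPS):
    phase += freq                               # free-running oscillation
    wants_to_fire = np.sin(phase) > 0.9         # activation condition
    can_fire = energy >= FIRE_COST              # metabolic constraint
    fired = wants_to_fire & can_fire
    energy[fired] -= FIRE_COST                  # firing consumes energy
    energy[~fired] = np.minimum(energy[~fired] + RECHARGE, MAX_ENERGY)
    if fired.any():
        print(f"t={t:2d} active units: {np.flatnonzero(fired).tolist()}")
```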
Intended Audience: The tutorial is intended for those interested in better understanding the advantages and shortcomings of today's leading deep learning AI, and possible ways to resolve the mounting bottleneck caused by exponentially increasing resource demands. The tutorial does not require thorough knowledge of the topics covered; rather, it provides a comprehensive overview of how cognitive, neural, computational, and engineering aspects of intelligence can be combined in a unified framework. It is self-contained, and it will be accessible to researchers and students with a basic math and engineering background.
Monday, April 8 10:00 - 10:30
Coffee Break
Monday, April 8 12:00 - 1:30
Lunch (On your own)
Monday, April 8 1:30 - 4:30
T3: Tutorial Session 3: Social Data Analysis for Intelligence
Abstract: This tutorial investigates several issues of social data analysis for intelligence. Social data is understood as information collected from social media, including various networks and platforms, showing not only what online users publish on those platforms but also how they share, view, or engage with content or other users. The tutorial does not break down how to make sense of social media data, but raises questions to be addressed before exploring social media as a resource for intelligence analysis. The tutorial will be organized into seven chapters.
The first chapter introduces intelligence analysis as the application of cognitive methods to weigh data and test hypotheses within a specific socio-cultural context.

The second chapter explores some of the unique features of cyberspace that shape how people behave in this new social realm. The chapter also analyses how the virtual domain of cyberspace is unlike the environmental domains of air, land, maritime and space, and how it challenges traditional understandings of concepts such as temporality, conflict, information, borders, community, identity or governance.

The next chapter investigates the notions of trust and reliability for artefacts in cyberspace, ranging from information items to sources to more sophisticated structures such as virtual communities. The chapter shows that trust may be diminished in spite of the tremendous volume of information, and that cyberspace is prone to phenomena causing harm to data completeness and credibility. Several such phenomena will be considered: opacity and information filtering (echo chambers, filter bubbles), disinformation campaigns (fake news, propaganda, hoaxes, site spoofing), misleading intentions (data leaks), and biased interactions (social bots, smoke screening).

Chapter 4 investigates the nature of social data content, asking whether social data conveys factual and useful pieces of information or rather subjective content in the form of personal opinions, beliefs and impressions. The discussion is based on two illustrations of social data analysis: the first tackles fake news propagation in the aftermath of terrorist attacks; the second addresses the subjective assessment of concepts conveying extreme ideologies online.

Chapter 5 identifies pitfalls in exploring cyberspace both in isolation and in light of its interconnectedness with the real world. First, cyberspace comes with its own riddles and pairs of opposite concepts with blurred frontiers: free speech and actions vs. online hate or cyberbullying; online privacy and personal data vs. fake profiles and identities; transparency vs. anonymity by design. Second, additional pitfalls occur when social data is analyzed in the light of events in real life. Specific phenomena induced by white data, and real-life bias induced by silent communities, will be discussed.

Chapter 6 addresses the question of how gathering, processing and analyzing social data impacts intelligence analysts, given the characteristics of those data.

The last chapter concludes the tutorial by illustrating the state of the art in tools and techniques for cyberspace exploration, along with several ongoing research projects, NATO research tracks and initiatives addressing the many facets of social data analysis. While showing that, from a practical standpoint, solutions are still at the after-the-fact forensics level, the chapter will highlight several initiatives adopted by various organizations to counter illegal content and online hate, and finally to make the Internet a safer place.
Intended Audience: This tutorial is intended for students, researchers and practitioners who are interested in cyberspace exploration and social data analysis. Thanks to illustrations based on realistic use-cases, the participants will learn about the major challenges of gathering, analyzing and interpreting data from social media, and will discover major initiatives undertaken to address some of those challenges and to make cyberspace a more resilient environment.
T4: Tutorial Session 4: Self-Modeling for Adaptive Situation Awareness
This tutorial is about how to build systems that can be trusted to be appropriately situation aware, so that they can act as our information partners for complex tasks or in complex environments, including hazardous, distributed, remote, and/or incompletely knowable settings. There is a rich and growing literature on Self-Aware and Self-Adaptive systems, but few of these approaches allow the system to build its own models. Nonetheless, many of the specific properties our systems exhibit have been implemented in other ways, and we will describe many of those other choices.
We show how to build systems that have enough self-information to decide when and how to construct, analyze, and communicate models of their operational environments, of their history of interaction with that environment, and of their own behavior and internal decision processes. They can assess their own models and also evaluate and improve the preliminary models we may provide them. These systems will explore their environment, using active experimentation to assess hypotheses and adjust their models. Each of these capabilities has been proposed for other systems, and we compare and contrast those choices with ours.
If we are going to build systems that can act as information partners, we will expect them to communicate among themselves and with us. We will expect them at least to interpret models suggested by us and communicate to us the models that they construct. That means that we need mechanisms of mutually compatible model interpretation (we are specifically not trying to reach mutual understanding at this stage of development, only cooperative behavior based on communicated models). In order for the system to build or assimilate those models, it will need to probe its environment, looking for gaps or errors in the models. This active experimentation is an essential part of situation awareness, since it provides some of the context within which situations can be interpreted. This process of identifying model weaknesses and using them to improve the models is called Model Deficiency Analysis, which is an active area of research.
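As a rough sketch of what such a predict-probe-revise loop might look like (our own illustration, not the authors' Model Deficiency Analysis implementation, which is more elaborate), consider the following, where the model, the environment, and the probes are all invented toy stand-ins:

```python
# Hypothetical sketch (not the authors' implementation) of a deficiency-driven
# modeling loop: predict, probe the environment, compare, and revise the model
# wherever prediction and observation disagree beyond a tolerance.

class RunningMeanModel:
    """Trivial stand-in model: predicts the mean of everything it has seen."""
    def __init__(self):
        self.total, self.count = 0.0, 0

    def predict(self, _probe):
        return self.total / self.count if self.count else 0.0

    def update(self, _probe, observed):
        self.total += observed
        self.count += 1

def deficiency_analysis(model, observe, probes, tolerance=0.1):
    deficiencies = []
    for probe in probes:                       # active experimentation
        predicted = model.predict(probe)
        observed = observe(probe)              # run the experiment
        if abs(observed - predicted) > tolerance:
            deficiencies.append(probe)         # evidence of a model weakness
            model.update(probe, observed)      # use it to improve the model
    return deficiencies

# Toy environment: a noiseless constant signal the model has to discover.
found = deficiency_analysis(RunningMeanModel(), lambda p: 5.0, range(10))
print("probes that exposed deficiencies:", found)
```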
To that end, we draw principles from theoretical biology and show how to use them in our computational processes. We use the Wrapping integration infrastructure that implements this style of reflective computing, with all computational resources implemented as limited-scope functions, explicit descriptions of all of these functions and when it is appropriate to use them, and powerful Knowledge-Based integration support processes, all of which are themselves computational resources with explicit descriptions.
We have shown that the Wrapping approach is ideal for adaptive and autonomous systems, including Self-Modeling Systems, in many previous papers and in full-day tutorials at previous SASO conferences. To understand the necessary choices, we start by introducing some of the basic elements of creating a reflective system, i.e., one that reasons about its own resources. For the many relevant design questions, we will describe approaches other than ours and explain why we chose to do what we did. We will present and discuss examples of developing situation awareness capabilities using CARS (Computational Architectures for Reflective Systems), a testbed for embedded real-time systems.
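To convey the flavor of the resource-plus-description organization described above, here is a minimal, hypothetical sketch (not the actual Wrapping infrastructure): computational resources are registered as limited-scope functions together with explicit applicability conditions, and a selection step consults those descriptions before applying a resource. All names and conditions are illustrative.

```python
# Hypothetical sketch of a Wrapping-style registry (not the real Wrapping
# implementation): every computational resource is a limited-scope function
# stored with an explicit description of when it is appropriate to use.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Resource:
    name: str
    applies_when: Callable[[dict], bool]   # explicit applicability condition
    run: Callable[[dict], dict]            # the limited-scope function itself

REGISTRY = [
    Resource("plan_route",
             applies_when=lambda ctx: ctx.get("task") == "navigate",
             run=lambda ctx: {**ctx, "route": "computed"}),
    Resource("summarize_sensors",
             applies_when=lambda ctx: "sensor_data" in ctx,
             run=lambda ctx: {**ctx, "summary": "computed"}),
]

def select_and_apply(context: dict) -> dict:
    """Consult the resource descriptions, then apply the first that fits."""
    for resource in REGISTRY:
        if resource.applies_when(context):
            return resource.run(context)
    raise LookupError("no resource is appropriate in this context")

print(select_and_apply({"task": "navigate"}))
```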
Monday, April 8 3:00 - 3:30
Coffee Break
Monday, April 8 6:00 - 9:00
Welcome Reception
Tuesday, April 9
Tuesday, April 9 8:00 - 9:00
Breakfast
Tuesday, April 9 9:00 - 9:10
Conference Opening
Tuesday, April 9 9:10 - 10:00
K1: Keynote by Dr. Rand Waltzman, Deputy CTO at RAND Corporation, Santa Monica, CA, "MD: Multimedia Disinformation - Is there a doctor in the house?!"
So-called "deep fake" technologies have the potential to create audio and video of real people saying and doing things they never said or did. The technology required to create these digital forgeries is rapidly developing. It is on the verge of becoming commoditized to the point where anybody with a laptop computer, very modest investment in software and minimal technical skill will have the capability to create realistic audiovisual forgeries. Machine learning and Artificial Intelligence techniques are making deep fakes increasingly realistic and resistant to detection. Individuals and businesses will face novel forms of exploitation, intimidation, and sabotage. Things have been bad with disinformation polluting the information environment. They are about to get a lot worse. What is to be done?
Tuesday, April 9 10:00 - 10:30
Coffee Break
Tuesday, April 9 10:30 - 12:00
S1: Information Fusion
Tuesday, April 9 12:00 - 1:30
Lunch (On your own)
Tuesday, April 9 1:30 - 3:00
S2: Decision Support
Tuesday, April 9 3:00 - 3:30
Coffee Break
Tuesday, April 9 3:30 - 5:00
S3: Modeling and Simulations
Tuesday, April 9 5:30 - 7:30
CogSIMA 2020 Planning Meeting
Wednesday, April 10
Wednesday, April 10 8:00 - 9:00
Breakfast
Wednesday, April 10 9:00 - 10:00
K2: Keynote by Dr. Doug Riecken, Air Force Office of Scientific Research (AFOSR), USA
Wednesday, April 10 10:00 - 10:30
Coffee Break
Wednesday, April 10 10:30 - 12:00
S4: Applications
Wednesday, April 10 12:00 - 1:30
Lunch (On your own)
Wednesday, April 10 1:30 - 3:30
P1: Poster Session
- P1.1 Information Fusion for Maritime Domain Awareness: Illegal Fishing Detection
- P1.2 Simulation-Based Reduction of Operational and Cybersecurity Risks in Autonomous Vehicles
- P1.3 Markov Decision Processes with Coherent Risk Measures: Risk Aversity in Asset Management
- P1.4 Mapping the Information Flows for the Architecture of a Nation-Wide Situation Awareness System
- P1.5 Evaluating Improvement in Situation Awareness and Decision-Making Through Automation
- P1.6 Uncovering Age Progression in Wireless Signal Propagation Modeling Using Decisions of Machine Learning Classifiers
Wednesday, April 10 3:30 - 4:00
Coffee Break
Wednesday, April 10 4:00 - 5:30
Industry Panel
Wednesday, April 10 6:00 - 9:00
Conference Banquet Dinner
Thursday, April 11
Thursday, April 11 8:00 - 9:00
Breakfast
Thursday, April 11 9:00 - 10:00
K3: Keynote by Prof. Nancy J. Cooke, Arizona State University, AZ, USA, "Human-Autonomy Teaming: Can Autonomy be a Good Team Player?"
Abstract: A team is an interdependent group of three or more people who have different roles and who interact with one another toward a common goal. Teams can engage in physical activities as a unit (e.g., lifting a patient from bed), as well as cognitive activities (e.g., specialists coordinating on a patient's diagnosis). Team cognition is the execution of these cognitive activities (e.g., perception, planning, decision making) at a team level. But do teammates need to be people? Advances in artificial intelligence and machine learning have provided machines with increasingly levels of autonomy. The human-machine relationship can shift from humans supervising machines to humans teaming with machines. But do machines have what it takes to be good teammates? In this talk I will discuss what we know about team cognition in human teams and present some findings from studies of human-autonomy teams.