2026 IEEE Conference on Cognitive and Computational Aspects
of Situation Management (CogSIMA)

March 9-12, 2026 | Tempe, Arizona

Tutorial Program



Tutorial 1: Responsible and Ethical Agentic AI in Public Safety Communication: Building Trustworthy Systems for Police and Fire Operations

Instructor

Swarnamouli Majumdar - Concordia University and Zenext AI, Montreal, Canada

Time

Monday, March 9 - 9:00 AM - 12:30 PM

Location

TBD

Abstract

Public safety agencies are increasingly adopting agentic AI to manage the complexity of emergency communication in police, fire, and disaster response. Unlike traditional AI systems, agentic AI can autonomously observe, analyze, decide, and act within defined boundaries, enabling rapid multi-channel data processing and informed decision support. This tutorial introduces methods for building responsible, ethical, and trustworthy AI systems that enhance operational efficiency while safeguarding fairness, privacy, and public trust. Emphasizing human-AI collaboration, bias mitigation, and transparent decision-making, the session blends HCI principles with AI governance strategies for mission-critical communication networks. Attendees will gain practical frameworks, real-world case studies, and oversight models applicable to next-generation public safety communication systems.
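The observe-analyze-decide-act loop operating "within defined boundaries" can be sketched in a few lines of Python. This is a hypothetical toy for illustration only, not any agency's actual system; the `Event` type, the severity scale, and the `autonomy_ceiling` parameter are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Event:
    channel: str      # e.g. "radio", "911", "sensor"
    severity: int     # 0 (routine) .. 3 (critical)
    summary: str

def agentic_step(event: Event, autonomy_ceiling: int = 1):
    """Observe -> analyze -> decide -> act, within a defined boundary.

    Events at or below `autonomy_ceiling` severity are handled
    autonomously; anything above it is escalated to a human dispatcher,
    keeping the human in the decision loop for high-stakes calls.
    """
    # Observe/analyze: a real system would fuse multi-channel data here.
    recommended = f"route {event.channel} report: {event.summary}"
    # Decide: enforce the autonomy boundary before acting.
    if event.severity <= autonomy_ceiling:
        return ("auto", recommended)
    return ("escalate", f"human review required: {recommended}")

print(agentic_step(Event("sensor", 0, "smoke alarm self-test")))
print(agentic_step(Event("911", 3, "structure fire reported")))
```

The explicit escalation path mirrors the tutorial's theme of AI that supports human decision-making rather than overriding it.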

Learning objectives

  • Understand what 'responsible AI' means in a public safety context.
  • Recognize potential risks, including bias, lack of transparency, and loss of trust.
  • Design AI workflows that support human decision-making rather than override it.
  • Learn strategies to explain AI recommendations clearly and concisely.
  • Identify steps to check for fairness and privacy in AI-supported communication systems.
  • Plan how to monitor and improve AI systems after deployment.

Intended audience

  • Researchers in artificial intelligence, human-computer interaction (HCI), and public safety technology.
  • Students at undergraduate, graduate, and doctoral levels interested in applying AI to mission-critical domains.
  • Practitioners working in AI system design, deployment, and evaluation.
  • Public safety leaders managing police, fire, and emergency communication systems.
  • Technology decision-makers in emergency services and inter-agency coordination.
  • Policy makers and ethics advisors establishing guidelines for AI use in public safety.
  • Community stakeholders seeking to understand the impact of AI on public trust and safety.

Instructor Biography


Swarnamouli Majumdar

Instructor biography will be provided when available.


Tutorial 2: Superteaming - Considering the role of animals and intelligent robots as team members with their human counterparts

Instructor

Heather Lum, PhD - Arizona State University

Time

Monday, March 9 - 9:00 AM - 12:30 PM

Location

TBD

Abstract

As human-machine collaboration becomes increasingly prevalent, the concept of teams composed of humans, animals, and intelligent robots is shifting from speculative fiction to practical reality. This tutorial examines the emerging roles of non-human agents, both biological and artificial, as cooperative team members working alongside humans in complex environments. We explore foundational principles of interspecies and human-robot teamwork, including communication dynamics, trust building, shared situational awareness, and adaptive coordination.

Drawing on insights from animal behavior, robotics, cognitive science, and teamwork research, the tutorial highlights parallels and distinctions in how animals and intelligent robots contribute to collective tasks. Practical frameworks are presented for designing effective mixed-species and human-robot teams, with attention to ethical considerations, task allocation, training, and system transparency. By integrating multidisciplinary knowledge, participants will gain a deeper understanding of how animals and intelligent robots can enhance team performance, expand human capability, and shape the future of collaborative work.

Intended audience

This tutorial is designed for participants interested in the theories, models, and practical applications of human-nonhuman teaming. A live demonstration featuring a certified search-and-rescue K9 will be included as a practical example of human-animal teaming in action. Please note that the K9 will remain secured before and after the demonstration; however, individuals with dog allergies should be aware of its presence. The demonstration may be conducted outdoors and will involve a moderate amount of walking.

Instructor Biography


Dr. Heather C. Lum

Dr. Heather C. Lum is an assistant professor and director of the Virtual Environments & Cognitive Training Research (VECToR) laboratory at Arizona State University. She earned her Ph.D. in applied experimental and human factors psychology from the University of Central Florida in 2011. Her primary research interest centers on human-technology interactions broadly, with a more specific focus on human-nonhuman teaming. She also serves as associate director for the Center for Human, Artificial Intelligence, and Robot Teaming (ASU). In addition to her research pursuits, Dr. Lum is a K9 handler and training coordinator for the Yavapai County Search and Rescue Team - Search Dog Unit and a trainer/evaluator for the National Association for Search and Rescue.


Tutorial 3: Combining Self* Capabilities and Situation Management Promises Great Benefits

Instructors

Kirstie Bellman, Ph.D. - TopcyHouse Consulting and Pulser
Christopher Landauer, Ph.D. - TopcyHouse Consulting and Pulser
Phyllis R. Nelson, Ph.D. - California State Polytechnic University, Pomona

Time

Monday, March 9 - 2:00 PM - 5:30 PM

Location

TBD

Abstract

Self-organization, Self-assembly, Self-management, Self-monitoring, Self-adaptation, Self-healing, Self-awareness, Self-evaluation, and many more new fields of research appear under the rubric of "Self-*." Over the last two decades, this exciting area of research and new applications has gained deserved recognition. In this tutorial, we examine what Self-* means and look at a variety of diverse applications, from food safety to autonomous river boats and from medicine to power distribution. Then we ask the important question of how Self-* might strengthen situation management and situation awareness applications, and vice versa.

In this part of the discussion, we will first make clear that Situation Management (SM) has developed many capabilities that overlap with those of Self-* systems (S*). Both SM and S* need to bring together diverse types of information and to plan and act on that basis. Often there is a high-level controller, or perceiver and decision maker, that, in addition to the active processes and multiple agents that may exist, is responsible for resetting goals and objectives and instigating overall actions. SM's focus leads to an emphasis on how this high-level controller/perceiver assesses the external situation and what we would like to have understood or changed in it, often with special attention to problem spots or areas of concern requiring a redirection of sensors. In S* systems, the high-level controller/perceiver focuses instead on what resources a specific agent has to accomplish its goals, what decisions it must make, and how it operates within its environment. If properly integrated, the different entities of the system together greatly expand the types of plans the overall system can consider. How to combine the activities of an SM system and an S* system is a systems engineering decision. We will give several examples of how this is done in computational and biological systems where the two occur in parallel.
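The division of labor described above, an SM perceiver looking outward at the situation and an S* perceiver looking inward at the agent's own resources, can be illustrated with a small sketch. This is a hypothetical toy with invented names (`sm_assess`, `self_star_assess`, `combined_plan`) and invented data, not a design presented in the tutorial.

```python
def sm_assess(situation):
    """SM view: rank external areas of concern by urgency (most urgent first)."""
    return sorted(situation, key=lambda area: -area["urgency"])

def self_star_assess(resources, task):
    """Self-* view: does this agent currently have the resources for the task?"""
    return all(resources.get(need, 0) >= amt for need, amt in task["needs"].items())

def combined_plan(situation, resources):
    """Pick the most urgent external concern that internal resources can cover."""
    for area in sm_assess(situation):
        if self_star_assess(resources, area):
            return area["name"]
    return None  # nothing actionable with current resources

situation = [
    {"name": "sector-A flooding", "urgency": 0.9, "needs": {"boats": 2}},
    {"name": "sector-B outage",   "urgency": 0.4, "needs": {"crews": 1}},
]
print(combined_plan(situation, {"boats": 0, "crews": 3}))  # -> sector-B outage
```

The point of the sketch is the integration decision: neither ranking external urgency (SM) nor checking internal capability (S*) alone yields the plan; only their combination does.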

Intended audience

This tutorial is meant for anyone interested in how we can expand Situation Management and Self-Awareness with ideas and research from autonomic computing, organic computing and self-aware computing systems. Many example applications will be provided and there will be time for lively discussion. For those who come to the tutorial with current projects, we will address how they might strengthen their use of self* capabilities in their own applications.

Instructor biographies


Dr. Kirstie L. Bellman

Dr. Kirstie L. Bellman retired in 2019 as a Principal Scientist in the Computers and Software Division of The Aerospace Corporation, a not-for-profit FFRDC (federally funded research and development center). When she returned to the Aerospace Corporation in 1997 after four years as a DARPA program manager, she started a new bi-coastal research and development center, the Aerospace Integration Sciences Center (AISC). The center focused on the development of advanced system and model integration methods, new analytic techniques, and evaluation tools for assessing the impacts of new technologies. Upon completion of her term at DARPA as Program Manager for the Domain-Specific Software Architectures (DSSA) program, Prototech (rapid prototyping technology), the Formal Foundations program, and the large Computer-Aided Education and Training Initiative (CAETI), she received an award from the Office of the Secretary of Defense for excellence in her programs. She received the 2008 Award in Technology from the Telluride Technology Festival; other past awardees include Vint Cerf, Murray Gell-Mann, Charles Townes, and Freeman Dyson.

Dr. Christopher Landauer

Dr. Christopher Landauer is a mathematician (Ph.D., 1973, Caltech) and computer programmer (since high school, 1964). He retired from The Aerospace Corporation on 01 October 2019, after 38 years as a system analyst assisting in the specification, design, and testing of military space programs, including helping to diagnose and correct several mission-threatening anomalies. He now looks forward to studying, or more likely inventing, methods that might support more intelligent behavior in computing systems. He is very interested in complex system modeling, simulation, and representation, and in fundamental properties of language and semiotics. His main research has been in developing integration infrastructure for complex software-intensive systems, including self-aware, self-adaptive, and self-modeling systems; in advanced mathematical methods for data analysis and interpretation; and in computational methods that can support, enhance, or provide intelligent behavior. He has over sixty years of experience in applying mathematical methods to computing problems, with over 200 related publications.

Dr. Phyllis R. Nelson

Dr. Phyllis R. Nelson is a professor in the Department of Electrical and Computer Engineering at California State Polytechnic University, Pomona. She holds an MSEE from Caltech and a PhD from UCLA. Prior to her academic career, Dr. Nelson was a systems engineer in the aerospace industry and a research staff member at leading universities in both the United States and France. She is currently exploring methods for designing trustworthy functioning of complex systems of systems, motivated by an interest in complexity as its own technical challenge.


Tutorial 4: Multimodal sensor integration for detecting situational dynamics - applications for human and human-machine teams

Instructors

Xiaoyun Yin; Jamie C. Gorman; Elmira Zahmat Doost; Garima (Arya) Yadav; Matthew J. Scalia; Ryan Renwick; Ray Hao; Shiwen Zhou - Arizona State University

Time

Monday, March 9 - 2:00 PM - 5:30 PM

Location

TBD

Abstract

Biosensors such as fNIRS, EEG, and EKG have been studied in relation to team performance (Liu et al., 2021; Cha & Lee, 2019; Chickersal et al., 2017) and team workload (Grassmann et al., 2016). As multimodal biosensor systems become more sophisticated and accessible, the real-time observation of team states, such as situational awareness and adaptation to perturbation, has become possible through the combination of physiological signals and dynamical systems modeling. Physiological signals provide a unique perspective on real-time team observation, while dynamical systems theory can capture patterns such as non-stationary changes and evolving team states, which differ fundamentally from traditional static team assessment approaches.

This tutorial introduces practitioners to advanced sliding window techniques for analyzing continuous multimodal biosensor streams to predict two fundamental metrics of team functioning: Influence and Adaptation. These complementary measures provide a computational window into the temporal organization of team cognition and coordination that can be applied to human and human-machine teams. Average mutual information (AMI) quantifies the influence between individual team members and overall team states, revealing how information flows through the team structure and identifying who drives team adaptation as operational tempos vary situationally. Entropy quantifies physiological and behavioral reorganization within the body and between team members, with higher entropy indicating greater adaptation, and lower entropy indicating more routine or highly synchronized states.
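As a rough illustration of the windowed information-theoretic measures described above, the following Python sketch computes Shannon entropy and mutual information between two discretized signals over sliding windows. It is a minimal toy (equal-width histogram binning, entropy in bits, pairwise MI as a simplified stand-in for the tutorial's AMI measure), not the instructors' pipeline; all function names and parameters are illustrative.

```python
import math
import random
from collections import Counter

def entropy(symbols):
    """Shannon entropy (bits) of a discrete symbol sequence."""
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in Counter(symbols).values())

def mutual_information(xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) for paired discrete sequences."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

def discretize(signal, n_bins=4):
    """Equal-width binning of a continuous signal into symbol indices."""
    lo, hi = min(signal), max(signal)
    width = (hi - lo) / n_bins or 1.0
    return [min(int((v - lo) / width), n_bins - 1) for v in signal]

def sliding_windows(signal_a, signal_b, window=50, step=10, n_bins=4):
    """Windowed MI between two team members' (discretized) signal streams."""
    a, b = discretize(signal_a, n_bins), discretize(signal_b, n_bins)
    return [mutual_information(a[s:s + window], b[s:s + window])
            for s in range(0, len(a) - window + 1, step)]

# Toy data: member b partially follows member a; member c is independent.
random.seed(0)
a = [math.sin(t / 5.0) + 0.1 * random.gauss(0, 1) for t in range(200)]
b = [v + 0.3 * random.gauss(0, 1) for v in a]
c = [random.gauss(0, 1) for _ in range(200)]
print(sum(sliding_windows(a, b)) > sum(sliding_windows(a, c)))  # coupled pair has higher windowed MI
```

Tracking how these windowed values rise and fall over a session is what lets influence and adaptation be observed as they unfold, rather than assessed once after the fact.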

Intended audience

This tutorial is designed for researchers, data scientists, and engineers working at the intersection of team science, computational psychophysiology, and human performance optimization.

Prerequisites

Participants should have a working knowledge of Python programming and basic familiarity with time-series analysis concepts. Prior exposure to information theory (entropy, mutual information) is helpful but not required, as the tutorial provides conceptual grounding. Experience with physiological data or team research is beneficial but not mandatory.

Tutorial materials

Access to all code implementations, example datasets (anonymized team biosensor data), and supplementary documentation will be provided to registered participants. The tutorial includes both guided walkthroughs and independent exercises to accommodate different learning styles and experience levels.

Instructor biographies


Xiaoyun Yin

Xiaoyun Yin is a PhD candidate in Human Systems Engineering at Arizona State University's Polytechnic School, Ira A. Fulton Schools of Engineering. Her research investigates the dynamic nature of human-AI team interactions, with a particular focus on bidirectional adaptation between human and artificial teammates. Through innovative approaches to real-time cognitive modeling, she examines how team members mutually influence and adapt to each other's behaviors and states. Her work utilizes multimodal data streams, including biosensor inputs and communication patterns, to understand and enhance the collaborative dynamics in human-AI teams. This research contributes to the broader discourse by exploring how AI systems can evolve beyond traditional adaptive automation to become more responsive and effective synthetic team members.

Jamie C. Gorman

Jamie C. Gorman, of The Polytechnic School at Arizona State University, is an expert in modeling and measuring coordination dynamics in human and human-machine teams. His research portfolio spans psychology, the dynamics of human and artificial intelligence, and human-machine teaming for space-based missions. His research has been funded by the National Science Foundation, the Office of Naval Research, the Air Force Research Laboratory, the Air Force Office of Scientific Research, and the U.S. Department of Education, among others.

Elmira Zahmat Doost

Dr. Elmira Zahmat Doost is a Postdoctoral Research Scholar in Human Systems Engineering at Arizona State University, where she conducts research on team dynamics in human-machine collaboration. She earned her Ph.D. in Management Science and Engineering from Tsinghua University in 2023. Her research expertise spans ergonomics, human-system simulation, and psychophysiology, with particular focus on biobehavioral patterns in teams and human-AI interaction. Dr. Zahmat Doost's current work involves developing multimodal measurement systems for real-time team performance assessment and examining coordination dynamics in space-based human-machine teams.

Garima (Arya) Yadav

Garima (Arya) Yadav is a Computer Science and Mathematics student at Arizona State University. Her research interests lie at the intersection of multimodal machine learning, human-AI interaction, and cognitive-inspired artificial intelligence. She aims to develop computational models that explain how humans and intelligent systems perceive, coordinate, and reason together. As an AI Research Aide at ASU's Center for Human, Artificial Intelligence, and Robot Teaming (CHART), she works on building multimodal ML pipelines to model team dynamics from video, audio, and physiological data. She also conducts research in the Active Perception Group, focusing on machine perception and abstract perceptual reasoning.

Matthew J. Scalia

Matthew J. Scalia is a Ph.D. candidate in Human Systems Engineering at Arizona State University. He received his M.S. in Engineering Psychology from Georgia Institute of Technology. His research examines human and human-AI team performance, coordination, and trust, with an emphasis on modeling team dynamics and adaptive behavior through a dynamical systems perspective to support real-time team assessments.

Ryan Renwick

Ryan Renwick is a graduate student in Human Systems Engineering in the Ira A. Fulton Schools of Engineering at Arizona State University. In a collaborative effort with the Army Research Laboratory, he studies how to improve human-computer interaction using dynamical systems analyses. He previously served as a research assistant in the Dynamics of Perception, Action, and Cognition Laboratory.

Shiwen Zhou

Shiwen Zhou is a Ph.D. candidate in Human Systems Engineering at Arizona State University. Her research examines how teams, both human-human and human-AI, build and sustain effective collaboration, focusing on trust and distrust dynamics, team communication and coordination, and complex adaptive system behavior under uncertainty.

Ray Hao

Ray Hao is a Ph.D. student in Human Systems Engineering at Arizona State University. She is interested in human teammates' responses to automation failures in high-stakes environments and developing systems that support team error management. She also investigates multimodal metrics for monitoring and facilitating team adaptation and self-organization in dynamically changing environments.


News

Registration is now open! Please note that the early bird rate ends on January 15.

The Tutorial Program is now available, featuring four tutorials!

Paper submission has now closed.

Keynote Speaker Updates:
Dr. Joseph Lyons, Dr. Maia Cook and Professor Subbarao Kambhampati have agreed to be our keynote speakers. We’re also delighted to share that Dr. Mica Endsley has agreed to serve as our Banquet Keynote Speaker on March 11.

June 27: We are thrilled to announce that we have IEEE SMC Sponsorship for CogSIMA 2026.

Sponsors and Patrons

IEEE | IEEE Systems, Man, and Cybernetics Society (SMC)
