Panels
Panel 1: Cognitive and Computational Issues in Situation Management/Control (16 years later)
SIMA2009 (the precursor of CogSIMA) held a panel entitled Situation Management. The current panel is motivated by the CogSIMA community's goal of maintaining an active dialog on the foundational principles of situation management/control and the related cognitive dimensions and factors involved in this multidimensional process. It will present what is new in Situation Management/Control after 16 years and discuss future challenges and possible approaches.
Panelists:
Gabriel Jacobson, CyberGem Consulting, USA:
Cognitive Multi-Agent Situation Control: Approach to Collective Intelligence
Scott Fouse, Fouse Consulting Services, USA:
Establishing and Maintaining Shared Situation Knowledge to support Distributed Situation Management when confronting an Adversary
Prof. Ann Bisantz, Department of Industrial and Systems Engineering, School of Engineering and Applied Sciences, The State University of New York at Buffalo, USA:
Revisiting Ironies of Automation in the Era of AI
Edward Waltz, Naval Postgraduate School, USA:
Means and Mechanisms of Situation Control
Prof. Jim Llinas, Collaborative Institute for Multisource Information Fusion (CIMIF), The State University of New York at Buffalo, USA:
Information Fusion Processes and Situation Control
Prof. Galina Rogova, Collaborative Institute for Multisource Information Fusion (CIMIF), The State University of New York at Buffalo, USA:
Trust and Distrust in Situation Control
Presentations:
Cognitive Multi-Agent Situation Control: Approach to Collective Intelligence
Gabriel Jacobson, CyberGem Consulting, USA
Abstract
Despite the long-standing human practice of tackling complex tasks through group effort, only recently has it become widely accepted that intelligence cannot be considered a purely individual attribute outside of any social context. Such positions in favor of the social (collective) nature of intelligence have been backed by several research directions now known under the uniting umbrella notion of collective intelligence. A congruent concept to collective intelligence is the phenomenon of emergence: qualitatively new patterns that can be observed in the behavior of complex systems. These new patterns functionally surpass the simple mechanical sum of the functional behaviors of the system's components.
There are many known emergent behavioral patterns, for example, resilient system behavior that withstands external disturbances, the emergence of life in the eukaryotic cell, the emergence of ant colonies, the emergence of machine learning in neural networks, and many others. Important results in understanding and modeling collective intelligence have been achieved in several scientific disciplines, including biology, evolutionary science, psychology, social science, AI, and computer science. The idea of considering collective behavior as an emergent source of superior intelligence is probably expressed most clearly in swarm intelligence research, a fast-growing field of biologically inspired artificial intelligence. Another research project in collective intelligence was started at MIT by Prof. T. Malone. Distinct from swarm intelligence, this project studies computational and social aspects of human group intelligence that is mediated and "magnified" by intelligent computer agents.
In this presentation, we will show how the proposed principles of cognitive multi-agent situation control can lead to collective intelligence and emergent behavior.
Establishing and Maintaining Shared Situation Knowledge to support Distributed Situation Management when confronting an Adversary
Scott Fouse, Fouse Consulting Services, USA
Abstract
To effectively control a situation in the face of an adversary, a commander must have a model of the situation and a model of the adversary. The plan to manage the situation should be based primarily on the intent and behaviors of the adversary, as well as knowledge about one's own forces, the adversary's forces, and natural and social terrain affordances. We will call all of those considerations the Model. During the course of the operation, the commander is taking in data and asking the question, "Are we on plan?" If the answer is yes, then the operation continues as planned. If the answer is no, then the commander must figure out what part of the Model is off and adjust the Model based on what they have observed. The ability to predict future situations during the operation is commonly referred to as Situation Understanding.
Many military operations are conducted at a scale that requires multiple decision makers. It is critical that those decision makers have a shared situation understanding, so that they can respond to situation changes in a manner consistent with the overall plan. Often those distributed decision makers will need to respond to enemy actions without the benefit of collaborating and coordinating with the other decision makers. In those situations, they need a higher level of situation understanding, in which they can predict how the other decision makers will respond. I refer to this level of situation understanding as Shared Situation Knowledge, a term used in the DoD Transformation Study in 2000.
My presentation will cover examples of shared situation knowledge and also consider how AI might be useful in helping to establish and maintain Situation Knowledge. I will also discuss why decision support tools that are not aligned with the shared situation knowledge are unlikely to be of much value.
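As a purely illustrative aside (not part of the presentation), the "Are we on plan?" loop described above could be captured in a few lines of Python; every class, function, and field name here is hypothetical.

from dataclasses import dataclass, field

@dataclass
class Model:
    # The commander's Model: own forces, adversary intent/behaviors, terrain affordances.
    own_forces: dict = field(default_factory=dict)
    adversary: dict = field(default_factory=dict)
    terrain: dict = field(default_factory=dict)

    def predict(self, step: int) -> dict:
        # Predicted situation at this step under the current plan (placeholder logic).
        return {"step": step, "expected": self.adversary.get("expected_action")}

    def update(self, observation: dict) -> None:
        # Adjust whichever part of the Model is off, based on what was observed.
        self.adversary["expected_action"] = observation.get("adversary_action")

def on_plan(predicted: dict, observation: dict) -> bool:
    # "Are we on plan?" -- a real test would compare far richer state than this.
    return predicted.get("expected") == observation.get("adversary_action")

def manage_situation(model: Model, observations: list) -> None:
    for step, obs in enumerate(observations):
        if on_plan(model.predict(step), obs):
            continue          # yes: continue the operation as planned
        model.update(obs)     # no: adjust the Model and re-plan from it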
Revisiting Ironies of Automation in the Era of AI
Prof. Ann Bisantz, Department of Industrial and Systems Engineering, School of Engineering and Applied Sciences, The State University of New York at Buffalo, USA
Abstract
Technological advancement has always brought with it questions of changes to human work and of necessary adaptations (for better or worse). For example, Zuboff (1988) described the dramatic changes in work content, and the resultant impact on workers, as hands-on process control plants moved from manual to automated control. Sheridan (1992) described the potential for "alienation" of operators who increasingly work as supervisory controllers in computer-mediated, automated environments. Recent, rapid advances in artificial intelligence - both predictive and generative - are no exception (Narayanan and Kapoor, 2024). People will use, team with, and incorporate new AI technologies into their work, including work related to situation assessment and management. Even widely implemented AI technologies, however, may fall short of expectations, challenging decision-makers to accurately assess situations and take appropriate action (e.g., Wong et al., 2021). Of interest is the degree to which the nature of new AI technologies changes fundamental principles in human-technology interaction relative to the successful design of systems, interfaces, and training. How will ever more powerful predictive or generative algorithms impact the way humans supervise, interact with, and control complex systems?
This presentation will explore these questions through the lens of classic human-automation principles, such as the "ironies" espoused by Bainbridge in her seminal 1983 paper. While the specific, physical manifestations of the ironies have changed in the intervening decades, their fundamental nature has not. How can people control (that is, monitor and exert responsible authority over) systems that process data in volumes and at speeds far exceeding human capacity? How can people understand and detect system failures when the algorithms making decisions and exerting control are opaque and non-congruent with human reasoning? How can people maintain enough awareness to intervene appropriately when they have not been actively engaged in decision making or control?
References
Bainbridge, L. (1983). Ironies of Automation. IFAC/IFORS Proceedings.
Narayanan, A. and Kapoor, S. (2024). AI Snake Oil: What artificial intelligence can do, what it can't, and how to tell the difference. Princeton University Press.
Sheridan, T. (1992). Telerobotics, Automation, and Human Supervisory Control. MIT Press.
Wong, A., Otles, E., et al. (2021). External validation of a widely implemented proprietary sepsis prediction model in hospitalized patients. JAMA Internal Medicine, 181(8), 1065-1070.
Zuboff, S. (1988). In the Age of the Smart Machine. Oxford University Press.
Means and Mechanisms of Situation Control
Edward Waltz, Naval Postgraduate School, USA
Abstract
Recent technologies are changing the potential for situation control and the implementation of methods of control in the areas of human and social influence. Situation control can range from mass-opinion influence and control operations to targeted tactical or strategic information operations. This presentation describes the current technical means of implementing situation control, a term described by Llinas [2023], in influence and information operations. These closed-loop control mechanisms are described for three areas:
• Commercial consumer monitoring and influence (marketing, targeted advertising)
• Political campaign population targeting and influence
• Strategic information operations
The presentation focuses on the use of technology to implement large-scale closed-loop systems described by Llinas, including Situation Detection and Recognition (SD, SR), Situation Understanding (SU), Situation Prediction (SP), and influence operations to achieve Situation Control (SC). We address influence messaging, targeting, channel allocation, and delivery mechanisms. Examples are provided in the exploitation of Publicly Available Information (PAI) for consumer and political influence campaigns. The threats posed by misinformation (denial, misdirection, deception) in influence operations, as well as ethical issues, are also introduced.
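As a purely illustrative aside (not drawn from the presentation or from Llinas [2023]), the closed-loop stages named above can be sketched as a simple Python pipeline; every function name and return value here is hypothetical placeholder logic.

from typing import List, Dict

def detect_and_recognize(pai: List[Dict]) -> Dict:
    # SD/SR: detect and recognize the current situation from Publicly Available Information.
    return {"situation": "recognized", "evidence": pai}

def understand(situation: Dict) -> Dict:
    # SU: interpret the recognized situation in its context.
    return dict(situation, assessment="assessed")

def predict(understanding: Dict) -> Dict:
    # SP: project how the situation is likely to evolve.
    return dict(understanding, forecast="projected trajectory")

def influence_action(prediction: Dict, desired_state: str) -> Dict:
    # Influence operation toward SC: choose messaging, targets, channels, and delivery.
    return {"message": "tailored content", "channel": "chosen channel", "goal": desired_state}

def situation_control_loop(pai_batches: List[List[Dict]], desired_state: str) -> List[Dict]:
    # One pass per batch of observations; in a real closed loop, each action would
    # change the situation that the next batch of PAI reflects.
    actions = []
    for pai in pai_batches:
        prediction = predict(understand(detect_and_recognize(pai)))
        actions.append(influence_action(prediction, desired_state))
    return actions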
References
(Llinas 2023) "Fusion Processes and Situation Control", ISIF Perspectives on Information Fusion, June 2023, pp. 15-23. See also "An Expanded Framework for Situation Control", Frontiers in Systems Neuroscience, Volume 16, Article 79610011, July 2022, pp. 1-11.
Information Fusion Processes and Situation Control
Prof. James Llinas, Collaborative Institute for Multisource Information Fusion (CIMIF), The State University of New York at Buffalo, USA
Abstract
It can be argued that the COGSIMA community grew out of a realization that situational state estimates are dynamic in time and need to be managed, and that management is needed of both the estimation process and the estimated state in the world; that is, of moving or controlling the situation to an acceptable state within the context of this control process. Several of Jakobson's papers address many of the issues associated with Situation Management and were the inspiration for the current work. This paper reexamines the Situation Control process and the nature of situational state estimates, and offers a variety of thoughts that expand these prior views. The perspective is largely an Information Fusion process point of view, often the core of the estimation operations, and suggests several new fusion processes necessary for a more holistic Situation Control process, especially for adversarial-type environments. Aspects of the decision-making operations involved in a closed-loop Situation Control process are also addressed, and important dynamic interactions between estimation and decision-making operations are pointed out.
Trust and Distrust in Situation Control
Prof. Galina Rogova, Collaborative Institute for Multisource Information Fusion (CIMIF), The State University of New York at Buffalo, USA
Abstract
Situation control involves managing the situation to achieve the stakeholder's goals. It requires situation understanding: tracking and monitoring natural and man-made activities to build a dynamic situational picture for making decisions and carrying out actions. Designing a system for building a situational picture involves gathering and fusing a large amount of heterogeneous multimedia and multispectral information of variable quality and data rates coming from geographically distributed sources. The elements of such a system have different requirements, functions, objectives, and goals, as well as the cognitive biases of human agents. The quality of the system's decisions is the result of interaction between these elements and depends on the quality of information produced by the agents. One of the important and most difficult-to-evaluate characteristics of information quality is trust. Trust represents a subjective level of belief of a user (either human or automatic) that the obtained information can be admitted into the system, transferred between system processes, or used for making decisions and executing actions. There are multiple interrelated dimensions of trust to be considered, such as communication trust, trust in information produced by human and automatic sources, and cognitive trust. Trust assessment requires consideration of the interplay of these dimensions, and of the factors defining them in a specific dynamic context, to provide for effective decision making and actions. The presentation will discuss major challenges and suggest possible approaches addressing the problem of trust and distrust in situation control.
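As a purely illustrative aside (not the presenter's method), one simple way to represent the interplay of trust dimensions when deciding whether a piece of information can be admitted into the system is a context-weighted combination; the dimension names follow the abstract, while the weights, threshold, and function names are hypothetical.

from dataclasses import dataclass

@dataclass
class TrustAssessment:
    communication_trust: float  # trust in the channel that delivered the information
    source_trust: float         # trust in the human or automatic source that produced it
    cognitive_trust: float      # trust after accounting for cognitive biases of human agents

def admit(report: TrustAssessment, weights: dict = None, threshold: float = 0.6) -> bool:
    # Context-dependent weights; in practice these would be dynamic and context-specific.
    w = {"communication": 1.0, "source": 1.0, "cognitive": 1.0}
    w.update(weights or {})
    score = (w["communication"] * report.communication_trust
             + w["source"] * report.source_trust
             + w["cognitive"] * report.cognitive_trust) / (w["communication"] + w["source"] + w["cognitive"])
    return score >= threshold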
Panel 2: Challenges, Perspectives, and Constraints in Human-AI Teaming Processes
The panel will explore the challenges and dynamics of integrating complex systems with human expertise in Human-AI teaming, also focusing on optimizing decision-making strategies and addressing ethical concerns like transparency, accountability, and trust. It will also evaluate real-world applications, technological limitations, and the need for future research. Finally, the panel invites further discussion to bridge the gap between emerging AI technologies, for example LLM-based systems, and principles of human-centered design in decision-making automation.
Panelists:
Annette Kluge, Dr. rer. nat., Full Professor for Industrial, Organizational and Business Psychology, Faculty of Psychology, Ruhr University Bochum, Germany
Research Topics: Human-centred AI, Change Management and ethical leadership for AI Implementation, Human-Technology Interaction
Stefan Kopp, Dr.-Ing., Full Professor of Cognitive and Socially Interactive Systems, Faculty of Technology, Center for Cognitive Interaction Technology (CITEC), Bielefeld University
Research Topics: Human-AI collaboration, Human-Aware AI, Computational Theory of Mind, Socially Interactive Assistants, Multimodal Conversational Agents
Michael Kozak, Senior Research Engineer, Trusted Intelligence Lab, Lockheed Martin Advanced Technology Laboratories
Research Topics: C2 of Autonomous Systems, Ethical Autonomy, Crewed-Uncrewed Teaming, Detection of Generative Media
Axel Schulte, Dr.-Ing., Full Professor of Flight Mechanics and Flight Guidance, University of the Armed Forces, Munich, since 2002; Director of the "Humans, Missions, and Cognitive Systems Laboratory (HuMiCS Lab)" of the Military Aviation Research Center (MARC)
Research Topics: Cognitive and cooperative automation for military cockpits and unmanned vehicle mission management, as well as Human-AI Teams (HAT)
Dirk Söffker, Dr.-Ing., Full Professor for Dynamics and Control, Engineering Faculty, University of Duisburg-Essen, Germany
Research Topics: Automation and Control, Safe and Reliable Systems, Human-Machine Interaction and Systems
Andreas Wendemuth, Dr. rer. nat., Professor for Cognitive Systems, Faculty of Electrical Engineering and Information Technology, Otto-von-Guericke University, Magdeburg, Germany
Research Topics: Human-Machine Interaction and Systems, Speech and Dialog Processing, Affective Computing
Statements (in alphabetic order):
Michael Kozak:
As autonomous systems become increasingly capable of reasoning on the edge, the debate around the ethicality of using such devices, especially future Lethal Autonomous Weapons Systems (LAWS), has only grown. The United States has taken a proactive approach toward identifying what criteria such a system would have to fulfill to be considered ethical to deploy, formally encoded in DoD Directive 3000.09, "Autonomy in Weapon Systems". Internationally, the UN General Assembly followed suit with a resolution promoting "safe, secure and trustworthy" artificial intelligence (AI) systems. In this panel, Mr. Kozak will discuss insights from two ongoing programs, one to develop an ethical autonomy capability and one to benchmark such a system, and the challenges facing both as they seek to advance the state of the art in the field of machine ethics on the battlefield.
Annette Kluge:
AI systems to support teamwork are accepted as valuable team members and tools when two prerequisites are given: a reasonable task-technology fit, in which the AI system fits the requirements of the team tasks, and ethical managerial behavior that considers the human side of AI implementation before and during technology introduction. Both concepts (team task-technology fit and ethical managerial behavior) will be introduced, their characteristics elaborated, and it will be discussed what needs to be considered in organizations to make use of the full potential of these concepts, to avoid the technology paradox (no positive correlation between investment in technology and increased productivity), and to increase job identity and wellbeing.
Dirk Söffker:
Ethical challenges arise within human-AI decision-making processes from experimental findings indicating that human decision making, as well as AI-supported human decision making, can be less reliable than purely AI-based decision making. The resulting questions are: How can AI support actually increase human decision-making performance, and under which conditions are (less reliable) human decisions still required? Dirk Söffker discusses reliability-based measurement results and the question of why humans should make the final decision even though it is statistically less reliable.
Stefan Kopp:
Stefan Kopp explores how collaborative activities between AI and humans unfold over time and what cognitive and communicative mechanisms are needed (from both the human and the AI) to achieve efficient and robust collaboration. The focus is on tasks that require dynamic decision-making, such as hypothesis-driven medical diagnosis with an AI-based reasoning support system, or fluid collaboration in (simulated) physical environments like a kitchen.
Axel Schulte:
Axel Schulte will focus on Cognitive Automation, i.e., the use of cognitive (AI-driven) agents that mediate human purpose to automated machines in the context of work processes. Two distinct design patterns of human-agent work sharing form the basis for further design considerations: delegation and assistance. In delegation mode, the human offloads certain portions of the cognitive work to the agent to reduce their workload or increase their span of control. In assistance mode, the agent provides structural redundancy to the human to help avoid errors, workload peaks, or situation awareness (SA) degradation. These HAT (Human-Agent Teaming) modes provide the technological basis for the control of multiple unmanned vehicles. Axel Schulte investigates issues of attention, workload, SA, trust, decision-making, and adaptive automation along the lines of so-called Manned-Unmanned Teaming (MUM-T) missions, in which pilots control numerous aerial vehicles from aboard a manned vehicle by means of AI-based agents. He will provide and discuss examples relevant to cognitive situation management.
Andreas Wendemuth:
In Human-AI Teaming, team performance relies on common grounding of humans and AI components: Do partners share the same cognitive models and procedural approaches? Can they interact based on mutual understanding of both technical workflows and signals of interaction? To this end, appropriate objective technical and affective measures ("key performance indicators") and cognitive constructs ("key quality indicators") have to be named to evaluate the teaming process and the degree of success. Involved behavioral parameters and interaction patterns have to be identified in both a supervised and self-learned manner.
Panel 3: Generative Media
Four years ago, generative media was the subject of academic research, and realistic media manipulation was the work of professionals. Three and a half years ago, ChatGPT was released to the public around the same time as DALL-E 2. The text in images was garbled, people looked more like movie monsters, and it was often easy to tell when media was generated. Fast forward to today, and the general populace can now create full video, with matching audio, entirely from a script that was itself produced from a prompt. Manipulating imagery is now as simple as using Adobe Generative Fill and the lasso tool to select what you want done and where, leaving AI to handle the rest. Pandora's box has been opened, and in less than half a decade photorealistic generative media is at the whole world's fingertips.
This technology has already begun to transform society, in some ways for the better and in other ways for the worse. As researchers and stewards of a better tomorrow, it is our responsibility not to fear and reject new technology, but to understand the good AND the bad, embracing the helpful parts and educating others about the harmful ones. This panel brings together academia and industry to provide a diverse set of perspectives on how generative media can be a tool for good, how society will need to adapt to protect itself from the bad, and to take a sneak peek at how this rapidly advancing field may look in the near future.
Panelists:
Prof. Daniel Moreira, Department of Computer Science, Loyola University Chicago, USA
Daniel Moreira received a Ph.D. degree in computer science from the Universidade Estadual de Campinas (UNICAMP), Brazil, in 2016, and joined the University of Notre Dame for the following six years, first as a post-doctoral fellow and later as an assistant research professor. He is currently a tenure-track assistant professor with the Department of Computer Science at Loyola University Chicago. He is an associate editor of the IEEE Transactions on Information Forensics and Security (T-IFS) and Elsevier Pattern Recognition journals, and is a former member of the IEEE Information Forensics and Security Technical Committee (IFS-TC), 2021-2023 term, and IEEE Signal Processing Society Education Center Editorial Board, 2022-2023 term. He was also the General Chair of the 11th ACM Workshop on Information Hiding and Multimedia Security. His research interests include scientific integrity, media forensics, machine learning, and biometrics. More at: https://danielmoreira.github.io.
Dr. David Luebke, Vice President of Graphics Research, NVIDIA Corporation
David Luebke helped found NVIDIA Research in 2006 after eight years on the faculty of the University of Virginia. Luebke received his Ph.D. under Fred Brooks at the University of North Carolina in 1998. His principal research interests are computer graphics, generative neural networks, and virtual reality. Luebke is a Fellow of the IEEE and a recent inductee into the IEEE VR Academy; other honors include the NVIDIA Distinguished Inventor award, the IEEE VR Technical Achievement Award, and the ACM Symposium on Interactive 3D Graphics "Test of Time Award". Dr. Luebke has co-authored a book, a major museum exhibit, and over two hundred papers, articles, chapters, and patents. More at: https://luebke.us/
Dr. Jill Crisman, Executive Director, Digital Safety Research Institute (DSRI), UL Research Institutes, USA
With a career that spans government, the private sector and academia, Crisman joins UL Research Institutes from the Office of the Undersecretary of Defense for Research and Engineering. There, she served as principal director for artificial intelligence and machine learning, responsible for developing the department's AI and machine learning research and development road map. Prior to this role, she served as chief scientist at the Department of Defense's Joint Artificial Intelligence Center. Earlier in her career, Crisman served as chief scientist at Next Century Corp.; as a senior program manager for the Intelligence Advanced Research Projects Activity; and as senior research scientist at Virginia-based Science Applications International Corp., where she received the organization's Technical Excellence Award. She also served as a founding faculty member at the Franklin W. Olin College of Engineering and an associate professor at Northeastern University in Massachusetts. Crisman earned a Ph.D. in electrical and computer engineering from Carnegie Mellon University in Pennsylvania. She is based in the Washington, D.C. area.
Panel Moderator:
Michael Kozak, Lockheed Martin Advanced Technology Laboratories, USA
Mr. Michael Kozak is a Senior Software Engineer within Lockheed Martin's Advanced Technology Laboratories (LM ATL). Michael has nearly 20 years of experience managing, designing, and developing technologies related to single- and multi-vehicle autonomy, planning and optimization, mission management, contingency management, crewed-uncrewed teaming, and media forensics. His research programs span DARPA, ONR, ARL, ARO, CTTSO, IARPA, and AFRL, ranging from low-TRL basic research programs to high-TRL demonstrations on live hardware at major events.
Most recently he worked as the Co-PI on the DARPA Semantic Forensics (SemaFor) program as part of the Systems Integration (TA2) team. This team was responsible for building the software architecture and user interfaces that connect analytics capable of detection, attribution, and characterization of generated and manipulated media across multiple modalities. Michael holds a Master's Degree in Computer Science from Drexel University.