Tutorial Proposals

Proposals are invited for half-day tutorials. Each proposal must include the tutorial title, an outline and description, and a biography of the tutorial instructor. Proposals are limited to one page and should be submitted by March 29, 2021.

For inquiries, please contact admin@cogsima.org.


Tutorial Program

Tutorial 1: Getting Support Right: User and Use System Testing using a Work-centered Approach

Instructors:
Ann Bisantz, Ph.D., Professor of Industrial and Systems Engineering, University at Buffalo, Buffalo, NY
Emilie Roth, Ph.D., Owner and Principal Scientist, Roth Cognitive Engineering, Stanford, CA

Time: TBD

Abstract: A critical component of any system design process is verification that the designed system meets operational objectives. Decision-support and similar aiding systems are designed based on goals of how they will improve human performance. In the cognitive engineering tradition, these goals are regarded as hypotheses that need to be tested. Robust user testing is required to uncover whether hypothesized benefits are realized, to identify unsupported aspects of performance, and to reveal unanticipated side-effects of introducing the new technology that need to be addressed. This need is particularly important in evaluating systems that support high-consequence, challenging work in environments characterized by both risk and uncertainty. This tutorial will introduce attendees to a robust method of system evaluation, Work-Centered Evaluation, that has been developed and refined through the design and evaluation of aiding systems in domains including military command and control, and health care. The methodology includes evaluation of the underlying model of work support as well as the surface features of the interface, through in-depth, scenario-based testing with system experts. The tutorial will provide detailed descriptions, as well as examples, of how to deploy Work-Centered Evaluation as part of the iterative systems design process.

Ann Bisantz

Instructor's biography: Ann M. Bisantz, Ph.D., is a Professor of Industrial and Systems Engineering at the University at Buffalo, The State University of New York. She has over twenty years of experience in research and applications in areas of cognitive engineering and interface design, particularly in the domains of health care and military command-and-control. Her focus includes methods of cognitive engineering; cognitive work analysis; trust in automation; decision-making modeling and support; and displaying uncertain information. Most recently, she has collaborated with the National Center for Human Factors in Healthcare on the design of displays intended to support shared communication and awareness among emergency medicine clinicians. She is a past recipient of an NSF CAREER Award and a SUNY Chancellor’s Award for Research and Creativity, and is a Fellow of the Human Factors and Ergonomics Society. Dr. Bisantz is the past chair of the Industrial and Systems Engineering department and since 2018 has served as Dean of Undergraduate Education for the University at Buffalo.

Emilie M. Roth

Instructor's biography: Emilie M. Roth, Ph.D. is owner and principal scientist of Roth Cognitive Engineering. She is a cognitive psychologist by training, and has over 30 years of experience in cognitive analysis and design in a variety of domains including nuclear power plant operations, railroad operations, military command and control, and healthcare. She has supported design of first-of-a-kind systems including next-generation nuclear power plant control rooms; and work-centered support systems for airlift planning and monitoring for USTRANSCOM and the Air Mobility Command. This has included development and execution of work-centered evaluations of prototype cognitive support systems across multiple domains. She is an associate editor of the Journal of Cognitive Engineering and Decision Making; a fellow of the Human Factors and Ergonomics Society; and currently serves as a member of the Board on Human-Systems Integration at the National Academies.


Tutorial 2: Interdependence and vulnerability in systems: Applying theory to define situations for autonomous systems

Instructor:
William F. Lawless, Professor of Mathematics, Sciences and Technology and Professor of Social Sciences, Paine College, USA

Time: TBD

Abstract: Interdependence is an umbrella term for the phenomena that transmit all social effects in the form of interference (e.g., the synergism that produces emergence; the dysergism that produces divorce, splits, and conflict; and the asynergism that destabilizes opponent defenses; together, these phenomena combine into forces that drive local change, organizational restructuring, or even political and possibly social evolution). In contrast to our theory of interdependence, social interdependence theory has a long and hopeful history, culminating in the homilies of peace and harmony that replace Darwin’s “survival of the fittest” with new ageism’s “survival of the friendliest.” Unfortunately, the results of this theory cannot be generalized. The primary weakness of social interdependence theory, developed largely with aggregations that sum the choices of individuals, is its limited ability to predict outcomes in natural social settings and, more relevantly, its inability to establish fundamental science and engineering relationships for the design, operational guidance, and metrics for the rapidly approaching age of autonomous human-machine teams and systems.
In contrast, by relying on the interdependent effects found in human-team studies (e.g., the best science research teams are highly interdependent), by relying on field studies, and by adopting state-dependent effects in theory (quantum-like), including Schrödinger’s and Lewin’s separately derived concept of the “whole being greater than the sum of its parts,” which fits nicely in modern Systems Engineering, our revised theory of interdependence has guided us to make several predictions, along with these supporting findings in the field, that:

  • redundant team members in teams and organizations impede performance and increase the likelihood of corruption and accidents;
  • boundaries mathematically distinguish Shannon’s information theory for factorable entities, where the joint entropy is at least as large as either marginal (H(A,B) ≥ H(A), H(B)), from Von Neumann’s non-factorable entities, known as subadditivity (S(A,B) ≤ S(A) + S(B)), accounting for Schrödinger’s and Lewin’s speculations (see the numerical sketch after this list);
  • an intelligent organism can act independently as an individual, or as the member of a group, but not both simultaneously, accounting for the failure of complementarity theory in social science and game theory, and forcing traditional game theory to adopt implicit preferences, reducing its ability to generalize;
  • the value of intelligence, in the form of a nation’s higher education for all of its citizens, increases the nation’s ability to innovate, as indicated by the number of productive patents it produces;
  • facing uncertainty, humans weigh the choice of a path forward by engaging in debate, supporting the argument of artificial intelligence (AI) scientists that machines must be able to express their intentions and actions in a causal language humans understand;
  • most situations are either well-defined, defined beforehand, or defined for competitors, raising the question of what steps should be taken when all paths forward are uncertain;
  • over-reliance on convergence processes, especially in computational systems, leads to poorer decisions, misleading conclusions, or possibly more accidents; and, with the discovery of an entirely new field of research, that:
  • a sense of vulnerability motivates teams and organizations to pursue avoidance behaviors (e.g., mergers or spin-offs), to engage in exploitative behavior (e.g., direct attacks among competitors), or to create vulnerability in opponents (e.g., with the use of deception).

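To make the information-theory bullet above concrete, here is a minimal numerical sketch (our illustration, not part of the tutorial materials; it assumes only NumPy) of the two inequalities: for a classical joint distribution, Shannon's joint entropy is at least as large as either marginal, H(A,B) ≥ H(A), H(B); for an entangled two-qubit Bell state, the Von Neumann entropy of the whole is smaller than that of its parts, even though subadditivity S(A,B) ≤ S(A) + S(B) still holds.

import numpy as np

def shannon(p):
    """Shannon entropy (in bits) of a probability vector."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def von_neumann(rho):
    """Von Neumann entropy (in bits) of a density matrix."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return -np.sum(evals * np.log2(evals))

# Classical (factorable) case: joint distribution of two correlated bits.
p_joint = np.array([[0.4, 0.1],
                    [0.1, 0.4]])
H_AB = shannon(p_joint.ravel())
H_A = shannon(p_joint.sum(axis=1))
H_B = shannon(p_joint.sum(axis=0))
print(f"H(A,B) = {H_AB:.3f} >= H(A) = {H_A:.3f}, H(B) = {H_B:.3f}")

# Quantum (non-factorable) case: a maximally entangled Bell state.
# The joint state is pure, so S(A,B) = 0, yet each reduced state is
# maximally mixed with S(A) = S(B) = 1 bit: the whole is not the sum
# of its parts, though subadditivity is still respected.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho_AB = np.outer(bell, bell)
rho_A = rho_AB.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)  # partial trace over B
S_AB = von_neumann(rho_AB)
S_A = von_neumann(rho_A)
print(f"S(A,B) = {S_AB:.3f} <= S(A) + S(B) = {2 * S_A:.3f}")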
The latter discovery, the sense of vulnerability in the “self” or promoted in the “other,” appears to be key to the survival of teams and systems in nature by promoting resilience, leanness, and adaptiveness.
As part of our ongoing program of research, we propose to advance the theory of interdependence by providing a mathematical model of vulnerability in a team or system: how it is identified or created, how it is exploited, and how to avoid the vulnerability that arises with a false sense of security derived from relying on convergence processes alone, both socially and computationally.
For this CogSIMA tutorial, we will cover the definition and uniqueness of interdependence, why it is difficult to handle in the laboratory, details of the supporting research noted above, mathematical models of autonomous human-machine teams, the new field of structural vulnerability, and some of the known problems that remain to be solved.

William Lawless

Instructor's biography: William F. Lawless was a mechanical engineer in charge of nuclear waste management in 1983 when he blew the whistle on the Department of Energy’s (DOE) mismanagement of its military radioactive wastes. For his PhD topic on group dynamics, he theorized about the causes of tragic mistakes made by large organizations with world-class scientists and engineers. After his PhD in 1992, DOE invited him to join its citizen advisory board (CAB) at DOE’s Savannah River Site (SRS), Aiken, SC. As a founding member of DOE's SRS CAB, he coauthored numerous recommendations on environmental remediation of radioactive wastes (e.g., the regulated closure in 1997 of the first two high-level radioactive waste tanks in the USA, and possibly the world). He was the SRS CAB co-technical advisor on incineration, 2000-03, and technical advisor in 2009. He was a member of the European Trustnet hazardous decisions group. He is a senior member of IEEE. His research today is on the metrics for, and entropy generation by, autonomous human-machine teams (A-HMT). He is the lead editor of five books (Springer 2016; 2017; CRC 2018; Elsevier 2019; 2020). He was the lead organizer of a six-article special issue on “human-machine teams and explainable AI” in AI Magazine (2019). He was a co-editor for the Naval Research & Development Enterprise (NRDE) Applied Artificial Intelligence Summit, October 2018, San Diego. He is on the Office of Naval Research's two Advisory Boards for the Science of Artificial Intelligence and Command Decision Making (2018-current). He has authored or co-authored over 80 articles and book chapters and over 150 peer-reviewed proceedings papers, and has received almost $2 million in research grants. He has co-organized twelve AAAI symposia at Stanford (2020: AI welcomes Systems Engineering: Towards the science of interdependence for autonomous human-machine teams; https://aaai.org/Symposia/Spring/sss20symposia.php#ss03).

News

We are accepting submissions! Please submit your paper here.

Nov 19: Dr. William D. Casebeer (Riverside Research’s Open Innovation Center, USA) will present a keynote address.
Nov 16: Deadline Extension: Paper submissions are due Jan 11, 2021.
Nov 16: Prof. Susan Stepney (University of York, UK) will present a keynote address on Computation as a dynamical system.
Nov 15: Prof. Ross Anderson (University of Cambridge, UK) will present a keynote address on Situational awareness and adversarial machine learning.
Nov 02: Prof. Ann Bisantz (University at Buffalo, USA) and Dr. Emilie Roth (Roth Cognitive Engineering, USA) will instruct a tutorial on Getting Support Right: User and Use System Testing using a Work-centered Approach.
Oct 21: Prof. William Lawless (Paine College, USA) will instruct a tutorial on Interdependence and vulnerability in systems: Applying theory to define situations for autonomous systems. 
Sep 24: The IEEE CogSIMA official website is online! 


Sponsors and Patrons

IEEE
SMC
ISIF


Related Conferences & Organizations