Keynotes

Keynote 1: Situational Awareness and Adversarial Machine Learning

Time: Monday, May 17 14:25-15:35 (GMT)

Ross Anderson
Professor of Security Engineering, Cambridge University, UK

Abstract: As the large complex systems used in applications from autonomous vehicles through network defence to factory automation acquire components that use machine-learning algorithms, situational awareness will become ever more important. Modern machine-learning systems are vulnerable to adversarial inputs, just as humans are vulnerable to deception. Humans, and animals too, manage such risks by being sensitive to the presence of adversaries and taking extra care when an attack is more likely. In recent research, we have been exploring how adversarial samples can be detected more easily than they can be blocked, allowing systems to fall back to more cautious modes of operation. The interaction between machine learning components and service-denial attacks is a fascinating subject that few have studied so far. In short, while classical system resilience may be seen in terms of layered defence and redundancy, that of machine-learning systems may be much more human. Combining the two intelligently could be a new frontier for research.
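
For readers unfamiliar with the detect-then-degrade idea sketched above, the short Python example below illustrates it in toy form. It is not taken from the speaker's work: the entropy-based detector, the threshold value and the fallback behaviour are all invented for illustration; a real system would use a proper adversarial-input detector.

# Illustrative sketch only (not the speaker's method): a classifier that
# monitors its own predictive uncertainty and falls back to a cautious mode
# when an input looks suspicious, rather than trying to block the attack.
import numpy as np

CAUTION_THRESHOLD = 1.0  # hypothetical entropy threshold (nats)

def predictive_entropy(probs: np.ndarray) -> float:
    """Shannon entropy of a probability vector; high entropy = low confidence."""
    probs = np.clip(probs, 1e-12, 1.0)
    return float(-(probs * np.log(probs)).sum())

def classify_with_fallback(probs: np.ndarray):
    """Return (label, mode): normal operation when confident,
    a cautious fallback (e.g. slow down, defer to a human) otherwise."""
    if predictive_entropy(probs) > CAUTION_THRESHOLD:
        return None, "cautious"      # suspected adversarial input: degrade gracefully
    return int(np.argmax(probs)), "normal"

# Example: a confident prediction vs. a suspiciously uncertain one
print(classify_with_fallback(np.array([0.97, 0.02, 0.01])))   # (0, 'normal')
print(classify_with_fallback(np.array([0.4, 0.35, 0.25])))    # (None, 'cautious')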


Bio: Ross Anderson is Professor of Security Engineering at Cambridge University. He was one of the founders of the discipline of security economics, and leads the Cambridge Cybercrime Centre, which collects and analyses data about online wickedness. He was also a pioneer of prepayment metering, powerline communications, peer-to-peer systems, hardware tamper-resistance and API security. He is a Fellow of the Royal Society and the Royal Academy of Engineering, as well as a winner of the Lovelace Medal – the UK's top award in computing. He is also the author of the standard textbook "Security Engineering – A Guide to Building Dependable Distributed Systems".


Keynote 2: Computation as a Dynamical System

Time: Tuesday, May 18 14:25-15:35 (GMT)

Susan Stepney
Professor of Computer Science and Director of the York Cross-disciplinary Centre for Systems Analysis, University of York, UK

Abstract: Computation is often thought of as a branch of discrete mathematics, using the Turing model. That model works well for conventional applications such as word processing, database transactions, and other discrete data processing applications. But much of the world’s computer power resides in embedded devices, sensing and controlling complex physical processes in the real world. Other computational models and paradigms might be better suited to such tasks. I will discuss treating computation as an open dynamical system, with a particular focus on reservoir computing in non-silicon devices, including our recent work on using magnetic materials as computational substrates. This approach can support smart processing "at the edge", allow a close integration of sensing and computing in a single conceptual model and physical package, and provide a uniform approach to embodying computation in other dynamical systems.
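
As a purely illustrative companion to the abstract, the Python sketch below implements the standard software version of reservoir computing (an echo state network): a fixed random dynamical system acts as the reservoir and only a linear readout is trained. In the work described in the talk, a physical substrate such as a magnetic material would play the role of the simulated reservoir; all sizes and parameters here are arbitrary.

# Illustrative sketch only: a tiny software echo state network.
# The reservoir is a fixed random dynamical system; only the linear
# readout is trained, here on a simple delayed-recall task.
import numpy as np

rng = np.random.default_rng(0)
N, steps = 100, 1000                     # reservoir size, sequence length
u = rng.uniform(-1, 1, steps)            # input signal
target = np.roll(u, 5)                   # task: recall the input 5 steps ago

W_in = rng.uniform(-0.5, 0.5, (N, 1))    # fixed input weights
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep spectral radius below 1

# Drive the reservoir and record its states
x = np.zeros(N)
states = np.zeros((steps, N))
for t in range(steps):
    x = np.tanh(W @ x + W_in[:, 0] * u[t])
    states[t] = x

# Train only the linear readout (ridge regression), discarding a warm-up period
washout, ridge = 100, 1e-6
S, y = states[washout:], target[washout:]
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(N), S.T @ y)

pred = S @ W_out
print("readout RMSE:", np.sqrt(np.mean((pred - y) ** 2)))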


Bio: Susan Stepney is Professor of Computer Science at the Department of Computer Science, University of York, and Director of the York Cross-disciplinary Centre for Systems Analysis. She holds a first-class honours degree in Natural Sciences (Theoretical Physics) and a PhD in Astrophysics from the University of Cambridge, and has industrial experience at GEC-Marconi and at Logica UK. She joined the University of York in 2002; her research covers unconventional computing, complex systems (their modelling, simulation and emergent properties), artificial life (the application of biological principles to engineering domains), natural computing, and reservoir computing in materio.


Keynote 3: Human-Machine Teaming: Evolution or Revolution, and the Ethical Dimensions of Cyborgs

Time: Wednesday, May 19 14:25-15:35 (GMT)

William D. Casebeer, PhD
Director, AI & ML Laboratory, Riverside Research Institute

Abstract: Whether as part of an "offset" strategy or as a subset of the machine learning revolution, human-machine teams are and will be a growing force in the delivery of capability for the Department of Defense community. Here, I examine "three waves" of human-machine teaming, asking what technological developments are enabling more integral cooperation between human soldiers and their smart tools than ever before. By looking at a few concrete examples of human-machine teaming technologies currently fielded or soon to be fielded, we can also drive a much-needed conversation about the ethical dimensions of the warrior-autonomy teaming enterprise. In particular, I will argue that in addition to thinking about the "ethics of autonomy," we need to give serious consideration to whether autonomous agents and the soldiers they serve can not only engage in morally praiseworthy conduct, but also actually improve decision quality from both the prudential and moral perspectives. The development of an "artificial conscience" to ensure that US Department of Defense and Allied human-machine teams make the best joint decisions they can remains an outstanding research priority. I conclude by briefly discussing what an artificial conscience would look like, and how we could develop one. Our defense needs and morality both require that we do so.


Bio: William D. Casebeer, PhD, MA, is Director of Artificial Intelligence and Machine Learning in Riverside Research’s Open Innovation Center. Bill’s lab uses next-generation technology to advance human-machine teaming, neuromorphic computing, object and activity classification and recognition, and defensive and offensive cyberwarfare capabilities. Bill has decades of experience leading interdisciplinary teams of scientists and engineers to creative solutions to pressing national security problems, including Director, Senior Director, and Program Manager roles at Scientific Systems Company, Inc., the Innovation Lab at Beyond Conflict, the Human Systems and Autonomy Lab at Lockheed Martin’s Advanced Technology Laboratories, and the Defense Advanced Research Projects Agency. Bill retired from active US Air Force duty as a Lieutenant Colonel and intelligence analyst in August 2011 and is a graduate of the Air Force Academy, the University of Arizona, the Naval Postgraduate School, and the University of California at San Diego.


Keynote 4: Towards Theory of Mind for Effective Human-Autonomy Teaming

Time: Thursday, May 20 14:25-15:35 (GMT)

Katia Sycara
Edward Fredkin Research Chair in Robotics, School of Computer Science at Carnegie Mellon University, USA

Abstract: Recent technological advances are increasingly enabling the introduction of intelligent autonomous systems in many areas of human endeavor, ranging from home and work to city streets. Humans interact with those systems in a variety of ways. This talk will focus on interactions that can be categorized as teamwork. A prevalent definition characterizes human teamwork as a set of interrelated reasoning, actions and behaviors of each team member that adaptively combine to fulfill team goals. Experimental evidence from high-performance human teams has identified a set of drivers of team effectiveness, such as team leadership, mutual performance monitoring and predictability, helping behaviors, adaptability, shared mental models and mutual trust. As technology enables increased machine autonomy, human-machine teaming could acquire the same characteristics as human-human teaming. In this talk, I will emphasize the importance of reasoning based on Theory of Mind, namely the ability to infer and predict latent characteristics of teammates, such as intent, situation awareness and mental states. I will discuss how Theory of Mind is a crucial underpinning of teamwork drivers. I will also present issues and challenges for approaches to Theory of Mind on the part of the autonomy and Theory of Machine on the part of human reasoning, and present our work in this area.
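
As a toy illustration of the Theory-of-Mind idea referred to above (inferring a teammate's latent intent from observed behaviour), the Python sketch below performs simple Bayesian updating over two hypothetical goals. The goals, actions and likelihoods are invented for illustration and do not represent the speaker's models.

# Illustrative sketch only: Theory-of-Mind-style intent inference cast as
# simple Bayesian updating. An agent maintains a belief over which of two
# goals a teammate is pursuing and updates it from observed actions.
goals = ["secure_area_A", "secure_area_B"]
belief = {g: 0.5 for g in goals}                       # uniform prior

# P(action | goal): how likely each observed move is under each goal (made up)
likelihood = {
    "move_toward_A": {"secure_area_A": 0.8, "secure_area_B": 0.2},
    "move_toward_B": {"secure_area_A": 0.2, "secure_area_B": 0.8},
}

def update(belief, action):
    """One Bayes update of the belief over the teammate's latent goal."""
    posterior = {g: belief[g] * likelihood[action][g] for g in belief}
    z = sum(posterior.values())
    return {g: p / z for g, p in posterior.items()}

for action in ["move_toward_A", "move_toward_A", "move_toward_B"]:
    belief = update(belief, action)
print(belief)   # belief now favours secure_area_A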


Bio: Katia Sycara holds the Edward Fredkin Research Chair in Robotics in the School of Computer Science at Carnegie Mellon University. She is also affiliated faculty in the departments of Machine Learning, Human Computer Interaction and Language Technologies at Carnegie Mellon. She is the Director of the Laboratory for Advanced Agents and Robotics Technology. She holds a B.S. in Applied Mathematics from Brown University and a PhD in Computer Science from the Georgia Institute of Technology, as well as an Honorary Doctorate from the University of the Aegean. Her research interests are in multi-agent and multi-robot systems, human-robot teaming and machine learning. She is a Fellow of the Institute of Electrical and Electronics Engineers (IEEE), a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI), the recipient of the ACM/SIGART Agents Research Award, and the recipient of the Lifetime Research Award of the Institute for Operations Research and the Management Sciences (INFORMS), GDN section. She has led multimillion-dollar research efforts sponsored by industry and multiple government agencies, and has participated in several government studies, evaluation panels for large government programs, and industry scientific advisory boards, as well as multiple standards committees and scientific proposal review panels. She has received two influential 10-year paper awards and multiple best paper awards. She has given numerous invited talks, chaired or served on the program committees of a large number of conferences, and authored more than 700 technical papers in top journals and peer-reviewed conferences. She is a founding member of the International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS) and a founding member of the Semantic Web Science Association. She is a founding Editor-in-Chief of the journal “Autonomous Agents and Multi-Agent Systems” and currently serves on the editorial boards of five additional journals.

News

Registration is now open.
Registration fees and info are available here
Author registration deadline is March 31. Please register here

New Conference Dates: May 14-22, 2021.

Mar 26: Prof. Peeter Lorents will instruct a tutorial on The Journey from Similar to Plausible Situations: Human and Mathematical Aspects.
Feb 22: Dr. Gabriel Jakobson (CyberGem Consulting, USA) will instruct a tutorial on Introduction to Mission-Centric Cyber Security Situation Management.
Jan 7: Prof. Katia Sycara (Carnegie Mellon University, USA) will present a keynote address. 
Nov 19: Dr. William D. Casebeer (Riverside Research’s Open Innovation Center, USA) will present a keynote address on Human-Machine Teaming: Evolution or Revolution, and the Ethical Dimensions of Cyborgs.
Nov 16: Deadline Extension: Paper submissions are due Jan 11, 2021.
Nov 16: Prof. Susan Stepney (University of York, UK) will present a keynote address on Computation as a Dynamical System.
Nov 15: Prof. Ross Anderson (University of Cambridge, UK) will present a keynote address on Situational Awareness and Adversarial Machine Learning.
Nov 02: Prof. Ann Bisantz (University at Buffalo, USA) and Dr. Emilie Roth (Roth Cognitive Engineering, USA) will instruct a tutorial on Getting Support Right: User and Use System Testing using a Work-centered Approach.
Oct 21: Prof. William Lawless (Paine College, USA) will instruct a tutorial on Interdependence and vulnerability in systems: Applying theory to define situations for autonomous systems. 
Sep 24: The IEEE CogSIMA official website is online! 


Sponsors and Patrons

IEEE
IEEE Systems, Man, and Cybernetics Society (SMC)
International Society of Information Fusion (ISIF)
Tallinn University of Technology (TalTech)


Related Conferences & Organizations