For full conference details, visit the IEEE SysCon 2017 website: http://2017.ieeesyscon.org

Program for 2017 Annual IEEE International Systems Conference (SysCon)

Rooms: Le Café Conc; Le Café Conc, "A" Level; Maisonneuve B; Maisonneuve C; Maisonneuve E (tutorial attendees only); Maisonneuve F; Salle de Bal Foyer; Salon de Bal, "A" Level of Marriott Hotel; Salon Neufchatel; Salon Terasse; Salon Viger A; Salon Viger B; Salon Viger C

Monday, April 24

07:00-17:00  Tutorial Registration
08:00-10:00  1A1: Effective SE Communication through Models and Representations; 1A2: Services Science and Services Computing
09:00-12:00  INCOSE Exam
10:00-10:15  Break
10:15-12:00  1B1: Effective SE Communication through Models and Representations; 1B2: Services Science and Services Computing
12:00-13:00  Lunch
13:00-15:00  1C1: System Security Engineering; 1C2: Building Learning Experiences with the SERC SE Experience Accelerator; 1C3: Future Communication Networks Modeling and Analysis Tools inspired by Complex Systems Science
15:00-15:15  Break
15:15-17:00  1D1: System Security Engineering; 1D2: Building Learning Experiences with the SERC SE Experience Accelerator; 1D3: Future Communication Networks Modeling and Analysis Tools inspired by Complex Systems Science

Tuesday, April 25

08:15-08:30  Opening Remarks
08:30-09:30  Keynote Speaker
09:30-10:00  Break
10:00-12:00  Executive Plenary Panel
12:00-13:30  Lunch
13:30-15:00  2C1: Transportation Systems; 2C2: Space and Communication Systems; 2C3: Model-Based Systems Engineering I; 2C4: Research in Systems Engineering I; 2C5: Modeling and Simulation I; 2C6: INCOSE
15:00-15:30  Break
15:30-17:00  2D1: Medical Systems; 2D2: Cloud Computing; 2D3: Model-Based Systems Engineering II; 2D4: Research in Systems Engineering II; 2D5: Defense Systems; 2D6: INCOSE
17:30-20:30  Reception; Young Professionals Networking Event
18:30-19:00  Analytics and Risk Technical Committee Meeting

Wednesday, April 26

08:00-09:30  3A1: System Architecture; 3A2: Decision Making Systems I; 3A3: Sensors Integration and Applications I; 3A4: Research in Systems Engineering II; 3A6: INCOSE
09:30-10:00  Break
10:00-11:30  3B1: Robotic Systems; 3B2: Decision Making Systems II; 3B3: Sensors Integration and Applications II; 3B4: Aerial Systems; 3B5: Systems Thinking; 3B6: INCOSE
11:30-13:00  Best Papers Awards Luncheon
13:00-14:30  3C1: Cyber Security Issues; 3C2: Decision Making Systems III; 3C3: Sensors Integration and Applications III; 3C5: Gaming, Entertainment and Sensor Systems; 3C6: INCOSE
14:30-15:00  Break
15:00-16:30  3D1: Autonomous Systems I; 3D2: Systems Verification and Validation; 3D3: Modeling and Simulation II; 3D4: Energy Management and Sustainability II; 3D5: Model-Based Systems Engineering III; 3D6: THEFOSE

Thursday, April 27

08:00-09:30  4A1: Autonomous Systems II; 4A2: Complex Systems Issues I; 4A3: Energy Management and Sustainability III; 4A4: Engineering Systems-of-Systems I; 4A5: THEFOSE
09:30-10:00  Break
10:00-11:30  4B1: Large-Scale Systems Integration; 4B2: Complex Systems Issues II; 4B4: Engineering Systems-of-Systems II; 4B5: INCOSE

Monday, April 24

Monday, April 24, 07:00 - 17:00

Tutorial Registration

Room: Maisonneuve E (tutorial attendees only)

Monday, April 24, 08:00 - 10:00

1A1: Effective SE Communication through Models and Representations

Ronald Kratzke, Vitech Corporation
Room: Maisonneuve B

Models and representations have always been cornerstones of engineering, systems engineering included. Regrettably, rather than bringing clarity, the rise of model-based systems engineering has brought increased confusion and conflict regarding models and representations. Given the inherent breadth of systems engineering as we connect stakeholders and technical experts, we require the richest representation set possible. Rather than engaging in religious wars, we must continuously seek to expand our engineering toolkit to better understand, analyze, and communicate. And we must seek to integrate these seemingly diverse representations as perspectives of an underlying systems model rather than as distinct products and endpoints themselves. Surveying the multitude of system representations available - SysML and traditional, logical and physical, contextual and technical, systems and beyond - we will connect a diverse set of representations to each other and, most importantly, to the common underlying model. We will highlight various representations, each with their specific content and strengths. These strengths lead to preferred usage contexts and scenarios as part of a continuum of perspectives on the systems model. Understanding the contexts and scenarios, we will review content, notation, usage, analytical value, communication value, and target audiences. Leveraging the strengths of each representation, we will learn the constructive role these representations can play in a customizable, coherent, and powerful toolkit to address the systems challenges of today.

1A2: Services Science and Services Computing

Shrisha Rao, International Institute of Information Technology, Bangalore, India
Room: Maisonneuve C

New models of computation such as cloud computing, Big Data, and the Internet of Things have fundamentally upended common assumptions about the nature and purposes of computation. One thing that may be said about these and similar paradigms is that they almost always require computation to be provided as a service to some entity seeking a larger end, rather than regarding the computation as an end in itself. Services also come with their own set of challenges; e.g., services often require extremely complex systems in order to work well, and they are typically more difficult to create and manage than products. Services almost always require humans in the loop for critical functions. Yet classical thinking and research directions give few insights into how practitioners may understand how computation can be fashioned to work as a component of a service existing in a larger business or social context.

Monday, April 24, 09:00 - 12:00

INCOSE Exam

Room: Maisonneuve F

Monday, April 24, 10:00 - 10:15

Break

Monday, April 24, 10:15 - 12:00

1B1: Effective SE Communication through Models and Representations

Ronald Kratzke, Vitech Corporation
Room: Maisonneuve B

Models and representations have always been cornerstones of engineering, systems engineering included. Regrettably, rather than bringing clarity, the rise of model-based systems engineering has brought increased confusion and conflict regarding models and representations. Given the inherent breadth of systems engineering as we connect stakeholders and technical experts, we require the richest representation set possible. Rather than engaging in religious wars, we must continuously seek to expand our engineering toolkit to better understand, analyze, and communicate. And we must seek to integrate these seemingly diverse representations as perspectives of an underlying systems model rather than as distinct products and endpoints themselves. Surveying the multitude of system representations available - SysML and traditional, logical and physical, contextual and technical, systems and beyond - we will connect a diverse set of representations to each other and, most importantly, to the common underlying model. We will highlight various representations, each with their specific content and strengths. These strengths lead to preferred usage contexts and scenarios as part of a continuum of perspectives on the systems model. Understanding the contexts and scenarios, we will review content, notation, usage, analytical value, communication value, and target audiences. Leveraging the strengths of each representation, we will learn the constructive role these representations can play in a customizable, coherent, and powerful toolkit to address the systems challenges of today.

1B2: Services Science and Services Computing

Shrisha Rao, International Institute of Information Technology, Bangalore, India
Room: Maisonneuve C

New models of computation such as cloud computing, Big Data, and the Internet of Things have fundamentally upended common assumptions about the nature and purposes of computation. One thing that may be said about these and similar paradigms is that they almost always require computation to be provided as a service to some entity seeking a larger end, rather than regarding the computation as an end in itself. Services also come with their own set of challenges; e.g., services often require extremely complex systems in order to work well, and they are typically more difficult to create and manage than products. Services almost always require humans in the loop for critical functions. Yet classical thinking and research directions give few insights into how practitioners may understand how computation can be fashioned to work as a component of a service existing in a larger business or social context.

Monday, April 24, 12:00 - 13:00

Lunch

Room: Maisonneuve E (tutorial attendees only)

Monday, April 24, 13:00 - 15:00

1C1: System Security Engineering

Logan Mailloux, Air Force Institute of Technology & United States Air Force, USA
Room: Maisonneuve B

This tutorial provides a detailed introduction to System Security Engineering (SSE), a specialty domain of systems engineering responsible for identifying and managing security vulnerabilities through the application of SSE processes, activities, and tasks. An approach to SSE is presented which focuses on integrating security throughout the entire system life cycle based on the recently released NIST Special Publication 800-160 (final scheduled for December 2016). Participants will be taught the basic concepts of SSE with a focus on the applicability of NIST's SSE processes, activities, and tasks for different types of systems. This tutorial is applicable to those involved in systems engineering, and more generally, anyone involved in the acquisition of complex systems.

1C2: Building Learning Experiences with the SERC SE Experience Accelerator

Richard Turner, Stevens Institute of Technology, USA; Doug Bodner, Georgia Institute of Technology, USA; Yvette Rodriguez, Defense Acquisition University, USA; Jon Wade, Stevens Institute of Technology, USA; Peizhu Zhang, Stevens Institute of Technology, USA
Room: Maisonneuve C

Building Learning Experiences with the SERC SE Experience Accelerator (1-day tutorial)

Traditionally, SEs develop deep knowledge by working for extended periods of time with people from multiple domains, systems, subsystems, and disciplines. However, due to changing demographics, it is now more likely that many SEs are young, with little experience, or more mature, but without sufficient specific experience in some domains. It may take years and many projects for a new systems engineer to encounter, consider, attempt to solve, and see the results of implemented solutions. The type and depth of the learning experience is constrained by the types of projects and lifecycle phases available when transition begins.

The Systems Engineering Research Center's (SERC) Systems Engineering Experience Accelerator (SEEA) project has been addressing this problem through the use of immersive, game-like experiences where a learner can encounter a variety of realistic situations and attempt to resolve them using their existing experience as well as "fail-fast, learn-fast" experimentation. The framework is directed toward rapidly building experience for systems engineers. It builds on and amplifies the mentoring activities of normal technical management by providing new ways to identify, characterize, and transfer experiences from mentor to mentee.

The EA software is an open-source platform available at no cost in a variety of formats. While some experiences have been developed and shared, to most effectively use the tool, organizations need to understand the framework and tools and be able to create and modify experiences. This tutorial consists of a short introduction to the EA concept, history, and operational framework, followed by a hands-on workshop that allows attendees to try out an existing experience and to use the SEEA framework and tools to create a short experience for their own environment.

1C3: Future Communication Networks Modeling and Analysis Tools inspired by Complex Systems Science

M. Majid Butt, Trinity College Dublin, Ireland; Irene Macaluso, Trinity College Dublin, Ireland; Nicola Marchetti, CTVR Trinity College, Ireland
Room: Maisonneuve F

The main goal of the tutorial is to introduce the audience to a framework that draws on concepts of an information theoretical and complex systems science nature (e.g., excess entropy, signalling complexity, neural complexity) to underpin a new approach to communication networks. Through this framework we will discuss possible modeling tools to expedite innovation throughout telecommunications, by revitalising thinking in this area through the influx of methods from complex systems science to revamp the conceptualization of wireless networks. Development of our framework will proceed in a layered fashion, with a modelling layer forming the foundation of the framework, supporting an analysis layer. The modelling phase will introduce techniques to capture the significant attributes of telecommunications networks and the interactions that shape them through the application of tools such as agent-based modelling and graph theory abstractions to derive new metrics that holistically describe a network. The analysis phase completes the core functionality of our framework by linking the complex systems science inspired metrics to overall network performance. In order to maximize the relevance of our framework to the telecom research and industry communities, the scenarios and use cases we will discuss are rooted in the most relevant, near-future architectures and use cases in 5G communication networks, such as dense small cell deployments, cognitive mobile broadband networks, Internet of Things and sensor networks.
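Excess entropy, one of the complex-systems metrics the tutorial names, can be estimated for a symbol sequence from block entropies. The following finite-block estimator is our own illustrative sketch (not material from the tutorial), using the crude approximation E ≈ H(L) − L·h with the entropy rate h taken from the increment H(L) − H(L−1); for a period-2 signal it recovers the textbook value of about 1 bit.

```python
from collections import Counter
from math import log2

def block_entropy(seq, L):
    """Shannon entropy (bits) of the length-L blocks of a symbol sequence."""
    blocks = [tuple(seq[i:i + L]) for i in range(len(seq) - L + 1)]
    counts = Counter(blocks)
    n = len(blocks)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def excess_entropy_estimate(seq, L=4):
    """Crude finite-L estimator: E ~ H(L) - L * h, with the entropy
    rate h approximated by the increment H(L) - H(L-1)."""
    h = block_entropy(seq, L) - block_entropy(seq, L - 1)
    return block_entropy(seq, L) - L * h

# A perfectly periodic signal carries structure (1 bit of phase
# information) but zero entropy rate:
periodic = [0, 1] * 200
e = excess_entropy_estimate(periodic)   # close to 1.0 bit
```

Richer estimators fit h from many block lengths; this two-point version is only meant to show how the metric connects to observable network traces.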

Monday, April 24, 15:00 - 15:15

Break

Monday, April 24, 15:15 - 17:00

1D1: System Security Engineering

Logan Mailloux, Air Force Institute of Technology & United States Air Force, USA
Room: Maisonneuve B

This tutorial provides a detailed introduction to System Security Engineering (SSE), a specialty domain of systems engineering responsible for identifying and managing security vulnerabilities through the application of SSE processes, activities, and tasks. An approach to SSE is presented which focuses on integrating security throughout the entire system life cycle based on the recently released NIST Special Publication 800-160 (final scheduled for December 2016). Participants will be taught the basic concepts of SSE with a focus on the applicability of NIST's SSE processes, activities, and tasks for different types of systems. This tutorial is applicable to those involved in systems engineering, and more generally, anyone involved in the acquisition of complex systems.

1D2: Building Learning Experiences with the SERC SE Experience Accelerator

Richard Turner, Stevens Institute of Technology, USA; Doug Bodner, Georgia Institute of Technology, USA; Yvette Rodriguez, Defense Acquisition University, USA; Jon Wade, Stevens Institute of Technology, USA; Peizhu Zhang, Stevens Institute of Technology, USA
Room: Maisonneuve C

Building Learning Experiences with the SERC SE Experience Accelerator (1-day tutorial)

Traditionally, SEs develop deep knowledge by working for extended periods of time with people from multiple domains, systems, subsystems, and disciplines. However, due to changing demographics, it is now more likely that many SEs are young, with little experience, or more mature, but without sufficient specific experience in some domains. It may take years and many projects for a new systems engineer to encounter, consider, attempt to solve, and see the results of implemented solutions. The type and depth of the learning experience is constrained by the types of projects and lifecycle phases available when transition begins.

The Systems Engineering Research Center's (SERC) Systems Engineering Experience Accelerator (SEEA) project has been addressing this problem through the use of immersive, game-like experiences where a learner can encounter a variety of realistic situations and attempt to resolve them using their existing experience as well as "fail-fast, learn-fast" experimentation. The framework is directed toward rapidly building experience for systems engineers. It builds on and amplifies the mentoring activities of normal technical management by providing new ways to identify, characterize, and transfer experiences from mentor to mentee.

The EA software is an open-source platform available at no cost in a variety of formats. While some experiences have been developed and shared, to most effectively use the tool, organizations need to understand the framework and tools and be able to create and modify experiences. This tutorial consists of a short introduction to the EA concept, history, and operational framework, followed by a hands-on workshop that allows attendees to try out an existing experience and to use the SEEA framework and tools to create a short experience for their own environment.

1D3: Future Communication Networks Modeling and Analysis Tools inspired by Complex Systems Science

M. Majid Butt, Trinity College Dublin, Ireland; Irene Macaluso, Trinity College Dublin, Ireland; Nicola Marchetti, CTVR Trinity College, Ireland
Room: Maisonneuve F

The main goal of the tutorial is to introduce the audience to a framework that draws on concepts of an information theoretical and complex systems science nature (e.g., excess entropy, signalling complexity, neural complexity) to underpin a new approach to communication networks. Through this framework we will discuss possible modeling tools to expedite innovation throughout telecommunications, by revitalising thinking in this area through the influx of methods from complex systems science to revamp the conceptualization of wireless networks. Development of our framework will proceed in a layered fashion, with a modelling layer forming the foundation of the framework, supporting an analysis layer. The modelling phase will introduce techniques to capture the significant attributes of telecommunications networks and the interactions that shape them through the application of tools such as agent-based modelling and graph theory abstractions to derive new metrics that holistically describe a network. The analysis phase completes the core functionality of our framework by linking the complex systems science inspired metrics to overall network performance. In order to maximize the relevance of our framework to the telecom research and industry communities, the scenarios and use cases we will discuss are rooted in the most relevant, near-future architectures and use cases in 5G communication networks, such as dense small cell deployments, cognitive mobile broadband networks, Internet of Things and sensor networks.

Tuesday, April 25

Tuesday, April 25, 08:15 - 08:30

Opening Remarks

Room: Le Café Conc

Tuesday, April 25, 08:30 - 09:30

Keynote Speaker

Room: Le Café Conc

Tuesday, April 25, 09:30 - 10:00

Break

Room: Le Café Conc

Tuesday, April 25, 10:00 - 12:00

Executive Plenary Panel

Room: Le Café Conc

Tuesday, April 25, 12:00 - 13:30

Lunch

Room: Salon de Bal, "A" Level of Marriott Hotel

Tuesday, April 25, 13:30 - 15:00

2C1: Transportation Systems

Room: Salon Viger C
Chair: Aldo Fabregas (Florida Institute of Technology, USA)
13:30 Segmented Arrival Graph based Evacuation Plan Assessment Algorithm Using Linear Programming
Manki Min (South Dakota State University, USA); Sunho Lim (Texas Tech University, USA)
The evacuation routing problem is NP-hard, and hence no polynomial-time algorithm is known for it. There have been many studies of heuristic evacuation planning algorithms, but an in-depth comparison among them, especially in terms of the performance ratio, is not available. Without knowing the quality of a heuristic solution with respect to the optimal solution, we cannot accurately determine each heuristic algorithm's strengths and weaknesses, knowledge that could lead to a better combination of heuristic algorithms. In this paper we present a Linear Programming (LP) based iterative algorithm to assess evacuation planning algorithms in terms of evacuation time. The proposed algorithm takes the paths that are found and/or used by a heuristic algorithm as input and finds the minimum evacuation time using only those paths. To obtain the optimal solution, our algorithm repeatedly solves relaxed LP formulations built on the concept of segmented arrival graphs. We segment the arrival graphs of paths based on edge sharing and construct the LP formulations along the segmented edges of the graphs. Computational experiments show that the proposed algorithm computes the solution in time comparable to that of the heuristic algorithms as long as the number of paths is adequately maintained. Using the algorithm, we compare the existing heuristic algorithms CCRP++, SMP, EET, and FBSP.
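The core question — what is the minimum evacuation time achievable with a fixed set of paths? — can be illustrated with a deliberately simplified model. The sketch below is our own (it is not the authors' LP over segmented arrival graphs): it assumes each path has a fixed flow capacity and travel time, ignores edge sharing between paths, and binary-searches the smallest horizon that evacuates everyone.

```python
def evacuees_by(T, paths):
    # paths: list of (capacity_per_min, travel_time_min) pairs.
    # A path delivers capacity * (T - travel_time) people by time T
    # (and nobody before its first evacuee arrives).
    return sum(c * max(0.0, T - t) for c, t in paths)

def min_evacuation_time(n_evacuees, paths, tol=1e-6):
    """Binary-search the smallest horizon T with evacuees_by(T) >= n."""
    lo, hi = 0.0, 1.0
    while evacuees_by(hi, paths) < n_evacuees:   # find an upper bound
        hi *= 2.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if evacuees_by(mid, paths) >= n_evacuees:
            hi = mid
        else:
            lo = mid
    return hi

# Example: 100 evacuees, two paths (10/min after 5 min; 5/min after 8 min).
paths = [(10.0, 5.0), (5.0, 8.0)]
T = min_evacuation_time(100.0, paths)   # 15T - 90 = 100, so T = 190/15
```

The paper's contribution is precisely the part this sketch omits: when paths share edges, their capacities interact over time, which is what the segmented-arrival-graph LP formulations capture.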
14:00 A High-Speed Color-Based Object Detection Algorithm for Quayside Crane Operator Assistance System
Xiang Gao and Hen-Geul Yeh (California State University Long Beach, USA); Panadda Marayong (California State University, Long Beach, USA)
Improvements to user interface technology for port crane operators can lead to safer and more ergonomic environments for cargo transport. An accurate and responsive container-handling guidance system can increase productivity and reduce costs. In this work, a vision-based assistive system for quayside crane operators is developed for collision warning. The system applies a new object edge detection algorithm, called Edge Approaching, to achieve a faster detection rate in real time on a stand-alone embedded system that can be easily integrated into an existing crane interface. Experiments are conducted on a scaled testbed to validate the concept. The proposed algorithm significantly increases the detection rate compared to the conventional Canny edge detection and Hough transform method, while maintaining a high accuracy rate of 99%.
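The color-based detection idea can be sketched in a few lines. This is a generic color-threshold bounding-box detector of our own, not the paper's Edge Approaching algorithm; the image format and tolerance are illustrative assumptions.

```python
def detect_color_bbox(image, target, tol=30):
    """Return (min_row, min_col, max_row, max_col) of pixels whose RGB
    value is within `tol` of `target` per channel, or None if absent.
    `image` is a list of rows of (r, g, b) tuples."""
    hits = [(r, c)
            for r, row in enumerate(image)
            for c, px in enumerate(row)
            if all(abs(px[k] - target[k]) <= tol for k in range(3))]
    if not hits:
        return None
    rows = [h[0] for h in hits]
    cols = [h[1] for h in hits]
    return (min(rows), min(cols), max(rows), max(cols))

# Tiny synthetic frame: a 2x2 red "container" on a gray background.
gray, red = (128, 128, 128), (200, 40, 40)
frame = [[gray] * 5 for _ in range(4)]
for r in (1, 2):
    for c in (2, 3):
        frame[r][c] = red
bbox = detect_color_bbox(frame, red)   # (1, 2, 2, 3)
```

A real-time system would run such a screen on an embedded board frame-by-frame and raise a collision warning when the box approaches a keep-out region.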
14:30 Application-Driven Traffic Sensor System Acceptance Tests for Intelligent Transportation Systems
Thiago Goncalves Pereira Mendonca, Aldo Fabregas and Troy Nguyen (Florida Institute of Technology, USA)
Transportation agencies strive to keep people and goods moving, and operation and maintenance of transportation infrastructure is key to accomplishing that objective. Intelligent Transportation Systems (ITS) applications rely on massive detection grids that collectively demand significant maintenance resources. Resource constraints force transportation agencies to look for innovative ways to optimize their operational and maintenance costs while serving their users at the intended performance levels. This paper presents a Systems Engineering (SE) view for deriving traffic detection sensor requirements for components and subsystems based on application-specific needs. The goal of the approach is to obtain the stakeholders' view of acceptable performance based on the top-level functionality of a given ITS application. The proposed approach uses a simulation model to obtain quantitative evidence of the minimum performance required of the detection system.
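The general pattern — simulate the application, then back out the weakest sensor that still meets the top-level requirement — can be sketched as follows. All numbers, names, and the vehicle-counting scenario are our own illustrative assumptions, not the paper's model.

```python
import random

def count_error(p_detect, n_vehicles=1000, trials=200, seed=1):
    """Mean relative counting error of a sensor that detects each
    passing vehicle independently with probability p_detect."""
    rng = random.Random(seed)
    errs = []
    for _ in range(trials):
        detected = sum(rng.random() < p_detect for _ in range(n_vehicles))
        errs.append(abs(detected - n_vehicles) / n_vehicles)
    return sum(errs) / trials

def min_acceptable_accuracy(max_error, step=0.01):
    """Lowest per-vehicle detection probability whose simulated count
    error still meets the application-level requirement."""
    p = 0.5
    while p <= 1.0:
        if count_error(p) <= max_error:
            return round(p, 2)
        p += step
    return None

# Application requirement: counts within 5% -> derived sensor spec.
threshold = min_acceptable_accuracy(0.05)
```

The derived `threshold` would become the acceptance-test criterion for the detector subsystem, traceable to the ITS application's performance requirement.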

2C2: Space and Communication Systems

Room: Salon Viger B
Chair: Paul C. Hershey (Raytheon, Inc., USA)
13:30 System for Small Satellite Onboard Processing
Paul C. Hershey (Raytheon, Inc., USA); William Wolpe, Jeffrey Klein and Charlotte Dekeyrel (Raytheon, USA)
Small satellites (Small Sats), i.e., satellites weighing less than 100 kg, are attracting interest in the Department of Defense (DoD), the Intelligence Community (IC), and the commercial market for space-based data collection (e.g., imaging) and communication (e.g., links or relays). In fact, many emerging missions use constellations of many (e.g., greater than 100) Small Sats for these purposes. Small Sats can be used for communications, space situational awareness, and Intelligence, Surveillance, and Reconnaissance (ISR). Advantages of Small Sats over larger commercial and military satellites include cost and deployment time. For example, costs for larger satellites range from $50M to $500M, versus Small Sat costs that range from $1M to $10M. Deployment timelines for larger commercial and military satellites can extend to 8 years, whereas Small Sat deployment is achievable within a 24-hour period. The problem addressed by this paper is that of deriving a system that enables Small Sats, considering size, weight, and power (SWaP) constraints, to collect mission-critical sensor data, and then efficiently process these data and transmit the derived decision-relevant information to operators. We call this system the System for Small Sat Onboard Processing (S3OP). S3OP results show that reducing data to information onboard the Small Sat saves collection costs, downlink transponder bandwidth costs, and time and expense for operators on the ground who are responsible for time-critical decisions for their respective missions. The approach presented in this paper is to equip the Small Sat with an ISR sensor (such as an EO/IR camera that collects raw imagery) that observes objects of interest, collects the desired data (e.g., imagery related to these objects), and then processes these data onboard the Small Sat platform in order to reduce the amount of data for transmission over communications links (thereby reducing bandwidth) to end users on the ground.
These components comprise S3OP. Enabling technologies to achieve S3OP include low-SWaP sensors, Field Programmable Gate Arrays (FPGAs) that provide the flexibility and adaptability of software with the speed of hardware, and algorithm chaining to efficiently apply data reduction algorithms and optimize algorithm processing through High Performance Computing (HPC) principles. S3OP concatenates multiple diverse algorithms into a logical algorithm chain and derives a reduced set of data that is then transmitted to end users on the ground, requiring lower-bandwidth communications links. An example of algorithm chaining is to concatenate 5 diverse algorithms sequentially with respect to their ability to screen image metadata efficiently. For this example, the algorithm sequence could be determined by factors such as the ability of the algorithm to screen images based on: 1. the size of the image (e.g., bytes); 2. the quality of the image (e.g., NIIRS rating); 3. the ability to process object tracking data (e.g., the path of an object between source and destination); 4. whether the image includes land-based objects (e.g., vehicles on a road) versus water-based objects (e.g., ships on the ocean); and 5. the ability to determine change detection between successive images. Control inputs to this algorithm chain include accuracy thresholds (acceptable for the mission) for each algorithm; these yield confidence levels and other metrics. The first implementation of the S3OP system uses a small electro-optical (EO) camera, along with a processing board that includes a Xilinx Zynq-7000 FPGA-based SoC; this family combines a dual ARM® Cortex™-A9 MPCore™ with a Xilinx Kintex-7 FPGA. Testing was done with a set of 100 images drawn from a diverse set of databases to create a blend of collected sensor images, including small 200x200-pixel images and large 14000x16000-pixel images.
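Algorithm chaining of the kind described — cheap screens first, expensive ones on whatever survives — can be sketched generically. The filter functions and metadata fields below are invented for illustration; they are not S3OP's actual algorithms.

```python
def chain_filters(records, filters):
    """Apply data-reduction filters in sequence; each filter keeps only
    the records that pass, so cheap screens run on the most data and
    expensive ones on the least."""
    for name, keep in filters:
        records = [r for r in records if keep(r)]
    return records

# Hypothetical image-metadata records (fields invented for illustration).
images = [
    {"id": 1, "bytes": 4_000_000, "quality": 6, "changed": True},
    {"id": 2, "bytes": 90_000_000, "quality": 8, "changed": True},
    {"id": 3, "bytes": 2_500_000, "quality": 3, "changed": True},
    {"id": 4, "bytes": 3_000_000, "quality": 7, "changed": False},
]

pipeline = [
    ("size screen", lambda r: r["bytes"] < 50_000_000),    # cheapest first
    ("quality screen", lambda r: r["quality"] >= 5),
    ("change detection", lambda r: r["changed"]),          # costliest last
]
kept = chain_filters(images, pipeline)   # only image 1 survives
```

Ordering the chain by screening cost is the design choice the abstract describes: each stage's accuracy threshold is a control input, and only the reduced record set is downlinked.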
14:00 Battlespace Communications Network-of-Network Interface Modelling
Mansoor Syed (Capability, Acquisition and Sustainment Group (CASG), Australia); Peter Pong (Jacobs Australia, Australia); Bob Hutchinson (Capability, Acquisition and Sustainment Group (CASG), Australia)
This work focuses on the proposed Network-of-Networks (NoN) interface model being applied in the development of the Australian Defence Force Battlespace Communication System Land (BCS(L)). The interface model seeks to resolve the interfaces of the Network Node, an essential network construct in a Network-of-Networks. The notion of Network-of-Networks is similar to System-of-Systems, but with the additional challenge of incorporating information exchange as well as the OSI model into the interfaces dealt with by systems engineering. Modeling with a data repository was employed to capture the proposed BCS(L) Network-of-Networks interfaces. Some executable results will be reported.
14:30 Inter-Satellite Communication MBSE Design Framework for Small Satellites
Awele AnyanHun and William Edmonson (North Carolina A&T State University, USA)

2C3: Model-Based Systems Engineering I

Room: Salon Terasse
Chairs: Cody Fleming (University of Virginia, USA), Thomas A McDermott, Jr (Georgia Tech Research Institute & Georgia Tech Sam Nunn School of International Affairs, USA)
13:30 Approximation Effects and User-Controllable Design Space Exploration
Haifeng Zhu (UTRC, USA)
System Design Space Exploration (DSE) is useful for finding new designs or architectures; however, it often generates an untenably large space that is both unintuitive and difficult to analyze. When restrictive rules based on human experience and approximations are added, some generated designs may not be realizable, and users cannot distinguish them. How severe these approximation effects can be is unknown in the systems engineering area. This paper provides a thorough analysis of this problem and theoretically studies the intrinsic meaning of approximation in DSE. We provide a rigorous design method for complex DSE systems that allows quantitative evaluation of the impact of approximations. Traditional DSE methods have been reconstructed for generation and navigation of design spaces of user-controllable size, without unknowingly generating erroneous designs. Results, including theory, algorithms, and a tool, are demonstrated using an example electrical power system design.
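The interaction between pruning rules and space size can be made concrete with a toy enumerative DSE. This sketch is ours, not the paper's method; the electrical-power-system choices and rules are invented, and the per-rule accounting simply makes each approximation's effect visible rather than quantifying realizability.

```python
from itertools import product

def explore(choices, rules):
    """Enumerate the full design space, apply pruning rules in order,
    and report how many candidates each rule removed so users can see
    the effect of every rule (including approximate ones)."""
    space = [dict(zip(choices, vals)) for vals in product(*choices.values())]
    removed = {}
    for name, admissible in rules:
        before = len(space)
        space = [d for d in space if admissible(d)]
        removed[name] = before - len(space)
    return space, removed

# Hypothetical electrical-power-system choices (illustrative only).
choices = {
    "generators": [1, 2, 3],
    "bus_topology": ["ring", "radial"],
    "redundancy": ["none", "dual"],
}
rules = [
    ("dual redundancy needs >=2 generators",          # hard constraint
     lambda d: not (d["redundancy"] == "dual" and d["generators"] < 2)),
    ("ring bus assumed to need dual redundancy",      # approximation
     lambda d: not (d["bus_topology"] == "ring" and d["redundancy"] != "dual")),
]
designs, pruned = explore(choices, rules)   # 12 candidates -> 7 survivors
```

Flipping an approximate rule on and off and diffing `pruned` is a crude version of the user-controllable exploration the paper argues for.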
14:00 Reuse of SysML Model to Support Innovation in Mechatronic Systems Design
Mizuki Shinozaki (University of Electro-Communications, Japan); Faïda Mhenni and Jean-Yves Choley (SUPMECA, France); Aiguo Ming (University of Electro-Communications, Japan)
Mechatronic systems are inherently complex as they are multidisciplinary, integrating both hardware (electronic, mechanical…) and software components. A huge effort is needed in the early design stages to take into account the requirements and constraints of the different stakeholders, and a system model is needed to facilitate communication between collaborators from different domains. Several companies may regard this effort as a waste of time. However, when a system model is already available (e.g., of an electrical actuator for the aircraft industry), designing a new variant of the system with a modified set of requirements requires much less human effort, as it benefits from reusing and changing the existing models of the previous system. In this paper, we show the usefulness of capitalizing on and reusing system models by showing how a SysML model of an E-Taxiing (Electric Taxiing) aircraft system can be reused to design a new, innovative HEPS (Hybrid Electric Propulsion System).
14:30 Web Notebooks as a Knowledge Management Tool for System Engineering Trade Studies
Thomas A McDermott, Jr (Georgia Tech Research Institute & Georgia Tech Sam Nunn School of International Affairs, USA); Jack Zentner (Georgia Tech Research Institute, USA)
In this paper we discuss the use of web notebooks and an associated process for conducting, managing, and presenting the results of engineering trade studies in a form that both captures the trade study design and provides interactive exploration of the results. We use the emerging Jupyter Notebook technologies as a basis for trade study execution and documentation, and describe how they can be integrated into larger tradespace evaluation and requirements definition tools. As a core systems engineering process, engineering trade studies serve to develop information for decision making, provide design documentation to support the decision, use a communicable format for presentation to stakeholders, and give the customer visibility into the design development. Trade studies are usually led by systems engineers, who are responsible for defining the trades, supporting the analysis, communicating the results, and capturing the output in design requirements. The Jupyter Notebook is an open-source software project inspired by existing proprietary and open-source multi-model execution tools such as OpenModelica and IPython notebooks. The notebook architecture, in addition to executing software code, stores the source code and output, together with text notes and figures in Markdown and Hypertext Markup Language (HTML) formats, in an editable document. Jupyter Notebooks have been adapted to use a broad range of software and engineering design languages. The core notebook viewing options include text, HTML, native code, equations, and slide-show forms in a cell-based arrangement. Each notebook cell can be thought of as a presentation slide that integrates text and math, pictures, executable software, or graphical visualizations. The novelty of the Jupyter Notebook approach as applied to engineering design trades is the inherent power of the environment to develop, refine, present, and retain the knowledge associated with engineering trades and decisions.
Traditional engineering trade studies include textual discussion, figures, mathematical analysis, spreadsheet analysis, and a results presentation. The notebook integrates all of these formats into a single executable document that is configuration-managed and publishable to a website. In this paper we discuss the content of engineering trade studies, the features of web notebooks that improve trade study design and execution, and examples of completed system trades using notebooks. In the process we show how the notebook content is designed, how notebooks are stored and executed, and how they can be used as a knowledge management form in the systems engineering process and across an enterprise. We also discuss how notebooks can support and be integrated with other model-based systems engineering tools to integrate decision data between model-based design and detailed engineering design. We believe that web notebooks will emerge over time as a key component of the systems engineering process, possibly eventually removing "PowerPoint engineering" from the language of systems engineers. We use the Jupyter Notebook implementation of a web notebook for our example. The Jupyter Project is an open-source software project with a wide base of users and developers and numerous third-party providers of open-source software tools and integration platforms. The Jupyter Notebook framework is written in the Python programming language, which eases the rapid development and integration of tools and data frameworks. Jupyter's native web application programming and execution environment encourages collaboration and the linking of other notebooks, data sources, and tools. Most Jupyter implementations support a document-relational data storage concept that eases notebook setup and management. Jupyter Notebooks use an open document format based on JavaScript Object Notation (JSON) which supports embedded text, software code, equations, and rich text or graphical output.
The notation also maintains a complete record of all user actions for configuration management and traceability, which can be shared between users. The open standard promotes the integration of a number of third-party tools that provide both notebook execution environments and embedding of software tools. Trade study analysis and execution: the Jupyter Notebook can capture the whole trade study analysis in one document. In the same notebook one can develop and document a model, execute the model to generate results, explore the model via interactive widgets, and share the complete analysis with colleagues. Collaboration: notebooks can be shared with others as executable files, using the Jupyter Notebook Viewer, or via third-party execution/management environments such as Anaconda. In addition, JupyterHub can create a multi-user hub that spawns, manages, and proxies multiple instances of the single-user Jupyter notebook server. Due to its flexibility and customization options, JupyterHub can be used to serve notebooks to a class of students, a corporate data science group, or a scientific research group. Presentation: notebooks produce rich media output that combines narrative text, static images, code, and dynamic, interactive output. The dynamic output is easily achieved with Jupyter's interactive widgets, which allow the presenter to execute and visualize results in real time based on code contained in the notebook. In addition to the standard notebook layout, Jupyter notebook cells can also be configured to display in a PowerPoint-style presentation mode. Configuration management: the JSON-based text file format makes it easy to manage the history of a document using any configuration management tool. This makes it very easy to collaborate with others using tools like git and SVN,
thus mitigating the file-name-based versioning commonly seen with other document formats such as Word and Excel.
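As a concrete illustration of the document format discussed above, the following sketch builds a minimal notebook structure in plain JSON; the cell contents are hypothetical, and the authoritative schema is the Jupyter nbformat specification.

```python
import json

# A minimal sketch of the Jupyter Notebook on-disk format (nbformat 4):
# a JSON document whose cells mix markdown narrative and executable code.
notebook = {
    "nbformat": 4,
    "nbformat_minor": 5,
    "metadata": {"kernelspec": {"name": "python3", "language": "python"}},
    "cells": [
        {"cell_type": "markdown", "metadata": {},
         "source": ["## Trade study: motor sizing\n",
                    "Assumptions and rationale go here."]},
        {"cell_type": "code", "execution_count": 1, "metadata": {},
         "outputs": [],
         "source": ["torque = 4.2  # N*m, hypothetical requirement\n"]},
    ],
}

# Because the format is plain JSON text, it diffs cleanly under git or SVN,
# which is the configuration-management property discussed above.
serialized = json.dumps(notebook, indent=1)
roundtrip = json.loads(serialized)
```

Because the serialized form is line-oriented text, any configuration management tool can track cell-level changes without special plugins.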

2C4: Research in Systems Engineering I

Room: Salon Neufchatel
Chair: Alberto Sols (University College of South-East Norway, Spain)
13:30 Applications of an Emulation Model towards the Preservation of Modern Computing Systems
Darin Jamraj, Shihong Huang, Perambur Neelakanta and Bassem Alhalabi (Florida Atlantic University, USA)
In today's computing environment we are surrounded by a vast sea of electronic computing systems and devices. These systems are closely integrated into our everyday lives and provide us with a wealth of computing power. While they provide great computational value, the very systems we have come to rely upon also face issues regarding their long-term longevity and how to preserve them for future generations. If we examine the longevity aspect of current computing systems, we notice that no well-constructed model or framework exists for preserving computing systems in their original state. We present in this paper a method for developing a requirements-based emulation model that supports the modeling and preservation of a given system in the following application areas: video game consoles, personal computer systems, and smartphones. For each application area we present an emulation preservation model, using a set of requirements that describes the original system. These models can be used to model both the properties and the requirements of each respective system, and to allow testing against these requirement areas. From this testing, we can measure and evaluate how well an emulation reproduces the properties and requirements of the original system. We lastly present a set of test methods, along with preliminary results, for evaluating emulation as a means of preserving systems, and we show how this model can be further extended into a general emulation model.
14:00 A Standard Based Adaptive Path to Teach Systems Engineering: 15288 and 29110 Standards Use Cases
Mohammed Bougaa (CentraleSupelec Engineering School & EISTI, France); Stefan Bornhofen (EISTI, France); Rory V O'Connor (Dublin City University & Lero, Irish Software Enginering Research Centre, Ireland); Alain Riviere (Institut Superieur De Mecanique, France)
This paper discusses the use of two different standards for teaching Systems Engineering (SE): ISO/IEC/IEEE 15288 and ISO/IEC 29110. The first is a general and widely used standard describing the lifecycle processes of entire systems, whereas the second is a relatively new standard, based on a reduced set of standard elements, focused on lifecycle profiles for Very Small Entities (VSEs). We are especially interested in the impact that SE standards can have on teaching this discipline to engineering students, and we consider the teaching of the fundamental principles of systems engineering. In this paper we illustrate how our previously developed standards-based solution for systems engineering education can be used as a framework to support these standards-based teaching paths. We mainly focus on illustrating how standard processes can be adapted, considering not only the learning goals but also project size and complexity, in a project-based learning environment. This paper shows that, thanks to its adaptation from ISO/IEC/IEEE 15288 and to its reduced size, the ISO/IEC 29110 standard is particularly suitable for teaching fundamental systems engineering knowledge to undergraduate students new to the discipline, while ISO/IEC/IEEE 15288 may be better suited for students who already have a good grounding in systems engineering fundamentals, especially thanks to the ability to use some of its various processes to teach different systems engineering topics separately.
14:30 Receding Capabilities in Systems of Systems
Alberto Sols (University College of South-East Norway, Spain)
Systems of systems are characterized by their dynamic behavior, exhibiting evolving emerging capabilities as new members join the family. Yet, little has been said about the possibility of some members leaving the system of systems, and the impact that this might have on the emerging capabilities experienced by other members of the family. At the same time, growing attention is given to the need to establish a sound foundation for the field of systems engineering, with the development of appropriate theories and methods, which require effectiveness definitions and metrics, among others. This paper introduces the concept of receding capabilities of a system of systems as a necessary building block to further complement and develop the foundation of systems engineering. The necessary concepts are defined, and a proposed formulation is presented to capture the likelihood and impact of receding capabilities. A notional example illustrates the presented concepts.

2C5: Modeling and Simulation I

Room: Salle de Bal Foyer
Chair: Joe Cecil (Oklahoma State University & Cyber Tech, USA)
13:30 Test-driven modeling and development of cloud-enabled cyber-physical smart systems
Allan Munck (Technical University of Denmark); Jan Madsen (Technical University of Denmark, Denmark)
Embedded products currently tend to evolve into large and complex smart systems in which products are enriched with services through clouds and other web technologies. The complex characteristics of smart systems make it very difficult to guarantee functionality, safety, security, and performance. Test-driven modeling (TDM) is likely the best way to design smart systems such that these qualities are ensured. However, the TDM methods applied to the development of simpler systems do not scale to smart systems because the modeling technologies cannot handle their complexity and size. In this paper, we present a test-driven modeling method that scales to very large and complex systems. The method uses a combination of formal verification of basic interactions, simulation of complex scenarios, and mathematical forecasting to predict system behavior and performance. We applied the method to analyze, design, and develop various scenarios for a cloud-enabled medical system. Our approach provides a versatile method that may be adapted and improved for the future development of very large and complex smart systems in various domains.
14:00 A Next Generation Collaborative System for Micro Devices Assembly
Joe Cecil (Oklahoma State University & Cyber Tech, USA); Aaron Cecil-Xavier (Soaring Eagle Program, USA); Raviteja Gunda (Oklahoma State University, USA)
This paper discusses the creation of an advanced collaborative system to support the assembly of micro devices. One of its main components is a Virtual Reality based assembly analysis environment (VAE), which is part of a larger collaborative framework for the emerging domain of Micro Devices Assembly (MDA). MDA involves the assembly of micron-sized devices that cannot be manufactured using Microelectromechanical Systems (MEMS) technologies. The VAE comprises several modules, including an assembly plan generator, a path planner, and a network-based cyber-physical interface that allows it to support collaboration among distributed users. As the current Internet has several limitations, a major initiative is underway to develop the Next Generation Internet, which can reduce latency, increase the bandwidth of data exchange, and support distributed collaboration. The VAE has been implemented as part of a national initiative aimed at exploring Next Generation Internet technologies.
14:30 Performance of Constant Envelope DCT Based OFDM System with M-ary PAM Mapper in AWGN Channel
Rayan Alsisi (Western University, Canada)
A constant envelope discrete cosine transform based Orthogonal Frequency Division Multiplexing (CE-DCT-OFDM) system is presented. The performance of this system is examined over an AWGN channel for the transmission of data using an M-ary pulse amplitude modulation (M-ary PAM) mapper. In the system, phase modulation (PM) is used to overcome the high peak-to-average power problem that is typical of conventional DCT-OFDM systems. As a result, the system permits the high-power amplifier to operate near its saturation level and thus offers maximum power efficiency. A closed-form expression for the bit error rate of the system is derived, illustrated, and compared to simulation results. Also, the bit error rate performance of the CE-DCT-OFDM and conventional DCT-OFDM systems is compared as a function of input back-off (IBO) and SNR using a traveling-wave tube amplifier (TWTA) model. It is observed that the CE-DCT-OFDM system offers several advantages over the conventional DCT-OFDM system.
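The constant-envelope property at the heart of this abstract can be sketched numerically: phase-modulating the real-valued DCT-OFDM baseband yields transmit samples of unit magnitude. The PAM symbols, the direct DCT-II implementation, and the modulation index below are illustrative assumptions, not the paper's parameters.

```python
import math
import cmath

# Sketch of the constant-envelope idea: phase-modulating the real
# DCT-OFDM baseband gives samples of unit magnitude, so the power
# amplifier can run near saturation without clipping distortion.
pam = [-3, -1, 1, 3, 1, -1, 3, -3]  # hypothetical 4-PAM data symbols

def dct2(x):
    """Direct DCT-II (unnormalized), standing in for the DCT-OFDM
    multicarrier modulator."""
    n = len(x)
    return [sum(x[k] * math.cos(math.pi * (k + 0.5) * m / n)
                for k in range(n))
            for m in range(n)]

baseband = dct2(pam)   # real-valued multicarrier signal
h = 0.3                # modulation index (assumed value)

# Phase modulation: every transmit sample lies on the unit circle.
tx = [cmath.exp(1j * 2 * math.pi * h * s) for s in baseband]
```

Every element of `tx` has magnitude exactly 1, which is why the peak-to-average power ratio problem of conventional DCT-OFDM disappears.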

2C6: INCOSE

Room: Salon Viger A
Chair: Claude Laporte (Ecole de Technologie Superieure, Canada)
13:30 Learning from Research about CASs operating as Enterprises
Vernon Ireland (The University of Adelaide, Australia); Larissa Statsenko (University of Adelaide, Australia); Indra Gunawan and Carmen Reaiche (The University of Adelaide, Australia)
A number of practitioners who operate in complex adaptive systems (CASs), and especially systems of systems (SoSs), are not aware of how these need to be managed differently from a reductionist-based organization. The operation of the US Army in Iraq from 2004, led by General Stanley McChrystal, is taken as one exemplar of SoSs operating within a CAS. A range of dysfunctional behaviors by individual staff within their business units is illustrated, including operating on an outdated reductionist management model; the inability of most corporations to readily implement change management programs; the neglect of the benefits of sharing information; the practice of blaming individuals, rather than the system, for problems and malfunctions; the lack of recognition of the benefits of connectivity within the company and of the benefits of trust; and the apparent lack of understanding of complexity concepts such as weak ties.
14:00 Systems Engineering and Management Processes for Small Organizations with ISO/IEC 29110: An Implementation in a Small Public Transportation Company
Claude Laporte and Nicolas Tremblay (Ecole de Technologie Superieure, Canada); Jamil Menaceur and Denis Poliquin (CSiT, Canada); Ronald Houde (Mannarino Systems & Software, Canada)
Most existing engineering standards, such as ISO/IEC/IEEE 15288, have been developed by and for large organizations, without small and very small organizations in mind. As systems grow bigger, customers as well as systems integrators must work with small suppliers. The new systems and software ISO/IEC 29110 series can be used by small organizations, such as enterprises or projects within a large company, to develop quality products. CSiT, a small public transportation company, has implemented the engineering and management processes of ISO/IEC 29110 and has recently been successfully audited by a third-party team composed of two auditors. ISO/IEC 29110 also served as a good starting point towards implementing the CMMI-DEV Level 2 process areas and even a few practices of Level 3.
14:30 Developing a Systems ConOp for Complex Infrastructure Upgrade Programmes
Ali G Hessami (Vega Systems & London City University, United Kingdom)
Developing ConOp for Complex Infrastructure Upgrade Programmes

Tuesday, April 25, 15:00 - 15:30

Break

Tuesday, April 25, 15:30 - 17:00

2D1: Medical Systems

Room: Le Caf Conc
Chair: Joe Cecil (Oklahoma State University & Cyber Tech, USA)
15:30 Systems Thinking and Predictive Analytics to Improve Veteran Healthcare Scheduling
Peter Whitehead (MITRE Corporation, USA); Stephen Adams, William Scherer, Hyojung Kang and Matthew Gerber (University of Virginia, USA)
As a culture, the United States acknowledges that the delivery of veteran services to those who risked their lives and suffered to protect our nation is a top national priority. Over the past several years, however, the media have reported stark examples of how these services are lacking, particularly in the case of medical appointment scheduling. At the same time, the Veterans Health Administration is plagued by strikingly high no-show rates at its medical outpatient clinics and a resulting handicap in resource allocation. We bring systems thinking to bear on these issues. As a result, we developed a model for a dynamic overbooking system that receives the probability of a patient arriving on time for their appointment from the patient's phone and couples this real-time probability with a prior probability derived from existing VA data. Note that the system protects patient privacy by never transmitting or sharing location data. When the arrival probability of a patient falls below a given threshold, an algorithm can automatically cancel the patient's appointment and re-assign it to another patient drawn from a pool of wait-listed and other patients with high arrival probabilities given their current location. In this presentation, we share our progress to date and our proposals for future work and implementation.
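The overbooking decision described above can be sketched as follows. The abstract does not specify how the real-time and prior probabilities are combined, so this illustration assumes a simple weighted average; the function names and threshold value are hypothetical.

```python
# Hypothetical sketch of the dynamic overbooking decision: blend the
# historical (prior) and location-derived (real-time) on-time arrival
# probabilities, then reassign the slot when the blend drops too low.
def arrival_probability(prior: float, realtime: float,
                        w: float = 0.5) -> float:
    """Weighted average of prior and real-time arrival probabilities.
    The 50/50 weighting is an assumption for illustration."""
    return w * prior + (1.0 - w) * realtime

def should_reassign(prior: float, realtime: float,
                    threshold: float = 0.3) -> bool:
    """Cancel and reassign the appointment when the blended arrival
    probability falls below the (policy-chosen) threshold."""
    return arrival_probability(prior, realtime) < threshold
```

For example, a patient with a strong attendance history (prior 0.8) but a poor real-time signal (0.1) blends to 0.45 and keeps the slot under this threshold; the threshold itself is a clinic policy decision, not a value from the paper.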
16:00 A Distributed Collaborative Simulation Environment for Orthopedic Surgical Training
Joe Cecil (Oklahoma State University & Cyber Tech, USA); Avinash Gupta (Oklahoma State University, USA); Parmesh Ramanathan (University of Wisconsin at Madison, USA); Miguel Pirela-Cruz (Texas Tech Health Sciences Center, USA)
The use of Virtual Reality (VR) simulators has increased rapidly in the field of medical surgery for training purposes. In this paper, the design and development of a Virtual Surgical Environment (VSE) for training residents in an orthopaedic surgical process called Less Invasive Stabilization System (LISS) surgery is discussed; LISS plating surgery is a process used to address fractures of the femur bone. The development of such virtual environments for educational and training purposes will accelerate and supplement existing training approaches enabling medical residents to be better prepared to serve the surgical needs of the general public. One of the important aspects of the VSE is that it is a network based simulator. Our approach explores the potential of emerging Next Generation Internet frameworks and technologies to support such distributed interaction contexts. A discussion of the validation activities is also presented which highlights the effectiveness of the VSE for teaching medical residents and students.
16:30 Network of Wireless Medical Devices to Assess the Gait of Rehabilitation in Patients for Walking and Running
Alain JG Beaulieu and Adrien Lapointe (Royal Military College of Canada, Canada); Sidney Givigi (Royal Military College of Canada, Canada); Kathrine Sillins (Canadian National Defence, Canada); Alexandre Lavoie and Kyle Tilley (Royal Military College of Canada, Canada); Nicolas Le Bel (Canadian National Defence, Canada)
In this paper, we present the design of two smart sensor systems to monitor the gait of patients. These sensor systems were developed for deployment within a body-worn wireless network of medical devices. Telemetry, ambulatory, and remote monitoring systems composed of micromechanical systems have gained importance in the last decade as medical and rehabilitation institutions try to reduce costs by discharging patients earlier while still requiring various levels of monitoring. Most of the systems currently on the market are bulky, closed-architecture, static in configuration, and use wired medical devices, all of which limit their usage. Gait monitoring is mainly done in laboratories that are fixed and expensive. The aim of the research encompassing both systems discussed in this paper is to develop an open architecture, using Real-Time Object Oriented Modeling, that will allow wireless, wearable medical devices to join a dynamically configurable monitoring environment. The intent of the system is to monitor patients' recovery by measuring biometric and biomedical signals as they go about their daily activities. The sensors being developed as part of this research are smart sensors that can provide pre-processed information, reducing the load on the wearable computer.

2D2: Cloud Computing

Room: Salon Viger B
Chair: Paul Rad (Rackspace, USA)
15:30 Transmitter Beam-forming Techniques for Indoor Millimeter wave Communication
Hen-Geul Yeh and Pravinkumar Shanmugam (California State University Long Beach, USA); Donald Chang (Spatial Digital Systems, USA); Joe Lee (SDS, Inc., USA)
This paper presents a new beamforming technique implemented in a deterministic multiple-input multiple-output (MIMO) channel model. The path loss is calculated between the transmitter and receiver antenna elements on the line-of-sight (LOS) path. Assuming that the channel state information (CSI) is known as deterministic values at the transmitter, the composite transfer function (CTF), together with the CSI (i.e. a path-loss matrix), generates orthogonal beams. The orthogonal beams allow us to reuse the same frequency for multiple beams in the same time slot. Our beamforming technique controls the radiated power at the transmitter so that the power at a particular receiving antenna element is 0 dB while it is suppressed to -40 dB or less at the other receiving elements. The performance of the proposed beamforming technique is analyzed by transmitting a QPSK-modulated signal through the orthogonal beams and examining the bit error rate (BER) curve at the receiver. The complete system simulation in this paper operates at a millimeter-wave frequency of 60 GHz.
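The CSI-driven generation of orthogonal beams can be sketched with a zero-forcing (channel-inverting) precoder, which is one standard way to realize the behavior described: full gain at the intended receive element and deep suppression at the others. The 2x2 complex channel matrix below is hypothetical, not the paper's CTF or path-loss model.

```python
# Sketch of CSI-based orthogonal beamforming: invert a known
# (hypothetical) 2x2 LOS channel so each beam reaches only its
# intended receive element.
H = [[1.0 + 0.2j, 0.4 - 0.1j],
     [0.3 + 0.5j, 0.9 - 0.3j]]

def inv2(m):
    """Inverse of a 2x2 complex matrix."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul2(x, y):
    """Product of two 2x2 matrices."""
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

W = inv2(H)        # zero-forcing precoder: H @ W = identity
E = matmul2(H, W)  # effective channel seen at the receive elements
```

Here `abs(E[0][0])` is 1 (0 dB at the intended element) while the off-diagonal terms are numerically zero, i.e. far below the -40 dB suppression target quoted in the abstract.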
16:00 Resource Usage Prediction Algorithm Using Weighted Linear Regression for Virtual Machine Live Migration in Cloud Data Centers
Mohammad Ali Khoshkholghi (University Putra Malaysia, Malaysia); Azizol Abdullah (Universiti Putra Malaysia, Malaysia)
Cloud computing has become a significant research area in large-scale computing due to its ability to share globally distributed resources. Cloud computing has evolved through large-scale data centers comprising thousands of servers around the world. However, cloud data centers consume huge amounts of electrical energy, contributing to high operational costs and carbon dioxide emissions. Dynamic consolidation of virtual machines (VMs) using live migration, and switching idle nodes to sleep mode, allows cloud providers to optimize resource utilization and reduce energy consumption. However, aggressive VM consolidation may lead to performance degradation; therefore, an energy-performance tradeoff between providing a high quality of service to customers and reducing power consumption is desired. In this paper, we present a resource utilization prediction algorithm based on the weighted linear regression method. This approach estimates the short-term future CPU, RAM, and network bandwidth utilization based on the usage history of each server. The proposed algorithm can be employed in the live migration process to predict overloaded hosts. When a host is detected as overloaded, one or several VMs are reallocated to other hosts to reduce its utilization and avoid SLA violations; hosts left idle are then switched to sleep mode to reduce power consumption. The efficiency of the proposed algorithm is validated through simulations using CloudSim. The evaluation results clearly show that the proposed algorithm reduces energy consumption while providing a high level of commitment to the SLA.
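The core prediction step can be sketched in a few lines. The abstract does not give the exact weighting scheme, so this illustration assumes weights that increase linearly with recency; the CPU utilization history is hypothetical.

```python
# Sketch of short-term utilization forecasting with weighted linear
# regression: fit y = a + b*t to the history, weighting recent samples
# more heavily, then extrapolate a few steps ahead.
def weighted_linear_predict(history, steps_ahead=1):
    """Weighted least-squares fit of a line to the utilization history
    (newer samples get larger weights), extrapolated steps_ahead points."""
    n = len(history)
    t = list(range(n))
    w = [i + 1 for i in range(n)]  # linearly increasing recency weights
    sw = sum(w)
    mt = sum(wi * ti for wi, ti in zip(w, t)) / sw
    my = sum(wi * yi for wi, yi in zip(w, history)) / sw
    num = sum(wi * (ti - mt) * (yi - my)
              for wi, ti, yi in zip(w, t, history))
    den = sum(wi * (ti - mt) ** 2 for wi, ti in zip(w, t))
    b = num / den
    a = my - b * mt
    return a + b * (n - 1 + steps_ahead)

# A host whose CPU usage trends upward is flagged as soon as the
# predicted value crosses the overload threshold.
cpu_history = [40, 45, 52, 58, 66, 71]  # percent utilization (hypothetical)
forecast = weighted_linear_predict(cpu_history)
```

The same predictor would be run per server over CPU, RAM, and bandwidth histories; comparing `forecast` against an overload threshold triggers the migration decision.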
16:30 Performance Evaluation of Cloud Object Storage for Big Data
Swanand Mhalagi (University of Texas at San Antonio & Open Cloud Institute at UTSA, USA); Lide Duan (University of Texas at San Antonio, USA); Paul Rad (University of Texas at San Antonio, USA)
The need for reliable and fast storage systems is increasingly critical in various fields including artificial intelligence and data analysis. A new architecture for large-scale data storage systems is proposed in this paper, which focuses on comparing and optimizing performance of different software/hardware-defined storage technologies that effectively reduce the computational latency and improve the performance. The main contributions of this paper are: (i) the combination of SMR (for storing data) and SSD (for storing metadata) is a viable solution for implementing large data storage systems, and (ii) the combination of CMR (for storing data) and SSD (for storing metadata) shows the highest performance for high performance computing. Our experiments are carried out on multiple settings, demonstrating that the proposed architecture successfully improves performance for sequential and random read/writes. The prototypes are evaluated with some realistic workloads, showing the superiority of the proposed data storage configurations. This provides new opportunities for efficiently processing and storing data and metadata in large-scale data analysis systems.

2D3: Model-Based Systems Engineering II

Room: Salon Terasse
Chair: Pierre de Saqui-Sannes (ISAE-SUPAERO, France)
15:30 Feasibility study of a multispectral camera with automatic processing onboard a 27U satellite using Model Based Space System Engineering
Andre P Mattei (Senai Innovation Institute for Embedded Systems, Brazil); Pierre de Saqui-Sannes (ISAE-SUPAERO, France); Luis Loures (Aeronautics Institute of Technology, Brazil); Benedicte Escudier (ISAE-SUPAERO, France)
Using a model-based methodology to support space system engineering, the SysML language, and the TTool software, this paper presents a feasibility study of a novel multispectral camera for agricultural monitoring with onboard automatic image processing capabilities, designed to fit a 27U microsatellite. In addition to communications with the ground station, this innovative payload is capable of sending processed data directly to farms, critically reducing the delay between image acquisition and its use in the field. The paper is organized to partially comply with phases 0 and A of a space project.
16:00 Can ontologies prevent MBSE models from becoming obsolete?
Uri Shani (IBM, Israel)
Model-based systems engineering provides a soft, manageable, and query-able representation of product design and lifecycle. However, at the same time, it creates a discontinuous space of data that does not integrate, constantly becomes outdated, and has to be upgraded as tools evolve. Models that are not maintained become legacy, and then become obsolete and useless. We look at how ontologies that follow the semantic web technology for data representation can create interoperability among the modeling tools, support model reuse, and fight the aging and obsolescence of models.
16:30 Model-Based Dynamic Reliability Engineering for Hybrid Electric Vehicle Design
Armin Zimmermann (Ilmenau University of Technology & Systems and Software Engineering, Germany); Thomas Dietrich (Technische Universität Ilmenau, Germany); Paulo Maciel (Federal University of Pernambuco, Brazil); Andreas Hildebrandt (Pepperl+Fuchs GmbH, Germany)
The paper analyzes the tradeoff between battery life and functional reliability for a hybrid electric vehicle design study. The reliability of complex technical systems often depends on their dynamic behavior. Petri nets have significant advantages over classic static reliability models in describing such systems, and an international standard, IEC 62551 (Analysis techniques for dependability - Petri net techniques), has recently been published. Simulation is the only feasible method for deriving reliability results for realistic systems that have a large state space or are not Markovian (memoryless). The paper shows how the very long run times for highly reliable systems can be reduced significantly by using the RESTART method for stochastic Petri nets and its implementation in the TimeNET software tool.

2D4: Research in Systems Engineering II

Room: Salon Neufchatel
Chair: Martin Malchow (Hasso Plattner Institute, Germany)
15:30 Embedded Smart Home - Remote Lab Grading in a MOOC with over 6000 Participants
Martin Malchow (Hasso Plattner Institute, Germany); Jan Renz (Hasso Plattner Institute); Matthias Bauer (Hasso Plattner Institute, Germany); Christoph Meinel (Hasso Plattner Institute, University of Potsdam, Germany)
The popularity of MOOCs has increased considerably in recent years. A typical MOOC course consists of video content, self-tests after each video, and homework, which is normally in multiple-choice format. After solving the homework for every week of a MOOC, a final exam certificate can be issued once the student has reached a sufficient score. There have also been some attempts to include practical tasks, such as programming, in MOOCs for grading. Nevertheless, until now there has been no known way to teach embedded systems programming in a MOOC course where the programming can be done in a remote lab and where grading of the tasks is also possible. This embedded programming includes communication over GPIO pins to control LEDs and measure sensor values. We started a MOOC course called "Embedded Smart Home" as a pilot to prove the concept of teaching real hardware programming in a MOOC environment, under real-life MOOC conditions, with over 6,000 students. Furthermore, students who own real hardware can program on their own devices and grade their results in the MOOC course. Finally, we evaluate our approach and analyze student acceptance of this way of offering a course on embedded programming. We also analyze hardware usage and the working time students spend solving tasks, to find out whether real hardware programming is an advantage and a motivating achievement that supports students' learning success.
16:00 A systematic and practical method for selecting systems engineering tools
Allan Munck (Technical University of Denmark); Jan Madsen (Technical University of Denmark, Denmark)
The complexity of many types of systems has grown considerably over the last decades. Using appropriate systems engineering tools therefore becomes increasingly important. Starting the tool selection process can be intimidating because organizations often have only a vague idea of what they need. The tremendous number of available tools makes it difficult to get an overview and identify the best choice. Selecting the wrong tools due to inappropriate analysis can severely impact the success of the company. This paper presents a systematic method for selecting systems engineering tools based on thorough analyses of the actual needs and the available tools. Grouping needs into categories allows us to obtain a comprehensive set of requirements for the tools. For a modeling tool case, the entire model-based systems engineering discipline was categorized to enable development of a tool specification. Correlating requirements and tool capabilities enables us to identify the best tool for single-tool scenarios or the best set of tools for multi-tool scenarios. In both scenarios, we use gap analysis to prevent the selection of infeasible tools. We used the method to select a traceability tool that has been in successful operation since 2013 at "COMPANY". We further used the method to select a set of tools that we applied in pilot cases at "COMPANY" for modeling, simulating, and formally verifying embedded systems.
16:30 Model based QFD method with the integrated sensitivity Analysis
Seyed Sina Shabestari and Beate Bender (Ruhr Universität Bochum, Germany)
This manuscript presents a methodology for determining the transfer function in the house-of-quality tool of the quality function deployment method. The transfer function relates the product characteristics to the customer requirements on the product. Using sensitivity analysis on a model of the product, the elements of the transfer function are calculated accurately, with no need for further iterations. The proposed method is applied to a design model of an office paper punch and the results are discussed.
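One way to read "sensitivity analysis on a model of the product" is as computing the derivatives of requirement metrics with respect to product characteristics, each derivative being a candidate house-of-quality entry. The sketch below shows this with a finite-difference Jacobian; the `punch_model` and its characteristics are hypothetical stand-ins, not the authors' actual paper-punch model.

```python
def sensitivity_matrix(model, x0, eps=1e-6):
    """Finite-difference sensitivities dy_i/dx_j: one candidate entry of the
    house-of-quality transfer function per (requirement i, characteristic j)."""
    y0 = model(x0)
    J = []
    for i in range(len(y0)):
        row = []
        for j in range(len(x0)):
            x = list(x0)
            x[j] += eps
            row.append((model(x)[i] - y0[i]) / eps)
        J.append(row)
    return J

# Hypothetical paper-punch model: characteristics -> requirement metrics
def punch_model(x):
    lever_length, spring_k = x
    hand_force = 50.0 / lever_length   # force the user must apply (lower is better)
    return_speed = spring_k / 2.0      # how quickly the lever resets
    return [hand_force, return_speed]

J = sensitivity_matrix(punch_model, [0.1, 4.0])
```

Each row of `J` quantifies how strongly one customer-facing metric reacts to each design characteristic, which is exactly the relationship the transfer function is meant to capture.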

2D5: Defense Systems

Room: Salle de Bal Foyer
Chair: George L Ball (Raytheon, Inc., USA)
15:30 Research on weapon system portfolio selection based on combat network modeling
Cheng Cheng, Jichao Li and Qingsong Zhao (National University of Defense Technology, P.R. China); Jiang Jiang (NUDT, P.R. China); Lixin Yu (Beijing Simulation Center, P.R. China); Huilin Shang (National University of Defense Technology, P.R. China)
Joint operations and combat system-of-systems confrontation have become major trends in modern warfare. Weapon system portfolio selection attracts much attention because it is closely related to the production, deployment, and operation of weapons, a crucial factor determining the outcome of a war. This paper presents a portfolio selection model for weapon systems based on the combat network. First, we describe the portfolio selection problem of weapon systems based on the combat network, introduce the concept of the operation loop, and analyze the difficulties and strategies of the optimization problem. On this basis, the combat network of the weapon equipment system is modeled, and an operational capability evaluation index of the weapon system-of-systems based on the operation loop is constructed. A portfolio selection model based on the combat network is established, with the combined combat capability of the weapon systems as the optimization objective and the capacity demand and expense restrictions as constraints. Finally, we take a missile defense system as a case to demonstrate the whole calculation process and the results of weapon system portfolio selection based on the combat network model. The case shows that the proposed method performs very well in solving weapon system portfolio selection problems.
16:00 Research on Development Strategy of Weapon Equipment in Antagonistic Environment
Chunqi Wan (National University of Defense Technology, P.R. China); Weitao Xiong (NUDT, P.R. China); Yanqing Ye (National University of Defense Technology, P.R. China); Qingsong Zhao (National University of Defense Technology & College of Information System and Management, P.R. China); Kewei Yang (NUDT, P.R. China)
Weapon equipment development strategy is an unstructured, complex multi-criteria decision problem, especially in an antagonistic environment. In this paper, a game framework that combines a value engineering model with a game equilibrium solution is presented for the development strategy of weapon equipment, which is one of the main novelties of this paper. Considering the development strategy of the opponent, the paper analyzes the game elements in the development process of weapon equipment systems in detail and formulates the framework of weapon equipment selection quantitatively. A multi-level fuzzy comprehensive evaluation method is used to measure the effectiveness of weapon equipment. Finally, the feasibility and validity of the model are verified by the example of the modernization of the US and Russian Army combat forces.
16:30 Simulation and Exploration of High-Density Unmanned Aerial Vehicle Systems
Tim Nysetvold and John Salmon (Brigham Young University, USA)
Systems of unmanned aerial vehicles (UAVs), such as the Amazon Air delivery system, will play a large part in future aerospace development. The design and implementation of these systems will be economically, operationally, and environmentally significant. In this paper, we use agent-based models to analyze the emergent behavior of several types of UAV systems with varying parameters and environments. We find that while increasing the number of UAVs initially has a more-than-linear effect on the reported separation incidents, this effect quickly levels off due to carrying capacity limitations. We also find that while different behaviors (objectives and movement patterns) of UAVs affect the locations of the separation incidents, the number of separation incidents is not strongly influenced by these modes. We then explore the implementation of separation-assurance mechanisms using varying levels of information, and examine their effects on the emergent behavior.

2D6: INCOSE

Room: Salon Viger A
Chair: Inas S. Khayal (Geisel School of Medicine at Dartmouth, USA)
15:30 The Application of Model-Based Systems Engineering to the Practice of Clinical Medicine
Inas S. Khayal (Geisel School of Medicine at Dartmouth, USA); Amro M. Farid (Thayer School of Engineering at Dartmouth, USA)
Humanity is currently facing an unprecedented chronic disease burden. Healthcare needs have shifted significantly from treating acute to treating chronic conditions. Chronic diseases tend to involve multiple factors with complex interactions between them, as evidenced by the continually growing medical knowledge base. The health profession requires the ability to manage this rapidly deepening knowledge base so as to assimilate the lessons from research and clinical care experience by systematically capturing, assessing, and translating them into the highest level of reliable care. A more systems-oriented approach to practicing medicine exists and is referred to as functional medicine. It takes into account the many subsystems in the human body and their many interactions. Although the science behind treating the patient as a system exists, systems tools and techniques have not yet been applied to it. It is only natural to begin formalizing this systems thinking using established tools from the systems engineering field. Specifically, this paper is the first to address the need for systems tools in the practice of clinical medicine, and it includes an example application of model-based systems engineering to clinical medicine.
16:00 A Holistic Approach for Virtual Commissioning of Intelligent Systems
Christian Henke (Fraunhofer IEM, Germany); Jan Michael (Fraunhofer Institut for Mechatronic Systems Design, Germany); Christopher Lankeit (University of Paderborn & Heinz Nixdorf Institute, Germany); Ansgar Trächtler (Universität Paderborn, Germany)
Due to the Industrial Internet of Things and the rising complexity of intelligent technical systems, efficient design and test processes for mechanical and plant engineering are becoming more important. The design and test process encompasses discipline-spanning system design in early phases, as well as virtual prototyping and virtual commissioning in test phases. This contribution illustrates a holistic model-based approach that supports these activities in terms of Model-Based Systems Engineering. The paper traces the development from the system model, and especially the requirements engineering, via detailed simulation of the physical behavior to the virtual commissioning of a vertical turn-milling center.
16:30 The future of Engineering - Scenarios of the future way of working in the engineer-to-order business
Jan Vollmar, Michael Gepp and Andreas Schertl (Siemens AG, Germany)
Companies in the engineer-to-order (ETO) business face various trends such as digitalization and globalization. These trends will radically change the way of working in engineering. First, this paper carves out four basic categories of engineering in the ETO business - 'Easy engineering', 'Zero engineering', 'Perfect engineering' and 'Pioneer engineering' - and describes their distinctive characteristics and their implications for engineering companies whose engineering follows these categories. Second, the paper elaborates engineering scenarios that show how the described trends will change the way of working in the respective engineering categories. It further outlines how the trends will change the relative importance of the engineering categories within the ETO business. Finally, the contribution discusses the challenges engineering companies need to tackle on their way to the future of engineering.

Tuesday, April 25, 17:30 - 18:30

Reception

Room: Le Caf Conc, "A" Level

Tuesday, April 25, 17:30 - 20:30

Young Professionals Networking Event

Room: Salon Viger A

Tuesday, April 25, 18:30 - 19:00

Analytics and Risk Technical Committee Meeting

Room: Salon Viger B

All IEEE SysCon meeting attendees with interests in risk analysis and systems engineering are invited to join the annual networking and business meeting of the ARTC http://ieeesystemscouncil.org/content/analytics-and-risk-technical-committee

The meeting is hosted by Prof. James H. Lambert, F.IEEE, University of Virginia, USA, and Prof. Dash Wu, Chair, ARTC, University of Chinese Academy of Sciences, and University of Stockholm.

Wednesday, April 26

Wednesday, April 26, 08:00 - 09:30

3A1: System Architecture

Room: Salon Viger C
Chair: Arash Khabbaz Saberi (Eindhoven University of Technology, The Netherlands)
08:00 A Centralized Enterprise Chef System and Architecture
Jianwen Chen (IBM Australia, Australia); Choong Thio (IBM, Australia); Gopal Pingali (IBM, USA); George Africa (IBM Australia, Australia); Chris Freeman (IBM, Australia)
In this paper, we propose a centralized enterprise Chef system and architecture that provides enterprise capabilities for automation, segregation and composition, and lifecycle management of different Chef services and environments. We discuss the deployment and operation model of the proposed Chef architecture. We illustrate how to use this centralized Chef architecture to address key enterprise use cases, especially for cloud environments.
08:30 Analysis Sharing Method by Managing Provenance of Query
Hiromitsu Nakagawa (Hitachi, Ltd., Japan); Keiro Muro (Hitachi, Ltd., Japan)
Measured data analysis generally requires a large amount of resources, since each analyst individually examines data in their own way, their top priority being to produce results as soon as possible. This way of working makes sharing with others difficult, since analysts are focused on their own convenience. It is therefore necessary to improve overall analysis efficiency by providing a means to share analysis data and programs in a reuse-friendly format without inconveniencing analysts. Existing methods such as wiki pages, workflow tools, and the Spark framework have been proposed for this goal, but each falls short: wiki pages can share analysis procedures but put the burden of editing on the providers of analysis information; workflow tools can share analysis programs but must regenerate data by executing the programs in each analysis process; and the Spark framework can share analysis data, but it is difficult for analysts to reuse data of unknown origin. A simple combination of these methods is therefore insufficient. In this paper, we propose the Query Object Graph Analysis method for enabling analysis sharing among analysts. The proposed method prevents duplicate processing and data explosion by managing analysis data in unified time-series and range-series forms and by representing and sharing the analysis procedure as a graph pointing to the analysis data; a variety of analyses can then be conducted by mouse operations in the browser. We evaluated the method's effectiveness in reducing the amount of analysis data and the execution time for damage analysis of a windmill structure and found that it can reduce program execution time by 2/3 and the amount of analysis data by 1/3. This makes efficient analysis possible for several analysts using the same data source.
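The de-duplication idea at the heart of such query-provenance sharing can be sketched very compactly: key every analysis step by its operation and inputs, and reuse the cached result when any analyst re-issues the same query. This toy class is an illustration of that principle only, not the paper's Query Object Graph implementation; `moving_max` is an invented example query.

```python
class QueryGraph:
    """Toy sketch of query-provenance sharing: each analysis step is keyed by
    its operation name and inputs, so any analyst re-running the same query
    reuses the cached result instead of recomputing and re-storing it."""

    def __init__(self):
        self.cache = {}       # (op name, inputs) -> result
        self.executions = 0   # how many times real work was actually done

    def run(self, op, *inputs):
        key = (op.__name__, inputs)
        if key not in self.cache:
            self.executions += 1
            self.cache[key] = op(*inputs)
        return self.cache[key]


def moving_max(series, window):
    """Example range-series query derived from a time series."""
    return tuple(max(series[i:i + window]) for i in range(len(series) - window + 1))
```

Because results are shared by key rather than copied per analyst, duplicate processing and duplicate derived data are both avoided, which is the effect the abstract attributes to the graph of query objects.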
09:00 An Architecture Pattern for Safety Critical Automated Driving Applications: Design and Analysis
Yaping Luo (Altran, Netherlands & Eindhoven University of Technology, Netherlands, The Netherlands); Arash Khabbaz Saberi (Eindhoven University of Technology, The Netherlands); Tjerk Bijlsma (TNO Technical Sciences/Automotive, Helmond, The Netherlands); Johan J. Lukkien (Eindhoven University of Technology, The Netherlands); Mark van den Brand (Eindhoven University of Technology, Netherlands, The Netherlands)
The introduction of automated driving increases the complexity of automotive systems. As a result, architecture design becomes a major concern for ensuring non-functional requirements such as safety and modifiability. The ISO 26262 standard recommends architecture patterns for system development. However, existing architecture patterns may not fully meet the needs of automated driving. When applying these patterns in the automated driving context, modifications to the patterns have to be made and analyzed. In this paper, we present a novel architecture pattern for safety-critical automated driving functions. In addition, we propose a generic approach for comparing our pattern with a number of existing ones. The comparison results can be used as a basis for project-specific architectural decisions. Our Safety Channel pattern is validated by its implementation in a real-life truck platooning application.

3A2: Decision Making Systems I

Room: Salon Viger B
Chair: Steven Hoffenson (Stevens Institute of Technology, USA)
08:00 Robust System Portfolio Selection with Multi-Function Requirements and System Instability
Boyuan Xia and Yajie Dou (National University of Defense Technology, P.R. China); Qingsong Zhao (National University of Defense Technology); Bingfeng Ge (National University of Defense Technology, P.R. China); Yang Zhang (National University of Defense Technology)
The System Portfolio Selection (SPS) problem is equivalent to a multi-objective optimization problem in which multi-function requirements are incomparable. In this paper, the SPS problem is re-specified with a more practical formulation than the general portfolio problem. Both feasible and non-inferior solutions are then re-defined considering the characteristics of the SPS problem; specifically, four rules for combining system functions are proposed based on practical cases. The instability of system performance is then introduced into the SPS problem, and robust theory is employed to handle the uncertainty of system function values through the definition of a robust non-inferior solution. Next, the paper proposes an immediately updating algorithm to address the exponential growth of the solution space. Finally, a case study demonstrates the usefulness and effectiveness of the proposed approach, showing that it can provide efficient guidance for decision-makers in the SPS process.
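The non-inferior (Pareto) filtering that underlies such multi-objective portfolio selection can be stated in a few lines. This is a generic sketch of dominance filtering with all objectives maximized, not the paper's robust variant or its four combination rules.

```python
def dominates(a, b):
    """True if function-value vector `a` is at least as good as `b` on every
    objective and strictly better on at least one (all objectives maximized)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def non_inferior(portfolios):
    """Keep only portfolios not dominated by any other candidate: the
    non-inferior set from which a decision-maker ultimately chooses."""
    return [p for p in portfolios
            if not any(dominates(q, p) for q in portfolios if q is not p)]
```

A robust variant in the paper's spirit would apply the same filter to worst-case function values over the uncertainty set, which is why the non-inferior definition itself has to be restated once instability is introduced.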
08:30 Goal-Seeking Framework to Empower Personal Wellness Management
Mukesh Chippa (The University of Akron, USA); Shivakumar Sastry (University of Akron, USA)
Obesity has reached epidemic proportions globally: more than 1 billion adults are overweight - at least 300 million of them clinically obese - and it is a major contributor to the global burden of chronic disease and disability. The challenge of keeping healthy people healthy and making them intrinsically motivated to manage their own health is at the center of Personal Wellness Management (PWM). In this paper, this problem is presented as decision making under uncertainty, in which the participant takes an action at each discrete time step and the outcome of the action is uncertain. In this setting, under a reasonable set of assumptions, the problem is formulated as a Partially Observable Markov Decision Process. While it may be unrealistic to find the experimentally validated data required for such models, it is also known that, in solving complex problems such as PWM, good-enough solutions are sufficient. The paper therefore presents a Goal-Seeking framework as an alternative and discusses how it differs from the other frameworks.
09:00 Using Agent-Based Modeling to Understand Stakeholder Interactions in the Rollout of NextGen by the Federal Aviation Administration
Matthew Mosca and Steven Hoffenson (Stevens Institute of Technology, USA)
The Federal Aviation Administration's NextGen technologies are planned to modernize the National Airspace System of the United States. However, NextGen's initially proposed completion date of 2025 has been pushed back by years due to rollout issues. This study offers a concept for a decision-making tool that uses agent-based modeling to draw inferences about the remainder of the rollout. Observing simulations with a focus on stakeholder impact on the NextGen rollout offers insights into potential rollout outcomes, both favorable and unfavorable, which may allow decision makers to select their courses of action more reliably.

3A3: Sensors Integration and Applications I

Room: Salon Terasse
Chair: Allaa R. Hilal (University of Waterloo, Canada)
08:00 Context-Aware Source Reliability Estimation For Multi-Sensor Management
Allaa R. Hilal (University of Waterloo, Canada)
Large-scale pervasive systems deal with situations characterized by a stochastic environment and dynamic nature. The Sensor Management (SM) literature to date is based on an optimistic assumption about the reliability of the underlying models producing the beliefs associated with imperfect data. Nonetheless, different models usually have different reliabilities and are only valid within a specific sensing range. To address these shortcomings, a novel methodology is proposed for estimating source-information reliability based on contextual information about the sensor setting and environment dynamics. The proposed approach enhances source-information reliability and the detection ratio in an energy-efficient manner.
08:30 A Framework for Designing Active Pan-Tilt-Zoom (PTZ) Camera Networks for Surveillance Applications
Samer Hanoun and James Zhang (Deakin University, Australia); Vu Le (Deakin University); Burhan Khan, Michael Johnstone, Michael Fielding, Asim Bhatti, Doug Creighton and Saeid Nahavandi (Deakin University, Australia)
Camera networks have become increasingly prevalent in many aspects of society. Designing active Pan-Tilt-Zoom (PTZ) camera networks requires placing the cameras appropriately in the environment according to the designated coverage requirements, as well as examining the network's operational resilience to environment dynamics. This design process is crucial before the network is physically established, to ensure successful deployment and operation. In this paper, we present a framework for designing practical PTZ camera networks in a realistic virtual simulation environment. The framework enables optimizing camera network placement for coverage of specific regions of interest (ROI) in the monitored space. It also supports simulating the network's operation against environment dynamics in order to determine their impact on the pre-established design, since an active camera network is expected to monitor additional, unknown events happening in the environment. A surveillance case study is presented whose results show how the developed framework can be used to experimentally design and test active PTZ camera networks.
09:00 Low-Cost Wireless Intelligent Two Hand Gesture Recognition System
Aswin Natesh Venkatesh and Gandhi Rajan Ramachandran (Solarillion Foundation, India); Balasubramanian Thiagarajan (Sri Venkateswara College of Engineering, India); Vineeth Vijayaraghavan (Solarillion Foundation, India)
This paper elucidates the design and implementation of a low-cost wireless intelligent two-hand gesture recognition system. The proposed system consists of primary and secondary sub-systems that work in tandem to recognize static gestures signed by the user. Each sub-system consists of a sensory glove, embedded with custom-made, low-cost flex and contact sensors, interfaced with an ATmega328 microcontroller. The two sub-systems are wirelessly interconnected through a pair of TI CC2541 Bluetooth Low Energy (BLE) modules. The system recognizes gestures with the help of a Dual-mode Intelligent Agent (DIA), which operates in Identification Mode (IM) and intelligently switches to Enhanced Identification Mode (EIM) when a gesture fails to be recognized in IM. The EIM incorporates a Bit Stream Error Elimination (BSEE) algorithm that enhances gesture recognition accuracy without the addition of any external hardware. The performance of the system was evaluated using a data set comprising 196 static gestures from eight globally used sign languages. The system efficiency was found to be 80.06% in IM and was enhanced to 93.16% in EIM. The cost of the proposed system at the prototype stage is USD 22, which the authors believe could be brought under USD 10 on commercialization.

3A4: Research in Systems Engineering II

Room: Salon Neufchatel
Chair: Jean C Domercant (Georgia Tech Research Institute & Electronic Systems Laboratory, USA)
08:00 Automated Markov-chain based Analysis for Large State Spaces
Modeling the dynamic, time-varying behavior of systems and processes is a common design and analysis task in the systems engineering community. A popular method for performing such analysis is the use of Markov chains. Additionally, automated methods may be used to determine new system state values for a system under observation or test. Unfortunately, the state-transition space of a Markov chain grows exponentially in the number of states, limiting the use of Markov chains for dynamic analysis. We present results on the use of an efficient data structure, the algebraic decision diagram (ADD), for the representation of Markov chains, together with an accompanying prototype analysis tool. Experimental results from the prototype indicate that the ADD is a viable structure for the automated modeling of Markov chains consisting of hundreds of thousands of states, making automated Markov chain analysis of extremely large state spaces a viable technique for system and process modeling and analysis.
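The analysis task itself (here, steady-state computation) is independent of how the transition relation is encoded; the encoding is what determines how far it scales. The sketch below uses a sparse dictionary of transitions as a simple stand-in for the compact encodings (such as ADDs) that large chains require; the two-state availability model is an invented example.

```python
def steady_state(transitions, n_states, iters=1000):
    """Power iteration over a sparse transition map {(i, j): prob} -- a
    stand-in for the compact symbolic encodings large chains require."""
    pi = [1.0 / n_states] * n_states
    for _ in range(iters):
        nxt = [0.0] * n_states
        for (i, j), p in transitions.items():
            nxt[j] += pi[i] * p  # probability mass flowing from state i to j
        pi = nxt
    return pi

# Two-state availability model: up --0.1--> down, down --0.5--> up
T = {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.5, (1, 1): 0.5}
pi = steady_state(T, 2)  # long-run fraction of time in 'up' vs. 'down'
```

An ADD replaces the explicit (i, j) enumeration with a decision-diagram traversal, which is what allows the same iteration to run over hundreds of thousands of states.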
08:30 Cloud-based Semantic Services for Pan-European Emergency Preparation and Planning
Christina Schäfer (University of Paderborn, Germany); Torben Sauerland, Jens Pottebaum and Robin Marterer (Paderborn University, Germany); Daniel Behnke (TU Dortmund University, Germany); Christian Wietfeld (TU Dortmund University & Communication Networks Institute, Germany); Peter Gray (CloudSigma, Germany); Bogdan Despotov (CloudSigma, Bulgaria)
Today's emergency management, especially in cross-border incidents, still suffers from communication and interoperability barriers. With the introduction of a Common Information Space (CIS) as a socio-technical system, concepts for bridging between the first responders and police authorities involved become evident. This paper describes the technical elements of these concepts and demonstrates the practical realization of a CIS.
09:00 A Functional Modularization Approach for Architecting Defense Open Systems: In Support of Strategic Open Architecture Reuse
Jean C Domercant (Georgia Tech Research Institute & Electronic Systems Laboratory, USA)
The Department of Defense (DoD) mandates the use of a modular open system approach (OSA) to acquisition. This approach seeks to develop and use appropriate technical standards, architecting principles, and business approaches to optimize total system performance and minimize total ownership costs. This approach also emphasizes modular designs for system components in order to strategically promote reuse across multiple platforms and domains. While the benefits of open systems are well understood across the DoD, there are still challenges in implementing an OSA. Among these is a lack of a clearly defined functional modularization method suitable for open systems. The goal of this research is to aid system architects in adopting an open approach by developing a methodology to identify candidate components that can form the basis of modular architectures. This will support the development of strategic OSA reuse plans through identifying common functional components. System engineers and architects will benefit by gaining the ability to more objectively evaluate competing architectures and designs that support stakeholder objectives in terms of strategic design attributes such as flexibility, extensibility, scalability, and reconfigurability. An example problem will be formulated for the purpose of defining a proof-of-concept application.

3A6: INCOSE

Room: Salon Viger A
Chair: Marzie Tabatabaefar (Institut National de la Recherche Scientifique (INRS), Canada)
08:00 Modbus Monitoring for Networked Control Systems of Cyber-Defensive Architecture
Charles Kim and Dayne Robinson (Howard University, USA)
08:30 Network Intrusion Detection through Artificial Immune System
Marzie Tabatabaefar (Institut National de la Recherche Scientifique (INRS), Canada); Jean-Charles Grégoire (University of Quebec, INRS, Canada); Maryam Miriestahbanati (Concordia University, Canada)
Intrusion Detection Systems (IDSs) are key security technologies for detecting attacks on networks. In this regard, the Artificial Immune System (AIS), which provides distributed detection through its lymphocytes, is an appealing approach for designing IDSs. In this paper, an AIS-based intrusion detection method is proposed in which two sets of antibodies - positive and negative - are generated for normal and attack samples, respectively, using negative selection and positive selection theories in primary detector generation. Standard Particle Swarm Optimization (PSO) is employed to train immature detectors and improve the detection rate. Moreover, the antibodies' radii are determined dynamically through the generation and training algorithms. Simulations show that the proposed algorithm achieves a 99.1% true positive rate with a 1.9% false positive rate.
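Negative selection, the core mechanism referenced above, can be sketched compactly: random candidate detectors survive only if they match no "self" (normal traffic) sample, so whatever they later match must be anomalous. This is a minimal fixed-radius illustration only; the paper's method adds positive-selection antibodies, PSO training, and dynamically determined radii, none of which are reproduced here.

```python
import random

def euclid(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def generate_detectors(self_samples, n_detectors, radius, dim=2, seed=7):
    """Negative selection: keep a random candidate detector only if it does
    NOT match any 'self' (normal traffic) sample within `radius`."""
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n_detectors:
        cand = tuple(rng.random() for _ in range(dim))
        if all(euclid(cand, s) > radius for s in self_samples):
            detectors.append(cand)
    return detectors

def is_attack(sample, detectors, radius):
    """A sample matched by any mature detector is flagged as an attack."""
    return any(euclid(sample, d) <= radius for d in detectors)
```

By construction, no mature detector can fire on a training-time self sample, so false positives arise only from normal traffic outside the self set, which is what detector training then tries to reduce.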
09:00 Secure Solution: One Time Mobile Originated PKI
Manish Kumar and Kapil Kant Kamal (Centre for Development of Advanced Computing, India); Zia Saquib (Centre for Development of Advanced Computing, Mumbai, India); Bharat Varyani (Center for Development of Advanced Computing, India)

Wednesday, April 26, 09:30 - 10:00

Break

Wednesday, April 26, 10:00 - 11:30

3B1: Robotic Systems

Room: Salon Viger C
Chair: Aleksandr Sergeyev (Michigan Technological University, USA)
10:00 Fuzzy Controlled Object Manipulation using a Three-Fingered Robotic Hand
The use of underactuated fingers for precise, in-hand manipulation is a common topic of recent robotics research, mostly due to their relatively light weight and simplicity of use. Grasping operations are facilitated by compliant joints; however, precise in-hand manipulation is more challenging, since the post-grasp orientation of an object varies. Underactuated, robotic-fingered hands capable of predictable grasping are one step closer to human-like end-effectors. This paper presents a new effort toward effective robotic manipulation using two underactuated fingers and one fully actuated robotic thumb with 3 degrees of freedom (DOF). Fuzzy grasping using tactile feedback provides an enhanced, stable grasp solution. The system comprises tactile feedback, orientation of underactuated phalanges using flexible joints, and thumb trajectory planning.
10:30 Promoting Industrial Robotics Education by Curriculum, Robotic Simulation Software, and Advanced Robotic Workcell Development and Implementation
The rapid growth of robotics and automation, especially during the last few years, its current positive impact, and its projected future impact on the United States economy are very promising. This rapid growth of robotic automation in all sectors of industry will require an enormous number of technically sound specialists with skills in industrial robotics and automation to maintain and monitor existing robots, drive the development of future technologies, and educate users on implementation and applications. It is critical, therefore, that educational institutions respond adequately to this high demand for robotics specialists by developing and offering appropriate courses geared toward professional certification in robotics and automation. To teach the concepts of industrial robotics effectively, the curriculum needs to be supported by hands-on activities utilizing industrial robots or by training on robotic simulation software. Currently, no robotic simulation software is available to academic institutions at no cost, which limits educational opportunities. As part of an NSF-sponsored project, a team of faculty members and students from Michigan Tech is developing new, open-source "RobotRun" robotic simulation software that will be available at no cost for adoption by other institutions. This will allow current concepts of industrial robotics to be taught even in locations without access to robotics hardware. In addition, to teach emerging concepts of robotics, automation, and controls, the authors present the design and development of a state-of-the-art robotic workcell consisting of three FANUC industrial robots equipped with a robotic vision system, a programmable logic controller, a conveyor, and various sensors. The workcell enables the development and programming of various industry-oriented scenarios and therefore provides students with the opportunity to gain skills relevant to current industry needs.
11:00 Secure communication for the Robot Operating System
Benjamin Breiling and Bernhard M Dieber (Joanneum Research, Austria); Peter Schartner (Alpen-Adria Universität Klagenfurt, Austria)
The boom in robotics in recent years has also empowered a new generation of robotics software. The Robot Operating System (ROS) is one of the most popular frameworks among robotics researchers and makers, and it is moving rapidly towards use in commercial products and industrial scenarios. Security-wise, however, ROS has several vulnerabilities that may be used to attack ROS-based applications. In this paper we present a secure communication channel enabling ROS nodes to communicate with authenticity and confidentiality.
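The authenticity half of such a channel can be sketched with a keyed message digest over topic traffic. This is only an illustration of the idea, not the paper's protocol: it uses a pre-shared key and HMAC for authenticity alone, whereas the paper's channel also provides confidentiality (which would additionally require encryption), and the topic and payload names below are invented.

```python
import hmac, hashlib, os

# Pre-shared key, assumed to be distributed to publisher and subscribers
# out of band (key distribution is outside this sketch).
KEY = os.urandom(32)

def publish(topic, payload, key=KEY):
    """Attach an HMAC tag so subscribers can verify message authenticity."""
    tag = hmac.new(key, topic.encode() + b"|" + payload, hashlib.sha256).digest()
    return topic, payload, tag

def verify(topic, payload, tag, key=KEY):
    """Recompute the tag and compare in constant time."""
    expected = hmac.new(key, topic.encode() + b"|" + payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

topic, payload, tag = publish("/cmd_vel", b"forward 0.5")
assert verify(topic, payload, tag)              # authentic message accepted
assert not verify(topic, b"forward 9.9", tag)   # tampered payload rejected
```

A node without the key can neither forge nor alter messages without detection, which is the authenticity property the abstract describes.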

3B2: Decision Making Systems II

Room: Salon Viger B
Chair: Jakob Axelsson (Mälardalen University & Swedish Institute of Computer Science, Sweden)
10:00 The tradeoffs between portfolios in multi-attribute project portfolio selection
Xiaoxiong Zhang (University of Waterloo, Canada); Yajie Dou (National University of Defense Technology, P.R. China); Qingsong Zhao (National University of Defense Technology & College of Information System and Management, P.R. China); Kai Zhao (University of Waterloo, Canada)
Project portfolio selection is a complex multi-attribute decision analysis problem with a wide range of considerations. A key issue in project portfolio selection is how to evaluate different portfolio solutions, especially when some portfolios are subsets of other portfolios. In this paper, a hybrid approach is proposed to help decision makers make tradeoffs between specific pairs of portfolio solutions. More specifically, each alternative project is assigned a comparable quantitative measure based on its performance on each criterion using value functions. Then, weights of the projects are determined based on their relative measures. Next, specific pairs of portfolios that need to be compared are picked out among all these solutions. Finally, a selection procedure is carried out to compare these specific portfolios in terms of their overall values, the outcomes of which can aid decision makers in discarding unnecessary solutions. A case study demonstrates the utility and effectiveness of the proposed approach.
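The per-criterion scoring and overall-value comparison can be sketched minimally as follows. The criterion names, weights, and scores are invented illustration values, and the additive value model is an assumption; the paper's value functions and weighting procedure may differ.

```python
# Minimal sketch of multi-attribute portfolio comparison. Scores are
# assumed to already be normalized into [0, 1] by each criterion's
# value function.

def project_value(scores, criterion_weights):
    """Additive value of one project from its per-criterion scores."""
    return sum(criterion_weights[c] * s for c, s in scores.items())

def portfolio_value(portfolio, project_scores, criterion_weights):
    """Overall value of a portfolio = sum of its projects' values."""
    return sum(project_value(project_scores[p], criterion_weights)
               for p in portfolio)

criterion_weights = {"benefit": 0.6, "risk": 0.4}
project_scores = {
    "P1": {"benefit": 0.9, "risk": 0.3},
    "P2": {"benefit": 0.5, "risk": 0.8},
    "P3": {"benefit": 0.7, "risk": 0.6},
}

# Compare a portfolio against a superset of it, mirroring the paper's
# subset-vs-superset tradeoff question.
a = portfolio_value({"P1", "P3"}, project_scores, criterion_weights)
b = portfolio_value({"P1", "P2", "P3"}, project_scores, criterion_weights)
print(a, b)
```

Comparing the two overall values is the final selection step the abstract describes; a portfolio whose extra projects add little value can be discarded.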
10:30 Towards the Architecture of a Decision Support Ecosystem for System Component Selection
Jakob Axelsson (Mälardalen University & Swedish Institute of Computer Science, Sweden); Ulrik Franke (Swedish Institute of Computer Science (SICS), Sweden); Jan Carlson (Mälardalen University, Sweden); Séverine Sentilles (Mälardalen research and Technology centre, Sweden); Antonio Cicchetti (Mälardalen University & Västerås, Sweden)
When developing complex software-intensive systems, it is nowadays common practice to base the solution partly on existing software components. Selecting which components to use becomes a critical decision in development, but it is currently not well supported through methods and tools. This paper discusses how a decision support system for this problem could benefit from a software ecosystem approach, where participants share knowledge across organizations both through reuse of analysis models, and through partially disclosed past decision cases. It is shown how the architecture of this ecosystem becomes fundamental to deal with an efficient knowledge sharing, while respecting constraints on integrity of intellectual property. A concrete proposal for an architecture is outlined, together with experiences of a proof-of-concept implementation.
11:00 A Threshold Based Airspace Capacity Estimation Method for UAS Traffic Management
Vishwanath Bulusu (University of California, Berkeley, USA); Valentin Polishchuk (Linkoping University, Sweden)
This paper provides a mathematical method for airspace capacity estimation. It is motivated by the need to assess the impact of unmanned aircraft systems on low-altitude airspace operations. We define capacity as the minimum of metric-specific phase-transition thresholds. The definition is flexible enough to accommodate a wide variety of metrics defined for the airspace and hence can be used to compare different unmanned traffic management system approaches. We provide a proof of concept using a metric based on the size of de-confliction problems. The probability of occurrence of large conflicts shows a phase transition as the traffic density is increased. The traffic density at the phase transition, i.e. the metric-specific capacity measure, increases with decreasing minimum separation tolerance. Traffic management systems that allow higher proximity between aircraft should therefore improve airspace capacity. Further work must incorporate a wider range of metrics and sense-and-avoid algorithms for a more rigorous validation and application of our airspace capacity estimation method.
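The "capacity as a minimum over metric-specific thresholds" definition can be expressed compactly. The densities, probability curves, and the particular threshold-detection rule below (the first density at which a metric's exceedance probability crosses 0.5) are assumptions for illustration, not the paper's data.

```python
# Sketch of the capacity definition: capacity = min over metrics of each
# metric's phase-transition density threshold.

def phase_transition_threshold(densities, exceedance_prob, level=0.5):
    """Return the first traffic density at which the probability of a
    'large' event (e.g. a large de-confliction problem) exceeds `level`."""
    for d, p in zip(densities, exceedance_prob):
        if p > level:
            return d
    return densities[-1]

def airspace_capacity(densities, metric_curves):
    """Capacity is the minimum of the metric-specific thresholds."""
    return min(phase_transition_threshold(densities, curve)
               for curve in metric_curves.values())

densities = [10, 20, 30, 40, 50]   # aircraft per unit area (assumed units)
metric_curves = {
    "large_conflict_clusters": [0.0, 0.1, 0.4, 0.8, 1.0],
    "delay_exceedance":        [0.0, 0.2, 0.6, 0.9, 1.0],
}
print(airspace_capacity(densities, metric_curves))
```

Because the minimum is taken over all metrics, adding a new metric can only tighten (never loosen) the capacity estimate, which is what makes the definition extensible across traffic management approaches.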

3B3: Sensors Integration and Applications II

Room: Salon Terasse
Chair: Andy Adler (Carleton University, Canada)
10:00 Decentralized Configuration of Embedded Web Services for Smart Home Applications
Web Services can be used as a communication structure for embedded devices. Out of the numerous Web Service specifications, there is a subset (profile) for implementing Web Services on resource-constrained devices, called the Devices Profile for Web Services (DPWS). The resulting service-oriented architecture enables the user to discover new devices, e.g., in the local smart home network. The open standards for Web Services further define a metadata format for service description. Besides the simple invocation of service operations, an eventing mechanism is specified: a client that subscribes to an event will receive notifications. These features and the inherent plug-and-play capability provided by Web Services are suitable for connecting smart home devices in a user-friendly manner. However, DPWS specifies no standard procedure for combining multiple devices to build more complex applications. Therefore, new concepts for embedded Web Service orchestration are needed. Some commercial solutions use a central hub, which represents a Single Point of Failure (SPoF): a failure would lead to a breakdown of all smart IoT applications and decrease user acceptance. We propose a concept for configuring embedded devices through a smartphone or tablet PC by defining a Configuration Service. A prototype is implemented as a proof of concept for different smart home devices. Furthermore, a mobile Android application to find and orchestrate the devices is presented.
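The decentralized orchestration idea can be sketched as wiring one device's event subscription to another device's operation, with no hub in the message path. The device names, event names, and in-process callback mechanism below are invented for illustration; a real DPWS stack handles discovery, metadata exchange, and SOAP-based eventing.

```python
# Minimal sketch of hubless orchestration: a configuration rule links a
# sensor's event directly to an actuator's operation.

class Device:
    def __init__(self, name):
        self.name = name
        self.subscribers = []          # callbacks registered via "eventing"

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def fire_event(self, event):
        for cb in self.subscribers:
            cb(event)

log = []

motion_sensor = Device("motion_sensor")
lamp = Device("lamp")

# Configuration Service role: establish the subscription on the devices
# themselves, then step out of the loop (no central hub at runtime).
motion_sensor.subscribe(lambda evt: log.append((lamp.name, "on", evt)))

motion_sensor.fire_event("motion_detected")
print(log)
```

Once configured, the rule survives without the configuring smartphone being present, which is what removes the single point of failure the abstract criticizes.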
10:30 Biometric Permanence: Definition and Robust Calculation
Henry Harvey (Carleton University, Canada); John Campbell (Bion Biometrics, USA); Stephen J Elliott (BSPA Laboratory, Purdue University, USA); Andy Adler (Carleton University, Canada)
In this paper, we develop a novel metric, which we call biometric permanence, to characterize the stability of biometric features. First, we define permanence in terms of the change in false non-match rate (FNMR) over a repeated sequence of enrolment and verification events for a given population. We then consider how such a measure may be experimentally determined. Since changes in FNMR are small for most biometric modalities, any variability in biometric capture over time will camouflage the changes of interest. To address this issue, a robust methodology is proposed which can isolate the visit-to-visit variability and substantially improve the estimation. We develop a model for the visit biases and provide extensive simulation results supporting the efficacy of the improved method.
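The visit-bias problem can be illustrated with a toy model in which each visit adds a random capture-quality offset to the measured FNMR. The differencing scheme below, comparing two enrolments of different ages at the same verification visit so the shared visit bias cancels, is an illustrative estimator under this assumed additive model, not the paper's full methodology; all numbers are invented.

```python
import random

random.seed(0)
true_drift = 0.002   # assumed true FNMR increase per visit of ageing
visits = 10
# Visit-to-visit capture variability that camouflages the small drift:
bias = [random.gauss(0, 0.01) for _ in range(visits)]

def measured_fnmr(i, j, base=0.02):
    """Measured FNMR verifying at visit j against an enrolment at visit i,
    under the additive model: base + drift * age + visit-j bias."""
    return base + true_drift * (j - i) + bias[j]

# Naive drift estimate: one enrolment at visit 0, tracked across visits.
# The visit biases do not cancel, so the estimate is noisy.
naive = (measured_fnmr(0, visits - 1) - measured_fnmr(0, 0)) / (visits - 1)

# Bias-cancelling estimate: at each verification visit j, difference an
# old enrolment (visit 0) against a fresh one (visit j-1); the shared
# visit-j bias cancels exactly in this model.
paired = sum(measured_fnmr(0, j) - measured_fnmr(j - 1, j)
             for j in range(1, visits))
corrected = paired / sum(range(visits - 1))

print(naive, corrected)
```

In this toy model the corrected estimate recovers the true drift exactly, while the naive one carries the difference of two visit biases, an error several times larger than the drift itself.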
11:00 Energy Harvesting in Wireless Sensor Network with Efficient Landmark Selection using Mobile Actuator
Ahmad Ibrahim Shawahna (King Fahd University of Petroleum and Minerals, Saudi Arabia); Md Enamul Haque and Mehmet Engin Tozal (University of Louisiana, Lafayette)
In this paper, we present an approach to ensure efficient energy distribution among all the nodes in a wireless sensor network by employing mobile chargers that visit several landmark locations within each sensor cluster under a timing constraint. The mobile chargers use radio-frequency signals for charging and follow a Hamiltonian cycle to minimize the time required to visit the landmarks within each sensor cluster. This scheme increases the docking time, relative to the total cycle time, during which the mobile chargers recharge at the docking station. It also increases battery lifetime for the sensors, which is one of the major bottlenecks for wireless sensor networks. The optimum number of landmarks for each cluster is selected with the help of critical nodes whose energy falls below a defined threshold. The mobile chargers then compute the minimum distance among the landmarks and transfer a certain percentage of energy from their own reservoirs. The simulation results show that the proposed approach provides prolonged sensor battery lifetime, less waiting time for sensor nodes to be recharged by the mobile chargers, and maximized docking time for the mobile chargers.
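The charger's landmark tour can be sketched with a greedy nearest-neighbour construction. Finding a truly minimum Hamiltonian cycle is a travelling salesman problem, so this heuristic is only an approximation of the tour the abstract describes, and the landmark coordinates are made up for the example.

```python
import math

def tour(landmarks, start=0):
    """Greedy Hamiltonian cycle over landmark coordinates.
    Returns the visiting order and total cycle length."""
    unvisited = set(range(len(landmarks))) - {start}
    order, length, current = [start], 0.0, start
    while unvisited:
        nxt = min(unvisited,
                  key=lambda i: math.dist(landmarks[current], landmarks[i]))
        length += math.dist(landmarks[current], landmarks[nxt])
        unvisited.remove(nxt)
        order.append(nxt)
        current = nxt
    length += math.dist(landmarks[current], landmarks[start])  # close cycle
    return order, length

landmarks = [(0, 0), (0, 3), (4, 3), (4, 0)]   # assumed cluster landmarks
order, length = tour(landmarks)
print(order, length)
```

A shorter cycle means less of the charger's period is spent travelling, which is exactly what increases the docking-time fraction the paper optimizes for.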

3B4: Aerial Systems

Room: Salon Neufchatel
Chair: Eddie E Galarza (Universidad de las Fuerzas Armadas ESPE, Ecuador)
10:00 An Empirical Study on Generic Multicopter Energy Consumption Profiles
Thomas Dietrich (Technische Universität Ilmenau, Germany); Silvia Krug (Technische Universitaet Ilmenau, Germany); Armin Zimmermann (Ilmenau University of Technology & Systems and Software Engineering, Germany)
Unmanned micro-aerial vehicles (MAVs) enjoy high popularity in various application fields. They are capable of flying autonomously, following instructions received from a controlling ground entity. One physical limitation of all mobile robotic vehicles is the restricted energy storage capacity they are able to carry. All processes in a robotic system, most prominently in-air movements, consume energy and thereby define the overall operation time limit. This paper presents results of an analysis of the energy consumption in various discrete movement states of a multicopter, measured for two different systems. Findings regarding a systematic relation between system and movement parameters and the energy consumption levels are discussed. Furthermore, a generic energy consumption profile model is presented.
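A profile of per-state consumption figures lends itself to a simple mission-level energy estimate, in the spirit of the generic model the abstract mentions. The state names, power figures, and flight plan below are placeholders, not the paper's measured values.

```python
# Average electrical power per discrete movement state (assumed values):
STATE_POWER_W = {
    "hover": 180.0,
    "climb": 240.0,
    "cruise": 200.0,
    "descend": 150.0,
}

def mission_energy_wh(plan):
    """plan: list of (state, seconds). Returns total energy in watt-hours,
    summing power * time over each movement state."""
    return sum(STATE_POWER_W[state] * t for state, t in plan) / 3600.0

plan = [("climb", 30), ("cruise", 300), ("hover", 60), ("descend", 30)]
print(mission_energy_wh(plan))
```

Comparing such an estimate against the battery's usable capacity gives the operation-time limit the paper identifies as the central constraint.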
10:30 Control Algorithm for the Inertial Stabilization of UAVs
Eddie E Galarza (Universidad de las Fuerzas Armadas ESPE, Ecuador); Cesar A Naranjo (Universidad de las Fuerzas Armadas - ESPE, Ecuador); Octavio Guijarro (Universidad Tecnica de Ambato, Ecuador); David Basantes (Instituto Tecnológico Superior Central Técnico, Ecuador); Victor Enriquez and Diego Paredes (CIDFAE - Centro de Investigación y Desarrollo FAE Ecuador, Ecuador)
11:00 Integrating UAS Swarming with Formation Drag Reduction
John Colombi and David R Jacques (Air Force Institute of Technology, USA); Jacob Lambach (US Air Force, USA)
In the seminal research into simulated swarming, Reynolds developed a methodology that guided a flock of agents using just three rules: collision avoidance, swarm centering, and velocity matching. By modifying these rules, an algorithm is created and applied to unmanned aircraft systems (UAS) so each aircraft in a "swarm" maintains a precise position relative to the preceding aircraft. Each aircraft experiences a decrease in induced aerodynamic drag, thus reducing overall fuel consumption, increasing range and endurance, and expanding UAS utility. A simulation demonstrates the feasibility of the drag reduction swarm using a drag benefit map constructed from extant research. Due to both agent interaction and wind gust variability, the optimal position for drag reduction presented a severe collision hazard, and drag reduction was much more sensitive to lateral (wingtip) position than longitudinal (stream-wise) position. By increasing longitudinal spacing, the collision hazard was acceptably reduced. For one scenario, compared to a single UAS, a swarm of 10 aircraft demonstrated a 9.7% reduction in total aerodynamic drag, a 14.2% decrease in fuel consumption, and a 14.5% increase in endurance.
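Reynolds's three rules, which the paper modifies for station-keeping, can be sketched in two dimensions. The gains, separation radius, and initial states below are arbitrary illustration values, not the paper's controller; only the rule structure (avoidance, centering, velocity matching) is taken from the source.

```python
def flock_step(positions, velocities, dt=0.1,
               k_avoid=1.0, k_center=0.05, k_match=0.1, min_sep=1.0):
    """One Euler step of the three Reynolds rules for all agents."""
    n = len(positions)
    cx = sum(p[0] for p in positions) / n
    cy = sum(p[1] for p in positions) / n
    mvx = sum(v[0] for v in velocities) / n
    mvy = sum(v[1] for v in velocities) / n
    new_v = []
    for i in range(n):
        # Swarm centering: steer toward the flock centroid.
        ax = k_center * (cx - positions[i][0])
        ay = k_center * (cy - positions[i][1])
        # Velocity matching: steer toward the mean velocity.
        ax += k_match * (mvx - velocities[i][0])
        ay += k_match * (mvy - velocities[i][1])
        # Collision avoidance: repel from neighbours closer than min_sep.
        for j in range(n):
            if i == j:
                continue
            dx = positions[i][0] - positions[j][0]
            dy = positions[i][1] - positions[j][1]
            d2 = dx * dx + dy * dy
            if 0 < d2 < min_sep * min_sep:
                ax += k_avoid * dx / d2
                ay += k_avoid * dy / d2
        new_v.append((velocities[i][0] + ax * dt, velocities[i][1] + ay * dt))
    new_p = [(p[0] + v[0] * dt, p[1] + v[1] * dt)
             for p, v in zip(positions, new_v)]
    return new_p, new_v

positions = [(0.0, 0.0), (0.5, 0.0), (5.0, 5.0)]
velocities = [(1.0, 0.0), (1.0, 0.0), (1.0, 0.0)]
positions, velocities = flock_step(positions, velocities)
print(positions[0], positions[1])
```

After one step the two agents that started within the separation radius are pushed apart by the avoidance term, the behaviour the paper tunes against the drag benefit map to trade collision hazard for drag reduction.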

3B5: Systems Thinking

Room: Salle de Bal Foyer
Chair: William S. Devereux (JHU/APL, USA)
10:00 The Agile Manifesto, Design Thinking and Systems Engineering
Ann Darrin and William S. Devereux (JHU/APL, USA)
New and non-traditional movements in the engineering fields can have a positive impact on classical systems engineering models in terms of promoting innovation. These movements include offshoots of the Agile Manifesto such as Agile Software Engineering, Agile Software Systems Engineering and Agile Project Management; the rise and popularity of Design Thinking as practiced by IDEO and others; and the success of new software programs in redefining basic engineering principles, such as the Zen of Python. This paper discusses two of these movements and assesses the potential of incorporating them into generic systems engineering steps. The viability and strength of systems engineering models (Classic V, Waterfall, Spiral and others) has been proven by their sheer longevity and the wealth of successful outcomes. However, today's external environment, including what has been termed the technology explosion, has dramatically truncated time to market, requiring consideration of increased agility in our process steps. It is agility that is the common thread in these new movements. Robust systems react to external impact (and in today's software systems are required to recover quickly), adaptive systems internally respond to external or internal inputs, whereas agile systems have the property of being able to be changed rapidly. Rather than addressing systems with the characteristics of agility and flexibility, this paper addresses agility in the systems engineering process. This distinction was made by Haberfellner and de Weck in their 2005 paper, cleverly titled "Agile SYSTEMS ENGINEERING versus AGILE SYSTEMS engineering." They proposed that AGILE SYSTEMS are beneficial in cases where the systems have a long lifecycle, changes result in significant costs, and there is substantial uncertainty in the environment (customer functional requirements, demand evolution …).
Their focus on agile systems did suggest that, in some cases, uncertainty may be resolved before product release, in which case focusing on agile SYSTEMS ENGINEERING alone may be sufficient. It is this second case on which we concentrate: adapting the systems engineering process to increase flexibility and agility. Such cases include products and processes where there is more uncertainty at the outset, with the ability to remove uncertainty by the outcome. Research and development programs with a short life cycle to application would therefore benefit from agile SYSTEMS ENGINEERING, and in this context the non-traditional movements have validity in the agile SYSTEMS ENGINEERING process. There are various systems engineering models in use, including the Classic V, the Waterfall and Spiral Development. According to the International Council on Systems Engineering (INCOSE), the generic steps in a systems engineering process are: state the problem, investigate alternatives, model the system, integrate, launch the system, assess performance, and re-evaluate. These functions can be summarized with the acronym SIMILAR: State, Investigate, Model, Integrate, Launch, Assess and Re-evaluate. This core approach has been further illuminated by Brian Mar, who states: "Most systems engineers accept the following basic core concepts: understand the whole problem before you try to solve it; translate the problem into measurable requirements; examine all feasible alternatives before selecting a solution; make sure you consider the total system life cycle (the birth-to-death concept extends to maintenance, replacement and decommissioning); make sure to test the total system before delivering it; document everything." Clearly, driving uncertainty and risk out of the process or product development is fundamental to these steps.
As will be discussed further, these steps are not in conflict with the principles of the Agile Manifesto, Design Thinking or the Zen of Python. Is systems engineering a stifler or an enabler of innovation? A more agile systems engineering process has the potential for handling the uncertainty required to infuse innovation into the process. It is relatively easy to point to examples where an overthought process became a detriment in product development, often termed "paralysis by analysis" (and often leading to the death of the product). Identified concerns in systems engineering processes include an "unthinking adherence to process," the application of "rigid, untailored processes," and "selecting specifications and standards prematurely," which can "stifle innovation" and "[lead] to wasted time and money." The debate is intriguing and has led to many excellent papers and discussions on systems engineering as a stifler or enabler of innovation. Some of the findings relate to the organization having the capacity and willingness to undertake such changes in process; that is, non-traditional approaches are easier to execute when the behavioral and organizational characteristics are aligned. Systems engineering principles, if properly applied, can effectively support successful innovation: on one side exploiting the benefits of procedures and methods to support the effective production of creative ideas and their successful implementation, and on the other employing the specific skills and competencies of systems engineers not only to develop innovative solutions, but also to produce a working environment conducive to creativity and innovation. Any systems engineering process must be viewed in the context of the culture (the internal organization and the external environment). The value of incorporating new and non-traditional approaches is that they exercise the systems engineering process.
10:30 Software Development Using Agile and Scrum in Distributed Teams
Youry Khmelevsky (Okanagan College, Canada); Xitong Li (HEC Paris, France); Stuart Madnick (Massachusetts Institute of Technology, USA)
Agile software development practices like Scrum, which allow teams to focus on delivering product and improve communication, have made it one of the most accessible and effective software development techniques. On the other hand, such agile methods were designed for collocated software development and are thus not directly applicable to distributed agile development. In this paper, we present findings from a recent case study on distributed Scrum projects, the challenges and benefits the case projects reported, and the unique lessons learned from this case study. In 1995, Sutherland and Schwaber presented the first paper describing the Scrum methodology. They collaborated during the following years to merge their writings, experiences, and industry best practices into what is now known as Scrum. Scrum focuses on project management in situations where it is difficult to plan ahead. Agile software (SW) development has become a popular approach to the engineering of software systems in the commercial world. To be agile, a project must employ agile development methods and must also fit within an agile product development system: the development organization must be willing to practice refactoring, or lose the benefits of agile, and the software itself must be agile, lending itself to rapid incremental deliveries and architected accordingly. In the late 1990s, agile practices and Scrum principles in small self-organizing Scrum teams were used successfully in distributed software development projects. Starting from 2005, agile practices and Extreme Programming (XP) SW development methodologies were applied within student capstone projects at Okanagan College (OC) and the University of British Columbia Okanagan (UBC O) as well. Agile principles were also used in an industrial SW development project, based on the Manifesto for Agile Software Development and the Twelve Principles of Agile SW, as a case study.
The project requirement was to develop a new web information system based on a simple but working web site, unfinished customer project requirements in an MS Excel worksheet, a project specification (about 60 pages in an MS Word document) and an unfinished graphical design (about 30 graphical images of the future web site). The project team contained members from the US, Canada and Eastern Europe; the offshore development team was located in Eastern Europe. The project structure can be described as follows: the customers, with employees and volunteers, are located in the US, Germany and Eastern Europe; the main SW development contractor is located in the US with employees in Canada and Eastern Europe; and the offshore SW development subcontractor is located in Eastern Europe. Dingsoyr et al., based on their literature review on agile software development, found that there are "very few on the Scrum development process" and explicitly call for academic research on Scrum. Ramesh et al., on the other hand, pointed out that traditional agile methods rely on informal processes to facilitate coordination, which is different from what distributed software development traditionally requires (i.e., formal mechanisms for coordination). Therefore, the project provides a unique context for us to conduct a case study that evaluates the applicability of Agile principles and Scrum practices in a real-world distributed global software development (GSD) project and examines the inconsistencies between the theoretical assumptions behind Scrum and practical observations. Based on our experiences, we found that although the distributed agile development (DAD) was to be based on the "Twelve Principles of Agile Software," they were not rigorously followed by all members of the team and do not fit well into global software development (GSD).
Part of the reason is that "GSD typically involves stakeholders located in different time zones and geographic locations, from different national and organizational cultures, using different and, at times, unreliable technologies to collaborate." Such temporal, geographical and socio-cultural distances can result in significant communication, coordination and control challenges that need to be overcome for the benefits of GSD to be realized. The project started in April and was planned to be finished in the middle of July using Agile Manifesto project development principles. During the first project development weeks, the development team found that the customer was unable to provide graphical design for all of the elements of the web system (which was not included in the project development tasks) and "generated" new ideas "on the fly" during the project development. In the following weeks, developers found many inconsistencies in the project specifications and project requirements. Due to such problems, "Stage 0" was extended from 2 weeks to almost a month. The project team decided to use Scrum practices in addition to general Agile Manifesto principles, to reduce the time spent in project meetings and to improve performance within teams. Originally, the main contractor and offshore development team planned to finish the project in two months in three iterations (Stage 0 - Stage 2) and then support the finished system for about 8-12 months. Stage 2 was planned mostly for refactoring, final testing and bug fixing only. When the project started, the development team found inconsistencies between the graphical design, the list of requirements and the project specification, which had been developed in Eastern Europe in the past. A development team is made up of 3-9 people with cross-functional skills who do the actual work (analysis, design, development, testing, technical communication, documentation, etc.).
The development team in Scrum is self-organizing, even though it may interface with project management organizations (PMOs). Conclusion: In this paper we presented a case study on the application of Scrum and Agile practices to a distributed GSD project. We also discussed the challenges and benefits related to the application of these practices, and Scrum in general, to GSD. The contribution of this paper is a set of practical recommendations which can be used by other companies planning to use Scrum and/or Agile practices within GSD projects. In our case study we drew on research papers related to GSD and DAD, but we found inconsistencies between theoretical assumptions and our practical experiences, for example regarding special Scrum training before the project begins (which is almost impossible given very short project durations and limited project budgets).
11:00 Attritable Design Trades: Reliability and Cost Implications for Unmanned Aircraft
John Colombi, Bryan Bentz, Ryan Recker, Brandon Lucas and Jason Freels (Air Force Institute of Technology, USA)
Aircraft are generally designed and produced to be maintainable. Recently, the U.S. Air Force, due to increasing aircraft unit costs, began to investigate early conceptual designs for attritable (unmanned) aircraft. Attritable is a system characteristic that implies low cost and limited reuse - at least a few times. This attribute is a design consideration that affects cost, reliability, redundancy and system life. For systems engineers, this concept of a low-cost, attritable aircraft provides a very interesting design trade space that presents several challenges. This paper introduces attritable as a system attribute and attempts to show the far-reaching impacts and challenges this novel attribute has on reliability modeling and cost estimation.
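One way to see the trade space the abstract describes is a toy cost-per-sortie model in which a cheaper, less reliable airframe is compared to an expensive, highly reliable one. The cost model and all numbers below are assumptions for illustration only, not the paper's estimates.

```python
def cost_per_sortie(unit_cost, reliability, max_reuses):
    """Expected cost per sortie if the aircraft survives each sortie with
    probability `reliability` and is retired after `max_reuses` sorties.
    Sortie k+1 is flown only if the first k sorties were survived, so
    expected sorties = sum of reliability**k for k in 0..max_reuses-1
    (a geometric survival model truncated at max_reuses)."""
    expected_sorties = sum(reliability ** k for k in range(max_reuses))
    return unit_cost / expected_sorties

# A cheap, less reliable attritable design vs. an exquisite one
# (assumed unit costs and per-sortie reliabilities):
attritable = cost_per_sortie(unit_cost=2e6, reliability=0.9, max_reuses=5)
exquisite = cost_per_sortie(unit_cost=80e6, reliability=0.999, max_reuses=5)
print(attritable, exquisite)
```

Even with a far lower per-sortie survival probability, the attritable design's cost per sortie is an order of magnitude lower in this toy model, which is the kind of trade the paper's reliability and cost modeling makes rigorous.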

3B6: INCOSE

Room: Salon Viger A
Chair: Luc Filion (Nuum Solutions inc, Canada)
10:00 Fulfilling the Potential of Consumer Connected Fitness Technologies: Towards Framing Systems Engineering Involvement in User Experience Design
Woodrow Winchester (Robert Morris University, USA); Valerie Washington (Kennesaw State University, USA)
As evidenced by their proliferation in the consumer marketplace, the transformative potential of connected fitness technologies (e.g., fitness and activity tracking technologies such as Fitbit) to enable health behavioral change is great. However, the value of these technologies in certain contexts of health behavioral change is currently under scrutiny. User experience design is an identified factor in realizing their potential, and there is a need for more active involvement by systems engineers to facilitate these efforts. Leveraging an exemplar set of user experience heuristics, a research roadmap is offered in support of this work, elucidating systems engineering knowledge gaps within this context. Putting the offered roadmap into action is also discussed.
10:30 Using Atlassian Tools for Efficient Requirements Management
Luc Filion (Nuum Solutions inc, Canada)
This paper describes an industrial case study using Atlassian JIRA® and third party plugins for requirements management in the field of transit systems. The solution presented shows efficiency in supporting the management of requirements, traceability and the systems engineering processes globally. After a short description of the technologies in action and a brief overview of the process we targeted, we describe the collection of methods and technologies put into place and demonstrate their use in order to achieve our goal: reaching CMMI-2 for requirements management and traceability. We describe the current background and scope of work, the needs and goals to meet, and then we explain our configurations and possibilities. Our results describe several sets of metrics we have obtained for traceability, starting from a customer requirement down to tests, considering multi-level analysis with specific criteria (verified, tested and justified).
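A traceability metric of the kind the abstract mentions, from customer requirement down to tests, reduces to a coverage computation over the link graph. The requirement IDs and links below are invented example data, not the paper's JIRA configuration.

```python
# Requirement -> downstream test artifacts (illustrative data):
links = {
    "REQ-1": ["TEST-10", "TEST-11"],
    "REQ-2": [],                 # not yet covered by any test
    "REQ-3": ["TEST-12"],
}

def coverage(links):
    """Fraction of customer requirements traced to at least one test."""
    covered = sum(1 for targets in links.values() if targets)
    return covered / len(links)

print(coverage(links))
```

Tracking this fraction per criterion (verified, tested, justified) over time is one way to demonstrate the CMMI-style traceability evidence the paper targets.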
11:00 Applying Standard Independent Verification and Validation (IV&V) Techniques within an Agile Framework: Is there a Compatibility Issue?
James Dabney (Rice, USA)

Wednesday, April 26, 11:30 - 13:00

Best Papers Awards Luncheon

Room: Salon de Bal, "A" Level of Marriott Hotel

Wednesday, April 26, 13:00 - 14:30

3C1: Cyber Security Issues

Room: Salon Viger C
Chair: Logan Mailloux (Air Force Institute of Technology & United States Air Force, USA)
13:00 Conceptual Design Acceleration for Cyber-Physical Systems
Kevin Lynch (Raytheon Corporation, USA); Randall Ramsey (Raytheon, USA); George L Ball (Raytheon, Inc., USA); Matthew Schmit and Kyle Collins (Georgia Institute of Technology & Aerospace Systems Design Laboratory, USA)
Central to compressing engineering development times for complex cyber-physical systems is the identification of reasonable alternatives early in the design cycle. This paper describes the development and use of an ontology to quickly prune the conceptual design space for cyber-physical systems engineering. The Georgia Tech Aerospace Systems Design Laboratory, Metamorph Inc., and Raytheon refined an approach originally piloted on amphibious ground vehicles to determine whether it could be generalized to other product domains. Reducing the design space early in the engineering life cycle enables designers to focus quickly on high-value alternatives to reduce cost and speed development. The OpenMETA tool suite, supported by semantic constructs for model integration, facilitates trades and analysis across the engineering disciplines, including electrical, mechanical, thermal, fluid, and cyber. The ontology developed by the authors and described in this paper provides the semantic underpinnings to encode relationships among design alternatives and quickly visualize these trades. As the complexity of engineered systems increases, concomitant increases in development time, risk, and cost necessitate new approaches to engineering development. Model and component reuse, using proven, tested components with known characteristics, reduces development time, risk, and cost. Making components available early in the design process helps engineers quickly sort through design alternatives, and enables a focus of both human and computational effort on high-value alternatives that meet performance, manufacturing, and cost requirements. A number of modeling challenges must be addressed in this design process [1]. The heterogeneity and semantics in modeling languages, multiple domains, multiple levels of abstraction and fidelity, and multiple physics models must be considered [2]. 
Models in the model-based environment range across electrical, mechanical, thermal, fluid, cyber, and cost, and exist at different levels of fidelity and abstraction. The OpenMETA tool suite, developed by Vanderbilt University and refined by Metamorph, Inc., was used to address and reconcile the heterogeneity of multi-domain, multi-physics models [3]. Having a design method and tool suite facilitates exploration of the design space complexities and options, reduces design cycle time, and focuses the design effort on a much smaller subset of potential designs that meet performance, manufacturing and cost requirements. This approach has the potential to improve the ability to deliver products much faster and more cost effectively than current design processes. The authors' current work extends the OpenMETA tool suite and approach into the air vehicle domain, taking development speed and cost more explicitly into account. The key contribution of the current work is using a lightweight ontology to act as an integration mechanism between design knowledge and component knowledge across engineering domains and disciplines. Vanderbilt University, and subsequently Metamorph, Inc., have developed and refined CyPhyML (the Cyber-Physical Modeling Language) for the modeling, evaluation, and synthesis of cyber-physical systems [4, 5]. Reference [6] discusses the specific tool implementation that is discussed in this paper. The lightweight ontology being developed has the semantic sensor network (SSN) ontology as its base [7], and makes use of the QUDT, OWL-Time, WGS84, and SWEET ontologies, as described in [8]. Decreasing elapsed time in the engineering life cycle of cyber-physical systems requires raising the level of abstraction while working across heterogeneous models in different domains at different levels of fidelity. The authors describe an ontology-based approach that facilitates a shift of the development focus to high-value design alternatives early in the life cycle.
Example trades using the approach are discussed, demonstrating its viability for accelerating conceptual design space exploration.

REFERENCES
[1] P. Derler, E. Lee, and A. Sangiovanni-Vincentelli, "Addressing modeling challenges in cyber-physical systems," Technical Report No. UCB/EECS-2011-17, Electrical Engineering and Computer Sciences, University of California at Berkeley, March 4, 2011.
[2] G. Karsai, "Unification or integration? The challenge of semantics in heterogeneous modeling languages," GEMOC 2014.
[3] S. Neema, T. Bapty, and J. Scott, "VU-ISIS final report: Meta tools extension and maturation," Institute for Software Integrated Systems, Vanderbilt University, December 2014.
[4] J. Sztipanovits, X. Koutsoukos, T. Bapty, S. Neema, and E. Jackson, "Design tool chain for cyber-physical systems: lessons learned," DAC '15, June 7-11, 2015, San Francisco, CA, USA.
[5] G. Simko, T. Levendovszky, S. Neema, E. Jackson, T. Bapty, J. Porter, and J. Sztipanovits, "Foundation for model integration: semantic backplane," in ASME 2012 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, August 2012, pp. 1077-1086. American Society of Mechanical Engineers.
[6] M. Schmit, S. Briceno, K. Collins, D. Mavris, K. Lynch, and G. Ball, "Semantic design space refinement for model-based systems engineering," 2016 IEEE International Systems Conference, April 18-21, 2016, Orlando, FL, USA.
[7] M. Compton, P. Barnaghi, L. Bermudez, R. García-Castro, O. Corcho, S. Cox, and K. Taylor, "The SSN ontology of the W3C semantic sensor network incubator group," Web Semantics: Science, Services and Agents on the World Wide Web, 17, 25-32, 2012.
[8] J. Calbimonte, H. Jeung, O. Corcho, and K. Aberer, "Enabling query technologies for the semantic sensor web," International Journal on Semantic Web and Information Systems, 8, 43-63, 2012.
13:30 Game Theoretic Analysis for Resource Allocation in Dynamic Multi-hop Networks with Arbitration
Laurent Njilla (Air Force Research Laboratory, USA); Harold Ouete (University of Douala, Cameroon); Niki Pissinou (Florida International University, USA); Kia Makki (Technological University of America (TUA), USA)
A connection through a mobile node may not be available because of the greediness of selfish nodes. In this paper, we address the issue of dynamic packet forwarding by a set of wireless autonomous ad hoc nodes. Selfish wireless nodes try to use the resources of other nodes without contributing their own. We model the dynamic packet forwarding problem as a negotiation game with an arbitrator. In our model, a group of mobile nodes requesting packet forwarding negotiates with a mobile arbitrator until agreement on a resource allocation is reached by at least a simple majority. The mobile arbitrator submits offers to each mobile device in the group, and each mobile node decides to accept or reject the offer; the final decision is made by simple majority. We solve the negotiation by finding the optimal Nash equilibrium strategies of the game, considering offers generated from a Dirichlet distribution for an ensemble of mobile devices over a finite and sporadic time limitation. The solution obtained from negotiation ensures that a mobile device can always find a peer or arbitrator to help forward packets and keep the network flowing. Mathematical proofs and MATLAB simulations support our model.
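The negotiation mechanism described above can be sketched in a few lines. The snippet below is an illustrative toy, not the authors' model: it draws allocation offers from a Dirichlet distribution (via the standard normalized-Gamma construction), has each node accept an offer when its share meets a reservation value, and adopts the first offer accepted by a simple majority. The function names, reservation values, and round limit are all assumptions for illustration.

```python
import random

def dirichlet_sample(alphas, rng):
    """Sample an allocation vector from a Dirichlet distribution
    via normalized Gamma draws (standard construction)."""
    draws = [rng.gammavariate(a, 1.0) for a in alphas]
    total = sum(draws)
    return [d / total for d in draws]

def negotiate(reservations, alphas, rounds, seed=0):
    """Arbitrator repeatedly proposes Dirichlet-drawn allocations;
    a node accepts when its share meets its reservation value, and an
    offer is adopted once a simple majority accepts.
    Returns (round_index, allocation), or (None, None) if no agreement."""
    rng = random.Random(seed)
    n = len(reservations)
    for t in range(rounds):
        offer = dirichlet_sample(alphas, rng)
        accepts = sum(1 for share, res in zip(offer, reservations)
                      if share >= res)
        if accepts > n // 2:
            return t, offer
    return None, None
```

With low reservation values and symmetric Dirichlet parameters, agreement typically occurs within the first few rounds; raising the reservations lengthens the negotiation, mirroring the time-limited bargaining the abstract describes.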
14:00 System-Level Considerations for Modeling Space-Based Quantum Key Distribution Architectures
Logan Mailloux (Air Force Institute of Technology & United States Air Force, USA); Benjamin Sargeant and Douglas Hodson (Air Force Institute of Technology, USA); Michael R Grimaila (Air Force Institute of Technology & DoD, USA)
Quantum Key Distribution (QKD) is a revolutionary technology which leverages the laws of quantum mechanics to securely distribute cryptographic keying material between two parties for increased levels of security. Terrestrial QKD systems are limited to distances of <200 km due to severe losses during single photon propagation in both optical fiber and line-of-sight free-space configurations. Thus, the feasibility of fielding a low Earth orbit (LEO) QKD satellite to overcome this limitation is being explored. Moreover, in August 2016, the Chinese Academy of Science successfully launched the world's first QKD satellite. However, many of the practical engineering performance and security tradeoffs associated with space-based QKD are not well understood for global secure key distribution. This paper presents several system-level considerations for modeling and studying space-based QKD architectures and systems. More specifically, this paper explores the necessary behaviors and requirements for developing a model for studying the effectiveness of QKD between LEO satellites and ground stations.

3C2: Decision Making Systems III

Room: Salon Viger B
Chair: Saeid Nahavandi (Deakin University, Australia)
13:00 A New Decomposition-based Evolutionary Framework for Many-objective Optimization
A new class of Multi-Objective Evolutionary Algorithms (MOEAs) has emerged recently that uses the concept of decomposition to overcome the challenges faced by the current state-of-the-art MOEAs in optimization problems with more than three objectives. This class of MOEAs employs a set of reference points to decompose the objective space into multiple scalar problems and to generate the target reference vectors that sustain solution diversity at every stage of the evolutionary process. In this study, we propose a novel framework for this class of MOEAs with a restricted mating selection scheme, with the aim of further improving the quality of the solutions close to the target reference vectors. The proposed framework is evaluated and compared with current popular reference-vector-based MOEAs to demonstrate its effectiveness. Using the Inverted Generational Distance (IGD) as the quality indicator, the experimental results indicate the superiority of the proposed framework when coupled with these MOEAs in solving 3- to 10-objective continuous optimization problems.
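The decomposition idea this abstract relies on, a set of uniformly spread reference points on the objective simplex, is commonly generated with the Das-Dennis simplex-lattice design. A minimal sketch (not code from the paper; the function name and parameters are illustrative):

```python
from itertools import combinations

def das_dennis(m, p):
    """Generate uniformly spaced reference points on the (m-1)-simplex
    using the Das-Dennis simplex-lattice design: all vectors of m
    non-negative multiples of 1/p that sum to 1 (a stars-and-bars
    enumeration, giving C(p+m-1, m-1) points)."""
    points = []
    # choose positions of m-1 "dividers" among p+m-1 slots
    for dividers in combinations(range(p + m - 1), m - 1):
        prev = -1
        point = []
        for d in dividers:
            point.append((d - prev - 1) / p)  # stars in this gap / p
            prev = d
        point.append((p + m - 1 - prev - 1) / p)  # stars after last divider
        points.append(point)
    return points
```

For 3 objectives with 4 divisions this yields C(6, 2) = 15 reference points, each summing to 1; decomposition-based MOEAs associate each candidate solution with its nearest reference vector to preserve diversity.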
13:30 Systems Engineering Decision-making: Optimizing and/or Satisficing?
Alex Gorod, Tiep Nguyen and Leonie Hallo (The University of Adelaide, Australia)
Not all systems are the same; consequently, they need different management approaches. The varied typology of systems is not always universally recognized. One way in which systems can vary is in terms of complexity, and there is increasing awareness of growing complexity in systems across a number of literature domains. Complexity is now an evolving topic in systems engineering (SE). However, traditional SE has tended to address all systems with the same approach, proceeding from the premise that activities and their interrelationships are linear and measurable. Recently there has been a paradigm shift in the way systems are understood, with the recognition of their nonlinear and emergent properties. Traditionally, optimization has been the primary method of decision-making. When dealing with complexity, true optimization is not always possible; therefore satisficing was introduced. Selecting and applying these two very different approaches is proving to be a challenging task for engineering practitioners. This paper explores SE decision-making in dealing with different types of systems with various degrees of complexity and proposes a decision-making methodology which can assist engineering managers with the selection of optimizing versus satisficing in a given situation or problem space.

3C3: Sensors Integration and Applications III

Room: Salon Terasse
Chair: Gholamhossein Ekbatanifard (Lahijan Branch, Islamic Azad University, Iran)
13:00 An Energy Aware Dynamic Cluster Head Selection Mechanism for Wireless Sensor Networks
Maryam Kalantari and Gholamhossein Ekbatanifard (Islamic Azad University)
Energy efficiency in data collection and dissemination is a very important factor in wireless sensor networks; minimizing energy consumption and maximizing network lifetime are therefore key factors in the design of wireless sensor network protocols. One energy conservation technique is to select a proper cluster head within each cluster. In this paper, a protocol is proposed that selects as cluster head the sensor node closest to low-energy nodes, which results in lower energy consumption and prevents the early death of nodes. Simulation results show that the proposed protocol enhances network lifetime.
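One plausible reading of the selection criterion, choosing as cluster head the candidate closest on average to the low-energy nodes, can be sketched as follows. This is an illustrative interpretation, not the authors' protocol; the threshold, tie-breaking rule, and data layout are assumptions:

```python
import math

def select_cluster_head(nodes, low_energy_threshold):
    """Pick the cluster head as the candidate node with the smallest
    average distance to low-energy nodes (illustrative reading of the
    criterion); ties broken in favor of higher residual energy.
    `nodes`: list of dicts with 'pos' (x, y) and 'energy'."""
    low = [n for n in nodes if n['energy'] < low_energy_threshold]
    candidates = [n for n in nodes if n['energy'] >= low_energy_threshold]
    if not low or not candidates:
        # no low-energy nodes (or no viable candidates): fall back to
        # the node with the most residual energy
        return max(nodes, key=lambda n: n['energy'])
    def avg_dist(n):
        return sum(math.dist(n['pos'], l['pos']) for l in low) / len(low)
    return min(candidates, key=lambda n: (avg_dist(n), -n['energy']))
```

Keeping the cluster head near depleted nodes shortens their transmission distances, which is the intuition behind the claimed lifetime improvement.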
13:30 Efficient Harvester with Active Load Modulation and Wide Dynamic Input Power Range for Wireless Power Transfer Applications
Abdullah Almohaimeed and Mustapha Yagoub (University of Ottawa, Canada); Rony Amaya (Carleton University, Canada)
This paper deals with an active load impedance design to enhance the performance of an adaptive reconfigurable rectifier. The proposed design aims to address issues raised by the early breakdown voltage effect in conventional rectifiers and to extend rectifier operation over a wider input power range. The active load, introduced to actively modulate and operate as a switch for various terminations at both low and high RF input power levels, achieves 40% RF-DC power conversion efficiency over a wide dynamic input power range from -17 dBm to 32 dBm, while exhibiting 80% peak power efficiency at 12 dBm. The active load power harvester was designed to operate in the 915 MHz ISM band and is suitable for Wireless Power Transfer applications.
14:00 Requirements engineering of a micro-UAV defense system
Markus Diehl (Technical University Munich, Germany); Mirko Hornung (Technical University of Munich, Germany)
This paper describes a research effort to develop the requirements for a micro-UAV defense system using the Department of Defense Architecture Framework (DoDAF). The goal is to define the system requirements for a security system against a single intruder UAV. The analysis begins with the definition of a potential scenario based on an analysis of relevant performance characteristics of commercially available micro UAS. The statement of need concludes the first step of the requirements process. From this, the stakeholder and system requirements are developed using DoDAF.

3C5: Gaming, Entertainment and Sensor Systems

Room: Salle de Bal Foyer
Chair: Gaétan J. D. R. Hains (Huawei Technologies Co. Ltd., France)
13:00 Game Private Networks Performance: From Geolocation to Latency to User Experience
Gaétan J. D. R. Hains (Huawei Technologies Co. Ltd., France); Youry Khmelevsky (Okanagan College, Canada); Rob Bartlett (W. T. Fast Inc., France); Alex Needham (W. T. Fast Inc., Canada); Tyler Sutherland (WTFast, Canada)
WTFast's Gamers Private Network (GPN) is a client/server solution that makes online games faster. GPN connects online video-game players with a common game service across a wide-area network. Online games are interactive competitions among individual players who compete in a virtual environment. Response time, latency, and their predictability are key to GPN success, yet run up against the vast complexity of internet-wide systems. We have built an experimental network of virtualized GPN components to carefully measure latency statistics for distributed Minecraft games in a controlled laboratory environment. This has led to a better understanding of the coupling between parameters such as the number of players, the subset of players that are idle or active, the volume of packets exchanged, the size of packets, latency to and from the game servers, and time series for most of those parameters. In this paper we present a mathematical model of those system game network parameters and show how it leads to: (1) realistic simulation of each of those network or game parameters, without relying on the experimental setup; and (2) very large-scale numerical simulation of the game setup to explore various internet-wide performance scenarios that (a) are impossible to isolate from internet "noise" in their real environment and (b) would require vast supercomputing resources if simulated exhaustively. We motivate all elements of our mathematical model and estimate the savings in computational cost they will bring for very large-scale simulation of the GPN. Such simulations will improve the quality of service and reliability of GPN systems.
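As a purely illustrative companion to claim (1) above, a latency time series can be generated from a handful of parameters without the experimental setup. The functional form and every coefficient below are invented for illustration and are not the paper's model:

```python
import random

def simulate_latency(n_players, active_frac, pkt_rate, steps, seed=0):
    """Toy generator of a latency time series: a base network delay plus
    a load term proportional to (active players x packet rate), with
    exponentially distributed jitter. All coefficients are illustrative
    assumptions, not values from the paper."""
    rng = random.Random(seed)
    base_ms = 20.0        # assumed fixed propagation delay
    load_coeff = 0.05     # assumed ms of delay per unit of offered load
    jitter_mean_ms = 5.0  # assumed mean of exponential jitter
    load = n_players * active_frac * pkt_rate
    return [base_ms + load_coeff * load + rng.expovariate(1.0 / jitter_mean_ms)
            for _ in range(steps)]
```

Sweeping `n_players` or `active_frac` over such a generator is how one could explore load scenarios numerically instead of reproducing them on a physical testbed.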
13:30 Gaming Network Delays Investigation and Collection of Very Large-Scale Data Sets
Ben Ward and Youry Khmelevsky (Okanagan College, Canada); Gaétan J. D. R. Hains (Huawei Technologies Co. Ltd., France); Rob Bartlett (W. T. Fast Inc., France); Alex Needham (W. T. Fast Inc., Canada)
We have built an experimental network of virtualized Gamer Private Network (GPN) components to carefully measure latency statistics for distributed Minecraft games in a controlled laboratory environment. This has led to a better understanding of the coupling between parameters such as the number of players, the subset of players that are idle or active, the volume of packets exchanged, the size of packets, latency to and from the game servers, and time series for most of those parameters. To measure these target variables, we determined that the best course of action would be to collect metrics data from a decentralized source. For such a large-scale venture, we investigated frameworks on which to build a collection application, drawing on TechEmpower's framework benchmarking tests. Building the collector on top of the fasthttp framework, it was clear that the performance expectations would be met, using the same benchmarking tool, wrk, that TechEmpower utilized. Our measurements exceeded the TechEmpower benchmark results by a factor of ten; we believe this is due to our use of Elasticsearch as the datastore, which offers fast insert and query speeds. To investigate gaming network performance issues, our project requires collecting large-scale data from upwards of 400,000 clients' gaming sessions in order to statistically analyze and improve their network performance. Using the open-source programming language Go, we constructed a data collection web application (named Collector) that is able to accept a vast number of rapid incoming connections. Operational information from clients is first authenticated by the Collector and then bulk-transmitted to a data store before the connection is closed.
Experiments were conducted on the best way to create and tear down connections to achieve the optimal balance between the greatest number of connections and the greatest volume of data processed. The goal is for the Collector to handle at least one million connections per second. The Collector server generates a number of workers that listen for incoming TLS connections. When a worker receives a request, it uses a multiplexed handler function to authenticate the data packet, extracts the information, generates a job object from the extracted information, and adds the new job object to the job channel. The job channel is a queue from which objects are processed and inserted into a bulk Elasticsearch processor, which finally sends a bulk packet of data to the data store. The server also regularly prints updates to the screen with the number of connections processed, the incoming connection's host and IP address, and the target Elasticsearch index. The server has a configuration file that contains general server operations, profiling options, and Elasticsearch options. The general server operations include opening a specific port, the maximum packet size, and connection keep-alive options. The profiling options determine the type of profiling the application will record, including memory, CPU, and blocking. The Elasticsearch options govern the functionality of the Elasticsearch processor and the IP address of the Elasticsearch server. Other configuration variables determine the number of workers and jobs the server will be able to handle. Three changeable variables at the top of the main function vary the server's capacity: MaxQueueSize sets the number of jobs that the job channel can hold, and MaxWorkers dictates the number of workers generated to listen for connections.
The more workers there are, the more connections can be handled in a shorter period of time; however, adding workers quickly consumes more resources. The Collector uses Aliaksandr Valialkin's fasthttp, an implementation of an HTTP platform (framework). In determining which framework to use, we investigated different technologies and their current real-world benchmark results. The initial performance comparison comes from TechEmpower, an online organization that performs standard benchmark tests on web frameworks. Their round 12 tests, run on February 25, 2016, were hosted on a static environment created according to best practices and community input. Each framework is implemented and put through standard tests, which are then compared against each other. The tests we are directly interested in relate to data updates to a database. TechEmpower's benchmark tests use wrk, a load simulator, to send a request packet containing 20 updates at a rate limited only by the infrastructure and framework. fasthttp connecting to PostgreSQL was able to handle 3,959 requests (20 update queries each) over 15 seconds. The requests had an average latency of 62.9 ms, a standard deviation of 62.3 ms, and a maximum of 893.7 ms. Conclusion: In this work we have built the elements and general structure of a web application that accepts requests from clients to insert data into a database. The framework was selected on the basis of the TechEmpower tests, which we then reproduced to confirm that the framework performed as described. The tests were successful, though further testing is required. The results far exceeded our expectations with regard to datastore integration, and we believe that continuing in the same direction will yield positive results.
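The worker/job-channel pipeline described above (a bounded job queue, a pool of workers, and bulk flushes to the data store) can be sketched language-neutrally. The snippet below uses Python threads and a plain list standing in for Elasticsearch; the constants mirror the names in the description (MaxQueueSize, MaxWorkers), but the batch size, sentinel-based shutdown, and all other details are illustrative assumptions:

```python
import queue
import threading

MAX_QUEUE_SIZE = 1000   # jobs the channel can hold (cf. MaxQueueSize)
MAX_WORKERS = 4         # concurrent workers (cf. MaxWorkers)
BULK_SIZE = 100         # jobs flushed to the store per bulk insert (assumed)

def run_collector(packets, store):
    """Sketch of the Collector's worker/job-channel pattern: workers take
    packets off a bounded queue, batch them, and bulk-insert into
    `store` (a list standing in for the Elasticsearch bulk processor)."""
    jobs = queue.Queue(maxsize=MAX_QUEUE_SIZE)
    lock = threading.Lock()

    def worker():
        batch = []
        while True:
            job = jobs.get()
            if job is None:          # shutdown sentinel
                break
            batch.append(job)
            if len(batch) >= BULK_SIZE:
                with lock:
                    store.extend(batch)   # bulk flush
                batch = []
        if batch:                    # flush the remainder on shutdown
            with lock:
                store.extend(batch)

    threads = [threading.Thread(target=worker) for _ in range(MAX_WORKERS)]
    for t in threads:
        t.start()
    for p in packets:
        jobs.put(p)                  # blocks when the channel is full
    for _ in threads:
        jobs.put(None)               # one sentinel per worker
    for t in threads:
        t.join()
```

The bounded queue gives the same back-pressure behavior described for the job channel: producers block when the channel fills rather than growing memory without limit.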

3C6: INCOSE

Room: Salon Viger A
Chair: James J Mulcahy (Florida Atlantic University & MEDNAX, USA)
13:00 Reengineering Autonomic Components in Legacy Software Systems: A Case Study
James J Mulcahy (Florida Atlantic University & MEDNAX, USA); Shihong Huang (Florida Atlantic University, USA)
13:30 Bridging the Gap Across Program Management, Systems Engineering, And Plant Modeling
Raymond Jonkers (Merlantec Management and Engineering, Canada)

Wednesday, April 26, 14:30 - 15:00

Break

Wednesday, April 26, 15:00 - 16:30

3D1: Autonomous Systems I

Room: Salon Viger C
Chair: Saeid Nahavandi (Deakin University, Australia)
15:00 Towards Trusted Autonomous Vehicles from Vulnerable Road Users Perspective
Khaled Saleh (Institute for Intelligent Systems Research and Innovation (IISRI)); Mohammed Hossny and Saeid Nahavandi (Institute for Intelligent Systems Research and Innovation (IISRI), Australia)
A number of recent research projects in the human-vehicle interaction field address the problem of human trust in autonomous vehicles. Almost all of this work focuses on investigating the attributes and factors that influence human drivers' trust in these vehicles. However, little research has been done on bystander humans' trust in autonomous vehicles. Bystander humans, in the context of autonomous vehicles, are humans who do not explicitly interact with the automated vehicle but still affect how the vehicle accomplishes its task by observing or interfering with its actions. Vulnerable road users (VRUs) are one example of bystander humans interacting with an autonomous vehicle. According to a recent research study, intent understanding between vulnerable road users and autonomous vehicles is one of the most critical factors accounting for trusted interaction between the two entities. In this paper we propose a computational framework for modelling trust between vulnerable road users and autonomous vehicles based on a shared intent understanding between them.
15:30 Analyzing Hazards in System-of-Systems: Described in a Quarry Site Automation Context
Stephan Baumgart (Volvo Construction Equipment, Mälardalen University, Sweden); Joakim Fröberg (SICS & Swedish Institute of Computer Science, Sweden); Sasikumar Punnekkat (Mälardalen University, Sweden)
Methods for analyzing hazards related to individual systems are well studied and established in industry today. When systems-of-systems are set up to achieve new emergent behavior, hazards caused specifically by malfunctioning behavior in the complex interactions between the involved systems may not be revealed by analyzing single-system hazards alone. A structured process is required to reduce the complexity and enable identification of hazards when designing systems-of-systems. In this paper we first present how hazards are identified and analyzed by industry using hazard and risk assessment (HARA) methodology in the context of single systems. We describe systems-of-systems and provide a quarry site automation example from the construction equipment domain. We propose a new structured process for identifying potential hazards in systems-of-systems (HISoS), exemplified in the context of the provided example. Our approach helps streamline the hazard analysis process, enabling faster certification of systems-of-systems.
16:00 An Energy-Based Flight Planning System for Unmanned Traffic Management
Zhilong Liu (University of California, Berkeley, USA); Raja Sengupta (University of California, Berkeley)
In this paper, we propose an energy-based flight planning system for Unmanned Aircraft Systems (UAS) Traffic Management (UTM). Fuel consumption estimation at the flight planning stage is safety-critical in Air Traffic Management (ATM), because energy-related failures are often life-threatening. However, conservative fuel estimation is neither economical nor environmentally friendly, because carrying unnecessary fuel load burns considerable extra fuel. The same reasoning holds in UTM. ATM researchers are actively working on optimizing fuel loading, but such research is lacking in UTM. In this paper, we aim to optimize energy consumption in UTM with a flight planning system. The accuracy and effectiveness of the system are illustrated by experiments and simulations.

3D2: Systems Verification and Validation

Room: Salon Viger B
Chair: Sofiene Tahar (Concordia University, Canada)
15:00 Formalization of Birth-Death and IID Processes in Higher-order Logic
Liya Liu (Concordia University, Canada); Osman Hasan (Concordia University); Sofiene Tahar (Concordia University, Canada)
Markov chains are extensively used in the modeling and analysis of engineering and scientific problems. Usually, paper-and-pencil proofs, simulation, or computer algebra software are used to analyze Markovian models. However, these techniques either do not scale or do not guarantee accurate results, which are vital in safety-critical systems. Probabilistic model checking has been proposed to formally analyze Markovian systems, but it suffers from the inherent state-explosion problem and unacceptably long computation times. Higher-order-logic theorem proving has recently been used to overcome these limitations, but it lacks support for discrete Birth-Death processes and Independent and Identically Distributed (IID) random processes, which are frequently used in many system analysis problems. In this paper, we formalize these notions using formal Discrete-Time Markov Chains (DTMCs) with finite state space and classified DTMCs in higher-order-logic theorem proving. To demonstrate the usefulness of the formalizations, we present the formal performance analysis of two software applications.
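For intuition about the discrete birth-death chains being formalized, the sketch below builds a finite birth-death transition matrix and its closed-form stationary distribution from detailed balance (pi[i+1]/pi[i] = birth[i]/death[i+1]). This is a numerical illustration only, not the higher-order-logic formalization the paper discusses:

```python
def birth_death_matrix(birth, death):
    """Transition matrix of a finite discrete birth-death chain with
    per-state birth (up) and death (down) probabilities; the remaining
    mass in each row is the self-loop probability."""
    n = len(birth)
    P = [[0.0] * n for _ in range(n)]
    for i in range(n):
        up = birth[i] if i < n - 1 else 0.0
        down = death[i] if i > 0 else 0.0
        if i < n - 1:
            P[i][i + 1] = up
        if i > 0:
            P[i][i - 1] = down
        P[i][i] = 1.0 - up - down    # stay put
    return P

def stationary(birth, death):
    """Closed-form stationary distribution via detailed balance:
    pi[i+1] / pi[i] = birth[i] / death[i+1]."""
    w = [1.0]
    for i in range(len(birth) - 1):
        w.append(w[-1] * birth[i] / death[i + 1])
    z = sum(w)
    return [x / z for x in w]
```

Checking numerically that pi P = pi is the informal analogue of the invariance theorems a theorem prover would establish once and for all.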
15:30 Systems Integration and Verification in an Advanced Smart Factory
George L Ball (Raytheon, Inc., USA); Randall Ramsey and Nicholas Barrett (Raytheon, USA); Christopher Runge (Raytheon Missile Systems, USA)
In 2014, Raytheon Missile Systems began work on creating a new small satellite manufacturing facility in Tucson, AZ. The primary objectives of the new factory were to reduce the cost and cycle time of producing small satellites. One approach leveraged to address these objectives was to minimize costly human-based oversight inspection while increasing the level of quality assurance related to the assembly process. Unlike assembly lines, which produce hundreds or even thousands of the same item on a continuous basis, the new factory had to be adaptable to "one-of-a-kind" production. In order to assure a high level of confidence in the proper assembly and testing of such unique space assets, the Raytheon team architected an approach that incorporated some of the newest advances in information technology and robotic testing. The factory relies on an event-driven architecture that incorporates streaming analytics as the basis for removing human quality control observers and for identifying product or process anomalies when they originally occur, which may be earlier in assembly than when they would manifest themselves to a human observer. An integrated sensor network and a supporting information system powered by open-source software securely receive, store, and analyze the data streams. The data management platform, known as the Open Manufacturing Information System (OMIS), is based on the Apache Hadoop environment. Hadoop provides the power and flexibility to enable rapid adjustment to changing customer requirements and differences between each satellite that is brought into the factory. The sensor-enabled factory monitors the assembly process and gathers build history information using a variety of network-enabled devices such as automated torque controllers, weight scales, digital semi-automated robotic testing, video and audio recorders, and local environmental indicators, as well as work instruction tracking.
All of the data are timestamped using a common network time protocol server and tagged with metadata indicating the source of the data and other unique characteristics. The capabilities developed within the Apache Hadoop framework enable actionable insights from the large and complex data sets created during the manufacturing process. The assembly and test of a single small satellite generates sensor data in a variety of heterogeneous formats from multiple sources every second during the assembly process. In addition to the sensor integration issues, OMIS must also interact with our SAP and Oracle databases as well as the real time SCADA system. Data fusion of sensor data, process specifications, SAP, and SCADA provides real time integrated situational awareness of the factory and product flow while providing a comprehensive assembly workflow history for forensic analysis of anomalous events or quality issues and verification of chain of custody and provenance during assembly. The real time integration and extraction of value-added information is non-trivial and must be accessible to the customer on-demand and in a manner that facilitates decision making. The combination of the Hadoop framework and the application of advanced analytical methods has created opportunities for increased production, decreased costs, predictive failure notification, and improvement of the overall efficacy of satellite manufacturing and quality assurance.
16:00 Improving Adaptive Network Fuzzy Inference System with Levenberg-Marquardt Algorithm
Ana Farhat and Ka C Cheok (Oakland University, USA)
The well-known adaptive neuro-fuzzy inference system (ANFIS) uses a combination of least-squares estimation (LSE) and gradient-descent back-propagation to model a training data set. In this paper, we show that the rate of convergence of ANFIS can be greatly improved by using a combination of LSE and the Levenberg-Marquardt algorithm (LMA). The improved ANFIS converges more closely and significantly more rapidly to the data. A detailed explanation of the proposed ANFIS is presented, and its validity is verified via simulation.
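The core of the Levenberg-Marquardt step this abstract refers to is the damped normal-equation update delta = (J^T J + mu I)^{-1} J^T r, where J is the Jacobian of the residuals r and mu the damping factor. A self-contained numerical sketch follows; it is illustrative and does not show the paper's ANFIS-specific Jacobian assembly:

```python
def lm_step(J, r, params, mu):
    """One Levenberg-Marquardt update for a least-squares problem:
    delta = (J^T J + mu I)^{-1} J^T r, solved here by Gaussian
    elimination for small dense systems. J: m x n Jacobian (list of
    rows), r: length-m residuals, params: length-n parameter vector."""
    n = len(params)
    m = len(J)
    # normal matrix A = J^T J + mu I and gradient g = J^T r
    A = [[sum(J[k][i] * J[k][j] for k in range(m)) + (mu if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    g = [sum(J[k][i] * r[k] for k in range(m)) for i in range(n)]
    # forward elimination (A is positive definite, so no pivoting needed)
    for i in range(n):
        piv = A[i][i]
        for j in range(i + 1, n):
            f = A[j][i] / piv
            for c in range(i, n):
                A[j][c] -= f * A[i][c]
            g[j] -= f * g[i]
    # back substitution
    delta = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = g[i] - sum(A[i][j] * delta[j] for j in range(i + 1, n))
        delta[i] = s / A[i][i]
    return [p - d for p, d in zip(params, delta)]
```

Small mu makes the step Gauss-Newton-like (fast near the optimum); large mu makes it a short gradient-descent step (robust far from it). Adapting mu between these regimes is the source of the faster convergence the abstract claims over plain back-propagation.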

3D3: Modeling and Simulation II

Room: Salon Terasse
Chair: John Salmon (Brigham Young University, USA)
15:00 An Agent-Based Decision Tool to Explore Urban Climate & Smart City Possibilities
This paper investigates alternative ways to construct a decision tool intended to help the Delaware Valley Regional Planning Commission (DVRPC) region meet or exceed the goal of an 80% reduction in greenhouse gas (GHG) emissions by 2050. The goal is to explore and build several pre-prototypes to evaluate the role of agent-based modeling (ABM), alternative data sources (Census, energy reports, DVRPC surveys, etc.), GIS modeling, and various social science theories of human behavior (land value theory, economic disparity theory, cognitive learning theory, etc.). Section 2 presents a model of the business-as-usual scenario that uses trend extrapolation to project energy consumption and GHG production until 2050. Section 3 then explains initial research on an Agent Based Model (ABM) with which users can investigate the role of attitude, information awareness, and economic disparities in consumer choice of residence location and transportation mode. Finally, we conclude with some lessons learned and challenges for scaling.
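The business-as-usual projection described for Section 2 is trend extrapolation; a minimal sketch of that step (the function names, data, and goal check are illustrative, not the paper's model):

```python
def extrapolate_trend(years, values, target_year):
    """Least-squares linear trend through historical (year, value) pairs,
    extrapolated to `target_year` -- a 'business as usual' projection."""
    n = len(years)
    mx = sum(years) / n
    my = sum(values) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(years, values))
             / sum((x - mx) ** 2 for x in years))
    return my + slope * (target_year - mx)

def meets_goal(baseline, projected, reduction=0.80):
    """True when the projection achieves at least the stated GHG cut."""
    return projected <= baseline * (1 - reduction)
```

Comparing the extrapolated 2050 value against 20% of the baseline is exactly the gap the ABM scenarios in Section 3 are meant to help close.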
15:30 Optimization of a Modular Drone Delivery System
Jaihyun Lee (University of Michigan, USA)
Drones have recently become a promising solution for rapid parcel delivery due to advances in battery technology and navigation systems. Drones have inherent limitations in battery capacity and payload, which make efficient operation and management a critical problem for a successful delivery system. Adopting modularity in the drone design can provide operational benefits, increasing overall fleet readiness and reducing overall fleet footprint. This paper discusses the potential costs and benefits of introducing modularity to a drone delivery system. We propose an optimization method for the operation management of a fleet of modular delivery drones, and present initial results comparing the proposed algorithm with existing operations management methods. Preliminary results show that a simple operations management strategy can make a drone delivery system unstable as demand on certain types of modules in the fleet increases.

3D4: Energy Management and Sustainability II

Room: Salon Neufchatel
Chair: Leila Ismail (UAE University & Founder and Director to the High Performance and Grid/Cloud Computing Research Laboratory, United Arab Emirates (UAE))
15:00 Towards an Energy-Aware Task Scheduling (EATS) Framework for Divisible-Load Applications in Cloud Computing Infrastructure
Leila Ismail (UAE University & Founder and Director to the High Performance and Grid/Cloud Computing Research Laboratory, United Arab Emirates (UAE)); Abbas Fardoun (United Arab Emirates University, United Arab Emirates (UAE))
With the growing use of Cloud computing, the energy consumption of the underlying data center becomes a critical issue for both the environment and the cloud's electricity cost. There is therefore a need for a scheduling framework in the Cloud that takes into account the optimization of the Cloud's energy consumption. In this paper, we propose an Energy-Aware Task Scheduling (EATS) cloud computing framework that is responsible for scheduling users' tasks while considering the energy consumption of the underlying data center. This paper describes our framework and reports on workload classifications of energy consumption. The results reveal that CPU-bound applications are the largest consumers of energy, and therefore should be accounted for in any energy-efficient scheduling framework, and that strategies based on shutdowns and startups should be avoided.
15:30 Energy Management System for Automated Driving
Kirill Gorelik and Ahmet Kilic (Robert Bosch GmbH, Germany); Roman Obermaisser (University of Siegen, Germany)
With increasing levels of driving automation, new requirements arise regarding reliable power supply for power net components. In addition to new fail-operational power nets, appropriate control strategies are required that provide functional power supply at least for the duration of the vehicle's transition to a safe state (standstill). This paper presents a generic, topology-independent concept for future energy management systems controlling fail-operational power nets for automated driving in both normal and failure cases. Based on predictive runtime energy flow optimization and a 3-level degradation concept, the energy management system presented in this work allocates the available power net energy resources in a way that allows the vehicle to be brought to a standstill at the safest location, with the best-suited driving profile and with maximum driving comfort. By adapting the control strategy to the current system state, the energy management system enables reliable power supply for power net components and increases overall energy efficiency.
16:00 Agent Based Model for the Evidence-Based Long Term Planning of Power and Water Critical Infrastructures
James Thompson and Damon Frezza (The MITRE Corporation, USA); Burhan Necioglu (MITRE Corporation, USA); Michael Cohen, Kenneth Hoffman and Kristine Rosfjord (The MITRE Corporation, USA)
The comfort, mobility, and economic well-being of the U.S. population depend on reliable and affordable electric power services. Sustainable water supplies are required for operating conventional power plants, yet long-term planning across both of these sectors is not well coordinated. It is increasingly important to analyze the security, sustainability, and resilience of mid- and long-term electric utility and water system capacity expansion plans in an integrated fashion, with respect to potential challenges posed by climate change and other risks to this critical infrastructure. This paper describes an Agent Based Model (ABM) of a typical regional power system that incorporates the features of specific plant types and their cooling systems, which depend on abundant water supplies at appropriate temperatures and quantities to support full power operation. The effects of potential water restrictions and constraints on power plant cooling systems (i.e., cooling towers, cooling ponds, and once-through condensers) are analyzed to evaluate the level of risk inherent in given long-term capacity expansion plans. This paper presents:
• The architecture of an ABM representing power plants, along with technical and economic features affecting their dispatch and water requirements
• The application of the ABM to represent an existing capacity expansion plan for a region to 2030 and the impact of water constraints on the ability to meet the required demand levels
• A summary of next steps to refine and apply the ABM to a mitigation strategy to develop a more resilient power system with respect to water limitations
Motivation: The power and water critical infrastructure sectors have significant interdependencies that raise the likelihood of risk occurrence to the public and to the economy they both serve. Long-term planning in these two sectors is often fragmented and conducted in silos, and could be better coordinated through the formation of a Joint Planning Unit involving power, water, environmental, and economic regulators and planners to improve systems security and resilience. The model presented in Figure 1 for the interdependencies of power and water critical infrastructures includes physical and policy-based dependencies. The initial implementation of this model, exercised on a chosen, constrained geographic region, identified potential vulnerabilities and failure points in power-water interdependencies and thus enables science- and data-driven joint long-term planning. The characteristics of individual power plants (fossil, nuclear, and renewable fueled) are captured in the agent descriptions. A key differentiator of this model is its focus on long-term planning; existing, functional power-water planning models often focus on short-term, event-based interdependencies. The aim of this model is to provide a capability to integrate planning, analysis, and evaluation across several areas identified by DHS as Critical Infrastructure, including electric power, water, nuclear systems, dams, agriculture, and other critical elements and services in a regional economy. MITRE's power-water model is a three-part system. A multi-criteria decision analysis (MCDA) model evaluates policies and approaches to maintaining and expanding energy production in the face of diminished water resources. The MCDA model is informed by an agent based model of the power and water agents (i.e., power plants and dams/rivers/lakes) in the defined region of interest. The ABM details how the power and water entities would develop over a 30-year time period. MITRE's power-water modeling system allows for the long-term comparison of, and decisions concerning, alternative power-water policies and technologies within a data-driven approach.
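As a toy illustration of one mechanism such an ABM must capture (the plant data and the simple merit-order rule below are hypothetical, not MITRE's model), water availability can be treated as a hard constraint on dispatch:

```python
# Minimal sketch of water-constrained merit-order dispatch: plants are
# dispatched cheapest-first, each limited by capacity, remaining demand,
# and a shared water budget. All figures are invented for illustration.

def dispatch(plants, demand, water_available):
    """Returns (generation_per_plant_in_MWh, unserved_demand)."""
    out = {}
    for p in sorted(plants, key=lambda p: p["cost"]):
        if demand <= 0:
            break
        # Water limits output: gen * water_per_mwh <= water_available
        max_by_water = (water_available / p["water_per_mwh"]
                        if p["water_per_mwh"] > 0 else p["capacity"])
        gen = min(p["capacity"], demand, max_by_water)
        out[p["name"]] = gen
        demand -= gen
        water_available -= gen * p["water_per_mwh"]
    return out, demand
```

Running this with a water budget too small for the thermal fleet shows demand going unserved, the kind of vulnerability the paper's long-term scenarios are designed to expose.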

3D5: Model-Based Systems Engineering III

Room: Salle de Bal Foyer
Chair: Robert Hilbrich (German Aerospace Center (DLR), Germany)
15:00 Free and Open Source Fault Tree Analysis Tools Survey
Anis Baklouti (Supméca & ENISo, France); Nga Nguyen (EISTI, France); Jean-Yves Choley and Faïda Mhenni (SUPMECA, France); Abdelfattah Mlika (ENISo, Tunisia)
This paper gives an in-depth survey of free and open source tools for Fault Tree Analysis (FTA), one of the most widely used techniques in safety and reliability engineering. We have carried out a comparative study of four different tools. Firstly, OpenFTA is an open source fault tree analysis tool based on the XFTA calculation engine. Secondly, the OpenAltaRica platform is a free tool that analyzes the risk of complex systems. Thirdly, ALD Fault Tree Analyzer is a free web-based tool that analyzes static fault trees. Finally, DFTCalc is an open source tool that analyzes dynamic fault trees based on stochastic model checking techniques. To compare these tools, three representative examples are used. The first, modeled with OpenFTA and ALD Fault Tree Analyzer, is an Electro Mechanical Actuator (EMA) used to actuate the ailerons of an aircraft, with a static fault tree containing AND gates representing a redundancy mechanism. The second, modeled with DFTCalc, is a remotely controlled lawnmower with a dynamic fault tree. The third, modeled with OpenAltaRica, is an example of fault tree generation from AltaRica code. In addition, the same EMA system has also been modeled with Isograph Fault Tree++ in order to compare the free and open source tools with a commercial tool.
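For readers unfamiliar with what these tools compute, a static fault tree reduces to gate-by-gate probability arithmetic. The sketch below (event names and probabilities are invented) assumes independent basic events, the textbook case that the surveyed tools generalize far beyond:

```python
# Toy static fault-tree evaluator: compute the top-event probability
# from basic-event probabilities, assuming independent basic events.

def prob(node, basic):
    """node is ('AND', kids), ('OR', kids), or a basic-event name."""
    if isinstance(node, str):
        return basic[node]
    gate, kids = node
    ps = [prob(k, basic) for k in kids]
    if gate == 'AND':                  # all children must fail
        out = 1.0
        for p in ps:
            out *= p
        return out
    if gate == 'OR':                   # any single child failing suffices
        out = 1.0
        for p in ps:
            out *= (1.0 - p)
        return 1.0 - out
    raise ValueError(gate)

# Hypothetical redundant actuator channel: the top event occurs if both
# motors fail (AND gate models the redundancy) or the shared PSU fails.
tree = ('OR', [('AND', ['motor1', 'motor2']), 'psu'])
```

With motor failure probabilities of 0.1 each and a PSU at 0.01, the AND gate contributes 0.01, so the top event is 1 - 0.99 * 0.99 = 0.0199.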
15:30 Experiences Gained From Modeling and Solving Large Mapping Problems During System Design
Robert Hilbrich (German Aerospace Center (DLR), Germany); Michael Behrisch (DLR, Germany)
The rising complexity of safety-critical systems and their increasing application to automate safety-critical tasks present a challenge for established development methodologies and tools. Are they able to handle the growing system complexity without compromising either system efficiency or correctness? This challenge is addressed by the "correctness by construction" engineering principle. It aims to formalize error-prone and cumbersome engineering tasks during the design of a system in order to achieve correctness as well as efficiency despite high levels of complexity. However, the major obstacle to applying this principle in practice lies in the necessary formalization of "constructive tasks", for which human engineers with creative minds are still predominantly responsible. The authors of this paper applied the principle to mapping problems that typically occur during the design of several real-world safety-critical systems. In order to automate the "mapping process", the authors developed the tool suite ASSIST. It takes textual specifications of a mapping problem and its constraints, written in a domain-specific language, as input from the systems engineer, and uses constraint programming with an embedded constraint solver to determine correct and optimized solutions. Using ASSIST, large-scale mapping problems for which the manual construction of a solution had required about 12 person-months were solved and optimized within 10 minutes on a regular desktop computer. In this paper, the experiences gained from modeling and solving specific mapping problems during the design of large safety-critical systems are described in more detail. Emphasis is put on the description of modeling techniques, search strategies, and mapping heuristics that proved to be successful or unsuccessful in practice. The characteristics of each of these real systems and the specifics of each mapping problem vary substantially, and they cannot easily be published without violating intellectual property rights. Therefore, the authors devised a hypothetical system for this paper as an example to describe and illustrate the experiences gained from modeling and solving these large-scale mapping problems. In this example, a mapping is required between a set of 2500 sensor cables and a set of 6000 ports on several distributed network hubs. Each cable has to be connected to one port, and each of these "assignments" has to satisfy an extensive set of safety- and resource-related requirements that significantly reduce the choice of ports. The optimization goal of this example is to reduce the total cable weight. The experiences discussed on the basis of this example concern the following topics: modeling a mapping problem and its constraints, searching for solutions, and finding optimized solutions. For each of these categories, surprising issues and challenges that arose in large-scale mapping problems are presented and discussed using the hypothetical example. Strategies, heuristics, and modeling tricks that proved successful in practice for solving these issues are presented. These contributions especially benefit practitioners in the field of model-based systems engineering of safety-critical systems. They underline that constructive tasks can be formalized and automated, so that the correctness of a system can be argued on the basis of its construction process.
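ASSIST itself relies on constraint programming with an embedded solver; the following sketch only illustrates the shape of the cable-to-port problem with a greedy, most-constrained-first heuristic (identifiers and weights are hypothetical, and a real solver would backtrack rather than give up):

```python
# Sketch of the cable-to-port mapping problem, NOT the ASSIST solver:
# each cable must get exactly one admissible port while minimizing
# total cable weight. Data and predicate names are invented.

def assign(cables, ports, admissible, weight):
    """cables, ports: lists of ids; admissible(c, p) -> bool encodes
    safety/resource constraints; weight(c, p) -> cable weight."""
    free = set(ports)
    mapping = {}
    # Most-constrained cables first, a common constraint-solving heuristic.
    order = sorted(cables,
                   key=lambda c: sum(admissible(c, p) for p in free))
    for c in order:
        options = [p for p in free if admissible(c, p)]
        if not options:
            return None        # greedy dead end; a CP solver would backtrack
        best = min(options, key=lambda p: weight(c, p))
        mapping[c] = best
        free.remove(best)
    return mapping
```

At the paper's scale (2500 cables, 6000 ports) greedy choices like this routinely paint themselves into corners, which is precisely why a systematic constraint-programming search is needed.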
16:00 Experimentable Digital Twins for Model-Based Systems Engineering and Simulation-Based Development
Michael Schluse (Technical University of Aachen, Germany); Linus Atorf (Institute for Man-Machine Interaction, RWTH Aachen University, Germany); Juergen Rossmann (Technical University of Aachen, Germany)
The concepts and methodologies behind Model-based Systems Engineering (MBSE) hold great promise for the development of complex systems. Various projects have been carried out successfully in recent years and have demonstrated the power of the overall concept, as well as the practical problems in reaching its ambitious goals. Whereas the first steps of MBSE, such as the iterative modeling of requirements, designs, behaviors, and tests, have become standard procedures in Systems Engineering (SE), the transition to simulation is often still restricted to quite simple scenarios. Although elaborated system models deliver all the information needed, the simulation of the overall system in prospective working environments, interacting with other systems, is rather the exception. The problem is that there is still quite a gap between the first SE steps and the various algorithms that simulation technology can offer today. Major reasons for this seem to be the complexity of the system model that results when modeling complex interactions, the complexity of using state-of-the-art simulation technology, and the absence of simulation frameworks for simulations across multiple domains and disciplines. "Experimentable Digital Twins", a concept originally developed for the eRobotics methodology, seem to have the potential to close the gap between SE and simulation by introducing a new structuring element for configuring simulations. A new simulation system architecture integrating well-known simulation algorithms provides Virtual Testbeds for the simultaneous simulation of a network of different Digital Twins interacting with each other in various ways (i.e., a network of different systems, their components, and their working environment). This approach has been used successfully for a variety of applications in multiple research areas. As one application, it allows for the simulation-based optimization of parameters, system structure, etc.

3D6: THEFOSE

Room: Salon Viger A
Chairs: William Edmonson (North Carolina A&T State University, USA), Claus Nielsen (Cranfield University, United Kingdom)
15:00 Fuzzy Classification Context for the Responsive and Formal Design Process
Solomon Gebreyohannes (NC A&T University, USA); William Edmonson, Albert Esterline, Abdollah Homaifar and Nadew Kibret (North Carolina A&T State University, USA)
This paper presents an application of a fuzzy relation in system modeling (from requirements) to be used in a Systems Engineering (SE) methodology. We define \emph{fuzzy classifications} (models for distributed systems), extract component and system \emph{theories} (sets of logical expressions), and ensure consistency of requirements for the Responsive and Formal Design (RFD) process. The RFD process is an SE methodology that relates a set of requirements, associated models, and simulations, and the relationships between them, by integrating Model-Based Systems Engineering (MBSE), to manage system modeling complexity, with formal methods, to ensure that designs are verifiably correct against their requirements. To translate informal requirements into logical expressions in the RFD process, we first model requirements using a 3-tuple structure called a \emph{classification}, formulated from Barwise and Seligman's channel theory. A classification consists of ``tokens'' (observed situations), ``types'' (situation features), and a binary relation classifying tokens with types. However, classifying tokens using types as present (represented as `$1$') or absent (represented as `$0$'), as in channel theory, is not always possible (since it involves vagueness and imprecision), and the representation lacks the expressiveness to reason about relations among such types (vague situation features). Hence, a binary classification does not capture uncertainty. In this paper, we consider a degree of truth in the relation between tokens and types to define a fuzzy classification. We then develop an algorithm that extracts a theory from a fuzzy classification. This helps in formal proofs for checking consistency (no contradictions) and deducing requirements (verifying properties). We demonstrate our development using a measurement system of three small satellites whose goal is to image the colorful auroral ovals seen around Earth's magnetic poles.
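The paper's extraction algorithm is not reproduced in the abstract; the sketch below illustrates one plausible alpha-cut reading of theory extraction from a fuzzy classification (the token and type names are invented for illustration):

```python
# Sketch of extracting crisp entailments from a fuzzy classification
# via an alpha-cut (the paper's actual algorithm may differ): type t
# "entails" type u at level alpha if every token belonging to t with
# degree >= alpha also belongs to u with degree >= alpha.

def entailments(table, alpha):
    """table: {token: {type: degree in [0, 1]}}; returns (t, u) pairs."""
    types = {ty for row in table.values() for ty in row}
    out = set()
    for t in types:
        for u in types:
            if t == u:
                continue
            # Check the alpha-cut of t is contained in the alpha-cut of u.
            if all(row.get(u, 0.0) >= alpha
                   for row in table.values()
                   if row.get(t, 0.0) >= alpha):
                out.add((t, u))
    return out
```

On a two-token example, a strong 'aurora' observation entailing 'polar' survives the cut while the converse does not, which is exactly the kind of asymmetric, graded relation a binary (0/1) classification cannot express.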
15:30 Applying the VDM Formalism Across Systems Engineering Lifecycles
Claus Nielsen (Cranfield University, United Kingdom)
While Systems Engineering has come a long way since its beginnings over six decades ago, it still faces challenges in attaining the degree of rigour and precision found in the theoretical foundations of other engineering domains. Systems engineering has always incorporated tools and methods from other engineering domains and adapted them to fit the purpose of engineering systems. This paper explores the feasibility of expanding the foundation of systems engineering through the use of one of the theoretical foundations of the Software Engineering domain, namely formal methods. A study is presented on the applicability of one of the most well-established formal methods notations, the Vienna Development Method (VDM), across a classic systems engineering life cycle, using a systems engineering case study on the Next Generation Air Transportation System.
16:00 Systems Theory and a Drive Towards Model-based Safety Analysis
Cody Fleming (University of Virginia, USA)
We propose that systems engineering principles taken from multidisciplinary engineering, from model-based design and systems engineering, and from new, emerging methods for the safety analysis of complex, coupled systems can be applied to extend the methods of system safety assurance into a so-called field of ``Model-based Safety Analysis''. The safety analysis methods are based on a model of accident causality that is grounded in systems theory and frames safety as a control problem rather than just a reliability problem. This perspective can capture behaviors that are prevalent in complex, human- and software-intensive systems, and the paper includes a few brief examples to demonstrate the approach. This model-based safety analysis supplements existing model-based systems engineering activities, as well as other safety-related activities, and can be applied early in concept development, when design details or system specifications are not yet available; it provides a formal means for reasoning about immature system design concepts.

Thursday, April 27

Thursday, April 27, 08:00 - 09:30

4A1: Autonomous Systems II

Room: Salon Viger C
Chair: Peter Travis Jardine (Queen's University, Canada)
08:00 Design of Model Predictive Control via Learning Automata for a Single UAV Load Transportation
Kleber Cabral and Sergio Ronaldo Barros dos Santos (Instituto Tecnológico de Aeronáutica, Brazil); Sidney Givigi (Royal Military College of Canada, Canada); Cairo L. Nascimento, Jr. (Instituto Tecnológico de Aeronáutica, Brazil)
In recent years, autonomous aerial robots have been used successfully to construct structures composed of parts that have similar dimensions and inertial moments. However, the proposed control systems are not able to accurately control UAVs while handling and transporting loads with various weights and balance characteristics. In this paper, we investigate a robust and innovative control strategy for a UAV load transportation system that can deal with load characteristics and with disturbances such as ground effect and control noise. Taking into account the nonlinear and underactuated features of the quadrotor, a Learning Automata (LA) methodology is applied to tune the Nonlinear Model Predictive Controllers (NMPCs) in the various contexts of operation. Specifically, LA is applied to select the weighting parameters of the objective function in order to minimize the tracking error of the plant. Simulation results demonstrate that the learned weighting parameters can be efficiently employed to obtain NMPC controllers that track optimized trajectories under different load conditions.
08:30 Experimental Results for Autonomous Model-Predictive Trajectory Planning Tuned with Machine Learning
Peter Travis Jardine (Queen's University, Canada); Sidney Givigi (Royal Military College of Canada, Canada); Shahram Yousefi (Queen's University, Canada)
This paper presents experimental results of a high-level trajectory planning algorithm for autonomous quadrotors based on Model Predictive Control (MPC) tuned with machine learning. Time-varying planar inequality constraints are used to avoid obstacles. The nonlinear plant dynamics are linearized around a hover condition. Learning Automata are used to select the relative weights of the objective function and compensate for the nonlinearities lost during this linearization. The proposed technique successfully guides a quadrotor to a target while avoiding a spherical obstacle placed in its path. These results demonstrate the potential of MPC-based techniques in unmanned aerial vehicle operations that involve obstacles. Furthermore, they demonstrate that machine learning can be used to tune the parameters of an MPC formulation.
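Both papers in this session tune MPC objective weights with learning automata. A minimal linear reward-inaction (L_RI) automaton, the textbook scheme (the abstracts do not state which variant the papers use), looks like this:

```python
import random

# Minimal linear reward-inaction (L_RI) automaton: each "action" would
# correspond to one candidate set of MPC objective weights. Rewarded
# actions gain selection probability; unrewarded trials leave the
# probabilities unchanged (the "inaction" part).

class LearningAutomaton:
    def __init__(self, n_actions, rate=0.1):
        self.p = [1.0 / n_actions] * n_actions  # uniform prior
        self.rate = rate

    def choose(self, rng=random):
        """Sample an action index from the current distribution."""
        r, acc = rng.random(), 0.0
        for i, pi in enumerate(self.p):
            acc += pi
            if r < acc:
                return i
        return len(self.p) - 1

    def reward(self, action):
        """L_RI update: shift probability mass toward the rewarded action."""
        for i in range(len(self.p)):
            if i == action:
                self.p[i] += self.rate * (1.0 - self.p[i])
            else:
                self.p[i] *= (1.0 - self.rate)
```

In the papers' setting, the reward signal would come from simulated tracking error: weight sets that reduce the error are rewarded, so the automaton converges on well-tuned MPC weights without gradient information.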

4A2: Complex Systems Issues I

Room: Salon Viger B
Chair: Jawad Ahmad (Glasgow Caledonian University, United Kingdom)
08:00 A Systematic Approach to Model and Simulate Controlling Industrial Processes with Uncertainties
Ashraf A Zaher (American University of Kuwait); Mounib Khanafer (American University of Kuwait, Kuwait)
This paper investigates the design of adaptive controllers for applications that can be modeled by low-order dynamics with some parameter uncertainties. Control applications that fall into this category include industrial processes (e.g., level, flow, and pressure control) and some automotive applications (e.g., active suspension). The paper introduces two different design techniques and compares them to traditional PID controllers. The first design is based on a combination of state feedback and Lyapunov-based techniques. This proposed controller has the advantage of being applicable to both linear and nonlinear models. The key issue in the design is arriving at the best parameter update law that guarantees both stability and satisfactory transient performance. The second design technique makes use of the well-known gradient algorithm to identify the unknown parameter(s). A comprehensive comparison is then presented to highlight the advantages and disadvantages of the proposed strategies. Tradeoffs between stability and performance are carefully studied. A first-order process, simulated in MATLAB, is used to exemplify the suggested techniques. Finally, conclusions are presented, with comments regarding the real-time compatibility of the proposed controllers.
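For the second technique, the gradient algorithm on a first-order process can be sketched in a few lines (the process values and excitation signal below are invented, and Python stands in for the paper's MATLAB setup): the estimate is nudged by the prediction error times the regressor.

```python
# Sketch of gradient-based parameter identification for a discrete
# first-order process y[k+1] = a*y[k] + b*u[k] with unknown gain b.
# A persistently exciting square-wave input keeps the estimate moving.

def identify_gain(a, b_true, steps=2000, gamma=0.05):
    y, b_hat = 0.0, 0.0
    for k in range(steps):
        u = 1.0 if (k // 50) % 2 == 0 else -1.0  # exciting square wave
        y_pred = a * y + b_hat * u               # model prediction
        y_next = a * y + b_true * u              # "measured" plant output
        e = y_next - y_pred                      # prediction error
        b_hat += gamma * e * u                   # gradient update law
        y = y_next
    return b_hat
```

Since e = (b_true - b_hat) * u and u squared is 1 here, each step shrinks the estimation error by the factor (1 - gamma), so the estimate converges geometrically; the paper's stability-versus-performance tradeoff shows up in the choice of the adaptation gain gamma.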
08:30 Energy Demand Prediction Through Novel Random Neural Network Predictor for Large Non-Domestic Buildings
Jawad Ahmad (Glasgow Caledonian University, United Kingdom); Hadi Larijani (Glasgow Caledonian University, United Kingdom); Rohinton Emmanuel and Mike Mannion (Glasgow Caledonian University, United Kingdom); Abbas Javed (Glasgow Caledonian University, United Kingdom); Mark Phillipson (Glasgow Caledonian University, United Kingdom)
Buildings are among the largest consumers of energy in the world. In developed countries, buildings currently consume 40% of total energy and 51% of total electricity consumption. Energy prediction is a key factor in reducing energy wastage. This paper presents and evaluates a novel Random Neural Network (RNN) technique that is capable of predicting the energy utilization of a large non-domestic building comprising 562 rooms. Initially, a model of the 562 rooms is developed using the Integrated Environment Solutions Virtual Environment (IES-VE) software. The IES-VE model is simulated for one year and 10 essential data inputs, i.e., air temperature, dry resultant temperature, internal gain, heating set point, cooling set point, plant profile, relative humidity, moisture content, heating plant sensible load, and number of people, are measured. Datasets are generated from the measured data, and the RNN model is trained with these datasets for energy demand prediction. Experiments are used to identify the accuracy of the prediction. The results show that the proposed RNN-based energy model achieves a Mean Square Error (MSE) of 0.00001 in just 86 epochs via the Gradient Descent (GD) algorithm.

4A3: Energy Management and Sustainability III

Room: Salon Neufchatel
Chair: Jawad Ahmad (Glasgow Caledonian University, United Kingdom)

4A4: Engineering Systems-of-Systems I

Room: Salle de Bal Foyer
08:00 A Requirement-Driven Description Methodology for Technology System of Systems
Hanlin You (National University of Defense Technology, P.R. China); Mengjun Li (NUDT, P.R. China); Jiang Jiang, Bingfeng Ge and Kewei Yang (National University of Defense Technology, P.R. China); Ming Qiao (National University of Defense Technology, Changsha, Hunan, P.R. China)
Due to the philosophical complexity of technology and the scale complexity of systems of systems, traditional methods are incapable of delineating a technology system of systems (TSoS) comprehensively and accurately. Consequently, this paper proposes a requirement-driven methodology that uses a multi-view approach to cope with this challenge. First, the interactive relation between data preparation and analysis approaches is introduced, and the evolution of modeling thought for TSoS is reviewed. Then, a description framework is constructed that consists of three views and twelve models. Third, illustrative examples using patent documents and the Terminal High-Altitude Area Defense (THAAD) system are utilized to demonstrate the proposed methodology. Finally, the conclusions are summarized and future work on TSoS engineering is discussed.
08:30 Modeling and Analysis of Health-Information System of Systems for Managing Transitional Complexity Using Engineering Systems Multiple-Domain Matrix
Suguru Okami and Naohiko Kohtake (Keio University, Japan)
This study examines an approach to model and analyze a transforming health-information system of systems. The process and architecture of the Cambodian malaria-surveillance system were modeled using the Engineering Systems Multiple-Domain Matrix modeling framework. Using the attributes of the process, architecture, and risk associated with the environment within the model, relative weights of the constituent systems were scored at each time interval. The simulated confidence intervals of the absolute differences in the scores indicated that this approach captured the transformation of the system under investigation. This approach provides a first step in analyzing the transitional conditions of the constituent systems as well as of the entire health-information system, whereby informed decision-making for optimizing continuous system management is facilitated.
09:00 Systems of Systems Engineering for Particle Accelerator based Research Facilities
Thilo Friedrich (European Spallation Source ERIC & Royal Institute of Technology, Stockholm (KTH), Sweden); Annika Nordt (European Spallation Source ERIC, Sweden); Christian Hilbes (ZHAW School of Engineering, Switzerland)
This paper explores the applicability of Systems of Systems (SoS) Engineering to the development of large-scale particle accelerator research facilities. Modern particle accelerator facilities, realized by complex constellations of interacting systems, serve a variety of users as research enablers. While the constituent systems exhibit a significant degree of technical and operational independence and have distinct life cycles, the performance required to conduct research still needs to emerge from their integration into one overall system, the research facility. This makes a Systems of Systems oriented approach to engineering useful. Furthermore, accelerator-based research facilities face increasing availability expectations. Meeting these expectations can be supported through a tailored application of functional safety standards as an engineering methodology guideline at the SoS level, as explained in this paper. A SoS engineering approach utilizing functional safety standards (IEC 61511, IEC 61508) in this way is concretized in a case study on the development of a Machine Protection Systems of Systems at the world-leading neutron science laboratory European Spallation Source ERIC (ESS).

4A5: THEFOSE

Room: Salon Viger A
Chairs: Rick Dove (Stevens Institute of Technology, USA), William Edmonson (North Carolina A&T State University, USA)
08:00 Case Study: Agile Hardware/Firmware/Software Product Line Engineering at Rockwell Collins
Rick Dove (Paradigm Shift International, USA); William Schindel (ICTT System Sciences, USA); Robert Hartney, III (Rockwell Collins, USA)
08:30 A Language Proposition for System Requirements
Benoit Lebeaupin (CentraleSupélec, France); Antoine Rauzy (NTNU, Norway); Jean-Marc Roussel (ENS, France)
Natural language is currently the basis of the majority of system specifications, even though it has several drawbacks. In particular, natural language is inherently ambiguous. In this article, we propose a way to complete the natural language text of requirements by giving a formal syntax to this text. We introduce and use an example to illustrate our ideas.

Thursday, April 27, 09:30 - 10:00

Break

Thursday, April 27, 10:00 - 11:30

4B1: Large-Scale Systems Integration

Room: Salon Viger C
Chair: Eric Guetre (TRIUMF, Canada)
10:00 System engineering for the ARIEL-II project
Eric Guetre (TRIUMF, Canada)
The Advanced Rare IsotopE Laboratory (ARIEL) project will triple the scientific output at TRIUMF (www.triumf.ca, Vancouver, Canada) by delivering two new beamlines that can produce rare ion beams. These rare ion beams are used by experimenters from Canada and across the world in the fields of nuclear physics, materials science, and nuclear medicine. The first ARIEL project (ARIEL-I, $52M) was completed in 2015 and focused on the construction of the ARIEL building, the construction of a new building for compressing helium, and a new electron linear accelerator. The second ARIEL project (ARIEL-II, $40M) has started and will focus on the equipment and apparatus needed to produce these rare ion beams and to transport them to existing experimental facilities. Together, ARIEL-I and ARIEL-II constitute one of the largest made-in-Canada initiatives in the realm of the physical sciences. This paper will focus on the systems engineering approach and challenges of the ARIEL-II project, from the perspective of the ARIEL project engineer. The ARIEL facility will be explained, along with how it will fit within TRIUMF's existing scientific facilities. The author will go over the systems engineering organizational, project, and technical processes for ARIEL-II. For organizational processes, this paper will show how ARIEL-II is really a program with 10 constituent projects with diverse funding, coordinated by a program management office. The human resource process is a particular challenge, because TRIUMF decided at the outset to continue operating its cyclotron, so that ARIEL must be planned, constructed, and integrated with minimal disruption to regular beam operations and from within the existing pool of personnel. Examples of reports will be shown to illustrate how ARIEL measures its progress and key metrics. For project processes, risk management is a key focus area. ARIEL not only tracks project management risks but also must identify and deal with hazards. The production of rare isotope beams requires high energy particle beams (up to 100 kW), which involve radiological, cryogenic, high-pressure, laser, and high-voltage hazards. Many safeguards, both engineered and procedural, must be designed to eliminate or mitigate these hazards. This paper will discuss the example of the radiological hazard in the ARIEL building and how shielding and access control are used to mitigate it. For technical processes, this paper will show the overall, multi-layered system architecture and how it is used to assign responsibility, identify key interfaces to be defined, and control the configuration of ARIEL. An example for the timing system layer will be shown. The paper will also show how ARIEL-II has been divided into phases with distinct science deliverables, and how the traditional systems engineering V-model is used as the basis for collecting requirements and use cases, coming up with a concept for each system element that satisfies the requirements, carrying out detailed design and procurement, verifying that system elements meet specifications, and finally commissioning (validation). Some of the lessons learnt from ARIEL-I will be shown, along with how ARIEL-II proposes to improve on them. This paper will also present some of ARIEL-II's challenges, for example the difficulty of executing a large and intensive program within a matrix organization.
10:30 Ontology Mediation to Rule them All: Managing the plurality in Product Service Systems
Uri Shani (IBM, Israel); Marco Franke, Karl A Hribernik and Klaus-Dieter Thoben (BIBA - Bremer Institut für Produktion und Logistik GmbH, Germany)
The lifecycle of a product is managed not only through Product Lifecycle Management (PLM); it must also integrate product services into a Product Service System (PSS). The related activities are performed throughout the entire lifecycle and require sharing information among tools of the different product lifecycle phases. When PLM collaboration is extended with services integrated into a PSS, the physical product is linked with a vastly extended universe of information during the PSS lifecycle. To achieve a robust and maintainable PSS, interoperability must be established between the physical product's data sources and the relevant services. To that end, we use ontologies to define formal semantics for information sources and targets. Each tool or data source can use its own ontology independently of the other tools and sources, creating the potential for an unmanageable universe of data. Yet the benefit is that components of the PSS have weak dependencies among them, which leads to an open and flexible system that can easily evolve and adapt. This paper focuses on the provision of ontology-driven services, including the transformation of product-related data into different ontologies and the aggregation of different data-source-specific ontologies into a holistic PSS universe with no specific ontology at its core. We present two approaches that implement ontology mediation (also termed "semantic mediation") as a variant of ontology matching, since the level of matching can be rather complex. The application of this technology is also demonstrated in related domains, showing its potential when applied to PSS; this is presently ongoing research within the PSYMBIOSYS EU project. In consequence, the applicable data integration and ontology matching approaches are the tools at hand for bringing sustainable PSS to market.
11:00 Nesting in the Evaluation of System Readiness for Complex Systems of Emerging Technologies
Michael Knaggs (US DoE, USA); Dennis Harkreader, Alfred Unione, John Oelfke and John Ramsey (KeyLogic Systems, USA); Dale Kearns (Deloitte, Inc., USA); Brian Sauser (University of North Texas, USA); Brad Atwater (Lockheed Martin, USA)
This paper analyzes the impact of nesting assumptions on the calculated system readiness for an integrated complex system that includes multiple subsystem components. In particular, it focuses on the net impact of calculating the system readiness of a subsystem of technology components; calculating an equivalent technology readiness level (TRL) for the subsystem treated as a single component technology; and including this TRL and the subsystem interfaces in a Systems Readiness Assessment (SRA) for a larger system. The SRA methodology used in this evaluation has been demonstrated previously in several Department of Defense (DoD) applications and recently in a DoE application. The process for converting a system readiness level (SRL) to a single equivalent TRL is based on methodology described in a handbook on SRA applications issued by the National Security Administration. The analysis concludes that nesting assumptions can have a significant impact on the estimated readiness of a larger system. However, the analysis also concludes that equality, equivalence, and consistency in the identification and aggregation of technologies into technology subsystems (parity) can provide consistent and comparable evaluations of system readiness that are useful in the development of system designs and for tracking progress toward system readiness goals. The result is a potentially powerful and pragmatic approach for focusing management attention on critical elements of the R&D life cycle and supporting decisions on R&D investments.
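The SRL-to-TRL conversion the abstract describes can be illustrated with the widely used matrix formulation of Sauser et al., in which component TRLs and pairwise integration readiness levels (IRLs) are normalized and combined into a composite SRL. This is a sketch of that general idea, not the handbook procedure the paper follows; the TRL and IRL values below are invented.

```python
# Sketch of an SRL roll-up in the matrix style of Sauser et al., which
# underlies many SRA implementations. The exact handbook procedure the
# paper follows may differ; the TRL/IRL values below are invented.

def srl(trl, irl):
    """Composite system readiness level in [0, 1].
    trl: component technology readiness levels (1-9).
    irl: symmetric matrix of integration readiness levels (1-9),
         with irl[i][i] = 9 (a component integrates with itself)."""
    n = len(trl)
    t = [x / 9.0 for x in trl]                    # normalize TRLs
    r = [[x / 9.0 for x in row] for row in irl]   # normalize IRLs
    comp = []
    for i in range(n):
        linked = [j for j in range(n) if r[i][j] > 0]
        comp.append(sum(r[i][j] * t[j] for j in linked) / len(linked))
    return sum(comp) / n

# Subsystem of three component technologies: roll up to a composite SRL,
# then rescale to 1-9 for the single "equivalent TRL" used when the
# subsystem is nested as one component of a larger system.
trl = [7, 6, 8]
irl = [[9, 5, 4],
       [5, 9, 6],
       [4, 6, 9]]
s = srl(trl, irl)
equivalent_trl = round(s * 9)
print("composite SRL:", round(s, 3), "-> equivalent TRL:", equivalent_trl)
```

The nesting question the paper studies is visible here: the equivalent TRL discards the subsystem's internal IRL structure, so the readiness computed for the larger system depends on where this aggregation boundary is drawn.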

4B2: Complex Systems Issues II

Room: Salon Viger B
Chair: Huy T Tran (University of Illinois at Urbana-Champaign, USA)
10:00 A Hierarchical Framework for Complex Networks Robustness Analysis to Errors
Michel Bessani (University of São Paulo, Brazil); Rodrigo Fanucchi (COPEL & University of São Paulo, Brazil); Júlio Massignan (University of São Paulo, Brazil); Marcos Camillo (COPEL & University of São Paulo, Brazil); João Bosco London Jr. (Universidade de São Paulo, Brazil); Carlos Maciel (USP, Brazil)
Robustness analysis is concerned with the capability of a complex network, or system, to handle damaging events. It can consider errors or malicious attacks and is performed by simulating the removal of parts and quantifying the impact of such removals. Traditionally, errors are sampled under an assumption of equal failure probabilities for all susceptible parts of the system. However, today's engineered systems are becoming ever more heterogeneous, with complex structure and dynamics. This paper introduces a hierarchical framework for robustness analysis that accounts for this structural complexity by respecting the different types of elements that constitute engineered systems. The proposed framework has two layers. The first is a sampling layer that uses models from reliability engineering to distinguish the failure process of each type of element during random-error simulations. The second is a performance layer that uses the simulations from the sampling layer to quantify the impact of the removals. Samples from a Brazilian power distribution network are used as a test case to demonstrate the hierarchical framework and are compared with Dutch samples already analyzed in the literature. The results are in agreement with traditional robustness analysis (with equal failure probabilities), revealing that the Brazilian samples are less robust than the Dutch samples. In addition, a new analysis considering the time variable added by the sampling layer is presented. When the time variable is considered, the majority of high-impact events occur over a large time window. However, some events happen within a single-day window and lead to severe impacts (around 50% loss of performance). We conclude with some aspects that should be explored in future research, such as the use of covariates and rare-event simulation techniques to focus on low-probability, high-impact events.
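The two-layer scheme described above can be sketched in a few lines: a sampling layer draws type-dependent failures, and a performance layer scores the surviving network. The 8-node topology, the two element types and their failure rates below are invented stand-ins for the paper's power-network data and reliability models.

```python
import random

# Sketch of the hierarchical robustness framework: a sampling layer with
# per-type failure processes and a performance layer measuring impact.
# Network, element types and rates are invented for illustration.

def largest_component_fraction(nodes, edges, removed):
    """Performance layer: fraction of the original network remaining in
    the largest connected component after the removed nodes fail."""
    alive = set(nodes) - set(removed)
    adj = {n: set() for n in alive}
    for u, v in edges:
        if u in alive and v in alive:
            adj[u].add(v)
            adj[v].add(u)
    seen, best = set(), 0
    for start in alive:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:                       # depth-first component sweep
            n = stack.pop()
            size += 1
            for m in adj[n]:
                if m not in seen:
                    seen.add(m)
                    stack.append(m)
        best = max(best, size)
    return best / len(nodes) if nodes else 0.0

def sample_failures(node_types, rates, horizon, rng):
    """Sampling layer: exponential time-to-failure per element type, a
    stand-in for the reliability-engineering models in the paper."""
    return {n for n, t in node_types.items()
            if rng.expovariate(rates[t]) <= horizon}

rng = random.Random(42)
nodes = list(range(8))
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (3, 4),
         (4, 5), (5, 6), (6, 7), (7, 4)]
node_types = {n: "feeder" if n < 4 else "branch" for n in nodes}
rates = {"feeder": 0.05, "branch": 0.20}   # failures/day (invented)

impacts = [largest_component_fraction(
               nodes, edges, sample_failures(node_types, rates, 10, rng))
           for _ in range(200)]
print("mean surviving fraction over 200 runs:",
      round(sum(impacts) / len(impacts), 3))
```

Replacing the equal-probability assumption is just a matter of the `rates` table; the time variable the paper analyzes corresponds to the `horizon` argument of the sampling layer.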
10:30 Designing Resilient System-of-Systems Networks
Huy T Tran (Georgia Institute of Technology, USA); Jean C Domercant (Georgia Tech Research Institute & Electronic Systems Laboratory, USA); Dimitri Mavris (Georgia Institute of Technology, USA)
The networked nature of many system-of-systems (SoS) requires that system engineers consider resilience in the design and analysis of future SoS networks. We present an approach for the design space exploration of resilient SoS networks. We model candidate network designs using complex network methods, and generate and analyze a network design space with Design of Experiments. Results show that of the factors considered, threat type has the largest effect on network resilience. Threat type is also shown to strongly influence the best adaptation strategy to incorporate.
11:00 Autonomous System Ranking by Topological Characteristics: A Comparative Study
Mehmet Engin Tozal (University of Louisiana at Lafayette, USA)
The Internet is a highly engineered, large-scale complex system serving billions of people worldwide. The whole system is formed by tens of thousands of autonomous networks, or autonomous systems (ASes), owned by different organizations. These autonomous networks are connected to each other through hundreds of thousands of relations, which reflect the business partnerships among the network operators as well as the traffic routing in the Internet. Ranking ASes by their topological characteristics allows us to acquire immediate insight into the complex structure of the Internet and to make decisions based on various criteria. In this study we compare and contrast six different AS ranking schemes based on the topological features of the ASes: customer degree, provider degree, peer degree, customer-cone size, alpha centrality and betweenness centrality. We report varying levels of agreement/disagreement among the ranking schemes and show that it is necessary to select multiple ranking schemes to gain a diverse insight into the topology.
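The kind of disagreement between ranking schemes the abstract reports can be demonstrated on a toy topology. The sketch below compares plain degree against betweenness centrality (one of the six schemes listed) using Brandes' algorithm and a simple Kendall-tau agreement score; the seven-node "AS" graph is invented and far smaller than any real AS-level topology.

```python
from collections import deque

# Compare two AS ranking schemes (degree vs. betweenness centrality) on
# a tiny invented topology and measure their rank agreement.

def betweenness(adj):
    """Brandes' betweenness centrality on an unweighted, undirected
    graph given as {node: set_of_neighbors}."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        stack, pred = [], {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        q = deque([s])
        while q:                         # BFS shortest-path counting
            v = q.popleft(); stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1; q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; pred[w].append(v)
        delta = {v: 0.0 for v in adj}
        while stack:                     # back-propagate dependencies
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc  # undirected: each pair is counted from both endpoints

def kendall_tau(score_a, score_b):
    """Pairwise rank agreement between two score dicts (ties skipped)."""
    items = list(score_a)
    agree = disagree = 0
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            a = score_a[items[i]] - score_a[items[j]]
            b = score_b[items[i]] - score_b[items[j]]
            if a * b > 0: agree += 1
            elif a * b < 0: disagree += 1
    total = agree + disagree
    return (agree - disagree) / total if total else 0.0

# Two provider cliques joined by a low-degree transit AS ("AS4").
adj = {
    "AS1": {"AS2", "AS3", "AS4"}, "AS2": {"AS1", "AS3"},
    "AS3": {"AS1", "AS2"},        "AS4": {"AS1", "AS5"},
    "AS5": {"AS4", "AS6", "AS7"}, "AS6": {"AS5", "AS7"},
    "AS7": {"AS5", "AS6"},
}
deg = {v: len(adj[v]) for v in adj}
bc = betweenness(adj)
tau = kendall_tau(deg, bc)
print("top by degree:", max(deg, key=deg.get),
      "| top by betweenness:", max(bc, key=bc.get),
      "| Kendall tau:", tau)
```

Degree ranks the clique hubs first, while betweenness puts the low-degree transit AS on top, so the two schemes agree only partially (tau of 0.6 here), which is exactly why consulting multiple schemes is informative.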

4B4: Engineering Systems-of-Systems II

Room: Salle de Bal Foyer
Chair: Allison Doren (MITRE Corporation, USA)
10:00 On the Resilience of Systems of Systems
Matthew Summers (Cranfield University, United Kingdom); S. Barker (Cranfield University, United Kingdom)
The need to consider how systems can be made resilient to failure modes has gained increasing traction in the fields of systems thinking and systems design, and is now more widely studied, with authors identifying the potential disruptive effects of failure upon a system and codifying these disruptions into specific types. When the focus of specification moves from the bounded single system to the consideration of capability and effect, however, systems of systems, rather than single systems, must be contended with. Systems of systems have been classified as being of a number of types (acknowledged, collaborative, directed, virtual, for example), whilst authors have endeavoured to characterise the properties of systems of systems and the difficulties associated with their design, introduction and operation. Such study has invariably arrived at the conclusion that systems of systems are far more complex than bounded single systems; as the final system-of-systems design will still need to be resilient to failure, this in turn poses more difficult questions for the study of resilience, since the properties of a bounded single system are unlikely to be the same as those of a system of systems. This paper considers the problems posed by the need to specify resilience in a system-of-systems environment, first evaluating how the various types and properties of systems of systems might affect the consideration of resilience, and then proposing an initial codification of system-of-systems resilience disruption types, along with recommendations and required further work.
10:30 SysML Executable Systems of System Architecture Definition: A Working Example
This paper provides a working example of a system-of-systems (SoS) architecture in the Systems Modeling Language (SysML), using IBM Rhapsody, developed in support of the Defense Advanced Research Projects Agency (DARPA) SoS program Cross-Domain Maritime Surveillance and Targeting (CDMaST). One goal of this effort was to provide the CDMaST program industry participants with an example of an executable SysML model that represented a baseline SoS architecture and supported the DARPA desire for SoS documentation through a model-based implementation approach. The SoS architecture was defined as a selection of platforms, weapons, sensors, and mission systems (elements) operating collectively in the maritime environment; the assignment of weapons, sensors, and mission systems to platforms; the interfaces between elements; and the services SoS elements provide. An architecture element is a distinct component in the overall SoS architecture. Elements include platforms, weapons, communications and data links, sensors, battle managers (manned and unmanned), and systems that support mission processing (e.g. data fusion, targeting, etc.). The architecture model provides the framework for two lines of reasoning about the SoS. First, the model reflects the results of physics-based trades conducted to inform architecture decisions. Second, the model defines the systems and behaviors which will be analyzed to demonstrate their operational effectiveness in mission and campaign modeling analyses. The model aims to capture information that is important for analysis, implementation and testing activities.
Motivation. In today's defense environment, individual systems are conceived, engineered, and developed under the Department of Defense (DoD) systems acquisition, operations and support frameworks. In practice, once deployed, each system generally operates as a component of a larger SoS. Current acquisition and engineering guidance calls for addressing these larger SoS considerations throughout the system life cycle [1], from initial concept, requirements, design, implementation and test through post-deployment system evolution. The latest best practices for engineering SoS considerations into a system [2] point to the need for a quantitative technical approach to inform investment decision makers about the impact and cost of system architecture changes. DoD guidance [1] also provides a framework for addressing the evolution of systems in an SoS based on an "Implementers' View" [3] of systems engineering for SoS. As the practice of SoS engineering (SoSE) expands, there is increased interest in approaches that define SoS architectures from an SoSE perspective. A model-based engineering (MBE) approach using the Unified Profile for DoDAF and MODAF (UPDM) has been proposed as a way to address SoSE challenges [4], [5]. The CDMaST Broad Agency Announcement (BAA) calls for performers to use industry-standard modeling languages and methods for architecture generation [6]. For SoSE purposes, a SysML model represents an unambiguous, structured, executable, digital representation of the SoS architecture. Ideally the model can represent the SoS systems engineering architecture and serve as a single repository of a structured, self-consistent description of the elements of the SoS, the detailed interfaces between them, and the logic which enables the elements to carry out the mission. An executable model allows verification and validation that the architecture operates coherently in selected scenarios.
Results. The paper walks through the SoS architecture using the model, which represents the elements that comprise the architecture. This work demonstrates how elements are configured to execute specific scenarios and presents views into the architecture which illustrate its key features using the standard DoDAF.
The model specifies systems in a baseline maritime surface and anti-submarine warfare SoS; captures relevant behavior, performance attributes and interfaces for each element; and describes end-to-end flow across the architecture, including battle management, communications and human decision-making. Selected views from the model are presented below.
Figure 1 (OV-5b: Operational Activity Model) shows the DoDAF view describing the necessary resource flows that are input (consumed) and output (produced) by each operational activity.
Figure 2 (SysML Model Structure) shows a fragment of the model structure, including platforms, element types, elements and their mapping to operational activities.
Figure 3 (SV-10b: Systems State Transition Description for a Weapon) illustrates how the model represents the relevant behaviors of each element in the form of state charts; these in turn provide the basis for executing the model in selected kill chains.
Figure 4 (Fragment of SV-10c: Systems Event Trace Description from the Executing Model) shows a fragment of a sequence diagram generated by the executing model, representing how the elements operate end-to-end to execute the kill chain, the sequence of actions to prosecute the mission from surveillance through engagement.
The executable model presented in this paper is a working example of a baseline architecture for a surface and anti-submarine warfare SoS. It demonstrates the feasibility of capturing an SoS architecture in a SysML model and the utility of such models for examining SoS architectures, including constituent elements, their behaviors, interfaces and their mapping to operational activities.
References
[1] DoD Defense Acquisition Guidebook. Washington, D.C.: Pentagon, May 2013.
[2] The Technical Cooperation Program, TTCP Technical Report TR-JSA/TP4-1-2014, Recommended Practices: System of Systems Considerations in the Engineering of Systems, August 2014.
[3] J. Dahmann, G. Rebovich, J. A. Lane, R. Lowry and K. Baldwin, "An Implementers' View of Systems Engineering for Systems of Systems," Proceedings of the IEEE International Systems Conference 2011, April 4-7, 2011, Montreal, Quebec, Canada.
[4] M. Hause, "SOS for SoS: A new paradigm for system of systems modeling," IEEE Aerospace Conference, 1-8 March 2014, Big Sky, MT.
[5] Object Management Group (OMG), Unified Profile for DoDAF/MODAF (UPDM) 2.1, 2013, available at http://www.omg.org/spec/UPDM/2.1/PDF.
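The executable behavior the paper captures in SysML state charts (SV-10b) and event traces (SV-10c) can be approximated in ordinary code as a transition table plus a recorded trace. The states and events below are invented for illustration and are not taken from the CDMaST model.

```python
# Toy state machine for a weapon element, in the spirit of the SysML
# state charts described above. States and events are invented; they
# are not taken from the CDMaST model.

TRANSITIONS = {
    ("Idle",           "assign_target"): "Assigned",
    ("Assigned",       "launch"):        "InFlight",
    ("Assigned",       "cancel"):        "Idle",
    ("InFlight",       "acquire"):       "TerminalHoming",
    ("TerminalHoming", "impact"):        "Expended",
}

class WeaponElement:
    def __init__(self):
        self.state = "Idle"
        self.trace = ["Idle"]           # recorded trace, cf. SV-10c

    def handle(self, event):
        nxt = TRANSITIONS.get((self.state, event))
        if nxt is None:                 # model execution rejects it
            raise ValueError(f"event {event!r} invalid in state {self.state!r}")
        self.state = nxt
        self.trace.append(nxt)
        return nxt

# Execute one kill-chain fragment: a target is handed off, the weapon
# launches, acquires and engages.
w = WeaponElement()
for e in ["assign_target", "launch", "acquire", "impact"]:
    w.handle(e)
print(" -> ".join(w.trace))
```

Executing the table like this is the small-scale analogue of the verification the paper gets from model execution: invalid event orderings are rejected rather than silently producing an incoherent trace.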
11:00 On Defense Strategies for System of Systems Using Aggregated Correlations
Nageswara Rao and Neena Imam (Oak Ridge National Laboratory, USA); Chris Yu Tak Ma (Hang Seng Management College, Hong Kong); Kjell Hausken (University of Stavanger, Norway); Fei He (Texas A&M University-Kingsville, USA); Jun Zhuang (State University of New York at Buffalo, USA)

4B5: INCOSE

Room: Salon Viger A
Chair: Thomas A McDermott, Jr (Georgia Tech Research Institute & Georgia Tech Sam Nunn School of International Affairs, USA)
10:00 Application of Model-Based Systems Engineering Methods and Tools to Improve Emergency Care Delivery
Mohamed Elshal (Indiana University, USA)
10:30 ATRIUM - Architecting Under Uncertainty: For ISO 26262 compliance
Naveen Mohan (KTH Royal Institute of Technology, Sweden); Per Roos and Johan Svahn (Scania CV AB, Sweden); Martin Torngren (Royal Institute of Technology, Sweden); Sagar Behere (KTH Royal Institute of Technology, Sweden)
11:00 INCOSE Academic Research Forum - Future Systems Engineering Research Directions
Thomas A McDermott, Jr (Georgia Tech Research Institute & Georgia Tech Sam Nunn School of International Affairs, USA); Jon Wade (Stevens, USA); Richard Adcock (Cranfield University & BKCASE, United Kingdom)