For full conference details, visit the IEEE SysCon 2016 website: http://2016.ieeesyscon.org

Program for 2016 Annual IEEE Systems Conference (SysCon)

Conference rooms: Grand Cypress A, Grand Cypress ABC, Grand Cypress B, Grand Cypress C, Grand Cypress D, Hemingway's Restaurant, Palm ABC, Palm Corridor, Palm DEF, Poinciana A, Poinciana AB, Poinciana B, Registration Counter 1

Monday, April 18

07:00-17:00  Registration
08:00-10:00  1A1: Tutorial: Smart Home System Cybersecurity: Threat and Defense in a Cyber-Physical System; 1A2: Tutorial: Data Analytics
10:00-10:15  Coffee Break
10:15-12:00  1B1: Tutorial: Smart Home System Cybersecurity: Threat and Defense in a Cyber-Physical System (Continued); 1B2: Tutorial: Data Analytics (Continued)
12:00-13:00  Lunch
13:00-15:00  1C1: Tutorial: Distributed Sensing and RF Tomography; 1C2: Tutorial: Intelligent Control Architecture for Autonomous Vehicles
15:00-15:15  Coffee Break
15:15-17:00  1D1: Tutorial: Distributed Sensing and RF Tomography (Continued); 1D2: Tutorial: Intelligent Control Architecture for Autonomous Vehicles (Continued)
17:15-19:30  Security and Privacy Technical Committee Meeting; Workforce Development Technical Committee Meeting

Tuesday, April 19

07:00-17:20  Registration
08:15-08:30  Opening Remarks
08:30-09:30  Keynote Speaker: Janos Sztipanovits
09:30-10:00  Coffee Break
10:00-12:00  Executive Plenary: "Computer-Assisted Techniques in Clinical and Academic Medicine"
12:00-13:30  Lunch
13:30-15:10  2C1: Robotic Systems I; 2C2: Complex Systems Issues I; 2C3: Decision Making Systems I; 2C4: Medical Systems; 2C5: Model-Based Systems Engineering I; 2C6: Modeling and Simulation I
15:10-15:40  Coffee Break
15:40-17:20  2D1: Robotic Systems II; 2D2: Complex Systems Issues II; 2D3: Decision Making Systems II; 2D4: Systems Verification and Validation; 2D5: Model-Based Systems Engineering II; 2D6: Modeling and Simulation II
17:30-18:30  Reception
18:30-20:30  Young Professionals Networking Event
19:00-21:00  Analytics and Risk Technical Committee Meeting; Industrial Interface Technical Committee Meeting

Wednesday, April 20

07:00-17:20  Registration
08:00-09:40  3A1: Research in Systems Engineering I; 3A2: Transportation Systems; 3A3: Knowledge Management; 3A4: Performance Systems; 3A5: Model-Based Systems Engineering III; 3A6: Modeling and Simulation III
09:40-10:10  Coffee Break
10:10-11:50  3B1: Research in Systems Engineering II; 3B2: Space and Communication Systems I; 3B3: Energy Management and Sustainability; 3B4: Engineering Systems-of-Systems I; 3B5: Model-Based Systems Engineering IV; 3B6: Modeling and Simulation IV
11:50-13:40  Best Paper Awards Luncheon
13:40-15:00  3D1: Microgrids; 3D2: Space and Communication Systems II; 3D3: Biomedical Systems; 3D4: Engineering Systems-of-Systems II; 3D5: Sensors Integration and Applications I; 3D6: System Architecture I
15:00-15:40  Coffee Break
15:40-17:20  3E1: Cyber Security Issues I; 3E2: Space and Communication Systems III; 3E3: Air and Space Systems; 3E4: Systems Engineering Theory; 3E5: Sensors Integration and Applications II; 3E6: System Architecture II
17:30-19:00  Intelligent Transportation Design Committee Meeting

Thursday, April 21

07:00-11:50  Registration
08:00-09:40  4A1: Robotic Systems III; 4A2: Modeling and Simulation V; 4A3: Cyber Security Issues II
09:40-10:10  Coffee Break
10:10-11:50  4B1: Model-Based Systems Engineering V; 4B2: Modeling and Simulation VI; 4B3: Space and Communication Systems IV

Monday, April 18

Monday, April 18, 07:00 - 17:00

Registration

Room: Registration Counter 1

Monday, April 18, 08:00 - 10:00

1A1: Tutorial: Smart Home System Cybersecurity: Threat and Defense in a Cyber-Physical System

Room: Palm ABC
08:00 Smart Home System Cybersecurity: Threat and Defense in a Cyber-Physical System
Shiyan Hu (Stanford, USA)
Cyber-Physical System (CPS) research addresses the close interactions and feedback loop between cyber components, such as embedded computing systems, and physical components, such as energy and mechanical systems. As an exemplary CPS, the smart home energy system has gained significant popularity due to the massive deployment of advanced metering infrastructure, enabling a transformative shift of the classical grid into a more reliable and secure grid. The smart home is critical in this infrastructure as it controls the end-use components of a grid. Despite its importance, such a system is vulnerable to various cyberattacks such as energy theft and pricing hacks. In this tutorial, I will describe several of our recent works on smart home cyberthreat analysis and defense technology development. I will first show that, due to the interdependence between utility pricing and customer energy load, a cyberattacker could tamper with smart meters for electricity bill manipulation and energy load unbalancing, and that energy theft could similarly disturb the power system. I will then discuss some advanced control-theoretic and algorithmic techniques to defend against those cyberattacks, including partially observable Markov decision process (POMDP) based detection and cross entropy optimization based Feeder Remote Terminal Unit (FRTU) deployment optimization. I will conclude the talk with some of the future research directions on this topic.
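The abstract does not detail the cross entropy optimization step, so the sketch below is only a generic cross-entropy search over binary FRTU placement vectors, offered to illustrate the technique it names. The function cross_entropy_placement, the toy cost, and all problem sizes are hypothetical placeholders, not taken from the tutorial material.

```python
import numpy as np

def cross_entropy_placement(cost, n_feeders, budget, n_samples=200,
                            elite_frac=0.1, iters=50, smoothing=0.7):
    """Generic cross-entropy search over binary FRTU placement vectors.

    cost: callable mapping a 0/1 numpy vector of placements to a scalar
    (lower is better); budget: maximum number of FRTUs allowed.
    """
    rng = np.random.default_rng(0)
    p = np.full(n_feeders, 0.5)              # Bernoulli placement probabilities
    n_elite = max(1, int(elite_frac * n_samples))
    best_x, best_c = None, np.inf
    for _ in range(iters):
        samples = (rng.random((n_samples, n_feeders)) < p).astype(int)
        # soft budget penalty keeps oversized placements from being selected
        costs = np.array([cost(x) + 1e3 * max(0, x.sum() - budget)
                          for x in samples])
        order = np.argsort(costs)
        elite = samples[order[:n_elite]]
        if costs[order[0]] < best_c:
            best_c, best_x = costs[order[0]], samples[order[0]]
        # move the sampling distribution toward the elite placements
        p = smoothing * elite.mean(axis=0) + (1 - smoothing) * p
    return best_x, best_c

# toy usage: placing FRTUs on the first five feeders is (artificially) most valuable
toy_cost = lambda x: -2.0 * x[:5].sum() + 0.1 * x.sum()
placement, value = cross_entropy_placement(toy_cost, n_feeders=20, budget=5)
```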

1A2: Tutorial: Data Analytics

Room: Palm DEF
08:00 Data Analytics
Paul C. Hershey (Raytheon, Inc., USA)
This tutorial provides an in-depth review of the ever-evolving technical area called "Data Analytics," a subset of "Big Data," that encompasses data analysis, data fusion, data storage, data sources, infrastructure and technology, screening and filtering algorithms, machine learning, and complexity. These techniques will be introduced with respect to their individual contribution to data analytics and to their combined contributions to data analytics systems. Specific use cases will be presented in which participants will observe, both through presentation and videos, the individual and combined applications and value of these techniques and systems for specific commercial and Department of Defense (DoD) use cases. Participants will emerge from this tutorial with a focused understanding of data analytics principles and techniques that will enable them to apply these concepts toward building engineering systems for mission decision support. This tutorial is applicable to individuals interested in systems engineering with respect to analysis of complex systems, mission support, and automated decision aids.

Monday, April 18, 10:00 - 10:15

Coffee Break

Room: Palm Corridor

Monday, April 18, 10:15 - 12:00

1B1: Tutorial: Smart Home System Cybersecurity: Threat and Defense in a Cyber-Physical System (Continued)

Room: Palm ABC

1B2: Tutorial: Data Analytics (Continued)

Room: Palm DEF

Monday, April 18, 12:00 - 13:00

Lunch

Room: Hemingway's Restaurant

Monday, April 18, 13:00 - 15:00

1C1: Tutorial: Distributed Sensing and RF Tomography

Room: Palm ABC
13:00 Distributed Sensing and RF Tomography
Michael Wicks (University of Dayton, USA)
Many applications require imaging, shape reconstruction and material characterization of objects in clutter, including, for example, aircraft and airport surveillance, below-ground imaging, foliage penetration (FOPEN), concealed weapons detection (CWD), crowd control, border control, through-the-wall surveillance (TWS), antenna and RCS measurements, as well as quality control, industrial automation, medical imaging and 3D/4D printing. Recent advances in computing, computational sciences and radio frequency (RF) technology have improved the potential for successful application of tomography to these challenging problems. Tomographic systems may be supported by a variety of technologies, but they all share one common feature: they all require viewing of the environment from a variety of angles. This is referred to as geometric diversity of illumination and observation. The technology that supports geometric diversity is based upon distributed sensors. For applications where sensing occurs using electromagnetic waves, the most common sensor is radar. Distributed sensing systems employ either a single aperture that is moved to form a synthetic aperture radar (SAR) or numerous simultaneous fixed-aperture systems. RF tomography typically employs a distributed system of low-cost, reconfigurable electromagnetic transmit and receive antennas placed arbitrarily around the region of interest. RF tomography transmitters radiate known waveforms, but sources of opportunity may also be exploited, while spatially distributed receivers sample the scattered fields and relay this information to a central processor. The distinctive attribute of RF tomography is its high resolution capability: sub-wavelength, range-independent, bandwidth-independent resolution that is a function of the RF carrier frequency. This tutorial will present the principles of RF tomography, and the relationship between classical electromagnetics, signal processing, and application-specific phenomenology as in medical imaging, SAR, and seismic sensing. This tutorial will include results from the most recent experiments and trends in many different applications. In particular, this tutorial will demonstrate theoretical concepts using experimental results obtained via one of the first dedicated RF tomography chambers.

1C2: Tutorial: Intelligent Control Architecture for Autonomous Vehicles

Room: Palm DEF

Monday, April 18, 15:00 - 15:15

Coffee Break

Room: Palm Corridor

Monday, April 18, 15:15 - 17:00

1D1: Tutorial: Distributed Sensing and RF Tomography (Continued)

Room: Palm ABC

1D2: Tutorial: Intelligent Control Architecture for Autonomous Vehicles (Continued)

Room: Palm DEF

Monday, April 18, 17:15 - 19:30

Security and Privacy Technical Committee Meeting

Room: Poinciana A

The meeting of the Security and Privacy in Complex Systems (SPCS) Technical Committee of the IEEE Systems Council (http://ieeesystemscouncil.org/content/security-and-privacy-complex-systems-technical-committee) will be held at 5:15 pm on Monday, April 18, 2016. All SysCon 2016 conference attendees are invited to join for review and planning of the SPCS Technical Committee activities in 2016-2017, hosted by Prof. Shiyan Hu, the SPCS Committee Chair. The SPCS TC aims to promote interdisciplinary research, development, and education in the field of security and privacy for complex systems. Security is addressed at the infrastructure, system, software, and hardware levels. The TC focuses on addressing security and privacy issues in critical systems and technology applications such as intelligent transportation systems, aviation and aerospace systems, robotics, and smart energy systems. The TC also contributes to educational and workforce development efforts through endeavors that demonstrate integrity, safety, and security engineering.

Workforce Development Technical Committee Meeting

Room: Poinciana B

The meeting of the Workforce Development Committee (WDC) of the IEEE Systems Council (http://ieeesystemscouncil.org/content/workforce-development-technical-committee) will be held on Monday, April 18, 2016 at 5:15 pm. All SysCon 2016 conference attendees are invited to attend this meeting, hosted by WDC member Roger Oliva, to review and plan the committee's activities for the upcoming year. The WDC's goal is to improve and refine technical workforce education, at different educational levels, so that the workforce can adapt and develop solutions to future needs. Due to constraints this year, the WDC was only able to host a limited number of interactive educational workshops at the postgraduate and career-professional level, but it wishes to expand the scope and domain of these workshops, among other educational goals, in the coming year.

Tuesday, April 19

Tuesday, April 19, 07:00 - 17:20

Registration

Room: Registration Counter 1

Tuesday, April 19, 08:15 - 08:30

Opening Remarks

Room: Grand Cypress ABC

Tuesday, April 19, 08:30 - 09:30

Keynote Speaker: Janos Sztipanovits

Room: Grand Cypress ABC

Model- and component-based design have yielded a dramatic increase in design productivity in several narrowly focused homogeneous domains, such as signal processing, control and aspects of electronic design. However, significant impact on the design and manufacturing of complex cyber-physical systems (CPS) such as vehicles has not yet been achieved. This talk describes challenges of and solution approaches to building a comprehensive design automation tool suite for complex CPS. The primary driver for the OpenMETA tool suite was to push the boundaries of "correct-by-construction" methods to significantly decrease the costly design-build-test-redesign cycles in design flows. The discussion will focus on the impact of heterogeneity in modeling, analyzing and optimizing CPS designs. This challenge is compounded by the need for rapidly evolving design flows in which the selection of modeling languages, analysis and verification tools and synthesis methods is repeatedly changed and updated. Based on experience with the development of OpenMETA and with the evaluation of its performance in a complex CPS design challenge, the talk will argue that the current vertically integrated, discipline-specific tool chains for CPS design need to be complemented with horizontal integration layers that support model integration, tool integration and design process integration. The presented arguments will be based on the OpenMETA technical approach, including the new integration layers, an overview of the technical framework used for their implementation, and practical experience with their application.

Tuesday, April 19, 09:30 - 10:00

Coffee Break

Room: Grand Cypress D

Tuesday, April 19, 10:00 - 12:00

Executive Plenary: "Computer-Assisted Techniques in Clinical and Academic Medicine"

Room: Grand Cypress ABC

This special plenary panel will focus on how we leverage high technology, specifically advanced modeling, simulation, visualization, and virtualization methods, to support medical training, research, and actual medical procedures. The topics will range from minimally invasive procedures (such as laparoscopic surgery) through medical imaging, telemedicine, medical robotics, and e-health. The key theme of the panel will be the effectiveness of low- and high-end technologies and the work needed to further improve their impact on medical training and practice.

Tuesday, April 19, 12:00 - 13:30

Lunch

Room: Grand Cypress D

Tuesday, April 19, 13:30 - 15:10

2C1: Robotic Systems I

Room: Grand Cypress A
Chair: Shahram Payandeh (Simon Fraser University, Canada)
13:30 An Integrated Robotic System for Transporting Surgical Tools in Hospitals
Huan Tan and Ying Mao (GE Global Research, USA); Yi Xu (CapsoVision Inc., USA); Balajee Kannan, Weston Griffin and Lynn DeRose (GE Global Research, USA)
The performance of a hospital's sterile processing center (SPC) significantly impacts patient safety and overall productivity. Key to automating this process is to reliably transport instruments throughout the process. In this paper, we detail a robust integrated system for enabling mobile robots to autonomously perform manipulation of assets; specifically, transporting reusable surgical instrument trays in the SPC of a hospital. Our method is based on a cognitive decision making mechanism that plans and coordinates the motions of the robot base and the robot manipulator at specific processing locations. A vision-based manipulator control algorithm was developed for the robot to reliably locate and subsequently pick up surgical tool trays. Further, to compensate for perception and navigation errors, we developed a robust self-aligning end-effector that allows for improved error-tolerance in larger workspaces. We evaluated the developed integrated system using an Adept PowerBot mobile robot equipped with a 6-DOF Schunk PowerCube arm and our customized end-effector in an SPC-like environment. The experiment results validate the effectiveness and robustness of our system for handling surgical instrument trays in tight and constrained environments.
13:50 A Predictive Motion Planner for Guidance of Autonomous UAV Systems
Peter Travis Jardine (Royal Military College of Canada, Canada); Sidney Givigi (Royal Military College of Canada, Canada)
This paper investigates Unmanned Aerial Vehicle (UAV) systems motion planning for ground attack missions involving enemy defenses. The UAV dynamics are modeled as a unicycle, linearized using dynamic extension and expanded over a finite prediction horizon as a piece-wise affine function. The motion planning problem is then formulated as a constrained, convex minimization in the form of Linear Quadratic Model Predictive Control (LQMPC). Avoidance of enemy defenses is achieved using linear inequality constraints. The design is tested in a simulated ground attack mission involving a layered enemy defense system using MATLAB. Preliminary results demonstrate the feasibility of using LQMPC to guide a UAV in ground attack missions involving complex enemy defenses.
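In its generic form, the LQMPC problem described above is a finite-horizon quadratic program. The formulation below uses standard MPC notation (state x_k, input u_k, weights Q, R, P, horizon N) rather than the paper's specific matrices, and the linear half-plane constraint is the usual way keep-out regions such as enemy defenses are encoded:

minimize over u_0, ..., u_{N-1}:   \sum_{k=0}^{N-1} ( x_k^T Q x_k + u_k^T R u_k ) + x_N^T P x_N
subject to   x_{k+1} = A x_k + B u_k,   G x_k \le h  (linear keep-out constraints),   u_min \le u_k \le u_max.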
14:10 Motion Planning for AmigoBot with Line-Segment-Based Map and Voronoi Diagram
Jin Cheng (University of Jinan, P.R. China); Qing Hui (University of Nebraska-Lincoln, USA)
The motion planning problem for an AmigoBot with only sonar sensors is addressed in this paper. A line-segment-based map is first constructed incrementally from readings of the sonar sensors. Map building algorithms are proposed to reduce the size of the map while maintaining complete and accurate information about the environment. Then the Voronoi diagram of the line-segment-based map is generated with Fortune's sweep line algorithm. A shortest accessible path from the initial configuration to the goal is searched in the Voronoi diagram with Dijkstra's algorithm. The notion of clearance is defined to guarantee the safety of the path with respect to obstacles. Finally, a path tracking control law based on the line-of-sight approach is designed to follow the reference path. Simulation is conducted to verify the designed motion planning approach, and the results show that it is effective for such an AmigoBot to accomplish the given task in unknown environments.
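Once the Voronoi diagram is available, the shortest-path search reduces to Dijkstra's algorithm on a weighted graph of Voronoi vertices. The sketch below is a minimal, self-contained version of that search; the toy graph, node names, and edge weights are illustrative assumptions, and building the graph from the sonar-derived line segments is assumed to have been done elsewhere.

```python
import heapq

def dijkstra(graph, start, goal):
    """graph: dict node -> list of (neighbor, edge_length)."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    # reconstruct the path by walking predecessors back to the start
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist.get(goal, float("inf"))

# toy Voronoi-vertex graph (edge weights are Euclidean lengths, illustrative only)
g = {"s": [("a", 1.0), ("b", 2.5)], "a": [("b", 1.0), ("t", 3.0)],
     "b": [("t", 1.0)], "t": []}
print(dijkstra(g, "s", "t"))
```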
14:30 Decentralized Learning in Pursuit-Evasion Differential Games with Multi-Pursuer and Single-Superior Evader
Mostafa Awheda and Howard Schwartz (Carleton University, Canada)
In this paper, we consider a multi-pursuer single-superior-evader pursuit-evasion differential game where the speed of the evader is similar to the speed of each pursuer. A new fuzzy reinforcement learning algorithm is proposed in this work for this game. Each pursuer of the game uses the proposed algorithm to learn its control strategy. The proposed algorithm of each pursuer uses the residual gradient fuzzy actor critic learning (RGFACL) algorithm to tune the parameters of the fuzzy logic controller (FLC) of the pursuer. The proposed algorithm uses a formation control approach in the tuning mechanism of the FLC of the learning pursuer so that the learning pursuer or the other learning pursuers can capture the superior evader. The formation control mechanism used by the proposed algorithm guarantees that the pursuers are distributed around the superior evader in order to avoid collision between pursuers. The formation control mechanism also guarantees that the capture regions of each two adjacent pursuers overlap or at least border each other so that the capture of the superior evader will be guaranteed. The proposed algorithm is a decentralized algorithm as no communication among pursuers is required. The only information that the proposed algorithm of each learning pursuer requires is the position and the speed of the superior evader. The proposed algorithm is used to learn a multi-pursuer single-superior-evader pursuit-evasion differential game. The simulation results show the effectiveness of the proposed algorithm as the superior evader is always captured by one or some of the pursuers learning their strategies by the proposed algorithm.
14:50 VBCA: A Virtual Forces Clustering Algorithm for Autonomous Aerial Drone Systems
Matthias R. Brust (Singapore University of Technology and Design, Singapore); Mustafa İ Akbaş (Florida Polytechnic University, USA); Damla Turgut (University of Central Florida, USA)
Recent advances in wireless sensors and autonomous drone technologies have drastically expanded the usage of multi-drone systems in surveillance, tracking, and mapping. However, the coordination and collaboration of drones for three- dimensional (3-D) coverage remain a crucial challenge. In this paper, we propose a 3-D clustering algorithm for the autonomous positioning of an aerial drone system. Our approach draws from molecular geometry, where forces among electron pairs surrounding a central atom actively position the entities of a system. The advantages of our approach compared to existing methods are that (1) the usage of self-organizing principles enables the autonomous operation, (2) the implementation strategy of the virtual forces allows the utilization of an entirely local communication protocol, and (3) the clustering process produces scalable topologies exhibiting high volume coverage. Extensive simulations show that our virtual forces based approach results in steady-state topologies with a high volume coverage. Most importantly, VBCA is scalable and triggers an efficient topology rearrangement if the number of nodes in the system is changing, while providing direct network connectivity with a central drone. We also compare the volume coverage results of VBCA against existing approaches and find that VBCA is up to 40% more efficient. We conclude that the nature-inspired topologies found in the molecular geometries can be created by a restricted set of computationally efficient local virtual forces to efficiently position autonomous drones.

2C2: Complex Systems Issues I

Room: Grand Cypress B
Chair: Mahmoud Efatmaneshnik (University of New South Wales - Canberra & Australian Defence Force Academy, Australia)
13:30 Optimal Attack Strategy with Heterogeneous Costs in Complex Network
Ye Deng and Jun Wu (National University of Defense Technology, P.R. China)
The problem of network disintegration, such as suppressing epidemic spreading and destabilizing terrorist networks, has broad applications and has recently received growing attention. This paper first presents a limited-cost model of attack strategies on complex networks, in which network performance is quantitatively measured by the size of the largest connected component. We introduce unequal probability sampling into the network disintegration problem to identify the optimal attack strategy, and a node coding scheme is proposed. The efficiency of the proposed solution was verified by applying it to a model network and a real-world network. Numerical experiments suggest that our solution can identify the optimal attack strategy for a given attack cost. We draw some insightful conclusions about the relationship between attack cost and the optimal attack strategy. We find that low-degree nodes are attacked preferentially when the total cost is deficient, whereas high-degree nodes are attacked preferentially when the total cost is sufficient. However, there is a turning point: high-degree nodes are no longer attacked preferentially if the cost of a single node exceeds a threshold. We believe this understanding will be helpful to decision-makers.
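The performance measure used above, the size of the largest connected component after attacked nodes are removed under a cost budget, can be evaluated directly. The sketch below uses networkx with a purely illustrative degree-proportional cost model and a high-degree-first attack order; the paper's heterogeneous cost assignment and unequal probability sampling are not reproduced here.

```python
import networkx as nx

def lcc_after_attack(G, attacked, costs, budget):
    """Remove attacked nodes while their cumulative cost fits the budget and
    return the size of the largest connected component that remains."""
    H = G.copy()
    spent = 0.0
    for v in attacked:
        if spent + costs[v] > budget:
            break
        H.remove_node(v)
        spent += costs[v]
    if H.number_of_nodes() == 0:
        return 0
    return max(len(c) for c in nx.connected_components(H))

# illustrative example: scale-free network, node cost proportional to degree
G = nx.barabasi_albert_graph(200, 2, seed=1)
costs = {v: 1.0 + G.degree(v) for v in G}          # toy heterogeneous costs
high_degree_first = sorted(G, key=G.degree, reverse=True)
print(lcc_after_attack(G, high_degree_first, costs, budget=100.0))
```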
13:50 System and Architecture Evaluation Framework Using Cross-domain Dynamic Complexity Measures
Jonathan Fischi and Roshanak Nilchiani (Stevens Institute of Technology, USA)
Effective quantification and comparison of system complexity content when architecting systems is difficult and challenging. Prior work on dynamic complexity measures provides the foundation for this effort. Quantified complexity helps make better informed system architecture selection between competing designs since the increased complexity of a system can lead to increased fragility and more exposure to failures and risks. Therefore the quantification of complexity is important when designing and planning the operation of a complex system. It has also proved useful in a framework for generating technical risks. The scope of this paper is to apply dynamic complexity measures to current, real-world complex systems. This work introduces a multi-step framework to evaluate complex systems and enhance a systems engineer's ability to compare competing systems/architectures. The case study utilizes the framework to contrast autonomous car architectures proposed by Google and Toyota. The results are presented and compared to results from case studies in prior works. The findings advance a system complexity evaluation framework using dynamic complexity measures.
14:10 Enterprise Cyclomatic Complexity
Bob Stroud (Raytheon, USA); Atila Ertas (Texas Tech University, USA)
This paper shows how the McCabe cyclomatic complexity measure can be applied to enterprise architectures using standard enterprise architecture framework tools. There are many measures of complexity, among them the cyclomatic complexity advanced by McCabe. Domerçant and Mavris suggested how cyclomatic complexity might be applied to Systems of Systems (SoS). While there are many commercial and non-commercial tools for estimating the cyclomatic complexity of software, they are not designed to estimate the complexity of enterprises. This paper shows how a contemporary enterprise architecture tool can be used to estimate the complexity of an enterprise documented in the tool by extending the suggestion of Domerçant and Mavris. Many contemporary projects fail to deliver their intended result on time or on budget. One root cause, if not the only one, is the complexity underlying these projects. EA was intended to improve the management of complex organizations dealing with complex problems and to result in improved project performance. But the EA discipline in isolation has not proven to be the hoped-for panacea. A transition from an EAF strictly defined to manage engineering complexity or managerial complexity to a framework for estimating complexity regardless of cause is indicated, considering that engineering complexity and managerial complexity are themselves interrelated. This paper provides part of the foundation for that process: a method for estimating the cyclomatic complexity of enterprises using a contemporary enterprise architecture software application. Complexity is accepted as a general problem to be avoided or minimized in enterprises, and there are many related suggested remedies in the literature. For example, Sheard and Mostashari note that most systems engineering measurement documentation does not address complexity, and the measures that do apply only to actual coded software and are not applicable to integrated systems. Jacobs advances the Generalized Complexity Index (GCI) measure, but this does not readily apply to enterprise architectures. Schuetz et al. formulate a complexity measure drawing on the number of components in the enterprise and their heterogeneity, with the general result that, subject to some constraints, the larger the number of components and the greater their heterogeneity, the higher the enterprise complexity. They consider TOGAF as a method for considering enterprise architecture, but they do not consider DoDAF as a method for producing enterprise architecture artifacts that might inform measures of enterprise architecture complexity. Finally, Parry et al. observe that enterprises (in the context of businesses) are not typically managed as though they are complex even though they are, resulting in little attention to complexity measurement. Many contemporary enterprise developments comprise a larger proportion of software, sometimes with schedule extensions and budget overruns attributed to complexity, and some research has been applied strictly to software complexity measures. McCabe is an early entry into this domain with his journal paper that introduced the concept of cyclomatic complexity in software contexts; McCabe reuses a previously introduced concept called the cyclomatic number. This paper shows how McCabe's work, as extended by Domerçant and Mavris, can be further extended to enterprises described by standard enterprise architecture framework software tools.
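For reference, the McCabe cyclomatic number that the paper extends to enterprise architectures is computed from a graph with E edges, N nodes, and P connected components as

V(G) = E - N + 2P.

In the software setting the graph is the control-flow graph; applying the measure to an enterprise architecture, as suggested here, amounts to mapping the documented architecture onto such a graph and counting the same three quantities.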
14:30 Modularization and Task Sequencing of Complex Assembly Systems
Mahmoud Efatmaneshnik (University of New South Wales - Canberra & Australian Defence Force Academy, Australia); Michael J Ryan (University of New South Wales, Australia); Shraga Shoval (UNSW-CANBERRA @ADFA, Australia)

2C3: Decision Making Systems I

Room: Grand Cypress C
Chair: Shiyong Liu (Southwestern University of Finance and Economics, P.R. China)
13:30 Troubleshooting Optimization Using Multi-Start Simulated Annealing
Wlamir Vianna and Leonardo Ramos Rodrigues (EMBRAER, Brazil); Takashi Yoneyama (ITA, Brazil); David Mattos (Instituto Tecnologico de Aeronautica, Brazil)
A troubleshooting strategy is a sequence of actions that must be carried out in order to solve a problem. Some troubleshooting strategies consist of a combination of actions and questions. In such cases, each possible answer to a question may lead to a different set of troubleshooting actions (or a different sequence of troubleshooting actions). In many applications, the set of all possible actions and questions is known. The troubleshooting problem can then be defined as finding the optimal sequence of actions and questions, which can be modeled as a combinatorial optimization problem. This paper describes an optimization method to minimize the expected cost of repair (ECR) of a single-failure troubleshooting model, considering both dependent and independent actions, questions and cost clusters. The proposed method uses a combination of simulated annealing and multi-start search to solve the troubleshooting problem. Numerical examples are presented to illustrate the application of the proposed method in troubleshooting models with different complexity levels.
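A minimal multi-start simulated annealing loop over troubleshooting action sequences is sketched below, with the expected cost of repair left as a user-supplied function. The neighborhood move (pair swap), cooling schedule, and toy ECR are generic placeholders for illustration, not the authors' exact formulation.

```python
import math, random

def simulated_annealing(ecr, sequence, t0=1.0, cooling=0.95, steps=2000):
    """Minimize ecr(sequence) by swapping pairs of troubleshooting actions."""
    current, best = list(sequence), list(sequence)
    c_cur = c_best = ecr(current)
    t = t0
    for _ in range(steps):
        i, j = random.sample(range(len(current)), 2)
        cand = list(current)
        cand[i], cand[j] = cand[j], cand[i]            # neighborhood move: swap
        c_cand = ecr(cand)
        # accept improvements always, worse moves with a temperature-dependent probability
        if c_cand < c_cur or random.random() < math.exp((c_cur - c_cand) / t):
            current, c_cur = cand, c_cand
        if c_cur < c_best:
            best, c_best = list(current), c_cur
        t *= cooling
    return best, c_best

def multi_start(ecr, actions, restarts=10):
    runs = [simulated_annealing(ecr, random.sample(actions, len(actions)))
            for _ in range(restarts)]
    return min(runs, key=lambda r: r[1])

# toy ECR: cost grows with the position of the action most likely to fix the fault
toy_ecr = lambda seq: sum(k * (a == "replace_sensor") for k, a in enumerate(seq))
print(multi_start(toy_ecr, ["inspect", "reboot", "replace_sensor", "swap_cable"]))
```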
13:50 Optimal Multi-Dimensional Fusion Model for Sensor Allocation
Mark D Rahmes, John Delay, George Lemieux and Kevin Fox (Harris Corporation, USA)
We describe a multi-dimensional model for the fusion of activity based intelligence (ABI) hypothesis-driven evidence through optimal sensor management. We determine decision-making strategies based upon the ability to perform data mining and pattern discovery, utilizing open source, actionable information from multiple sources to prepare for specific events or situations. Our solution is based on an analytical framework using game theory to support ingestion of data sources (evidence), integration of analytical algorithms, open source data mining, trends, and pattern analysis. Linear game theory optimization is also used to support multiple hypothesis analysis. This solution may also save money by offering a Pareto efficient, repeatable process for resource management. We combine operations research methods and remote sensing for decision-making with several possible actions, state of world, and a mixed probability metric. Our tool allows for calculating optimal strategies, provides greater knowledge about remote sensing access times and increases likelihood of a decision-maker making the best decision. We fuse evidence using Dempster's Rule and Nash Equilibrium (NE) for allocation of demands by sensor modality. We discuss a method for calculating optimal detector to determine accuracy of resource allocation. By calculating all NE possibilities per period, optimization of sensor allocation is achieved for overall higher system efficiency. We model the impact of decision-making on accuracy by adding more dimensions to the decision-making process as sensitivity analysis. Future work is to implement the design on a distributed processing platform to support real-world-sized scenarios and simulations.
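For context, Dempster's rule of combination referenced above fuses two basic probability assignments m_1 and m_2 over a common frame of discernment as

m(A) = (1 / (1 - K)) \sum_{B \cap C = A} m_1(B) m_2(C)   for A \ne \emptyset,   with   K = \sum_{B \cap C = \emptyset} m_1(B) m_2(C),

where K measures the conflict between the two evidence sources.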
14:10 Multi-Objective Optimization of Decision Trees for Power System Voltage Security Assessment
Hanieh Moammadi, Gholamreza Khademi and D Simon (Cleveland State University, USA); Maryam Dehghani (Shiraz University, Iran)
A method is proposed for online power system voltage security assessment (VSA) using decision trees (DTs). The DT inputs are the data gathered from phasor measurement units (PMUs). The dimensions of the training data are reduced in two ways. First, the number of features is decreased by principal component analysis (PCA). Second, the number of training cases is decreased by correlation analysis. Biogeography-based optimization (BBO) and invasive weed optimization (IWO) are combined with four multi-objective (MO) optimization methods to find the optimum dimensions of the PMU data while minimizing the misclassification rate of the security test. The four MO methods include vector evaluation (VE), nondominated sorting (NS), niched Pareto (NP), and strength Pareto (SP). A systematic comparison of MOIWO and MOBBO is conducted using Pareto front hypervolume and relative coverage. The method is applied to a 66-bus power grid in Iran. The results show that the training data size is reduced by about 98%, and the training time is approximately 200 times faster because of the dimension reduction. The misclassification rates of the DTs are in the range of 4-9%. Hypervolume and relative coverage indicate that VEBBO performs better than the other methods.
14:30 Group decision making for weapon systems selection with VIKOR based on consistency analysis
Xiaoxiong Zhang (University of Waterloo, Canada); Jiang Jiang and Bingfeng Ge (National University of Defense Technology, P.R. China); Ke-wei Yang (National University of Defence Technology, P.R. China)
Weapon systems selection is an unstructured, complex multi-criteria decision analysis problem with a wide range of considerations. In this paper, a hybrid approach that uses the VIKOR (VlseKriterijumska Optimizacija I Kompromisno Resenje) technique is presented for the screening of weapon systems. More specifically, a group of experts is first asked to make judgments on the criteria using fuzzy preference relations. Next, the experts are directly assigned different weights based on a consistency analysis of their opinions, which is one of the main novelties of this paper. A collective comparison matrix is then constructed and the weights of the criteria are determined based on the dominance concept. VIKOR is used to rank the alternatives and determine the best one. A case study demonstrates the usefulness and effectiveness of the proposed approach, in comparison with the technique for order preference by similarity to ideal solution (TOPSIS).
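For context, the standard VIKOR scores behind the ranking step are computed from the criteria values f_ij (with per-criterion best f_j^* and worst f_j^-) and criteria weights w_j as

S_i = \sum_j w_j (f_j^* - f_ij) / (f_j^* - f_j^-),
R_i = \max_j [ w_j (f_j^* - f_ij) / (f_j^* - f_j^-) ],
Q_i = v (S_i - S^*) / (S^- - S^*) + (1 - v) (R_i - R^*) / (R^- - R^*),

where v trades group utility against individual regret, and the alternative with the smallest Q_i (subject to the usual acceptable-advantage conditions) is ranked best.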
14:50 Understanding the Efficacy of Interactive Visualization for Decision Making for Complex Systems
Mehrnoosh Oghbaie, Michael Pennock and William Rouse (Stevens Institute of Technology, USA)
Interactive visualization has become a popular approach to support decision makers coping with complex decision problems in the commercial world. However, it is not clear that these interactive visualizations result in better decision making. Human judgment under uncertainty is known to fall victim to a number of biases that result from the heuristics that we employ, yet the design of many interactive visualizations seems to favor aesthetic impact over mitigating biases. We hypothesize that as the complexity of the decision problem increases, improperly designed visualizations will lead decision makers to mistake spurious correlations as causal relationships. In this paper, we discuss findings from the literature that reinforce this concern. We consider the possibility that properly designed aiding could be used to mitigate these effects, and we describe a series of experiments that we are performing to test this hypothesis. Our first experiment suggested that experts were less likely to latch on to an oversimplified causal mechanism than non-experts. Consequently, our second experiment is intended to test an aiding approach designed to encourage non-experts to consider more possibilities.

2C4: Medical Systems

Room: Palm ABC
Chair: Raja Jayaraman (Khalifa University, United Arab Emirates (UAE))
13:30 Going Beyond Healthcare IT Inter-operability in Chronic Disease Management
Nelson King (Khalifa University, United Arab Emirates (UAE)); Raja Jayaraman (Khalifa University)
Chronic care consumes the majority of a nation's healthcare costs as it involves more specialists providing care over multiple visits for an extended duration of time. The role of healthcare information technology (HIT) in managing chronic care presents a significant opportunity for improvement. Inter-operability has been seen as the stumbling block to HIT for achieving efficiencies in paperless patient care. Yet from a systems perspective, the integration challenge to effective delivery of healthcare appears to go beyond inter-operable HIT, especially in chronic disease management. A great deal of interaction takes place beyond the limited scope of information passed between disparate HIT systems. For example, the chronic patient plays a greater role through self-monitoring, which means applications of HIT must extend beyond the premises of the various providers. This paper presents a systems approach to identify the key technologies and people involved in chronic care and, more importantly, the likely interactions between them. An understanding of these system elements then becomes the basis for prioritizing future research and clinical interventions, including the necessary scope and scale of inter-operability.
13:50 Designing a Secure e-Health Network System
Healthcare data breaches are a growing issue, with healthcare security incidents increasing more than 900% in the last 2 years. A large U.S. health insurance provider had a major data breach, which resulted in the theft of more than 80 million patient and employee records. The U.S. Health Insurance Portability and Accountability Act (HIPAA) currently does not require Electronic Personal Health Information (ePHI) to be encrypted, increasing the vulnerability of e-health information. This paper proposes a secure e-health network system architecture which will significantly reduce the risk of data breaches and data theft, with minimal additional cost or network delay. This architecture is reliant on the application client and ensures authorized access to health records through the use of a secure client and a 2-step authentication process. The proposed network design will reduce instances of compromised networks, phishing attacks, or unwanted remote access, while improving authenticity of credentials.
14:10 Medical system design in a dynamic, regulated, multi-product, multi-life cycle environment
Scott Hareland (Medtronic, USA)
In many medical applications, system design encompasses the development of both capital equipment and an array of single-use items necessary to deliver therapy and treat medical conditions. Typically, the capital equipment has a long anticipated service life and must be designed to accommodate a range of single-use products, both legacy as well as future products which will bring new features to market. As the lifecycle expectation for capital equipment grows, this can lead to a growing misalignment between capital equipment capabilities and the new single-use products that need to be supported. In this decoupled product lifecycle environment, medical regulatory concerns must also be considered as it is time-consuming and difficult to make changes to capital equipment after it is installed. System needs and architectural trade-offs are described regarding decoupled product lifecycle system design. This includes the need for capital equipment to provide a long service life, and also to anticipate the needs of future technologies and capabilities which may not yet have been envisioned. The misalignment between existing infrastructure and new, evolving technologies is readily observable in today's world. As new therapeutic medical devices are introduced over time, the capital equipment needed to support them may not have the proper functionality or performance, or may require costly field upgrades and regulatory approval. This is important in the medical industry where the cost of regulatory and clinical evaluation can be comparable to the development costs. Designing capital to support future, sometimes unknown technical needs must balance both the up-front design effort and cost against the potential of having a prematurely obsolete system offering. Case studies will include a high-level discussion of the lifecycle of long-life capital medical equipment in relation to the introduction of more frequent, single-use product capabilities that must function with the installed capital equipment as well as on-going clinical learning that may impact the long-term needs of capital equipment design. In our system design approach, current needs for capital equipment and promising candidates for future functionality are identified and ranked. Technical roadmaps lead to the incorporation of expanded performance and system interfaces, including unassigned channels and future software-defined behavioral properties, in order to design for extensibility. Various primitive functions are built into the basic architecture such that more complex functions can be created later. Hardware capabilities are designed to accept new functionality and performance via software (a much less expensive means to upgrade capital equipment). The regulatory submission includes design evidence that the interfaces can be configured appropriately in future operational scenarios via software configurability. When future single-use products are introduced and require channels in the expanded interface, there is limited re-verification of the capital equipment. System testing is performed to demonstrate that new single-use products interface with the capital equipment, but additional verification on the installed base of capital equipment may be minimized. The fixed extensibility cost for future capabilities in the installed capital can offset needs for recurring costs that may otherwise be required on future disposable, single-use products. 
As the down-stream costs of system upgrades may pose serious limitations, additional efforts are taken in early design decisions to incorporate extensibility into the design with an intentional focus on easier upgradability in the future.
14:30 Hybrid Function Approximation Based Control with Application to Prosthetic Legs
Donald Ebeigbe, D Simon and Hanz Richter (Cleveland State University, USA)
We develop a hybrid controller for an n-degree-of-freedom robot where one control approach is used for some joints while another control approach is used for the remaining joints. We combine Slotine and Li's regressor-based control and function approximation technique (FAT) based regressor-free control to obtain a coupled controller. We verify the closed-loop stability of the hybrid controller via Lyapunov functions and update laws to show that the tracking errors approach zero as time approaches infinity. We then apply the controller to an uncertain model of a robotic system comprised of a prosthesis which emulates the angular knee motion of a human leg, and a prosthesis test robot which emulates the vertical hip motion and the angular thigh motion of a human. Simulation results show good reference trajectory tracking in the presence of ground reaction forces while keeping the control signal magnitudes reasonably small. The minimum tracking errors were 1.57% for the vertical hip motion, 0.29% for the thigh angle, and 0.34% for the knee angle (relative to their respective ranges of motion). The maximum steady-state control signal magnitudes were 840 N, 456 Nm, and 253 Nm for the vertical hip motion, thigh angle, and knee angle respectively.
14:50 Recurrent Dynamic Neural Network Model for Myoelectric-based Control of a Prosthetic Hand
Adrian Teban and Radu-Emil Precup (Politehnica University of Timisoara, Romania); Emil-Ioan Voisan (University Politehnica Timisoara, Romania); Thiago Eustaquio Alves de Oliveira and Emil M. Petriu (University of Ottawa, Canada)
This paper discusses a novel nonlinear autoregressive network with exogenous inputs (NARX) neural network (NN) model of the nonlinear dynamic mechanisms occurring in the myoelectric-based control of prosthetic hand fingers. The experimental results demonstrate the good performance of the NN architecture in terms of output response and mean squared error. The comparison of experimental results also shows that the proposed NARX recurrent NN outperforms a linear recurrent NN with the same training algorithm.
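The NARX structure named above predicts the next output from lagged outputs and lagged exogenous (myoelectric) inputs; in generic form, with lag orders n_y and n_u as design choices rather than values reported in the abstract,

y(k) = f( y(k-1), ..., y(k-n_y), u(k-1), ..., u(k-n_u) ),

where f is realized by the recurrent neural network.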

2C5: Model-Based Systems Engineering I

Room: Palm DEF
Chair: Sarah Law (Raytheon, USA)
13:30 The Hazard Analysis Profile: Linking Safety Analysis and SysML
Martina Müller, Michael Roth and Udo Lindemann (Technical University of Munich)
To handle stricter safety regulations combined with increasing complexity and shorter development cycles, it is necessary to consider safety aspects starting from the early phases of design. This paper presents an approach to link safety analysis methods with SysML modeling. Even though SysML and MBSE are common in the early stages of system design, there is a lack of methods integrating model-based design activities and safety analyses. Existing approaches either focus on particular tasks or build models after conducting separate safety analyses. Our approach, tailored to the early stages of system design, introduces a "Hazard Analysis" SysML profile accompanied by a procedure for its application within a model-based safety analysis. It provides a preliminary hazard analysis and facilitates the systematic identification of safety-critical functions and components.
13:50 Preliminary Proof of Concept for Using sUAS to Inspect HML Systems
Luis Daniel Otero (Florida Institute of Technology, USA)
The inspection of high mast luminaire (HML) structures depends heavily on visual assessments from experienced field inspectors. State agencies rely on these visual inspections to make key decisions about the health of structures, such as the allocation of human resources and funds to maintain or repair structures, that significantly affect public safety and costs. This paper describes an ongoing research approach, and the results obtained, to evaluate the use of sUAS for HML inspections. Four preliminary field tests were carried out to understand the level of sUAS maneuverability required to capture live data of HML elements in an open field with varying wind speeds, and to establish a general testing process for subsequent field tests that included HML inspectors. Three galvanized steel HMLs and one weathered steel HML were used as test subjects during the field tests. Preliminary results from this research effort provide evidence to support the use of sUAS for HML inspections. Future research areas are identified to extend the work presented in this paper.
14:10 A Systems Engineering Approach for a Dynamic Co-Simulation of a SysML Tool and Matlab
Dirk Bank, Felix Blumrich, Philipp Kress and Christian Stöferle (University of Applied Sciences Ulm, Germany)
Industry is debating the potential of the Systems Modeling Language (SysML). One side sees advantages for project planning and development. The other side argues that SysML only results in unnecessary additional work. The description of systems is still largely managed with widely used Microsoft Office software such as Excel or PowerPoint. However, this software has its limitations, especially when it comes to complex systems. The aim of this publication is to show the practical benefit of SysML and to open a further field of application for it. Coupling SysML with simulation software enables us to perform periodic data exchange in a dynamic toolchain. This is a fundamental step toward a new approach to systems development and domain collaboration. Among the related domains, the SysML tool is depicted as the highest instance. A calculation software translates all the instructions coming from this instance, which are finally verified simultaneously by a visualization tool. In doing so, the system can be modeled and requirements validated at an early stage. For this case, a high-level co-simulation of a Traffic Alert and Collision Avoidance System (TCAS) serves as a benchmark. Tests show that the SysML-controlled TCAS simulation is a suitable means to demonstrate the execution of important decisions and innovations. By means of the visualization, the TCAS behavior becomes transparent. The advantages of the dynamic co-simulation lie particularly in the application of model-based development of complex systems.
14:30 Matrix-based Multi-hierarchy Fault Tree Generation and Evaluation
Michael Roth (Technical University of Munich); Christoffer von Beetzen (Technical University of Munich, Germany); Udo Lindemann (Technical University of Munich)
Due to increasing product complexity and variance, and stricter safety regulations, there is a need to improve safety analyses and to shift safety considerations to early stages of design. Fault tree analysis is one traditional method applied in safety analyses. Its major limitations are the detailed system knowledge required and the high manual effort involved. To shift it to the early stages, it is necessary to improve efficiency and cope with abstract concepts. This paper therefore improves a matrix-based approach for automatically generating fault trees to address these challenges. It extends the approach by integrating multi-hierarchy models and enabling the automated generation of AND-gates. In this way, it provides a preliminary FTA tailored to the early phases of system design. It identifies critical system elements and allows the comparison of alternative system concepts.

2C6: Modeling and Simulation I

Room: Poinciana AB
Chairs: Yongwimon Lenbury (Mahidol University, Thailand), Ismael Minchala (Universidad de Cuenca, Ecuador)
13:30 Cellular Automata Simulation of Signal Transduction and Calcium Dynamics with Healthy and Faulty Receptor Trafficking
Yongwimon Lenbury and Chontita Rattanakul (Mahidol University, Thailand)
The signal transduction process is one by which an external signal is detected by the cell and converted into cellular responsive actions leading to changes in secondary messengers, such as cAMP (cyclic adenosine monophosphate) or intracellular calcium. Calcium is an important second messenger and has been the subject of numerous experimental investigations and dynamic studies in intracellular signaling. Defects in the signaling process involving the calcium-sensing receptors can lead to faulty re-adjustments of circulating calcium level. Dimerization process has also been proposed to play an important role in the efficiency of the receptor responses. Here, we construct and simulate a Monte Carlo cellular automata model coupled with a system of difference equations to model receptor binding, trafficking, and dimerization triggered by an agonist, while keeping track of the concentration of cytosolic free calcium by using a system of difference equations.
13:50 Optimal Power Allocation for LTE Users with Different Modulations
Ying Wang (University of Michigan, USA); Ahmed Abdelhadi and T. Charles Clancy (Virginia Tech, USA)
In this paper, we demonstrate the optimal power allocation for QPSK, 16-QAM, and 64-QAM modulation schemes and the role of the channel quality indicator (CQI). We use sigmoidal-like utility functions to represent the probability of successful reception of packets at the user equipment (UE). The CQI, as feedback to the base station (BS), indicates the data rate that a downlink channel can support. Using the Levenberg-Marquardt (LM) optimization method, we present utility functions for the different CQI values of the 15 standardized Modulation and Coding Scheme (MCS) levels in the 3rd Generation Partnership Project (3GPP). Finally, we simulate and show the results of the optimal power allocation algorithm.
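A common sigmoidal-like utility in this line of work takes the normalized logistic form below; the parameters a and b are fitted per CQI/MCS level (the fitted values appear in the paper and are not reproduced here):

U(r) = c ( 1 / (1 + e^{-a(r - b)}) - d ),   with   c = (1 + e^{ab}) / e^{ab}   and   d = 1 / (1 + e^{ab}),

so that U(0) = 0 and U(r) approaches 1 as r grows, with b acting as the inflection point and a controlling how sharply the probability of successful reception rises.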
14:10 A comparative study of black-box models for cement quality prediction using input-output measurements of a closed circuit grinding
Ismael Minchala, Christian Sanchez and Marcelo Yungaicela (Universidad de Cuenca, Ecuador); Manuel Reinoso (ASK Solutions EC, Ecuador); Alfredo Mora (UCEM-Guapan, Ecuador); Jean Mata (Universidad Catolica de Cuenca, Ecuador)
This paper presents a comparative study of three different modeling techniques for predicting cement quality using input-output measurements of the closed-circuit grinding in a cement plant. The modeling approaches used are the following: statistical, artificial neural networks (ANN), and adaptive neuro-fuzzy inference systems (ANFIS). The data set for generating the predictive models is obtained from a database of the operation of the cement plant UCEM-Guapan. Online validations of the proposed models allow the selection of the best approach.
14:30 Single-phase Earth Fault Location Method Based on Harmonic Analysis for the NUGS
Ning Tong, Jihan Liang and Hui Li (Huazhong University of Science and Technology (HUST), P.R. China); Xiangning Lin (Huazhong University of Science and Technology, P.R. China); Zhengtian Li (Huazhong University of Science and Technology (HUST), P.R. China)
Faulted feeder selection and fault location for the neutral un-effectively grounded distribution network have been hot issues for decades. The existing methods dealing with single-phase earth faults in this kind of network show great limitations, as the impact of the Petersen coil is usually not taken into consideration and the zero-sequence current is in most cases not available. In this paper, a novel single-phase earth fault location method making use of the characteristics of the harmonic components of the faulted phase current is proposed, and a range voting algorithm is designed to decide the fault location. Its validity and superiority are finally assessed under various fault conditions.

Tuesday, April 19, 15:10 - 15:40

Coffee Break

Room: Grand Cypress D

Tuesday, April 19, 15:40 - 17:20

2D1: Robotic Systems II

Room: Grand Cypress A
Chair: Huan Tan (GE Global Research, USA)
15:40 A Learning Invader for the Guarding a Territory Game
Hashem Raslan and Howard Schwartz (Carleton University, Canada); Sidney Givigi (Royal Military College of Canada, Canada)
This paper explores the use of a learning algorithm in the "guarding a territory" game. The game occurs in continuous time, where a single learning invader tries to get as close as possible to a territory before being captured by a guard. Previous research has approached the problem by letting only the guard learn. We will examine the other possibility of the game, in which only the invader is going to learn. Furthermore, in our case the guard is superior to the invader. We will also consider using models with non-holonomic constraints. A control system is designed and optimized for the invader to play the game and reach Nash-Equilibrium. The paper finally shows how the learning system is able to adapt itself. The system's performance is evaluated through different simulations and compared to the Nash-Equilibrium. Experiments with real robots were conducted and verified our simulations in a real-life environment. Our results show that our learning invader behaved rationally in different circumstances.
16:00 Development of a Low-Cost Autonomous Surface Vehicle using MOOS-IvP
David Mattos and Douglas Santos (Instituto Tecnologico de Aeronautica, Brazil); Cairo L. Nascimento, Jr. (Instituto Tecnológico de Aeronáutica, Brazil)
This paper describes the implementation of a low-cost Autonomous Surface Vehicle (ASV) using behavior-based software, MOOS-IvP. The platform used is a catamaran boat driven by two direct-current motors as the propulsion system. It is equipped with an Arduino, a low-cost Inertial Measurement Unit (IMU), a digital compass, a GPS receiver and a wireless RF serial modem as the communication system. The boat communicates with a Ground Control Station (GCS), sending telemetry data and receiving navigation commands for the propulsion motors. The GCS uses the MOOS-IvP software to implement the autonomous navigation procedures and the GPS/compass/IMU sensor fusion algorithms. Results from simulation and from experimental tests in an environment with virtual obstacles are presented and discussed.
16:20 A Fuzzy Reinforcement Learning Algorithm Using a Predictor for Pursuit-Evasion Games
Mostafa Awheda and Howard Schwartz (Carleton University, Canada)
In a pursuit-evasion game, the pursuer learning its strategy by any learning algorithm usually captures the evader when the environment of the game is similar to the environment that the pursuer was trained on. However, the trained pursuer may not be able to capture the evader if the environment of the pursuit-evasion game is different from the training environment. In this paper, we propose a fuzzy reinforcement learning algorithm so that the ability of the pursuer to capture the evader, in a pursuit-evasion game, will increase even when the environment of the game is different from the training environment. The proposed algorithm predicts the future position of the evader using a Kalman filter and then tunes the fuzzy logic controller (FLC) of the pursuer so that the pursuer moves directly to the expected position of the evader, where the capture of the evader will occur. The proposed algorithm is called the Kalman filter fuzzy actor critic learning (KFFACL) algorithm. The proposed KFFACL algorithm is applied to pursuit-evasion games that have environments different from the training environment. Simulation results show that the proposed KFFACL algorithm outperforms the state-of-the-art fuzzy reinforcement learning algorithms in terms of the ability of the pursuer to capture the evader and the capture time.
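The prediction step the authors describe can be illustrated with a small sketch. The following is a minimal, hypothetical example (not the KFFACL implementation): a constant-velocity Kalman filter in Python that estimates an evader's 2-D state from noisy position measurements and returns a one-step-ahead position a pursuer could steer toward. The time step and noise covariances are assumed, illustrative values.

```python
import numpy as np

dt = 0.1
F = np.array([[1, 0, dt, 0],   # state: [x, y, vx, vy], constant-velocity model
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]])
H = np.array([[1, 0, 0, 0],    # only position is measured
              [0, 1, 0, 0]])
Q = 0.01 * np.eye(4)           # process noise (assumed)
R = 0.05 * np.eye(2)           # measurement noise (assumed)

x = np.zeros(4)                # initial state estimate
P = np.eye(4)                  # initial covariance

def kf_step(x, P, z):
    """One predict/update cycle; returns the updated state, covariance, and
    a one-step-ahead position prediction usable as the pursuit target."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    y = z - H @ x_pred                       # innovation from measured evader position
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ y
    P_new = (np.eye(4) - K @ H) @ P_pred
    target = (F @ x_new)[:2]                 # predicted evader position next step
    return x_new, P_new, target

for z in [np.array([1.0, 0.5]), np.array([1.1, 0.6]), np.array([1.2, 0.7])]:
    x, P, target = kf_step(x, P, z)
print("predicted intercept point:", target)
```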
16:40 Design of a Cost-Effective Autonomous Underwater Vehicle
Michael Fowler (Alumni, USA); Terianne Bolding (Spectra Logic, USA); Kyle Hebert, Frank Ducrest and Ashok Kumar (University of Louisiana at Lafayette, USA)
A team of undergraduate computer science students conducted research to create an economical autonomous underwater vehicle (AUV). The inspiration originated from the high cost associated with this branch of robotics. These vehicles are used in multiple fields, including the exploration of large bodies of water and underwater equipment maintenance. These self-sufficient robots are able to keep individuals out of harsh and extreme environments. Our proposed version of an AUV is unique in that it can function fully without being physically attached to the surface. The AUVs currently available on the market cost at least ten thousand dollars. Our AUV, as is, has the structure and capability to function on a small scale for just $400. Our design can be scaled up for more accuracy and longer battery life. This slightly scaled-up version could bring the price of our AUV design to a couple of thousand dollars compared to tens of thousands of dollars. This would make it more feasible for companies to have fleets of these underwater robots for a multitude of uses. Another aspect would be the creation of an entirely new market for hobbyists who would like to send robots underwater for photography, amateur film making, or any number of uses that only a consumer-driven market can create. From a technical perspective, this robot required simple solutions to complex problems to keep the cost of the device down. It required the waterproofing of cheap water-resistant sensors. It also required thinking outside of the box about parts and equipment that are waterproof but not normally used in robotics; for example, using modified bilge pumps as motors saved money compared to waterproof servos.
17:00 Autonomous Robot System Architecture for Automation of Structural Health Monitoring
Romulo Lins (Faculdade SENAI Mariano Ferraz, Brazil); Sidney Givigi (Royal Military College of Canada, Canada)
Inspection of defects in civil infrastructure has been a constant field of research. In the majority of inspections, a technician is responsible for going physically to the field in order to detect and measure defects. From the measurement results, engineers are able to perform Structural Health Monitoring (SHM) of a measured structure. In this paper, a full architecture of an autonomous system is proposed with the goal of automating the SHM task. The proposed system uses an autonomous robot, a database, and the proposed architecture to integrate all sub-systems for the automation of SHM. Experimental results validate the technical feasibility of the proposed system.

2D2: Complex Systems Issues II

Room: Grand Cypress B
Chair: Lawrence John (Analytic Services, Inc. (ANSER) & Applied Systems Thinking Institute, USA)
15:40 On Importance Sampling in Sequential Bayesian Tracking of Elderly
Shahram Payandeh (Simon Fraser University, Canada)
Caring for the elderly is a task facing various communities. Enabling seniors to live with dignity and security is one of the main goals in providing the care they deserve. Living in independent dwellings or in caregiving facilities with minimum supervision and intervention can enable the elderly to maintain such dignity. Being able to monitor movements and activities of the elderly through various available sensing modalities is a key requirement for promoting such a sense of independence. However, due to various limitations of current sensing technology (e.g., lack of privacy, and the distributed and coarse nature of the sensed information), the tracking information is subject to occlusions or occasional black-outs. It has been shown that the sequential Bayesian approach can offer a suitable framework for tracking targets with the expected state-space definitions where their trajectories can follow a non-Gaussian distribution. However, the general approach requires distributing various sample estimates of the motion in order to capture the prior distribution of the expected trajectories. This paper presents an approach which can be used as a part of such a priori distributions of sample trajectories in order to capture the predicted movements of the elderly in a sequential framework. As such, it is possible to reduce the number of samples and offer a more computationally efficient approach.
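As a rough illustration of the sequential Bayesian machinery the abstract refers to, the sketch below implements a minimal bootstrap particle filter with importance weighting and resampling for a 1-D position track, where `None` stands in for an occluded or blacked-out measurement. The motion and likelihood models and all noise levels are assumed for illustration and are not the paper's trajectory priors.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500                                  # number of particles
particles = rng.normal(0.0, 1.0, N)      # prior samples of position
weights = np.ones(N) / N

def step(particles, weights, z):
    # Propagate each particle with an assumed random-walk motion model
    particles = particles + rng.normal(0.0, 0.2, N)
    if z is not None:                    # measurement may be occluded
        # Importance weights from a Gaussian likelihood p(z | particle)
        weights = weights * np.exp(-0.5 * ((z - particles) / 0.5) ** 2)
        weights = weights / weights.sum()
    # Resample when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < N / 2:
        idx = rng.choice(N, size=N, p=weights)
        particles, weights = particles[idx], np.ones(N) / N
    return particles, weights

for z in [0.1, None, 0.35, 0.5, None, 0.8]:   # None marks an occlusion/black-out
    particles, weights = step(particles, weights, z)
print("estimated position:", np.average(particles, weights=weights))
```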
16:00 Analysis of Political and Trade Decisions in International Gas Markets: a Model-Based Systems Engineering Framework
Thomas A McDermott, Jr (Georgia Tech Research Institute & Georgia Tech Sam Nunn School of International Affairs, USA); Molly Nadolski (Georgia Institute of Technology, USA); Rahul Basole (Georgia Tech Research Institute, USA); Adam Stulberg (Georgia Institute of Technology, USA)
By taking a model-based systems engineering (MBSE) approach, a framework can be developed for long-term exploration of a complex adaptive system in multiple contexts. The framework uses MBSE tools to define the complex system architecture and modern internet state transfer and structured data format standards to integrate natural language descriptions, datasets, and models. These can constitute a knowledge architecture that can be used as a long-term research tool. The long-term goal is a framework that captures conceptual models of the complex system, data sets and relationships, dynamic models and simulations, and decision analytics within a common environment. This paper presents a scenario in which we evaluate links between transformations of complex natural gas systems and analyze political intervention into the Russian-European natural gas markets. In this example, we specifically examine geographical, physical (cross-border infrastructure), and commercial value streams through the prism of network analyses. This is one context of a more general model of international gas relationships and flows at multiple levels. The resulting framework provides insight into dynamic behaviors at multiple levels of the system, such as the emergence of infrastructure network and intricate relationships of strong corporate ties and knowledge networks, along with possible strategies for political and economic intervention. The primary goal of an MBSE approach is to capture interrelationships in the complex system at varying levels of abstraction, which enables a common reference for diverse models and datasets.
16:20 A Complex Adaptive Systems Engineering (CASE) Methodology — The Ten-Year Update
Brian E. White (CAU-SES, USA)
A practical methodology for dealing with complex systems in an engineering sense is offered. This is a significant improvement over previous descriptions whose techniques have been successfully applied in highly technological systems involving many key stakeholders. Additional application of this updated methodology is encouraged as well as the publication of future case studies which will continue to show how well it works in practice.
16:40 A Computational Model of Cooperation Dynamics: Sensitivity Analysis
Lawrence John (Analytic Services, Inc. (ANSER) & Applied Systems Thinking Institute, USA); Matthew Parker (Virginia, USA); Brian Sauser (University of North Texas); Jon Wade (Stevens Institute of Technology)
This is the third paper addressing our research into the canonical forces affecting cooperation in the Government Extended Enterprise (GEE), a type of system of systems (SoS). This paper shows how we implemented the proposed theory and presents a detailed sensitivity analysis of the underlying computational model. We found that the proposed model has a slight bias in favor of cooperation and is much less sensitive to an actor's choice of decision-making strategy than to the configuration of forces. Moreover, when all forces are equal, the impact of combinations of pro-cooperative forces (Sympathy and Trust) outstrips that of the combined anti-cooperative forces (Fear and Greed). Finally, the model is sufficiently insensitive that a single, well-informed individual, rigorously applying a self-consistent set of data coding principles, can reliably translate authoritative narrative data into the numerical inputs required to run the model. We will report on the analysis of three case studies in the realm of emergency management, currently well under way, in future papers.
17:00 Standardization, modularization and platform approaches in the engineer-to-order business - Review and outlook
Michael Gepp, Matthias Foehr and Jan Vollmar (Siemens AG, Germany)
Standardization, modularization and platform approaches (S/M/P) are methodologies to cope with the technical complexity of products and systems. Despite a long history in the product business, S/M/P are considered a fairly recent idea in the engineer-to-order (ETO) business. They are applied with varying degrees of success in this business due to a number of challenges. This contribution describes the development of S/M/P from their beginnings in the product business to their application in the ETO business. It provides a selection of relevant literature and studies on S/M/P that reflects the current state of research in the ETO business. The contribution also identifies challenges regarding the implementation of S/M/P among ETO companies in order to promote a broader application of S/M/P in the ETO business. The evaluation of cost and benefit of S/M/P approaches, the integration of modules into systems, the definition of module design criteria, the definition of S/M/P-specific key performance indicators, as well as the scarce methodological support for implementing S/M/P, are seen as the main hurdles in the ETO business.

2D3: Decision Making Systems II

Room: Grand Cypress C
Chair: John Salmon (Brigham Young University, USA)
15:40 Enabling Better Supply Chain Decisions Through a Generic Model Utilizing Cause-Effect Mapping
Sarah M. Rovito and Donna H. Rhodes (Massachusetts Institute of Technology, USA)
Supply chains are critical to delivering components and products safely, affordably, and securely. However, these complex networks of suppliers, manufacturers, and customers are vulnerable to internal and external disruptions and subject to exploitation. This can result in adverse impacts to the system and inhibit value delivery. A new generic electronics supply chain model is developed that can reveal information regarding system vulnerabilities and opportunities for decision-makers to intervene. The model draws upon a previously-developed Cause-Effect Mapping (CEM) analytic technique and assists with making decisions affecting complex systems, including those operating in resource-constrained environments. Elements of System Security Engineering (SSE) and Trusted Systems and Networks (TSN) analysis are taken into consideration to provide a greater understanding of security concerns and impacts to a supply chain focusing on electronics for the defense industry. The model, adaptable to a diversity of systems and capable of recognizing non-obvious sources of vulnerability, can be used by systems engineers to provide a holistic view of a complex supply chain. The model facilitates the communication of information regarding supply chain vulnerabilities to decision-makers and other individuals, as described in specific use cases.
16:00 Perspectives on the Use of Decision Analysis in Systems Engineering: Workshop Summary
Ali Abbas (University of Southern California, USA)
The purpose of this paper is to summarize the main discussions and conclusions that emerged during a National Science Foundation (NSF)-sponsored workshop held in October 2015 in Arlington, Virginia, on the use of decision analysis in systems engineering. The workshop participants included people from academia, industry and the federal government. We present some of the main misconceptions about the field of decision analysis that were raised and conclude with a summary of the main recommendations of the workshop.
16:20 Creation of a Decision-Support Methodology for Selecting More-Electric Aircraft Subsystem Technologies
Jeremie Craisse, Simon Briceno, Imon Chakraborty, Dimitri Mavris, Young Jin Kim and Yongchang Li (Georgia Institute of Technology, USA); Simon Krueger (RWTH Aachen University, Germany); Elena Garcia (Georgia Institute of Technology & Georgia Institute of Technology, USA)
Ambitious aircraft emission goals combined with airline fuel costs are driving an increasing focus on energy efficiency for the aircraft industry. Aircraft equipment systems (AES) perform key aircraft functions, such as pressurization or control surface actuation, but they are also energy consumers, increasingly so with time. Selecting the AES technologies of the future is inherently a multi-criteria problem if all stakeholders are to be satisfied. The problem is further complicated because technologies cannot be considered to be completely independent and must be considered from a subsystem architecture perspective. This paper proposes an evolution of the SP2 method including a two-level approach considering both independent technologies and integrated architectures. These are then linked to technology attributes and high level objectives through qualitative subject matter expert driven relationships. The information is finally displayed in an interactive environment ranking technologies and architectures, while allowing the decision maker to define and explore the scenarios driving the rankings.
16:40 Decision-Making in the Program Organization Architecture Framework- Owner View (POAF-OV)
Adel Alblawi and Jerrell Stracener (Southern Methodist University, USA)
This paper presents a practical approach for decision makers of a program organization where a wide range of alternatives is available for defining the organization responsible for the execution of the program. We utilize constraint programming techniques to formulate the program architecture description as a decision model. The model in this paper builds upon the results of the Program Organizational Architecture Framework (POAF). This approach provides valuable support for decision makers during the design and development of a complex system.
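A hedged sketch of the constraint-programming flavor of such a decision model is shown below: a brute-force enumeration over assumed organization-design variables with illustrative constraints. The variables, domains, and constraints are invented for illustration and are not the POAF-OV model.

```python
from itertools import product

# Hypothetical decision variables and domains for an organization design
domains = {
    "structure": ["functional", "matrix", "projectized"],
    "pm_authority": ["low", "medium", "high"],
    "colocated": [True, False],
}

def feasible(assign):
    """Example constraints a decision maker might impose (illustrative only)."""
    if assign["structure"] == "projectized" and assign["pm_authority"] != "high":
        return False
    if assign["structure"] == "functional" and assign["pm_authority"] == "high":
        return False
    return True

names = list(domains)
solutions = [dict(zip(names, values))
             for values in product(*(domains[n] for n in names))
             if feasible(dict(zip(names, values)))]
print(f"{len(solutions)} feasible organization alternatives")
for s in solutions[:3]:
    print(s)
```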

2D4: Systems Verification and Validation

Room: Palm ABC
Chair: Shiyong Liu (Southwestern University of Finance and Economics, P.R. China)
15:40 A Consistency Checking Approach for System Architecture
Xiaokai Xia and Zhiqiang Fan (North China Institute of Computing Technology, P.R. China); Yancen Dong (North China Institute of Computing Technology)
In model-driven systems engineering, the architecture plays an important role in the quality of the developed complex system; to some extent, the quality of the architecture determines the developed system's quality. A system's architecture is usually described from different views, including static views and dynamic views, as in DoDAF (the Department of Defense Architecture Framework). How to ensure the consistency of the models from different views then becomes a great challenge. This paper proposes a predicate-logic-based approach for checking the consistency of system models described by UML models. A case study of the consistency checking of the class diagram and state machine diagram models of a flight control system shows the feasibility and effectiveness of the proposed approach.
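One way to picture a predicate-style consistency rule between views is sketched below: every trigger on a state-machine transition must correspond to an operation of the owning class. The model fragments are hypothetical, and the rule is only one example of the kind of check the paper's approach formalizes.

```python
# Hypothetical model fragments: a class diagram view and a state machine view
class_diagram = {"FlightController": {"operations": {"engage", "disengage", "updateAttitude"}}}
state_machine = {
    "owner": "FlightController",
    "transitions": [
        {"from": "Standby", "to": "Active", "trigger": "engage"},
        {"from": "Active", "to": "Standby", "trigger": "disengage"},
        {"from": "Active", "to": "Active", "trigger": "updateHeading"},  # no matching operation
    ],
}

def check_consistency(cd, sm):
    """Return transitions whose trigger has no matching operation in the owning class."""
    ops = cd[sm["owner"]]["operations"]
    return [t for t in sm["transitions"] if t["trigger"] not in ops]

for t in check_consistency(class_diagram, state_machine):
    print(f"Inconsistent: trigger '{t['trigger']}' on {t['from']}->{t['to']} "
          f"is not an operation of {state_machine['owner']}")
```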
16:00 Intentional Enterprise Architecture
Ray Feodoroff (Heriot Watt & Raytheon Australia Proprietary Limited, Australia)
This paper documents a number of observations that may provide the basis for how User Requirements Notation (URN), currently the subject of ITU-T Z.150 and Z.151, can act as a modelling approach atop which normative guidance for architecture rationale or decision capture of System Safety and other concerns may be applied to Architecture Descriptions to act as an Assurance Justification. URN is thus offered as a means to fill recognized gaps in ISO/IEC/IEEE 42010 as the means for rationale capture. Moreover, the goal-oriented or goal-refinement aspects of URN may be viewed as an analogue of the Claims-Argument-Evidence style of argument ISO/IEC/IEEE 15026 proposes. URN certainly can compete with Goal Structured Notation (GSN). Stealing then from the tension between safety viewpoints at the University of York (UK) and those at MIT (US), and looking at ideas from outside those cliques (but alluded to by those cliques), the potential for integration of safety argument philosophies into Architecture Descriptions, so as to arrive at Intentional Enterprise Architecture, is investigated. All roads appear to lead to URN.
16:20 Formal Analysis of Fault Tree using Probabilistic Model Checking: A Solar Array Case Study
Marwan Ammar (Concordia University, Canada); Khaza Anuarul Hoque (University of Texas at Arlington, USA); Otmane Ait Mohamed (Concordia University, Canada)
Fault Tree Analysis (FTA) is a widespread technique used to assess the reliability of safety-critical systems. The traditional way of conducting FTA is either through paper-and-pencil proof or through computer simulation techniques, which are inefficient and prone to inaccuracy. In this paper, we propose the use of probabilistic model checking to automatically analyze fault trees of safety-critical systems. Our methodology consists of the probabilistic formalization of the gates used in a fault tree into a Discrete-Time Markov Chain (DTMC) and a Markov Decision Process (MDP), and the subsequent probabilistic verification using the PRISM tool to quantitatively analyze the system. To illustrate the proposed approach, we perform the fault tree analysis of a solar array system used as the power source for the DFH-3 satellite. The results show that a harsh thermal environment is the main cause of system failures.
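For intuition, the sketch below computes a fault-tree top-event probability directly from AND/OR gates over independent basic events. It is a plain enumeration, not the paper's DTMC/MDP formalization in PRISM, and the event names and probabilities are illustrative assumptions.

```python
def evaluate(node, p_basic):
    """Recursively compute the probability of a fault-tree node assuming
    independent basic events; AND multiplies, OR uses 1 - prod(1 - p)."""
    kind = node[0]
    if kind == "basic":
        return p_basic[node[1]]
    probs = [evaluate(child, p_basic) for child in node[2]]
    if kind == "AND":
        out = 1.0
        for p in probs:
            out *= p
        return out
    if kind == "OR":
        out = 1.0
        for p in probs:
            out *= (1.0 - p)
        return 1.0 - out
    raise ValueError(f"unknown gate: {kind}")

# Assumed, illustrative basic-event probabilities (not values from the paper)
p_basic = {"thermal_stress": 0.02, "cell_crack": 0.01, "harness_open": 0.005}
tree = ("OR", "array_failure", [
    ("basic", "harness_open"),
    ("AND", "string_loss", [("basic", "thermal_stress"), ("basic", "cell_crack")]),
])
print("top-event probability:", evaluate(tree, p_basic))
```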
16:40 Applying Formal Verification to Early Assessment of FPGA-based Aerospace Applications: Methodology and Experience
Khaza Anuarul Hoque (University of Texas at Arlington, USA); Otmane Ait Mohamed (Concordia University, Canada); Yvon Savaria (Polytechnique Montreal, Canada)
SRAM-based Field Programmable Gate Arrays (FPGAs) have been used in aerospace applications for more than a decade. Unfortunately, a significant disadvantage of these devices is their sensitivity to radiation effects that can cause bit flips in memory elements and ionisation-induced faults in semiconductors, commonly known as Single Event Upsets (SEUs). An early dependability analysis of an SRAM FPGA-based safety-critical application will enable designers to develop a more reliable and robust design complying with design requirements, such as the DO-254 standard. We propose a methodology based on probabilistic model checking to analyze the dependability and performability properties of such designs and guide design decisions. Starting from the high-level description of a system, a Markov (reward) model is constructed from the extracted Control Data Flow Graph (CDFG). Various dependability- and performability-related properties are then verified automatically using the PRISM model checker tool.
17:00 Architecting a Development and Testing Plan for the Army's Common Operating Environment: Applying Agile Systems of Systems Development to Army Network Acquisition
Sean Crowley, Caulin Shannon, Christian Considine and George Gardner (United States Military Academy, USA); Michael J. Kwinn, Jr. (US Military Academy, USA); Steven Henderson (United States Military Academy, USA)
As the Army seeks to develop a mission command network capable of supporting the projection of American power around the globe, current processes for capability development prevent this network from reaching its potential. The Army's acquisition process produces systems and capabilities divided into stovepipes that struggle to work together as intended. This study analyzes the current processes of capability development and integration in order to propose more efficient practices as the Army transitions from its current process of Capability Set Management to the development of a Common Operating Environment.

2D5: Model-Based Systems Engineering II

Room: Palm DEF
Chair: Kristin Paetzold (Universität der Bundeswehr München, Germany)
15:40 Creation of Domain-Specific Languages for Executable System Models with the Eclipse Modeling Project
Sven Jäger, Ralph Maschotta and Tino Jungebloud (Ilmenau University of Technology, Germany); Alexander Wichmann (Technische Universität Ilmenau, Germany); Armin Zimmermann (Ilmenau University of Technology & Systems and Software Engineering, Germany)
Model-based systems engineering is an increasingly accepted method supporting design decisions. System engineers or modelers have the choice between tools and system description languages that are either abstract and generic or specifically adapted to their domain. The latter approach is easier and more efficient but restrictive. The success of this approach strongly relies on the support of domain-specific tools. The design or adaptation of such software tools and their underlying conceptual models is a complex task, which can be supported by a model-based approach on the meta model level itself. This paper proposes a workflow for designing complex systems by using domain-specific models which may combine structural and behavioral aspects. It is loosely based on the Object Management Group's Model Driven Architecture approach. For this purpose we use the Eclipse Modeling Framework and Eclipse Sirius Project, which are part of the Eclipse Modeling Project. The paper describes the complete workflow based on a simple real-life system example, covering the design of the domain-specific language, semi-automatic model editor generation, modeling the system, and finally executing a simulation of its behavior.
16:00 Not (strictly) relying on SysML for MBSE: language, tooling and development perspectives, The Arcadia/Capella rationale
Using the Arcadia/Capella solution as an example, this paper explores why standard UML/SysML languages are not necessarily the unique or best alternatives for implementation of an MBSE solution. The Thales experience is used to elicit MBSE language and tooling high-level requirements. This paper analyzes various implementation alternatives and justifies structuring choices made regarding Capella to efficiently support the Arcadia engineering method.
16:20 Considerations for Model Curation in Model-Centric Systems Engineering
Lucie Reymondet, Donna H. Rhodes and Adam M. Ross (Massachusetts Institute of Technology, USA)
Contemporary systems are often highly complex sociotechnical systems that require models to help system engineers with sense-making and decision-making. The systems community has developed and instantiated many modeling approaches, practices, formal languages and toolsets, which are all areas of progress. If models and their instantiations could be managed as assets, documented, archived, protected, retrieved and re-used as such, modeling and analysis tasks would likely gain in quality and timeliness. This paper asks the question of "What would a curator need to know about models or their instantiations to provide a model curation function"? Considerations of the activities and body of knowledge associated with curation of models are presented. The potential usefulness of a curated system modeling approach is illustrated on an example.
16:40 Expressing Embedded Systems Verification Aspects at Higher Abstraction Level - SystemVerilog in Object Constraint Language (SVOCL)
Muhammad Rashid (Umm Al-Qura University, Saudi Arabia); Muhammad Waseem Anwar (Consultant MODEVES Project, National Science, Technology and Innovation Plan, Saudi Arabia); Farooque Azam (CEME, National University of Sciences and Technology (NUST), Pakistan)
In Model-Based Systems Engineering (MBSE), structural and behavioral aspects of the system are modelled at a higher abstraction level. However, verification aspects such as assertion-based verification are generally treated at a lower abstraction level, resulting in reduced design productivity. This paper presents an approach to represent SystemVerilog assertions at a higher abstraction level along with structural and behavioral aspects by proposing SVOCL (SystemVerilog in Object Constraint Language). The proposed OCL extension allows verification aspects to be represented such that minimal transformation effort is required, owing to its close SystemVerilog semantics. A traffic light controller serves as a case study.

2D6: Modeling and Simulation II

Room: Poinciana AB
Chair: Radu F. Babiceanu (Embry-Riddle Aeronautical University, USA)
15:40 Interdisciplinary Design Method for Actuation Load Determination of Aircraft High-Lift Systems
Oliver Bertram (German Aerospace Center (DLR), Germany)
There is a strong need to investigate future aircraft high-lift systems in virtual as well as real test rigs because of increasing requirements, their influence on aircraft, safety aspects and interactions with other aircraft systems. An investigation of innovative high-lift systems is only possible with integrated approaches of different disciplines involved in the aircraft design process. For this reason, a kinematics design method will be presented for fast estimation of high-lift actuator loads in an interdisciplinary design approach. The method will be applied to a future high-lift test rig. Therefore, two different kinematic types will be modeled with a module method for analyzing flap deployment and actuation loads. Aerodynamic calculations of the pressure distributions, which are the input loads for the kinematics, are carried out with the VSAERO method. The calculated actuation loads will constitute the basis for the dimensioning of the test rig components.
16:00 Ground Reaction Force Estimation in Prosthetic Legs with an Extended Kalman Filter
Seyed Fakoorian, D Simon, Hanz Richter and Vahid Azimi (Cleveland State University, USA)
A method to estimate ground reaction forces (GRFs) in a robot/prosthesis system is presented. The system includes a robot that emulates human hip and thigh motion, along with a powered (active) prosthetic leg for transfemoral amputees, and includes four degrees of freedom (DOF): vertical hip displacement, thigh angle, knee angle, and ankle angle. We design a continuous-time extended Kalman filter (EKF) to estimate not only the states of the robot/prosthesis system, but also the GRFs that act on the prosthetic foot. The simulation results show that the averages of the thigh, knee, and ankle RMS estimation errors are 0.007, 0.015, and 0.4657 rad with the use of four, two, and one measurements, respectively. The average GRF estimation errors are 2.914, 7.595, and 20.359 N with the use of four, two, and one measurements, respectively. It is shown via simulation that the state estimates remain bounded if the initial estimation errors and the disturbances are sufficiently small.
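The abstract's estimator can be illustrated, in heavily simplified form, with a generic discrete-time EKF predict/update loop. The sketch below uses an assumed pendulum-like single-joint model with a numerical Jacobian rather than the paper's four-DOF robot/prosthesis dynamics or its continuous-time formulation; all noise covariances are placeholders.

```python
import numpy as np

dt = 0.01
g, L = 9.81, 1.0

def f(x):                       # assumed pendulum-like joint dynamics (Euler step)
    theta, omega = x
    return np.array([theta + dt * omega, omega - dt * (g / L) * np.sin(theta)])

def h(x):                       # only the joint angle is measured
    return np.array([x[0]])

def jacobian(func, x, eps=1e-6):
    """Numerical Jacobian of func at x (forward differences)."""
    fx = func(x)
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros(x.size)
        dx[i] = eps
        J[:, i] = (func(x + dx) - fx) / eps
    return J

Q = 1e-4 * np.eye(2)            # assumed process noise
R = np.array([[1e-3]])          # assumed measurement noise
x, P = np.array([0.1, 0.0]), np.eye(2)

def ekf_step(x, P, z):
    F = jacobian(f, x)
    x_pred, P_pred = f(x), F @ P @ F.T + Q
    H = jacobian(h, x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

for z in [np.array([0.11]), np.array([0.12]), np.array([0.12])]:
    x, P = ekf_step(x, P, z)
print("estimated [angle, rate]:", x)
```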
16:20 Stochastic Model based Dynamic Power Estimation of Microprocessor using Imperas Simulator
Awais Yousaf (The University of Lahore & University of Engineering and Technology, Pakistan); Shahid Masud (Lahore University of Management Sciences, Pakistan)
This paper presents a novel approach for instruction-level power profiling of a microprocessor using the Imperas simulator. A stochastic model has been developed to profile power dissipation due to micro-operations performed by the microprocessor. The microprocessor with all peripherals was completely configured in the Imperas Open Virtual Platform. The methodology involves designing an Imperas VAP Tools based Binary Interception Library to capture various micro-activities for an application being executed. An open-source OpenRISC 1000 core has been employed as the target processor. A characteristic profile and stochastic data that include instruction type, number of instructions, simulation time, statistics of cache and bus activities, etc., are extracted for the application on the virtual platform. Various dynamic losses associated with the processor have been incorporated using stochastic models. A prominent advantage of our proposed power profiling technique is that the accuracy and precision are proportional to the number of instructions executed in the application. The complete architecture of the OpenRISC 1000 has been profiled in terms of power dissipation by micro-operations performed due to the execution of a group of instructions. Moreover, algorithms with different time complexities have also been compared for their power efficiency, and the effect of increasing the number of cores on power dissipation in a multi-core system has also been explored. This technique results in very fast power estimation, whereas conventional RTL-level simultaneous testing of software and hardware is complex.
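A minimal sketch of an instruction-level power model of this general kind is shown below: total dynamic energy as the sum over instruction classes of count times per-instruction energy, plus a cache-miss penalty. The coefficients and counts are made-up placeholders, not characterized values for the OpenRISC 1000 or outputs of the Imperas flow.

```python
# Assumed per-instruction-class energies (nJ) and an assumed cache-miss penalty
energy_nj = {"alu": 0.8, "load": 2.1, "store": 2.4, "branch": 1.1}
miss_penalty_nj = 15.0

# Illustrative counts such as an instruction-level profiler might report
profile = {"alu": 1_200_000, "load": 300_000, "store": 150_000,
           "branch": 250_000, "cache_misses": 12_000, "sim_time_s": 0.02}

dynamic_energy_nj = sum(profile[k] * energy_nj[k] for k in energy_nj)
dynamic_energy_nj += profile["cache_misses"] * miss_penalty_nj
avg_power_mw = dynamic_energy_nj * 1e-9 / profile["sim_time_s"] * 1e3
print(f"estimated energy: {dynamic_energy_nj/1e6:.2f} mJ, "
      f"average power: {avg_power_mw:.1f} mW")
```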
16:40 Minimal Path Sets Models for Reliability Computation of Wireless Sensor Networks
Sobhi Mejjaouli (University of Arkansas at Little Rock, USA); Radu F. Babiceanu (Embry-Riddle Aeronautical University, USA)
Computing system reliability is a classical engineering reliability problem and has been modeled using different computational approaches for many years. For systems that include a large number of components, one modeling approach is to consider the system components as vertices of a network graph, with the integrated components connected by edges. Wireless sensor networks follow this network graph model very closely, with the access medium used for communication between the sensor nodes modeled as edges and the sensor nodes modeled as vertices. Probability and reliability theory define a minimal path set as the smallest set of components whose functioning assures the operation of the system. This work models the reliability of wireless sensor networks using minimal path sets, defined as the minimal sets of functioning sensor nodes that ensure a certain threshold of events detected by the sensor network.
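The minimal-path-set computation can be sketched directly: with independent components, the system reliability is the probability of the union of the path-set events, computed below by inclusion-exclusion. The small network, path sets, and component reliabilities are assumed, illustrative values.

```python
from itertools import combinations

def reliability_from_path_sets(path_sets, p):
    """System reliability via inclusion-exclusion over minimal path sets,
    assuming independent components with success probabilities p."""
    total = 0.0
    n = len(path_sets)
    for k in range(1, n + 1):
        for combo in combinations(range(n), k):
            union = set().union(*(path_sets[i] for i in combo))
            prob = 1.0
            for c in union:
                prob *= p[c]
            total += (-1) ** (k + 1) * prob
    return total

# Illustrative WSN: sensor nodes A..D, with three minimal routes to the sink
path_sets = [{"A", "B"}, {"C", "D"}, {"A", "D"}]
p = {"A": 0.95, "B": 0.9, "C": 0.9, "D": 0.85}
print("network reliability:", reliability_from_path_sets(path_sets, p))
```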

Tuesday, April 19, 17:30 - 18:30

Reception

Room: Grand Cypress D

Tuesday, April 19, 18:30 - 20:30

Young Professionals Networking Event

Room: Grand Cypress A

Tuesday, April 19, 19:00 - 21:00

Analytics and Risk Technical Committee Meeting

Room: Palm DEF

Meeting of the Analytics and Risk Technical Committee http://ieeesystemscouncil.org/content/analytics-and-risk-technical-commitee of the IEEE Systems Council will be held on Tuesday, April 19, 2016 from 7:00 pm to 9:00 pm. All SysCon 2016 conference attendees are invited to join for review and planning of the ARTC Committee activities in 2016-2017. Hosted by Desheng Dash Wu, ARTC Committee Chair, and James H. Lambert, S.M.IEEE. The ARTC enables growth and understanding of theory and best practices in analytics and risk. Risk analytics in business intelligence represents data-oriented techniques to supplement business systems for risk-based decision making. Risk performance analysis in manufacturing intelligence uses advanced data analytics, modeling, and simulation to produce a fundamental transformation to new product-based economics through internet-based service enterprises and demand-driven supply chains. Risk evaluation plays key roles in emerging areas such as biomanufacturing, nanotechnology, and energy. There is a dramatic increase in the use of predictive analytics in these and many other areas. The ARTC brings together scientists and engineers from a variety of backgrounds and disciplines, and provides opportunities to discuss these open issues and advance the related interests of the IEEE Systems Council.

Industrial Interface Technical Committee Meeting

Room: Poinciana AB

Meeting of the Industrial Interface Technical Committee http://ieeesystemscouncil.org/content/industrial-interface-technical-committee of the IEEE Systems Council will be held Tuesday, April 19, 2016 from 7:00 pm to 9:00 pm. All SysCon 2016 conference attendees are invited to join for review and planning of the II TC Committee activities in 2016-2017. The IEEE Industrial Interface Technical Committee has a multi-disciplined membership and is designed to serve as an interface between IEEE (standards and activities) and all industry sectors that use IEEE Standards, Recommended Practices and Guidelines in their business activities. The goal of the TC is to be capable of addressing the topics of energy and power, communications, sensors, automation, systems and controls, process controls, data collection and analysis, and risk management. The II TC is reorganizing and is actively seeking new members in order to meet these goals.

Wednesday, April 20

Wednesday, April 20, 07:00 - 17:20

Registration

Room: Registration Counter 1

Wednesday, April 20, 08:00 - 09:40

3A1: Research in Systems Engineering I

Room: Grand Cypress A
Chair: Brian E. White (CAU-SES, USA)
08:00 A Hybrid Controller Design for Complex Network Systems with Hybrid Automaton-Based Convergence Analysis
Xianlin Zeng (Chinese Academy of Sciences, P.R. China); Qing Hui (University of Nebraska-Lincoln, USA)
This paper proposes a hybrid controller with event-based switching design for complex network systems and analyzes the convergence result with network hybrid automaton models. Specifically, the main contributions of the paper are fourfold. First, a hybrid controller design is proposed for a wide spectrum of complex network systems. Second, a class of hybrid dynamical systems is described via a hybrid automaton model, and some invariance convergence results for this class of hybrid dynamical systems are proposed. Third, a network automaton model is presented to capture the details of the closed-loop system, and the convergence result is proved. Finally, the hybrid network controller is applied to Kundur's two-area four-machine power system, the implementation of the hybrid network controller for the power network in Simulink is presented, and a simulation is conducted to show the efficacy of the proposed hybrid controller design.
08:20 The Role of Uncertainty in Systems Engineering Practice: An Empirical Analysis of Engineering Peer Reviews
Paul Nugent (Western Connecticut State University, USA)
Uncertainty is a theoretical concept that undergirds much of organizational theory and engineering theory. This paper analyzes systems engineering peer review data obtained from a large defense contracting company to better understand the gaps between theory and practice with respect to uncertainty. Preliminary results reveal that in practice uncertainty plays critical roles with respect to language and system ontology that heretofore have been neglected or underrepresented in theory.
08:40 Patterns of Causation in Accidents and Other Systems Engineering Failures
Diane Sorenson and Karen Marais (Purdue University, USA)
Project failures occur despite the industry's best systems engineering efforts. Detailed information on project failures is difficult to find, which makes it difficult to study their causes. We propose that by studying accidents we may be able to explain project failure causation and help prevent these failures from occurring. This paper addresses the first step in this research, which is to determine whether project failures and accidents have similar causes.
09:00 Improved E-learning Experience with Embedded LED System
Martin Malchow (Hasso Plattner Institute, Germany); Jan Renz (Hasso Plattner Institute for Software Systems Engineering, Germany); Matthias Bauer (Hasso Plattner Institute, Germany); Christoph Meinel (Hasso Plattner Institute, University of Potsdam, Germany)
During the last few years, e-learning has become more and more important. There are several approaches, such as teleteaching or MOOCs, to deliver knowledge to students on different topics. However, a major problem most learning platforms have is that students often get demotivated quickly. This is caused, for example, by solving similar tasks again and again and by learning alone on a personal computer. To avoid this situation in coding-based courses, one possible way is the use of embedded devices. This approach increases the practical programming part and should boost students' motivation. This paper presents a possibility for the use of embedded systems with an LED panel to motivate students to use programming languages and complete the course successfully. To analyze the success of this approach, it was tested within a MOOC called "Java for beginners" with 11,712 participants. The result was evaluated by personal feedback from the students, and user data was analyzed to measure the acceptance and motivation of students in solving the embedded system tasks. The result shows that the approach is well accepted by the students and that they are more motivated by tasks with real hardware support.

3A2: Transportation Systems

Room: Grand Cypress B
Chair: Johannes Masino (Karlsruhe Institute of Technology, Germany)
08:00 Systems Engineering Approach for eco-comparison among power-train configurations of hybrid bus
Mariangela Iuliano (Institut Superieur de Mecanique de Paris- SUPMECA, France); El-Mehdi Azzouzi (Institut Supérieur de Mécanique de Paris, France); Felipe Camargo Rosa (CAPES Foundation, Ministry of Education of Brazil, Brazil); Ottorino Veneri (CNR - National Research Council of Italy - Istituto Motori, Italy); Moncef Hammadi (SUPMECA & QUARTZ EA 7393, France); Stanislao Patalano (University of Naples Federico II, Italy)
This paper aims to realize an eco-comparison among power-train configurations of hybrid buses in terms of performance, fuel consumption and CO2 emissions. The present study has been carried out in the context of the international research program PLACIS (PLAteforme Collaborative d'Ingénierie Systèmes). In this work, experimental data of a pure electric power-train, evaluated in a dedicated laboratory of Istituto Motori - the National Research Council of Italy, have been used to carry out a pre-design phase of the modelling procedure. From that point on, in order to optimize the power-train performance, a series hybrid vehicle configuration and a parallel one have been modeled and simulated in the DYMOLA-MODELICA environment. The vehicle that has been taken into account as a reference for the comparison is a "RENAULT Master" minibus. Power-trains have been modeled with a backward-forward configuration in order to have a physical approach to the problem, respecting the required performance. The study has been developed with a systems engineering approach that aims to manage the complexity of systems through a multidisciplinary proposal.
08:20 Accurate Evacuation Route Planning Using Forward-Backward Shortest Paths
Nishaben Patel (SDSU Graduate, USA); Manki Min (South Dakota State University, USA); Sunho Lim (Texas Tech University, USA)
We are living in the 21st century, in which technological advances have greatly improved the quality of human lives. However, nature keeps presenting its challenges in the form of tsunamis, floods, wildfires, hurricanes, volcanoes, etc. Excellent evacuation route planning is of paramount importance to reduce loss of life during such disasters. The finite, time-lined capacity of evacuation paths and the thousands to millions of evacuees make the evacuation route planning problem more complex than the shortest-path problem, which has been studied extensively in graph theory. Computation time (execution time hereafter) and evacuation time are two key properties which determine the efficiency and effectiveness of evacuation routing algorithms. This paper proposes a graph-theory-based evacuation routing algorithm called Forward Backward Shortest Path (FBSP). FBSP uses a novel idea to determine multiple routes by combining multiple paths at a time, unlike other evacuation routing algorithms such as Shortest Multiple Path (SMP) that use just the shortest paths between the source nodes and destination nodes. The experimental analysis on simulated graphs showed that FBSP improves evacuation time over SMP without significantly increasing execution time.
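The forward/backward ingredient of such an algorithm can be sketched with two Dijkstra passes, one from the source and one toward the destination on the reversed graph, whose sums give the length of the shortest source-destination route through each node. This is only the building block, not the FBSP algorithm itself, and the toy network is assumed.

```python
import heapq

def dijkstra(graph, src):
    """Standard Dijkstra: shortest distances from src over weighted edge lists."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def reverse(graph):
    rev = {}
    for u, edges in graph.items():
        for v, w in edges:
            rev.setdefault(v, []).append((u, w))
    return rev

# Toy road network: adjacency lists of (neighbor, travel time)
graph = {"src": [("a", 2), ("b", 5)], "a": [("b", 1), ("shelter", 7)],
         "b": [("shelter", 2)], "shelter": []}
fwd = dijkstra(graph, "src")                 # distance from the source
bwd = dijkstra(reverse(graph), "shelter")    # distance to the destination
# Shortest src->shelter travel time constrained to pass through each node
through = {v: fwd[v] + bwd[v] for v in fwd if v in bwd}
print(through)
```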
08:40 Route Networks within the Air Transport System: A comparative study of two European low-cost airlines using network metrics
Mark Hall (University of Bristol & Airbus Group Innovations, United Kingdom); Ranjit Ravindranath and Pablo Bermell-Garcia (Airbus Group Innovations, United Kingdom); Anders Johansson (University of Bristol, United Kingdom)
This paper presents a comparative analysis of the route networks of two European low-cost airlines in 2015. A case study is presented which highlights the key differences and similarities in the characteristics of the route networks they operate, aimed toward improving understanding of the current Air Transport System (ATS) for modeling and simulation purposes.
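A few of the network metrics such a comparison typically relies on can be computed with a short sketch like the one below; the route lists are assumed examples, not the two airlines' actual 2015 networks.

```python
from collections import defaultdict

# Illustrative route lists (airport pairs); assumed examples only
routes_a = [("STN", "DUB"), ("STN", "OPO"), ("DUB", "OPO"), ("STN", "BGY")]
routes_b = [("LGW", "BCN"), ("LGW", "AMS"), ("LGW", "GVA"), ("BCN", "AMS")]

def network_metrics(routes):
    """Build an undirected route graph and report simple network metrics."""
    adj = defaultdict(set)
    for a, b in routes:
        adj[a].add(b)
        adj[b].add(a)
    n, m = len(adj), len(routes)
    degrees = {k: len(v) for k, v in adj.items()}
    density = 2 * m / (n * (n - 1)) if n > 1 else 0.0
    return {"airports": n, "routes": m, "density": round(density, 3),
            "busiest_airport": max(degrees, key=degrees.get)}

for name, routes in [("airline A", routes_a), ("airline B", routes_b)]:
    print(name, network_metrics(routes))
```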
09:00 Systems Optimization of Charging Infrastructure for Electric Vehicles
John Salmon (Brigham Young University, USA)
The large advances in new battery technology have made Electric Vehicles (EVs) more attractive and feasible. As the cost and weight of batteries decrease, and the EV range increases, more individuals and companies will consider investing in these vehicles. However, one potential obstacle is the lack of ubiquitous charging locations. Many individuals would be uncomfortable purchasing a vehicle which limits longer trips and constrains their travel close to charging stations. To overcome this obstacle, a specific market segment, taxi drivers, who make many shorter trips typically near large cities, could be early adopters of EV technology and help justify and establish the charging infrastructure which could be used by others later on. With a large potential investment, this system and infrastructure need to be analyzed and optimized for performance, cost, and other stakeholder objectives. This paper investigates the location and number of charging stations that would be required to meet the demands of a subset of the New York City taxi cab system. An operations model is developed applying the fare data available from the NYC Taxi and Limousine Commission to evaluate the impact on the schedule and performance of individual taxi drivers with EVs. Next, thousands of taxi driver shifts are simulated within the system defined by various numbers and locations of charging stations. Following the initial exploratory assessment, optimization of the system is implemented, the results of which could be used to inform city officials and decision makers on key decisions during a system implementation phase.
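One simple way to frame the siting question is as a coverage problem; the sketch below greedily picks charger locations that cover the most trip endpoints within an assumed radius. The candidate sites, trip endpoints, and radius are illustrative placeholders, not the paper's NYC Taxi and Limousine Commission data or its optimization model.

```python
import math

# Assumed candidate charger sites, trip endpoints, and range (arbitrary units)
candidate_sites = {"midtown": (0.0, 0.0), "jfk": (9.0, -3.0), "lga": (6.0, 4.0)}
trip_endpoints = [(0.5, 0.2), (1.0, -0.4), (8.5, -2.5), (6.2, 3.8), (0.1, 0.9)]
RADIUS = 1.5

def covered(site):
    """Indices of trip endpoints within RADIUS of the given candidate site."""
    sx, sy = candidate_sites[site]
    return {i for i, (x, y) in enumerate(trip_endpoints)
            if math.hypot(x - sx, y - sy) <= RADIUS}

def greedy_place(k):
    """Greedily pick k sites that cover the most not-yet-covered endpoints."""
    chosen, remaining = [], set(range(len(trip_endpoints)))
    for _ in range(k):
        best = max((s for s in candidate_sites if s not in chosen),
                   key=lambda s: len(covered(s) & remaining))
        chosen.append(best)
        remaining -= covered(best)
    return chosen, len(trip_endpoints) - len(remaining)

sites, n_covered = greedy_place(2)
print("chosen sites:", sites, "trip endpoints within range:", n_covered)
```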

3A3: Knowledge Management

Room: Grand Cypress C
Chair: Ophir Kendler (IEEE Systems Council & SysEne Consulting Inc., Canada)
08:00 The Holism in Competence - A Study on building wholesome competence in people
Narayana Gpl Mandaleeka (Tata Consultancy Services Ltd., India); Ravi Shankar Pillutla (IEEE Education Society Chair, India)
Competence building is a holistic concept. It involves not only known aspects such as knowledge, perceived experience, and soft skills but also aspects that are not clearly known or understood, such as cross-influences between knowledge, skills and soft capabilities. Competence is the 'ability to do' work, whereas most training methods focus on imparting knowledge, which has a poor correlation to people's performance in the roles they take up after education. Outcomes, and the way to derive them effectively, are the essence of competence building. Organizations need an assurance of a good 'ability to do' in the people who join them. Since systems engineering is multidisciplinary in nature and application, and the focus is on being 'able to do', education pedagogies need to align with this. This paper analyzes this problem and looks at options to address the cross-influences and enhance the ability to do in people. Based on the various pilots done and the study of the holistic aspects, this paper attempts to provide a pedagogy for this.
08:20 A Conceptual Framework for Complex System Design and Design Management
Tirumala Vinnakota (Tata Consultancy Services Ltd, India)
Complex system design and its design management are very difficult, even though there have been many developments and much research in systems engineering and complex systems science. Although various authors have proposed approaches to deal with complex system design and its management, what is missing is a holistic approach that considers the human-centeredness of various stakeholders and their understanding and acting systemically. Currently, there is a lack of a conceptual design and design management framework leveraging design thinking and systems thinking that takes the human-centeredness, understanding and acting of various stakeholders systemically into consideration. The aim of this paper is to explore the area of complex systems, design, its design management and their problems, and how design thinking and systems thinking can be leveraged to overcome the problems of complex system design and its design management, and to propose a conceptual framework. We found that there is a reciprocal influencing relationship between design thinking and systems thinking that will be beneficial to complex system design and its management, especially in dealing with the complex coercive relationships amongst stakeholders. The proposed conceptual complex system design and management framework will be useful for complex system designers and complex system design managers in order to effectively manage the challenges of complex system design and its management problems in a complex dynamic environment.

3A4: Performance Systems

Room: Palm ABC
Chair: Michael Pennock (Stevens Institute of Technology, USA)
08:00 Reference Clusters Based Feature Extraction Approach for Mixed Spectral Signatures with Dimensionality Disparity
Nian Zhang (University of the District of Columbia, USA)
This paper presents the design and implementation of a new adaptive feature selection technique for spectral band selection prior to classification of remotely sensed hyperspectral images. This approach integrates spectral band selection and hyperspectral image classification in an adaptive fashion, with the ultimate goal of improving the analysis and interpretation of hyperspectral imaging. The four components in the proposed adaptive feature selection, including local gradient calculation, reference cluster determination, prototype class building using a fuzzy classifier, and relevant band selection, are presented in detail. The hyperspectral image data set from the ROSIS (Reflective Optics System Imaging Spectrometer) was used as training and testing data. We tested the effect of the approach with different numbers of selected spectral bands. The classification accuracy for AFS was illustrated by the ROC curve. In addition, in order to compare the proposed method with other methods, we applied the proposed adaptive feature selection (AFS) approach and the principal component analysis (PCA) method to the GentleBoost classifier using different numbers of spectral bands after processing the ROSIS Pavia scene. The experimental results demonstrated that the classification accuracies obtained by the AFS method are higher than those of the PCA method. In addition, for each method, the higher the number of spectral bands, the higher the classification accuracy.
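A stripped-down illustration of band selection versus a PCA baseline is sketched below on random stand-in data: it keeps the highest-variance bands (a naive proxy for the paper's adaptive, cluster-driven selection) and compares the resulting feature matrix with a k-component PCA projection. The data and the selection rule are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
# Random stand-in for a hyperspectral cube flattened to pixels x bands
X = rng.normal(size=(500, 103))

def top_variance_bands(X, k):
    """Naive band selection: keep the k bands with the largest variance."""
    order = np.argsort(X.var(axis=0))[::-1]
    return np.sort(order[:k])

def pca_projection(X, k):
    """PCA baseline: project onto the first k principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

bands = top_variance_bands(X, 10)
X_sel = X[:, bands]          # features built from selected physical bands
X_pca = pca_projection(X, 10)
print("selected bands:", bands)
print("shapes:", X_sel.shape, X_pca.shape)
```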
08:20 Biometric Data Emulation and Encryption for Sport Wearable Devices (A Case Study)
Daniel Atkinson, Nick McDonald, Corey Frank and Youry Khmelevsky (Okanagan College, Canada); Scott McMillan (XCo Tech Inc., Canada)
This paper investigates biometric data emulation and encryption for sports wearable devices, including data generation performance with different data encryptions for a NoSQL document database. We discuss in more depth a specific topic related to test data generation and data encryption for the performance and stress testing of our NoSQL database. This research is a small part of a gamification/real-time solution and states its requirements within the research project "GAUGE: Exact Positioning Systems For Sport and Healthcare Industries", conducted by the Computer Science department at Okanagan College (BC, Canada) with XCo Tech Inc (Kelowna, BC, Canada), and supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) in 2015. Our designed system includes a NoSQL database. The emulated data relate to each individual player (personal statistics) as well as between players to provide a competitive aspect. XCo Tech Inc. (XCo), based in Penticton, BC, Canada, is developing an agnostic sensor platform for enabling interconnectivity, analysis and integration of information for sports, fitness and healthcare. The company's software system collects data from multiple sensors and transmits that data to servers where the data is integrated, synchronized, and analyzed. The data and derived analytics are then transmitted to other devices or persons where an app can use the data and analytics to present valuable real-time information to the user. Critical to the value-add proposition of the system is the ability to measure a person's location with cm-level precision indoors and outdoors. For example, basketball coaches are interested in analyzing the cuts, jumps and bursts performed by a player during play. Therefore, XCo implements algorithms to analyze data from location systems and other sensors to determine such analytics. A second aspect requiring investigation is the synchronization of data from different sensors or systems such that the data can be integrated. For instance, positions obtained from the positioning system could be integrated with data from MEMS (micro-electro-mechanical systems) accelerometers and gyroscopes to better detect and classify maneuvers if the data from the separate sensors and positioning system can be synchronized. In this research paper we discuss two research parts of the project, related to emulating data from different sensors as well as data encryption on the way from sensors to a NoSQL database. We investigate different encryption/decryption algorithms and related database performance issues. Our main contributions are: (1) a new area of NoSQL database implementation; (2) new test data generation for NoSQL database performance optimization; and (3) performance analysis of different encryption algorithms in a new development environment. In this paper we looked at how to generate data to emulate biometric sensors and investigated the effectiveness of different data encryptions for NoSQL document databases for location and biometric data captured by sports wearable devices. Choosing an encryption method can be difficult; however, through this research we have identified two encryption methods that work well. The AES and Blowfish algorithms appear to be the best choices for the system implemented. Blowfish can be implemented to be more secure than AES; however, AES is faster when encrypting very large amounts of data, especially when using Intel AES-NI. Both outperform 3DES in speed and security; 3DES is an outdated algorithm and should not be implemented in new systems.
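The AES-versus-Blowfish comparison can be reproduced in rough outline with the sketch below, which times CBC-mode encryption of a 1 MB buffer using the Python `cryptography` package. This is an assumed benchmark setup, not the project's test harness; note that recent `cryptography` releases relocate Blowfish to a `decrepit` module, which the import fallback below tries to accommodate.

```python
import os, time
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

try:  # Blowfish location differs across cryptography releases
    BlowfishAlgo = algorithms.Blowfish
except AttributeError:
    from cryptography.hazmat.decrepit.ciphers.algorithms import Blowfish as BlowfishAlgo

data = os.urandom(1_000_000)  # 1 MB stand-in for emulated sensor records

def encrypt_time(algo, block_bytes):
    """Time one CBC-mode encryption pass over the test buffer."""
    iv = os.urandom(block_bytes)
    enc = Cipher(algo, modes.CBC(iv), backend=default_backend()).encryptor()
    start = time.perf_counter()
    enc.update(data) + enc.finalize()
    return time.perf_counter() - start

aes_s = encrypt_time(algorithms.AES(os.urandom(32)), 16)   # AES-256, 16-byte blocks
bf_s = encrypt_time(BlowfishAlgo(os.urandom(16)), 8)       # Blowfish, 8-byte blocks
print(f"AES: {aes_s*1000:.1f} ms/MB, Blowfish: {bf_s*1000:.1f} ms/MB")
```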
08:40 Game Servers Deployment Automation Case Study
Zane Ouimet, Heath Caswell and Youry Khmelevsky (Okanagan College, Canada); Rob Bartlett (W. T. Fast Inc., France); Alex Needham (W. T. Fast Inc., Canada)
This paper describes a software system prototype for automated game server deployment and the configuration of customized game servers on demand. The described system has a web interface which allows customers to create accounts, purchase services, and gain access to and configure their purchased game servers. The service consists of a website which acts as an interface for customers to purchase subscriptions and gain access to and configure their purchased servers. Behind the website is a system which dynamically deploys virtual machines with the requested configurations, handles all of the networking details, and provides information back to the customer on how to connect to their server. The proposed prototype was tested by deploying the popular Minecraft game server. This facilitated network research by allowing users to have a more scalable testing environment and thus enabled controlled laboratory experiments. This paper goes through the entire life-cycle of the project, starting with some information on existing research about the subject and how it relates to ours. Following that, we describe our project requirements, the solution we ended up using, and how it was modified to fit our requirements. We then have a section showing the performance experiments we ran. The final section is the conclusion, which discusses the outcome of our project in relation to our original goals and how it will impact future research in this area. Minecraft [28], [3] is a popular video game played worldwide, and is built simply enough to be used for network analysis and research. The project prototype development was the next step in the design, construction and test of a new layer of game server software that can optimize and monitor game services in real time [5], [6]. It stems from the observation that game servers place demands on computing resources - hardware and network - that can vary with user behaviour and whose optimization is the key to customer satisfaction. Virtualized servers provide new flexibility in hardware reservation and allocation, but their use can make resource optimization difficult by making it context sensitive, i.e. dependent on the allocation of virtual machines (VMs) to hardware. The main objective of the project was to study predictive monitoring and optimization for game server clusters. The first phase of the project was to gather performance data about game servers, then analyze its time behaviour to allow the creation of a performance-prediction software module [7]. The initial module version applied virtualized game servers in various configurations, and later versions were tested with physical servers as well as parallel (cluster) game servers. Later, the project investigated performance optimization based on short-term predictions.
Our main contributions are: (1) a unique network and gaming server infrastructure created for the emulation experiments, which is also being used to perform stress testing and data analysis of network game applications, as well as to monitor the performance of game servers within a proprietary Gaming Private Network (GPN) [16]; (2) an automated software system prototype for creating and configuring customized game servers on demand; and (3) using this automated software system prototype, we are able to improve game server utilization. Two networking and server optimization research projects, "GPN-Perf1: Investigating performance of game private networks" (2014) and the GPN-Perf2 research project application (2015-2018), were funded by the Natural Sciences and Engineering Research Council of Canada (NSERC) [2]. This research potentially has far-reaching impacts not only in reducing game latency but also in optimizing other types of network traffic via the prioritizing of the most important data packets sent over the network.

3A5: Model-Based Systems Engineering III

Room: Palm DEF
Chair: Warren K. Vaneman (Naval Postgraduate School, USA)
08:00 Semantic Design Space Refinement for Model-Based Systems Engineering
Matthew Schmit (Georgia Institute of Technology & Aerospace Systems Design Laboratory, USA); Simon Briceno (Georgia Institute of Technology, USA); Kyle Collins (Georgia Institute of Technology & Aerospace Systems Design Laboratory, USA); Dimitri Mavris (Georgia Institute of Technology, USA); Kevin Lynch (Raytheon Corporation, USA); George L Ball (Raytheon, Inc., USA)
This paper describes a process developed to utilize information within an ontology, a form of a component model library, for design space refinement in conceptual design through the use of an Interactive Reconfigurable Matrix of Alternatives (IRMA). An ontological approach is proposed, as it enhances the capabilities of component model libraries that are already being constructed for use in model-based systems engineering (MBSE) programs. By defining a common vocabulary across multiple design domains, an ontology is capable of driving information consistency and reducing errors that occur due to misinterpretation of common vernacular. Within an ontology, a functional decomposition of any complex system can be represented as a hierarchy of "classes", where a class defines any concept within a given domain. The components that comprise each system, along with the functions that each component performs and the means by which each function can be accomplished, are related to the system as associated properties of the system. The case study presented by this paper describes a process by which information can be stored within and extracted from an ontology for design space refinement and concept selection early in the conceptual design phases. Such a task is critical, as all MBSE approaches will still require design space refinement to narrow the design space to a feasible size for exploration. Additionally, for any specific type of system (e.g., missile, satellite) the starting point for design space refinement will be nearly identical. If the knowledge of subject matter experts (SMEs) can be captured within the ontology for a given system, then this process of design space refinement can be partially automated, and reused any time a new system is designed. For this case study, a notional missile system was constructed in Protégé, an open-source, Web Ontology Language (OWL) based program developed by Stanford University. For the notional missile system, a representative functional decomposition was created for the seeker system, propulsion unit, control actuation system, and the aerodynamic components of the system, along with the functions that each component must perform and associated design component options to achieve each function. Incompatibilities between the different available design component options of the system are identified in Protégé using object properties, which establish relationships between different objects in Protégé. This functional decomposition structure of the ontology may be identical to the decomposition that is used in an IRMA, which can use information exported from the ontology to perform design space refinement during conceptual design. The result of this study is a process which is capable of extracting information from an ontology, organizing it, and importing it directly to an IRMA which is fully functional and ready to be used. There are numerous advantages to developing this process through an ontology, rather than going straight to a conceptual design tool designed for design space refinement and concept selection. First and foremost, developing this process enhances the capabilities of ontologies that are already being constructed for use in MBSE programs and engineering design in general. Furthermore, once established, this process can be used over and over again, across multiple projects.
This could potentially result in huge increases in efficiency and affordability of design projects in the long run, supporting the efforts of the Department of Defense's Better Buying Power (BBP) initiative.
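The refinement step the abstract describes (pruning incompatible component options exported from the ontology) can be illustrated with a small Python sketch; the functions, options, and incompatibility pairs below are invented for illustration and are not taken from the paper's Protégé model.

    # IRMA-style design-space refinement: enumerate option combinations per
    # function and discard those containing an incompatible pair.
    from itertools import product

    options = {
        "seeker":     ["IR", "radar"],
        "propulsion": ["solid", "liquid"],
        "control":    ["fins", "thrust_vectoring"],
    }
    incompatible = {("IR", "thrust_vectoring"), ("liquid", "fins")}  # hypothetical pairs

    def feasible_concepts(options, incompatible):
        keys = list(options)
        for combo in product(*(options[k] for k in keys)):
            pairs = {(a, b) for a in combo for b in combo if a != b}
            if not pairs & incompatible:
                yield dict(zip(keys, combo))

    for concept in feasible_concepts(options, incompatible):
        print(concept)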
08:20 Enhancing Model-Based Systems Engineering with the Lifecycle Modeling Language
Warren K. Vaneman (Naval Postgraduate School, USA)
As systems become more complex, the systems engineering community must find new and more efficient ways of dealing with complexity throughout the system's lifecycle. Model-Based Systems Engineering (MBSE) has proven to be effective at managing complexity through the development of systems in a virtual environment. Several languages have been developed in the spirit of MBSE; however, these languages often do not include the full spectrum of information needed for holistic system solutions. The Lifecycle Modeling Language (LML) has been developed to provide an extensible language that contains both visualization models and an ontology. When LML is coupled with the Systems Modeling Language (SysML) and the Department of Defense Metamodel 2.0 (DM2), the result is a set of modeling languages that better supports systems engineering processes across the entire spectrum of lifecycle concerns.
08:40 An Integrated Design Methodology for Safety Critical Systems
Faïda Mhenni and Jean-Yves Choley (SUPMECA, France); Nga Nguyen (EISTI, France)
Nowadays, man-made systems are becoming more complex, incorporating new technologies and components from different domains. In addition, they are used in many safety-critical missions. This induces new challenges in the design of such systems, as new methods and tools are needed to manage complexity while taking safety aspects into account. To face these challenges, the use of model-based approaches such as MBSE is compulsory. In addition, only an efficient integration of safety concerns early in the design process guarantees an optimal design that avoids late and costly changes. Our proposal is an integrated methodology named SafeSysE, including both MBSE and MBSA processes. SafeSysE narrows the gap between design and safety analyses, since it assists the safety expert in generating safety artifacts such as FMEA and FTA from the system models. It enhances the consistency between the system model, including the requirements, structure and behavior of the system on one side, and the safety artifacts on the other side.

3A6: Modeling and Simulation III

Room: Poinciana AB
Chair: Otmane Ait Mohamed (Concordia University, Canada)
08:00 Specification and Execution of System Optimization Processes with UML Activity Diagrams
Alexander Wichmann (Technische Universität Ilmenau, Germany); Sven Jäger, Tino Jungebloud and Ralph Maschotta (Ilmenau University of Technology, Germany); Armin Zimmermann (Ilmenau University of Technology & Systems and Software Engineering, Germany)
Designing complex systems requires domain knowledge as well as tool-supported modeling and analysis techniques. General-purpose as well as domain-specific tools can be used for this task. The latter has the advantage of not requiring low-level model knowledge from systems designers who are domain experts, but is only possible with specialized tools that have to be programmed for a certain purpose or field by software engineers. The gap between general-purpose tools and domain-specific applications can be bridged by a (meta-)model-based description of structure and behavior of domain objects and the subsequent generation of a software tool. This approach has been successfully demonstrated in earlier work and the result is termed a simulation-based application (SBA). One of the main applications of such tools is to find the best solution for design decisions, which can be done by manual evaluation of design ideas or automatically if parameters and design space are well understood. Such an automatic indirect optimization method should be adapted to the specific domain, which would require programming effort for the SBA. A logical extension to the usual system description is to apply the model-based paradigm to such a method description as well. This paper proposes an approach to model optimization processes (i.e., heuristics) graphically with UML activity diagrams that describe the data and control flow of such an algorithm. The resulting models are transformed into an executable algorithm automatically and control the work flow of an optimization with the software tool that has been designed for this task. An example of a heuristic optimization process for a wireless sensor network setup is presented.
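As a rough illustration of the kind of executable optimization process such an activity-diagram model might be transformed into, the sketch below runs a plain hill-climbing loop around a simulation call; the objective function and parameters are placeholders, not the paper's generated code.

    # Generic heuristic optimization work flow: evaluate a candidate via a
    # simulation stand-in, perturb it, keep improvements.
    import random

    def simulate(params):
        # Placeholder for a simulation-based evaluation (e.g., of a WSN setup).
        x, y = params
        return -(x - 3) ** 2 - (y + 1) ** 2

    def hill_climb(start, steps=200, step_size=0.5):
        best, best_val = list(start), simulate(start)
        for _ in range(steps):
            candidate = [p + random.uniform(-step_size, step_size) for p in best]
            value = simulate(candidate)
            if value > best_val:
                best, best_val = candidate, value
        return best, best_val

    print(hill_climb([0.0, 0.0]))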
08:20 Robustness of Cooperative Forward Collision Warning Systems to Communication Uncertainty
Seyed Mehdi Iranmanesh and Ehsan Moradi-Pari (West Virginia University, USA); Yaser P. Fallah (University of Central Florida, USA); Sushanta Das (Automobile OEM, USA); Muhammad Rizwan (Hyundai-Kia America Technical Center, USA)
Cooperative collision avoidance systems rely on communication between vehicles to detect the possibility of a collision. In this paper, we present a systematic approach to studying the performance of emerging communication-based vehicle safety systems; an example of such a system is the forward collision warning (FCW) application. We employ a co-simulation tool that we have developed jointly with industry partners for this purpose. The tool allows joint study of the safety application and its underlying communication system. We utilize this tool to study the impact of communication uncertainties on a new variation of the FCW algorithm that has been designed by the team. In this paper, the impact of communication loss and the choice of signal communication logic are examined. It is shown that the relationship between communication loss and the accuracy of the hazard detection algorithm is non-linear. It is also shown that employing error-dependent communication logic yields considerable gains in communication or in accuracy of tracking and hazard detection. Network awareness in the communication logic is also demonstrated to be beneficial in high communication loss situations.
08:40 Performance Evaluation Using Markov Model For A Novel Approach In Ethernet Based Embedded Networked Control Communication
Mohamad Khairi Ishak (Universiti Sains Malaysia, Malaysia)
In this paper, a method is proposed to construct a Markov chain model of network jitter and delays based on the number of nodes and the different minimal backoff time values assigned to each node. The Markov chain approach gives a more detailed performance analysis based on the number of nodes and the minimum backoff time assigned to each node in the network model. A derivation of the mathematical formulas used to calculate the transition probabilities in the Markov chain model and related parameters is presented. This approach shows how the backoff algorithms behave during transmission and how they affect network performance. The proposed analytical model shows the advantage of our approach over a standard Carrier Sense Multiple Access/Collision Detection (CSMA/CD) setting.
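For readers unfamiliar with the technique, the toy sketch below shows how a small transition matrix can be analyzed numerically for its stationary distribution; the states and probabilities are invented and are not the paper's derived model.

    # Analyze a toy backoff Markov chain: states 0 = idle, 1 = backoff,
    # 2 = transmitting (hypothetical). Solve pi P = pi with sum(pi) = 1.
    import numpy as np

    P = np.array([
        [0.6, 0.3, 0.1],
        [0.2, 0.5, 0.3],
        [0.7, 0.1, 0.2],
    ])

    A = np.vstack([P.T - np.eye(3), np.ones(3)])
    b = np.array([0.0, 0.0, 0.0, 1.0])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(pi)  # long-run fraction of time spent in each state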

Wednesday, April 20, 09:40 - 10:10

Coffee Break

Room: Grand Cypress D

Wednesday, April 20, 10:10 - 11:50

3B1: Research in Systems Engineering II

Room: Grand Cypress A
Chair: Youry Khmelevsky (Okanagan College, Canada)
10:10 Customer-focused development practices in Systems Engineering companies: A case study across industry sectors
Torgeir Welo (The Norwegian University of Science and Technology, Norway); Geir Ringen (Norwegian University of Science and Technology, Norway)
Establishing a deep understanding of customers and their perception of value is a prerequisite for delivering commercially successful products in today's hostile business environment. This paper elaborates on the degree to which companies focus on customer value, along with the collaborative practices with customers undertaken in product development. Emphasis has been placed on comparing such practices in Systems Engineering (SE) companies with those in other industry sectors. We hypothesize that since SE companies are typically requirements-driven and operate in a B2B environment separated from end-users' real needs, they are less focused on (understanding and satisfying) true customer value in their product development practices. A case study was designed to use two different methodologies for data collection. First, a literature review was conducted to establish an overview of theory and industry best practices. The findings were synthesized, analyzed and decomposed into a set of twelve governing statements, which were implemented into a survey distributed to 50 companies. The outcome of the literature review was in parallel converted into a customer-focus capability maturity tool intended for in-depth, face-to-face assessments with nine companies. The results from the two data strategies were triangulated and used to test our initial hypothesis. The results from both methods indicate that product engineering practices in SE companies have a more distant relation to customer value than in many other industries. The implication is that there may be potential for SE companies to define more integrated practices with direct interfaces towards customers to prevent missed value opportunities and hence improve innovation capability.
10:30 Using Prototypes to Leverage Knowledge in Product Development: Examples from the Automotive Industry
Jorgen A. B. Erichsen (NTNU, Norwegian University of Science and Technology, Norway); Andreas Pedersen (Norwegian University of Science and Technology (NTNU), Norway); Martin Steinert (Norwegian University of Science and Technology, Norway); Torgeir Welo (The Norwegian University of Science and Technology, Norway)
This article takes the automotive industry as its starting point and discusses the topic of leveraging tacit knowledge through prototypes. The aim of this study is to make the case for using reflective and affirmative prototypes for knowledge creation and transfer in the product development process. After providing an overview of learning and knowledge, the Socialization, Externalization, Combination and Internalization (SECI) model is discussed in detail, along with a clear distinction between tacit and explicit knowledge. Based on this model, we propose a framework for using such reflective and affirmative prototypes in an external vs. internal learning/knowledge capturing and transfer setting. Rounded off by two case examples from the automotive industry, we end by identifying the emergent research questions and areas. Using prototypes and prototyping may hold a monumental potential to better capture and transfer knowledge in product development, thus leveraging existing integration events in engineering as a basis for knowledge transformation.
10:50 Gamification of Incentives and Mechanism Design in Systems Engineering
Joseph Clerkin (University of Alabama Huntsville (UAH), USA); Bryan Mesmer (University of Alabama in Huntsville, USA)
In this paper, gamification is examined as a training and education tool for internal and external organization communication. First, gamification is examined as a teaching tool. Then, concepts in systems engineering are explored. Finally, an outline for a game is proposed that incorporates the essence of the mathematically rigorous methods of mechanism design and value-driven design.
11:10 Consistency Analysis for Requirements, Functions, and System Elements: Requirements for the Entire Development Process
Christopher Lankeit (University of Paderborn & Heinz Nixdorf Institute, Germany); Viktor Just (University of Paderborn, Germany); Ansgar Trächtler (Universität Paderborn, Germany)
Tomorrow's systems will be based on close interactions of mechanics, electrics/electronics, control engineering, software technology and new materials, as well as possessing inherent intelligence that will make them superior to mechatronics. Their main features are adaptability, robustness, and proactivity. Intelligent systems are multidisciplinary and therefore need to be developed in a discipline-spanning manner. Two targets arising from this are, on the one hand, a consistent superordinate process model and, on the other hand, appropriate support for this process model with sufficient methods. One step towards reaching those targets is more formalization in systems engineering for traditional engineering. A systematic use of different requirement levels in a given development process is presented in this paper. It is shown that, when interpreted in the right way, requirements provide one option to interconnect the different phases inside this development process. Four levels of requirements are defined and allocated to a development process. For a process model's applicability, it is beneficial to provide supporting methods. We discuss certain methods for the different development phases of the V-model. Starting with the goals of the development, the evolution from goals towards functions and systems is described via enriched partial models. The interactions of the partial models with the requirements levels are described to increase consistency between requirements, functions and system elements. A benefit emerging from this is the advantageous traceability of requirements. To formalize requirements connections to the system, an analysis method is presented which quantifies the connectivity of each element, as well as the degree of connections inside the entire system. Hence, the possibilities of examining the connections between requirements, goals and system elements are expanded.

3B2: Space and Communication Systems I

Room: Grand Cypress B
Chair: Ahmed Abdelhadi (Virginia Tech, USA)
10:10 An Application-Aware Spectrum Sharing Approach for Commercial Use of 3.5 GHz Spectrum
In this paper, we introduce an application-aware spectrum sharing approach for sharing the under-utilized Federal 3.5 GHz spectrum with commercial users. In our model, users are running elastic or inelastic traffic, and each application running on the user equipment (UE) is assigned a utility function based on its type. Furthermore, each small cell user has a minimum required target utility for its application. In order for users located in the coverage area of the small cells' eNodeBs, which hold the 3.5 GHz band resources, to meet their minimum required quality of experience (QoE), the network operator makes a decision regarding the need to share the macro cell's resources to obtain additional resources. Our objective is to provide each user with a rate that satisfies its application's minimum required utility through a spectrum sharing approach and to improve the overall QoE in the network. We present an application-aware spectrum sharing algorithm, based on resource allocation with carrier aggregation, that allocates macro cell permanent resources and small cells' leased resources to UEs and allocates each user's application an aggregated rate that at minimum achieves the application's minimum required utility. Finally, we present simulation results for the performance of the proposed algorithm.
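The per-application utilities mentioned in the abstract are commonly formalized in this literature with logarithmic functions for elastic traffic and sigmoidal functions for inelastic traffic; a representative form (assumed here, not quoted from the paper) is

    \[
    U_{\text{elastic}}(r) = \frac{\log(1 + k r)}{\log(1 + k r_{\max})}, \qquad
    U_{\text{inelastic}}(r) = c\left(\frac{1}{1 + e^{-a(r - b)}} - d\right),
    \]

where $r$ is the allocated rate, $b$ approximates the application's minimum required (inflection) rate, and $c$, $d$ normalize the sigmoid to the interval $[0,1]$.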
10:30 On the Validation of Path Loss Models Based on Field Measurements Using 800 MHz LTE Network
Yazan Alqudah (Western Carolina University, USA); Belal Sababha (Princess Sumaya University for Technology, Jordan); Ayman Elnashar (DUcompany, Jordan); Sohaib Sababha (Ducompany, Jordan)
Path loss models play an important role in cellular network planning and deployment. This work reports on the accuracy of path loss models for predicting received signal strength in an urban environment. Path loss is a key factor in coverage prediction, and hence model tuning is mandatory for accurate RF planning. Using a large set of field measurements from a commercial LTE 800 MHz network with 15 MHz channel bandwidth, different propagation models are evaluated and compared for their accuracy in predicting the path loss. The measurements are conducted in Dubai, UAE, which offers a unique environment in its construction materials, architecture, topology and vegetation. The goal of sharing our findings is to help tune and improve the accuracy of models used to depict path loss.
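For context, model tuning of the kind described typically adjusts the parameters of an empirical path loss model such as the standard log-distance form below (a textbook formula, not necessarily one of the exact models evaluated in the paper):

    \[
    PL(d) = PL(d_0) + 10\, n \log_{10}\!\left(\frac{d}{d_0}\right) + X_\sigma ,
    \]

where $PL(d_0)$ is the loss at a reference distance $d_0$, $n$ is the environment-dependent path loss exponent, and $X_\sigma$ is a zero-mean Gaussian shadowing term whose parameters are fitted to the drive-test measurements.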
10:50 Application-Aware Resource Block and Power Allocation for LTE
Tugba Erpek (Virginia Tech); Ahmed Abdelhadi and T. Charles Clancy (Virginia Tech, USA)
In this paper, we implement an application-aware scheduler that differentiates between users running real-time applications and users running delay-tolerant applications while allocating resources. This approach ensures that priority is given to real-time applications over delay-tolerant applications. In our system model, we include realistic channel effects of the Long Term Evolution (LTE) system. Our application-aware scheduler runs in two stages: the first stage is resource block allocation and the second stage is power allocation. In the optimal solution of the resource block allocation problem, each user is inherently guaranteed a minimum Quality of Experience (QoE) while priority is given to users with real-time applications. For the power allocation problem, a new power allocation method is proposed which utilizes the optimal solution of the application-aware resource block scheduling problem. As a proof of concept, we run a simulation comparing a conventional proportional fairness scheduler and the application-aware scheduler. The simulation results show better QoE with the application-aware scheduler.

3B3: Energy Management and Sustainability

Room: Grand Cypress C
Chair: Elizabeth B Connelly (University of Virginia, USA)
10:10 Bounds on Decentralized Concave Optimization in Energy Harvesting Wireless Sensor Networks
Nicholas Roseveare (Innovative Signals Analysis, USA); S M Shafiul Alam and Bala Natarajan (Kansas State University, USA)
Wireless sensor networks (WSNs) have increasingly become a viable means of distributed sensing and control for a wide array of applications. The energy-sensitive sensor nodes in these systems are often augmented by an energy harvesting device, allowing for continuous operation. The drawback, however, is that the availability of communication resources is uncertain. Decentralized optimization is a common technique implemented to coordinate such a disparate collection of devices. Most decomposition methods involve iterative updates where public information about joint constraints or objectives must be shared. Recent work in distributed optimization has provided some new insights on the performance of optimization in such a distributed network. For perfect and unlimited communication, the optimization converges as well as a centralized controller. However, limited communication introduces delays and quantization errors which affect solution convergence, especially for algorithms utilizing multi-hop updates. In this paper, we analyze the effect of deterministic delays and quantization errors on the convergence of decentralized optimization in an energy harvesting wireless sensor network. The corresponding utility maximization problem is solved through a combination of dual decomposition and the alternating direction method of multipliers (ADMM). The convergence bound on the associated dual function update exhibits a square-law uncertainty with respect to the maximum allowable communication delay and the quantization noise variance.
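For reference, the decomposition referred to in the abstract is usually built around the standard scaled-form ADMM iterations below (a generic statement, not the paper's exact formulation); delayed or quantized exchange of the shared iterates is what introduces the convergence error analyzed in the paper.

    \begin{align*}
    x^{k+1} &= \arg\min_x \; f(x) + \tfrac{\rho}{2}\,\lVert Ax + Bz^{k} - c + u^{k}\rVert_2^2, \\
    z^{k+1} &= \arg\min_z \; g(z) + \tfrac{\rho}{2}\,\lVert Ax^{k+1} + Bz - c + u^{k}\rVert_2^2, \\
    u^{k+1} &= u^{k} + Ax^{k+1} + Bz^{k+1} - c.
    \end{align*}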
10:30 Reliability of Dynamic Load Scheduling with Solar Forecast Scenarios
Abdulelah Habib (University of California San Diego, USA); Zachary Pecenak and Vahid Disfani (University of California San Diego); Jan Kleissl (University of California, San Diego, USA); Raymond de Callafon (University of California San Diego, USA)
This paper presents and evaluates the performance of an optimal scheduling algorithm that selects the on/off combinations and timing of a finite set of dynamic electric loads on the basis of short-term predictions of the power delivery from a photovoltaic source. In the algorithm for optimal scheduling, each load is modeled with a dynamic power profile that may be different for on and off switching. Optimal scheduling is achieved by the evaluation of a user-specified criterion function with possible power constraints. The scheduling algorithm exploits the use of a moving finite time horizon and the resulting finite number of scheduling combinations to achieve real-time computation of the optimal timing and switching of loads. The moving time horizon in the proposed optimal scheduling algorithm provides an opportunity to use short-term (time-moving) predictions of solar power based on advection of clouds detected in sky images. Advection, persistence, and perfect forecast scenarios are used as input to the load scheduling algorithm to elucidate the effect of forecast errors on mis-scheduling. The advection forecast creates fewer events where the load demand is greater than the available solar energy, as compared to persistence. Increasing the decision horizon leads to increasing error and decreased efficiency of the system, measured as the amount of power consumed by the aggregate loads normalized by total solar power. For a standalone system with a real forecast, energy reserves are necessary to provide the excess energy required by mis-scheduled loads. A method for battery sizing is proposed for future work.
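The enumerative moving-horizon idea can be illustrated with a minimal Python sketch: at each step every on/off combination of a small load set is scored against the solar forecast and the best feasible one is applied. The load powers, forecast values, and single-step criterion are invented for illustration.

    # Toy moving-horizon scheduler over a forecast of available solar power.
    from itertools import product

    loads = [1.0, 2.5, 4.0]              # kW demand of each schedulable load (hypothetical)
    forecast = [6.0, 3.5, 2.0, 5.0]      # forecast solar power per time step, kW (hypothetical)

    def best_schedule(available_kw, loads):
        best, best_used = None, -1.0
        for on_off in product([0, 1], repeat=len(loads)):
            used = sum(p * s for p, s in zip(loads, on_off))
            if used <= available_kw and used > best_used:
                best, best_used = on_off, used
        return best, best_used

    for t, avail in enumerate(forecast):
        schedule, used = best_schedule(avail, loads)
        print(f"t={t}: forecast={avail} kW, on/off={schedule}, used={used} kW")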
10:50 Resilience Analytics in Systems Engineering with Application to Aviation Biofuels
Elizabeth B Connelly and James H. Lambert (University of Virginia, USA)
The resilience of systems is a topic of increasing interest for systems engineers. Across systems, stakeholder perspectives, and application domains, there is opportunity for resilience analytics that is broadly applicable. This paper introduces an important facet of resilience analytics that is concerned with disruption of priorities and changeable stakeholder requirements. In particular, resilience analytics in this paper addresses the evolution of stakeholder preferences under scenarios of technological, economic, environmental, regulatory, and other stressors. Updating of the problem frames is described to improve the objectivity of risk assessments and adapt strategic priorities as new information becomes available over time. Resilience analytics is demonstrated in a case study of supply chains for aviation biofuel. The case study provides transferable lessons for updating of systems engineering strategic plans for an innovative technology.
11:10 A Large-Scale Customer-Accessible Energy Monitoring System
Rafael Rodrigues (Instituto Federal de Santa Catarina, Brazil); Juliano Zatta (Instituto Federal de Santa Catarina, Brazil); Jonas Souza, Anna Espindola and Eduardo Carvalho (Instituto Federal de Santa Catarina, Brazil)
Global energy consumption has risen drastically. In Brazil, although economic growth in recent years has been underwhelming, the growth rate of energy consumption has steadily risen. At the beginning of 2015, important adjustments were made to the energy tax. Within the state of Santa Catarina, in particular, the National Agency of Electrical Energy of Brazil (ANEEL) approved a readjustment of 43% on the bills of industrial electricity customers. The concepts and advantages of Smart Grids make detailed measurements relevant. These aspects, combined with the recent adjustments to electricity bills, demonstrate the need to develop real-time consumption monitoring systems. From this point of view, there are few systems designed to perform these functions efficiently and at low cost. This paper proposes the development of a low-cost prototype for monitoring electricity consumption in real time by communicating with the electronic meters of power distribution companies. Specifically, we seek to develop a prototype computer based on the Intel® Galileo Gen2 platform and the ABNT NBR 14522 data communication standard.
11:30 Power Integration Based Dynamic Equilibrium Identification Method of Beam Pumping System
Zhang Yan (Xi'an University of Technology & School of Mechanical and Precision Instrument Engineering, P.R. China); Qiu Zongming and Zhao Huaijun (Xi'an University of Technology, P.R. China)
To address the problem of electricity waste in beam pumping systems caused by misjudgments in conventional balance-judgment methods (due to reverse motor generation, load fluctuation, etc.), a power-integration-based equilibrium identification method is proposed to comprehensively represent the real dynamic equilibrium state of the pumping process. The method accurately measures and accumulates the total electricity consumption of the motor, including forward and reverse operation, over one or more up-down stroke periods, and then uses the ratio of the average upstroke electricity consumption to that of the downstroke to determine whether a pumping unit is running in equilibrium. A smart, high-performance device designed with a dual-DSP scheme integrates parameter acquisition and processing, telecommunication, intelligent control and alarms for the power-integration balance method. Field experiments and applications in the Zhongyuan oilfield show that the method yields correct judgments, avoiding the effects of load disturbance, and significantly decreases energy waste after adjustment; the experimental device, with its intuitive operation and practicality, has a wide range of application prospects in the oil industry.
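A minimal sketch of the power-integration balance check described above follows; the power samples, sampling interval, and equilibrium band are invented for illustration and are not field values from the paper.

    # Integrate motor power over the up and down strokes and judge equilibrium
    # from the ratio of the two energies.
    def stroke_energy(power_samples, dt_hours):
        return sum(p * dt_hours for p in power_samples)   # kWh if power is in kW

    up_power = [12.0, 14.5, 13.0, 11.0]     # kW samples during the upstroke (hypothetical)
    down_power = [9.0, 8.5, 10.0, 9.5]      # kW samples during the downstroke (hypothetical)
    dt = 1.0 / 3600.0                        # 1 s sampling interval, in hours

    ratio = stroke_energy(up_power, dt) / stroke_energy(down_power, dt)
    balanced = 0.8 <= ratio <= 1.2           # hypothetical equilibrium band
    print(f"up/down energy ratio = {ratio:.2f}, balanced = {balanced}")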

3B4: Engineering Systems-of-Systems I

Room: Palm ABC
Chair: Alejandro Salado (Virginia Tech, USA)
10:10 An Implementer's View of the Evolutionary Systems Engineering for Autonomous Unmanned Systems
Judith S. Dahmann (MITRE Corporation, USA); Chris Scrapper (SPAWAR System Center Pacific, USA); Ryan Halterman (SPAWAR Unmanned Systems Group, USA)
Overview: The paper presents the application of a novel systems engineering approach for research, development, test, and evaluation of unmanned systems. The Evolutionary Systems Engineering Model for Unmanned Systems, shown in Figure 1, is based on the System of Systems (SoS) Systems Engineering (SE) Wave Model [2] adopted by the DoD as best practices for Systems of Systems System Engineering [3,4]. The SoS Wave Model was adopted by SPAWAR Systems Center (SSC) Pacific's Unmanned Systems Branch to enable the agile and rapid evolution of autonomous unmanned systems. It provides a systematic process for technology insertion and overarching strategies for managing risk and ensuring key capability objectives are being met as the system evolves. Implemented as a continuous improvement process, the Evolutionary System Engineering approach provides flexibility and adaptability to address the inevitable changes in both the technical landscape and the larger operational environment. This paper will provide a description of these drivers and how implementation of the SoS Wave Model addresses them. Why the Wave Model for Unmanned Systems? The development of autonomous unmanned systems represents a unique challenge that shares many of the same crosscutting issues as traditional SoS. These challenges arise from the need to tightly integrate component technologies in a federated system where components may be owned and operated independently. This requires a disciplined system engineering approach to address these dynamic, heterogeneous development challenges. In 2013, SSC Pacific's Unmanned Systems Group adopted the SoS Implementer's View [2] approach to address issues associated with the distributed development of autonomous unmanned systems. Expediting delivery of autonomous capabilities to the Warfighter relies on a continuous improvement process for assessing capabilities and limitations of the autonomous system, use of maturing technologies based on key performance parameters, and reducing risk by understanding performance tradeoffs and associated cost as the system evolves. Central to the concept of this continuous improvement process is the ability to measure and accumulate evidentiary information to facilitate the agile and rapid response to unexpected issues and uncertainty arising during the development process. The ability to continuously measure and inter-compare results is paramount to the identification and mitigation of risk in a timely manner. This ensures critical capability objectives and functional requirements are being met. Tailoring the Wave Model for Unmanned Systems: The conceptual model for the Implementer's View of SE for SoS [2] is shown graphically in Figure 1. This is a conceptual view of SoS evolution based on a set of logical steps: SoS analysis, SoS architecture evolution, and planning and orchestrating updates to a SoS, implemented as a set of overlapping 'waves' of activity with feedback both within and across waves. Figure 1: Depiction of the SoS Implementer's View (aka 'Wave Model'). Figure 2 shows how the original SoS Wave Model was tailored to support the integration, test, and experimentation of autonomous unmanned systems. This model partitions a wave into four overlapping phases: Conduct Analysis, Evolve System Architecture, Integrate Capability Enhancements, and Validate System -- decoupling the development and integration processes and expediting delivery of critical technologies.
This section provides an overview of the activities and artifacts associated with the different phases of the evolutionary systems engineering model. Figure 2: Adaptation of the Wave Approach for Unmanned Systems. Elements of the Integration Strategy: The system engineering approach provides the backplane for implementation of the Unmanned Systems Integration, Test, and Experimentation capability (UxSITE) at SSC Pacific. This section will provide an overview of the elements developed to support the implementation of this larger capability. It will discuss how each of the elements, seen in Figure 3, aligns with and is supported by the systems engineering approach. Additionally, this section discusses how a modular open architecture for the systems, a comprehensive test and experimentation plan [3], and the continuous integration environment promote a common understanding of success and accountability within the team. Figure 3: Elements of the Integration Strategy. Experimental Results: Finally, this paper demonstrates how the UxSITE capability produces a more robust and reliable autonomous navigation capability through product lifecycle management. Experimental results shown in Figure 4 highlight the integrity of the model by illustrating dramatic improvement in the performance of an autonomous ground system in a short period of time. This section describes how a systematic testing approach contributes to all stages of the development lifecycle [3], facilitating the agile and rapid evolution of unmanned systems. This continuous testing approach allows for ongoing assessment of progress toward development objectives and highlights the value of this systems engineering approach. Finally, this paper discusses how the implementation of the UxSITE capability provides product lifecycle management that supports the DoD Better Buying Power initiative [5] by producing systems that are more maintainable and extendible. Figure 4: Results. References: [1] DoD, Defense Acquisition Guidebook, Washington, D.C.: Pentagon, May 2013. [2] J. Dahmann, G. Rebovich, J. A. Lane, R. Lowry, and K. Baldwin, "An Implementers' View of Systems Engineering for Systems of Systems," Proceedings of the IEEE International Systems Conference 2011, April 4-7, 2011, Montreal, Quebec, Canada. [3] J. Dahmann, R. Hellman, et al., "SoS Systems Engineering and Test & Evaluation: Final Report of the NDIA SE Division SoS SE and T&E Committees," results of a National Defense Industrial Association (NDIA) Systems Engineering Division task implemented by members of the Systems of Systems and Test and Evaluation Committees, March 2012. [4] DoD, Systems Engineering Guide for Systems of Systems, version 1.0, Washington, DC, USA: U.S. Department of Defense (DoD), August 2008. [5] DoD, Implementation Directive for Better Buying Power 3.0 - Achieving Dominant Capabilities through Technical Excellence and Innovation, Washington, D.C.: Under Secretary of Defense, April 2015.
10:30 Exile: A Natural Consequence of Autonomy and Belonging in Systems-of-Systems
Alejandro Salado (Virginia Tech, USA)
Governance is one of the key differentiating elements between traditional systems and systems of systems. While systems are governed by a single authority, systems within a system of systems are often independently governed or governed by fully empowered entities. Such independence is a necessary condition for the autonomy of each constituent system and for enabling the concept of belonging. At the same time, the capability to be autonomous and the voluntary nature of belonging enable a system of systems to voluntarily expel or exile one or more of its constituent systems. Yet, research has so far not addressed the implications and modeling of potential exile on the operational effectiveness of a system within a system of systems, or its impacts on the engineering of systems of systems. This paper presents the concept of system exile as a philosophical necessity in the definition of systems of systems, discusses some visions for measuring the risk of exile, and proposes a way forward to explore mitigation techniques.
10:50 The System of Systems Engineering and Integration "Vee" Model
Warren K. Vaneman (Naval Postgraduate School, USA)
In the twenty-first century, mission success will require unprecedented interoperability among disparate constituent systems resulting in a System of Systems (SoS). Unlike traditional systems engineering, where systems are created based on a set of user needs, a SoS is composed of multiple constituent systems, at various stages within their lifecycles (i.e. new start systems, systems in development, legacy systems), to satisfy needed mission capabilities. The systems engineering community has been exploring methods for SoS engineering for almost two decades. This paper takes the discussion further by introducing a System of Systems Engineering and Integration (SoSE&I) methodology. SoSE&I is the planning, analysis, and integration of constituent systems into an SoS capability greater than the sum of the individual systems. The SoSE&I "Vee" process model is introduced, and the paper discusses how it is used to engineer the SoS throughout its lifecycle to increase systems integration and interoperability and directly impact operational success.
11:10 Reengineering urban operations management and administration by constructing and using urban hierarchical vulnerability indices: An implication of system of systems and big data
Shiyong Liu (Southwestern University of Finance and Economics, P.R. China); Konstantinos Triantis (National Science Foundation/Virginia Tech, USA); Judy Xu (Southwestern University of Finance and Economics, P.R. China)
Big data bring great opportunities to gain a better understanding of the complexities and dynamics of the system of systems (SoS) of modern megacities and to manage them in a sustainable manner. One of the major indicators of performance in city operations management is the guarantee that the city has low vulnerability to natural and man-made hazards. It is necessary to construct urban hierarchical vulnerability indices (HVIs) to quantify and capture the resilience of the urban SoS to disastrous events such as floods, pandemics, explosions, infrastructure collapse, terrorist attacks, and financial crises. It is also imperative for city governors and administrations to understand how hazardous events might change the HVIs at different levels, how policy interventions might positively or negatively affect the HVIs, and how structural changes (the addition of new systems or system components, i.e., new roads, buildings, hospitals) can change the HVIs and in what manner. This paper presents a conceptual framework for constructing HVIs by capturing the characteristics of the urban SoS and taking advantage of the value generated by big data. We categorize the HVIs at different levels by considering the characteristics of the SoS. We then illustrate different dimensions of HVIs and relate them to current vulnerability indices in the literature and in practice. We also exemplify how the value of big data generated and obtained in the urban SoS could turn the idea of constructing comprehensive HVIs into reality, and examine how it could be used to better understand the dynamic impact of the ever-increasing complexity and uncertainty of the urban SoS. Using HVIs as key performance indicators (KPIs) in city operations management enables cities to reengineer and reinvent themselves to promote socioeconomic growth and improve quality of life in a sustainable way.

3B5: Model-Based Systems Engineering IV

Room: Palm DEF
Chair: Otmane Ait Mohamed (Concordia University, Canada)
10:10 Spatial Traffic Prediction for Wireless Cellular System Based on Base Stations Social Network
Zhenglei Yi, Xin Dong, Xing Zhang and Wenbo Wang (Beijing University of Posts and Telecommunications, P.R. China)
Understanding the spatial traffic patterns of a wireless cellular system can facilitate performance analysis and system design. In this paper, a novel method based on a base station social network (BSSN) is proposed for the prediction of spatial traffic in a wireless cellular system. First, a BSSN, which describes the spatial traffic relationships between base stations (BSs), is established with real spatial-temporal cellular traffic data. Then, the very important base stations (VIBS) are selected based on the BSSN from a complex networks perspective. Finally, with the acquired traffic data of these VIBS, the traffic of all the other BSs can be predicted with the help of Support Vector Regression (SVR). The analytical results show that the proposed prediction method can effectively predict the traffic characteristics of the entire wireless cellular system; by applying this method, the traffic of only 8% of the BSs is required for an effective prediction with a mean error ratio of less than 20%.
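A minimal scikit-learn sketch of the regression step is shown below: an SVR maps the traffic of a few selected base stations to the traffic of another one. The synthetic data stand in for the real spatial-temporal measurements used in the paper, and the kernel and hyperparameters are arbitrary.

    # Predict one base station's traffic from three VIBS using SVR.
    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    vibs_traffic = rng.uniform(0, 100, size=(200, 3))     # 3 VIBS, 200 hourly samples
    target_traffic = (0.5 * vibs_traffic[:, 0] + 0.3 * vibs_traffic[:, 1]
                      + rng.normal(0, 2, 200))            # synthetic "other BS" traffic

    model = SVR(kernel="rbf", C=10.0, epsilon=0.5)
    model.fit(vibs_traffic[:150], target_traffic[:150])
    pred = model.predict(vibs_traffic[150:])

    mae = np.mean(np.abs(pred - target_traffic[150:]))
    print(f"mean absolute error on held-out hours: {mae:.2f}")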
10:30 A Model-Based Communication Approach for Distributed and Connected Vehicle Safety Systems
Yaser P. Fallah (University of Central Florida, USA)
Information exchange for the purpose of creating situational awareness is the backbone of distributed systems that rely on communication. For example, connected vehicle safety systems rely on the exchange of information between vehicles to allow each vehicle to create a real-time map of its surroundings. These systems work by sampling the state of a physical process (vehicle trajectory) and communicating it to others. In this paper, we present a novel approach for communicating information about physical processes (such as vehicles) over a network using models of these processes. In this approach, traditional communication of data samples from a process is replaced by communication of models and model updates. This novel method allows each node in the distributed system to create a model-based view of its surroundings. The resulting model-based situational awareness map has higher fidelity than the sampled position information delivered using today's traditional communication methods. Using the example of a forward collision detection algorithm, we demonstrate the significant performance improvement that is possible using this approach. We demonstrate that the system is able to maintain its performance at rates as low as 1 Hz.
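The model-based idea can be sketched as follows: instead of broadcasting every position sample, a sender keeps a simple motion model of its own trajectory and transmits an update only when the model's prediction drifts beyond a threshold. The constant-velocity model, threshold, and toy trajectory below are illustrative assumptions, not the paper's design.

    # Sender-side sketch of model-based communication.
    def predict(model, t):
        x0, v, t0 = model
        return x0 + v * (t - t0)

    def sender(trajectory, dt=0.1, threshold=0.5):
        updates = []
        model = (trajectory[0], 0.0, 0.0)        # (position, velocity, time of last update)
        for i, x in enumerate(trajectory):
            t = i * dt
            if abs(predict(model, t) - x) > threshold:
                v = (x - model[0]) / (t - model[2]) if t > model[2] else 0.0
                model = (x, v, t)
                updates.append((t, model))       # this is what would be broadcast
        return updates

    trajectory = [0.05 * i * i for i in range(100)]   # accelerating vehicle (toy data)
    print(f"{len(sender(trajectory))} model updates instead of {len(trajectory)} raw samples")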

3B6: Modeling and Simulation IV

Room: Poinciana AB
Chair: Konstantinos Mykoniatis (University of Central Florida, USA)
10:10 Network based discrete event analysis for coordination processes in crisis response operations
Nadia Saad Noori (La Salle - Universtiat Ramon Llull, Spain); Kristin Paetzold (Universität der Bundeswehr München, Germany); Xavier Vilasís-Cardona (Ramon Llull University, Spain)
In this paper we introduce a novel approach to studying emerging organizational relations and coordination structures in crisis response operations. We use dynamic modeling methods to analyze operations in conjunction with network analysis to study relationships among coordinating teams. The goal of the research is to produce a model describing the evolution of a crisis response operation as a network-based dynamic system. Ultimately, the model should help create different scenarios of cross-organizational collaboration in crisis events to gauge the effectiveness of response systems. The envisioned research outcome will impact the future design of response plans in crisis management and hopefully contribute to the shift towards decentralized, network-based response plans.
10:30 Multi-Method Modeling and Simulation of a Face Detection Robotic System
Konstantinos Mykoniatis (University of Central Florida, USA); Anastasia Angelopoulou (University of Central Florida & Institute for Simulation and Training, USA); Asli Soyler Akbas and Peter Hancock (University of Central Florida, USA)
Robotic systems are currently going through changes at an unprecedented pace. Although the final goal of robotic systems will necessarily focus on real-world robots, it is often useful to perform simulation prior to investigation with actual robots. Modeling and simulation of robotic applications enables evaluation of different robotic system designs prior to implementation. The present work provides a Multi-Method Modeling and Simulation study of a human-robot environment for face and skeleton detection. The study is divided into three areas: i) the development of the user interface for testing the face detection algorithm and collecting the appropriate data for the simulation study; ii) the physical experimental design for data collection and analysis; and iii) the simulation of the human-robot environment. Microsoft Robotics Developer Studio, Visual Studio, the Kinect sensor, and AnyLogic were used for defining the robotic tasks, creating the application interface, detecting the human face, and modeling and simulating the system, respectively. Agent-based, discrete event and system dynamics simulation methods were combined for the simulation of the robotic system model. The simulation model includes some of the critical variables that were not included during physical experimentation. The two major findings of this simulation study were the evaluation of the impact of those critical variables on the performance of the face detection algorithm prior to the construction of the actual robot, and a simulation model that demonstrates this impact.
10:50 Formation Reconfiguration of Cooperative UAVs via Learning Based Model Predictive Control in an Obstacle-Loaded Environment
Ahmed Taimour Hafez (Queen's University, Canada); Sidney Givigi (Royal Millitary College of Canada, Canada)
Learning Based Model Predictive Control (LBMPC) is a new control policy that combines statistical learning with control engineering while providing guarantees on safety, robustness and convergence. The designed control policy respects the general rules of flocking, such that when static obstacles appear, the UAVs are required to steer around them and also avoid collisions with each other. Also, each UAV in the team matches the velocity of the other team members and stays close to its flockmates during flight. Our main contribution in this paper lies in solving the formation reconfiguration problem for a group of $N$ cooperative UAVs forming a desired formation using LBMPC in the presence of uncertainties and obstacles, in simulation.
11:10 System Architecture and Optimization to Support Variability and Flexibility in Design
Abdelkrim Doufene (MIT); Vivek Sakhrani (MIT, USA); Abdullah Alkhenani (King Abdulaziz City for Science and Technology, Saudi Arabia); Bo Yang Yu (MIT, USA); Stephen R Connors (Massachusetts Institute of Technology, USA); Adnan Alsaati (King Abdulaziz City for Science and Technology, Saudi Arabia); Olivier de Weck (Massachusetts Institute of Technology, USA)
The coupling of the desalination process with solar technology is a complex problem. As various types of desalination processes and solar technologies have been developed, the selection of the best combination requires several design criteria. Capital costs, operation and maintenance costs, plant site, salinity of seawater, environmental impacts, and water quantity and quality requirements are examples of the design criteria involved in selecting a suitable desalination process. On the other hand, the selection of a suitable solar system is governed by a number of factors such as plant configuration, energy storage, location, working fluids, etc. Moreover, when integrating the solar technology and desalination processes, more requirements and constraints arise. A generic design would reduce the cost of engineering studies and the time to market thanks to the reuse of existing designs and the ability to adapt a technical solution to a given context (the best architectures for a given spatial and temporal context). We use a design framework, complemented by multi-objective, multidisciplinary optimization models, in order to manage variability (space: different locations and hence different natural environment characteristics, mainly sea water quality, solar radiation and dust) and flexibility (time: increase of demand over time).

Wednesday, April 20, 11:50 - 13:40

Best Paper Awards Luncheon

Room: Grand Cypress D

Wednesday, April 20, 13:40 - 15:00

3D1: Microgrids

Room: Grand Cypress A
Chair: Mohsen Azizi (Michigan Technological University, USA)
13:40 Distributed Model Predictive Control of Energy Systems in Microgrids
Paul Stadler (Ecole Polytechnique Federale de Lausanne, Switzerland); Araz Ashouri (Ecole Polytechnique Federale de Lausanne (EPFL), Switzerland); Francois Marechal (Ecole Polytechnique Federale de Lausanne, Switzerland)
This paper presents a flexible and modular control scheme based on distributed model predictive control (DMPC) to achieve optimal operation of decentralized energy systems in smart grids. The proposed approach is used to coordinate multiple distributed energy resources (DERs) in a low voltage (LV) microgrid and therefore allow virtual power plant (VPP) operation. A sequential and iterative DMPC formulation is presented which incorporates global grid targets along with local comfort requirements and performance indices. The preliminary results generated by the simulation of a case study prove the benefits of applying such a control scheme to a benchmark low-voltage microgrid.
14:00 Robust Load Frequency Control in Islanded Microgrid Systems Using $\mu$-Synthesis and D-K Iteration
Mohsen Azizi (Michigan Technological University, USA); Sayed Ali Khajehoddin (University of Alberta, Canada)
In this paper, a robust controller is designed for the load frequency control (LFC) system in an islanded microgrid with multiple synchronous generation systems. This control strategy can be readily applied to an islanded microgrid which consists of renewable distributed generation and energy storage systems. The robust controller proposed in this paper includes multiple local controllers that are designed and optimized independently by using the $\mu$-synthesis technique and D-K iteration algorithm. This controller accounts for the dynamic coupling among the areas of the microgrid without any communication links required among the local controllers. Moreover, the controller is designed to be robust to the variations of different parameters of the microgrid, and hence improves the frequency control performance significantly. Simulation results of a small islanded microgrid with three generators confirm the effectiveness of the controller design approach proposed in this paper.
14:20 Voltage Profile Improvement of Micro-grids using SMC based STATCOM
Kanungo Barada Mohanty (NIT Rourkela, India); Swagat Pati (SOA University, India)
Voltage profile improvement has become a challenging issue in micro-grids. With the addition of renewable sources such as wind energy conversion systems, the task becomes more difficult. A Static Compensator (STATCOM) is very often utilized to improve the voltage profile of micro-grids. In this work, two micro-grid systems are dynamically modelled and simulated using Matlab/Simulink. The first micro-grid has a single diesel engine driven synchronous generator. The second micro-grid has one diesel engine driven synchronous generator and a doubly fed induction generator based wind energy conversion system. The STATCOM is connected to the load bus in both micro-grids. A sliding mode controller is used for the control of the STATCOM for voltage profile improvement of the micro-grid. The performance of the sliding mode controller based STATCOM is analysed and compared with the performance of a conventional PI controller based STATCOM. The evaluation of performance is done for both micro-grids. The loads in both micro-grids are concentrated and time varying. The robustness of the sliding mode controller based STATCOM is established through three different load changes and a comparison of the results.
14:40 High-Level Multi-Objective Model for Microgrid Design
Siamak Talebi, Aldo Fabregas and Troy Nguyen (Florida Institute of Technology, USA)
Recently, there has been a strong trend toward using renewable energy in rural areas far from main electricity generation plants. Small airports are facilities located in rural areas that are good candidates for supplying their power consumption with sustainable energy. Because of the intermittency of renewable sources, it is important to have energy storage to guarantee an uninterrupted and stable power supply for end users. In this paper, a high-level micro-grid model is proposed that considers decisions about factors such as renewable source capacity, storage, and system capacity. A multi-objective model for a high-level microgrid design was used to formulate design alternatives. This can enable engineering teams to refine requirements for subsequent stages of the systems engineering process. The main goal of the proposed approach is to minimize implementation cost and environmental impact over a 15-year lifespan of the proposed microgrid. The different components of the micro-grid with an energy storage device are selected and sized considering both initial cost and operational cost, with optimal coordination between storage and intermittent resources such as solar and wind, while meeting load and system operating requirements. A diesel generator is included as a backup supply for emergency cases; however, to comply with green energy policy, its use is minimized. Multi-objective linear programming is used for the proposed optimization. A case study of an integrated micro-grid located at Aydin airport in Turkey is presented to verify the advantages of the proposed optimal decision-making method. In addition to the system's own constraints, certain constraints of the airport design are also taken into account. Global interest in using renewable sources, including solar and wind, has been increasing for various reasons. Decreasing reliance on fossil fuels, because of environmental and geopolitical concerns, and price reductions in renewable source equipment make clean energy sources attractive. Governments are making efforts to prioritize supporting and investing in renewable power generation. This leads to new developments in smart-grid technologies with emerging system designs and configurations. According to the National Renewable Energy Laboratory (NREL), renewable energy could potentially support about 80% of total electricity consumption in the U.S. in 2050 [1]. Airports, as important elements of the global transportation system supporting 57 million jobs and approximately 3.5% of global GDP, are also trying to adapt to this trend of using sustainable energy [2]. As an example, Denver International Airport implemented a solar installation with a capacity of 10 megawatts to meet 6% of the whole airport's requirement [3]. Los Angeles Airport uses 8,000 tons of food waste to produce methane, which is then transferred to a power generation plant and converted into electricity, reducing the cost of electricity usage. These examples are indicative of major ongoing undertakings at several larger airports. Conversely, for small airports in the U.S. and other countries, activity on sustainability practices remains relatively low. The total aviation system in the U.S. has 3,345 airports that are designated as either primary or non-primary. Non-primary airports are public airports that serve general aviation and can be categorized as national, regional, local, basic, and unclassified.
There are 125 non-primary commercial service airports that have between 2,500 and 10,000 annual passenger enplanements. In addition, there are 2,553 General Aviation airports that account for 36 percent of the United States general aviation fleet and have an average of 29 based aircraft [4]. Although these smaller airports do not handle commercial flights, they play a major role in flight training, law enforcement, and aviation emergencies. They also provide the closest source of air transportation for many people in rural areas. According to the Federal Aviation Administration (FAA) National Plan of Integrated Airport Systems (NPIAS) [4], there are approximately 209,034 general aviation pilots who are heavy users of small airports. Small airports provide the majority of flight instruction and training, which in turn produces the pilots needed for commercial airlines. The need for airports that provide such a service is imperative, allowing continuation of operations and future airfield sustainability, master planning, and environmental analysis. As mentioned, smaller airports comprise a considerable percentage of the U.S. general aviation fleet and non-primary commercial services. Furthermore, their rural locations make them prime candidates for renewable energy systems that help reduce energy use and increase electric power reliability. The areas of concept development and requirements analysis to support the renewable energy system design and development process were addressed in [5]. Design and development of renewable energy systems for small airports involve the determination of detailed technical requirements that provide a basis for the evaluation of different renewable alternatives. Identification of an appropriate type of renewable energy source is based primarily on how that option would perform against the defined technical requirements and the return on capital investment. In order to implement a more reliable system that can support loads at all times, including emergency situations such as outages, the use of storage devices is mandatory. High up-front costs for renewable sources, in addition to maintenance costs, are some of the basic concerns that deter investors from considering these clean energies. However, new governmental policies aimed at lessening reliance on fossil-fuel-based energy, together with incentives such as tax credits, purchase of excess energy produced, and easy loan conditions, lead people to consider the long-term profitability.
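A toy weighted-sum linear program in the spirit of the sizing problem described above is sketched below with SciPy; all numbers (costs, emission factors, availability factors, load, and weights) are invented placeholders, not the paper's data.

    # Choose installed capacities of [solar, wind, diesel] (kW) to minimize a
    # weighted blend of lifetime cost and emissions while covering peak load.
    from scipy.optimize import linprog

    cost = [1800.0, 2200.0, 600.0]      # $ per installed kW over the lifespan (hypothetical)
    emissions = [0.0, 0.0, 700.0]       # kg CO2 per installed kW over the lifespan (hypothetical)
    w_cost, w_emis = 1.0, 10.0          # objective weights; a larger w_emis penalizes diesel

    c = [w_cost * ci + w_emis * ei for ci, ei in zip(cost, emissions)]

    derating = [0.25, 0.35, 0.95]       # average availability factors (hypothetical)
    A_ub = [[-d for d in derating]]     # derated capacity must cover a 500 kW peak load
    b_ub = [-500.0]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3, method="highs")
    print(res.x)                        # installed kW of solar, wind, diesel under these assumptions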

3D2: Space and Communication Systems II

Room: Grand Cypress B
Chair: Frank Riffel (KLS GmbH, Germany)
13:40 Exergy Based Optimization of Rocket System Staging Times
Andrew Gilbert and Bryan Mesmer (University of Alabama in Huntsville, USA); Michael Watson (NASA Marshall Space Flight Center, USA)
Exergy is defined as the useful work available to a system. Exergy efficiency is an overall system metric that integrates across disciplines, using exergy as a "common currency." It allows for an overall performance metric that accounts for interactions and couplings between systems. With such a holistic metric, designers can focus on modifying design variables to improve exergy efficiency. Optimization methods and sensitivity analysis are key tools for designers to perform such tasks. The proposed paper will explore the utilization of multidisciplinary design optimization (MDO) tools with exergy-based analysis in order to optimize the staging times of a rocket system.
14:00 Satellite ground station virtualization: Secure sharing of ground stations using software defined networking
Frank Riffel (KLS GmbH, Germany); Robert Gould (Natural Resources Canada, Canada)
This paper describes the idea of a virtualized satellite ground station and its potential applications. A virtual satellite ground station in this context is a system that transparently exposes the operation and data path of a real ground station to an external Operating Entity (OE) for a defined amount of time. The first part describes the background and motivation for such an idea: satellite ground station virtualization is done to facilitate sharing of ground station systems in an easy and secure way. The idea is therefore applicable to a number of scenarios, including, but not limited to, remote operations of a foreign site, antenna pooling, spare capacity marketing, and testing. Through sharing, a win-win situation can be realized: the owner of the ground station can gain extra revenue for "lending" the system to the external operating entity, while the external operating entity can reduce Capital and Operational Expenditures (CAPEX/OPEX) by not needing to procure and maintain its own system. The supported missions see additional benefit in the form of a higher number of supportable contacts and risk or cost reduction. The paper further lists other benefits and challenges. Challenges for sharing are not only technical: initially, management must be convinced that the potential risks and costs do not outweigh the business value. The paper addresses these concerns by considering key constraints and presenting a matching technical concept. In the core part of the paper, a matching technical concept is developed. The needed functions are identified and the virtualization layering options are traded off against each other. We found that three new functions are needed for ground station virtualization: an allocation and planning function on the side of the external operating entity, a scheduling and arbitration function on the ground station owner side, and an enforcement function that must be present on both sides. A suitable virtualization architecture is proposed. Properties of the necessary scheduling interface are briefly mentioned and an existing solution is described. The proposed ground station virtualization architecture features Software Defined Networking (SDN) as a key element. As security is one of the driving requirements, the use of SDN for the enforcement and isolation mechanisms of the proposed architecture is explained in greater detail. A short case study complements the paper. The proposed architecture is applied in the context of an antenna pooling scenario for the Inuvik Satellite Station Facility. Contacts of 17 satellites relevant to the site are simulated over a time frame of one year, and the scenario is analyzed with respect to contact capacity. One remarkable result of this analysis is that with antenna pooling the utilization of the antenna systems at the site can be doubled; from a business perspective, cooperation can thus be very lucrative. The proposed overall antenna pool system structure is described, including a structural view and an appropriate hardware selection. Expected costs for this scenario are given. We conclude that the proposed virtual ground station architecture is an enabler for maximizing business value through ground station sharing.
14:20 Mixed-Integer Linear Programming for Multibeam Satellite Systems Design: Application to the Beam Layout Optimization
Jean-Thomas Camino (LAAS-CNRS & Airbus Defence and Space, France); Christian Artigues (University of Toulouse, France); Laurent Houssin (Laas-CNRS & University of Toulouse, France); Stephane Mourgues (Airbus Defence and Space)
In a society where the demand for multimedia applications and data exchange is experiencing unstoppable growth, multibeam systems have proven to be one of the most relevant solutions for satellite-based communication systems. Though already well represented among geostationary satellites today, there are still several unresolved design optimization challenges for these complex systems that could lead to improved performance and better system costs. The satellite platform, the repeater, and the antennas are examples of subsystems that should be designed jointly in order to reach an optimized technical solution that fulfills the service requirements. Traditionally, such complex tasks are addressed through a decomposition of the overall system design into a sequence of smaller decision problems. In this article, we propose to rely on operations research techniques to, on the one hand, take into account explicitly the interdependencies of these decomposed problems and, on the other hand, handle the constraints specific to each subsystem and their interactions. In this paper, the focus is on the optimization of the beam layouts of multibeam satellites. Indeed, in addition to being a perfect example of the aforementioned importance of dealing with subsystem constraints, this problem appears early in the design chain of a multibeam satellite system and is therefore critical for the quality of the telecommunication system: the weaknesses of a beam layout cannot be made up for later in the system design. For this crucial optimization phase, the strength of the methodology we propose is to use linear programming to incorporate explicitly the technological feasibility constraints of the subsystems involved, while preparing as well as possible for the subsequent design problems. Most importantly, our approach allows us to overcome several persistent flaws of existing algorithms.
14:40 Affordable Processing for Long Coherent Integration of Weak Debris-Scattered GNSS Signals with Inconsistent Doppler
Md Sohrab Mahmud, Sana Qaisar and Craig R Benson (University of New South Wales, Australia)
Space Surveillance and Tracking is the capacity to identify and anticipate the kinematics of space debris in orbit around the Earth. It is important for avoiding collisions with the International Space Station, operational satellites, and other spacecraft. GNSS signals have been proposed as illumination sources for passive radar to track space debris. Detecting space debris through scattered GNSS signals requires extremely long integration to gather sufficient energy. Such long coherent processing is computationally expensive, and non-uniformities in the signal phase variations confound many existing processing aids, such as the Fast Fourier Transform. In this paper, we propose a novel multi-step processing strategy which is shown to reduce the processing burden to an affordable level. The first step correlates the received signal with a replica of the expected signal, followed by an integrate-and-dump stage. This reduces the bandwidth of the signal to audio frequencies, while taking care that the target uncertainty volume is preserved. In the final step, the full-length coherent integration is formed by summing the audio samples with the necessary phase adjustments, for example allowing the residual phase to vary as a second-order non-linear function of time. This processing technique is demonstrated using real data collected from GPS satellites, highlighting the ability to synthesize coherent integrations within a reasonable uncertainty volume at low cost using just the audio-rate signal.
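The multi-step idea (correlate-and-dump to an audio-rate signal, then one long coherent sum under a low-order residual-phase hypothesis) can be sketched as follows. This is our own toy illustration: the signal model, sample rates, dump length, and phase-search grid are assumptions, not the paper's parameters.

# Illustrative sketch of the multi-step coherent integration described above:
# correlate-and-dump to an audio-rate complex signal, then sum coherently
# under a second-order (quadratic) residual-phase hypothesis.
import numpy as np

fs = 2_000_000          # raw complex sample rate (Hz), assumed
dump_len = 500          # samples per integrate-and-dump -> 4 kHz "audio" rate
T = 2.0                 # total coherent integration time (s), assumed

n = int(fs * T)
t = np.arange(n) / fs
replica = np.exp(2j * np.pi * 1000.0 * t)              # expected signal replica
rx = 0.001 * np.exp(2j * np.pi * (1000.0 * t + 0.05 * t**2)) \
     + (np.random.randn(n) + 1j * np.random.randn(n))  # weak target + noise

# Step 1: correlate with the replica and integrate-and-dump.
audio = (rx * np.conj(replica)).reshape(-1, dump_len).sum(axis=1)
t_a = (np.arange(audio.size) + 0.5) * dump_len / fs    # audio-rate time stamps

# Step 2: long coherent sum under a hypothesized quadratic residual phase
# phi(t) = a1*t + a2*t^2 (searched over a small grid in practice).
def coherent_power(a1, a2):
    phase = 2 * np.pi * (a1 * t_a + a2 * t_a**2)
    return np.abs(np.sum(audio * np.exp(-1j * phase)))**2

best = max(((a1, a2) for a1 in np.linspace(-1, 1, 21)
                     for a2 in np.linspace(-0.1, 0.1, 21)),
           key=lambda p: coherent_power(*p))
print("best residual-phase hypothesis (a1, a2):", best)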

3D3: Biomedical Systems

Room: Grand Cypress C
Chair: Turki Turki (King Abdulaziz University & New Jersey Institute of Technology, Saudi Arabia)
13:40 A System Based on Genetic Algorithms For On-line Single-Trial P300 Detection
Riley Magee (RMC, Canada); Sidney Givigi (Royal Military College of Canada, Canada)
Brain-machine interface (BMI) systems collect and classify electroencephalogram (EEG) data to predict the desired command of the user. The P300 EEG signal is passively produced when a user observes or hears a desired stimulus. The P300 can be used with a visual display to allow a BMI user to select commands from an array of selections. The visual stimuli are often repeated and averaged to increase classification accuracy. In this paper we explore the classification of single-epoch P300 signals. An EEG BMI system was constructed to allow offline training and live testing. Using a genetic algorithm to select data features, we achieved 78.3% signal detection accuracy with a Support Vector Machine classifier. Using this classifier we constructed a simulated mobile robot steering system, which could be controlled with little training and achieved up to 7.5 commands/minute.
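A minimal sketch of the wrapper idea (a genetic algorithm searching binary feature masks, scored by SVM cross-validation accuracy) is shown below. The synthetic dataset, population size, and mutation rate are placeholders; the paper's EEG features and GA settings are not reproduced.

# Minimal sketch of genetic-algorithm feature selection wrapped around an SVM,
# in the spirit of the approach above. Real P300 epoch features would replace
# the synthetic dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=64, n_informative=10,
                           random_state=0)   # stand-in for EEG epoch features

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    clf = SVC(kernel="rbf", C=1.0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

pop_size, n_gen, n_feat = 20, 15, X.shape[1]
pop = rng.integers(0, 2, size=(pop_size, n_feat))

for gen in range(n_gen):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:pop_size // 2]]   # truncation selection
    children = []
    while len(children) < pop_size - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, n_feat)                         # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(n_feat) < 0.02                      # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents] + children)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best), "cv accuracy:", fitness(best))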
14:00 Complex System Modelling of the Spread of Tuberculosis in Nigeria
Oluyemi Badmus and Sergio Camorlinga (University of Winnipeg, Canada)
Infectious diseases and their transmission are good examples of complex systems, with several interacting and interdependent components. We develop an agent-based simulation of the living conditions of a typical slum setting in Nigeria, which is considered to have a higher incidence and prevalence of tuberculosis. We consider the epidemiology of the disease and create a dynamic model, incorporating both the environmental conditions and the immune-suppressed health conditions that previous studies have associated with the emergence of the disease. Based on the developed model, we observe the pattern of transmission and compare the results with the estimates provided by the World Health Organization (WHO) and other individual surveys carried out in the past. The results show that increasing the air changes per hour (ACH) reduces the number of new latent tuberculosis infections among close contacts. Incorporating 7 air changes per hour, the recommended value for a bedroom, had a significant impact in reducing the number of new latent infections, and introducing about 13 air changes per hour, the recommended value for a smoking environment, led to a further reduction. The results also show that individuals living with both HIV and diabetes have the highest risk of progressing to active tuberculosis disease, and that close contacts of people living with active tuberculosis have a higher risk of developing latent tuberculosis infection. These insights are useful findings to support public health policies for tuberculosis management.
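The reported effect of ventilation can be illustrated with a simple Wells-Riley-style airborne transmission calculation; this is our own illustration rather than the paper's agent-based model, and every parameter value below is an assumption.

# Wells-Riley-style illustration (not the paper's agent-based model) of how
# air changes per hour (ACH) lower infection risk for close contacts.
import math

def infection_probability(ach, infectors=1, quanta_per_hour=1.25,
                          breathing_m3_per_hour=0.5, exposure_hours=8,
                          room_volume_m3=40):
    """P = 1 - exp(-I*q*p*t / Q), with Q = ACH * room volume (all assumed)."""
    Q = ach * room_volume_m3                      # clean-air delivery, m^3/h
    dose = (infectors * quanta_per_hour * breathing_m3_per_hour
            * exposure_hours / Q)
    return 1 - math.exp(-dose)

for ach in (1, 7, 13):   # 7 ~ recommended bedroom value, 13 ~ smoking environment
    print(f"ACH={ach:2d}: nightly infection risk ~ {infection_probability(ach):.3f}")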
14:20 A Greedy-Based Oversampling Approach to Improve the Prediction of Mortality in MERS Patients
Turki Turki (King Abdulaziz University & New Jersey Institute of Technology, Saudi Arabia); Zhi Wei (New Jersey Institute of Technology, USA)
Predicting the mortality of Middle East respiratory syndrome (MERS) patients with identified outcomes is a core goal for hospitals in deciding whether a new patient should be hospitalized in the presence of limited hospital resources. We present an oversampling approach that we call the Greedy-Based Oversampling Approach (GBOA). We evaluate our approach and compare it against the standard oversampling approach from a classification perspective on a real dataset collected from the Saudi Ministry of Health, using two popular supervised classification methods, Random Forests and Support Vector Machines. Our results demonstrate that our approach outperforms the standard approach by giving the highest accuracy, with statistical significance, on 20 simulations of the real dataset.
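For orientation, the standard oversampling baseline that GBOA is compared against can be sketched as plain random duplication of minority-class rows before training; the snippet below uses a synthetic imbalanced dataset and does not reproduce GBOA itself.

# Sketch of the standard random-oversampling baseline used as the comparison
# point above (the authors' GBOA is not reproduced here). The dataset is
# synthetic; the real evaluation uses MERS patient records.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=500, weights=[0.9, 0.1], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

# Randomly duplicate minority-class rows until both classes are balanced.
rng = np.random.default_rng(1)
minority = np.flatnonzero(y_tr == 1)
extra = rng.choice(minority, size=(y_tr == 0).sum() - minority.size, replace=True)
X_bal = np.vstack([X_tr, X_tr[extra]])
y_bal = np.concatenate([y_tr, y_tr[extra]])

clf = RandomForestClassifier(random_state=1).fit(X_bal, y_bal)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))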
14:40 Markov Decision Process Formulation for Managing Human Weight Loss
Mukesh Chippa (The University of Akron, USA); Shivakumar Sastry (University of Akron, USA)
We present a Markov Decision Process (MDP) approach to compute policies that can aid the management of human weight loss. We show that the problem can be formulated as a Markov Chain under a reasonable set of assumptions. The states represent the quantized weight of a participant. The transitions between the states represent nutrition and exercise actions. A policy computed using this model represents an intervention strategy for a participant. Given the participant's initial weight and target weight, we show that the computed policy is sensitive to the reward functions that are associated with the actions. In the future, such an approach can be used to offer wellness interventions to participants.
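A minimal value-iteration sketch over quantized weight states conveys the formulation; the two stylized actions, transition probabilities, and effort costs below are invented for illustration and are not the paper's calibrated model.

# Illustrative value iteration over quantized weight states with two stylized
# actions; probabilities and rewards are assumptions, not the paper's model.
import numpy as np

states = np.arange(70, 101, 5)       # quantized weight (kg): 70, 75, ..., 100
target = 0                           # index of the 70 kg target state
n = len(states)
actions = {"diet": 0.4, "diet+exercise": 0.6}   # P(drop one weight quantum)

def transition(p_down):
    P = np.zeros((n, n))
    for s in range(n):
        if s == target:
            P[s, s] = 1.0            # target weight is absorbing
        else:
            P[s, s - 1] = p_down     # lose one quantum this period
            P[s, s] = 1.0 - p_down   # or stay at the current weight
    return P

def reward(effort_cost):
    r = np.full(n, -effort_cost)     # per-period effort cost of the action
    r[target] = 0.0                  # no cost once the target is reached
    return r

P = {a: transition(p) for a, p in actions.items()}
R = {"diet": reward(1.0), "diet+exercise": reward(1.5)}
gamma = 0.95

V = np.zeros(n)
for _ in range(200):                 # value iteration
    V = np.max([R[a] + gamma * P[a] @ V for a in actions], axis=0)

policy = ["maintain" if s == target else
          max(actions, key=lambda a: R[a][s] + gamma * P[a][s] @ V)
          for s in range(n)]
print(dict(zip(states.tolist(), policy)))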

3D4: Engineering Systems-of-Systems II

Room: Palm ABC
Chair: Alparslan E Bayrak (The University of Michigan, USA)
13:40 A Computational Concept Generation Method for a Modular Vehicle Fleet Design
Alparslan E Bayrak (The University of Michigan, USA); Bogdan Epureanu, Panos Papalambros and Arianne Collopy (University of Michigan, USA)
Modularity for ground vehicle systems has been viewed as a potential solution for the military to meet a variety of mission demands without keeping a large and diverse inventory of vehicles in the fleet. The development of modular concepts is a key element in the design process of these modular vehicle platforms and significantly impacts the effectiveness of the solution. In this paper, we propose a functional synthesis method to design a set of modules defining a modular fleet of ground vehicles for an overall fleet-level objective. The proposed method starts from a functional decomposition of a baseline conventional fleet capable of performing a given fleet mission. We then formulate a functional synthesis problem to draw the boundaries that define modules for a modular fleet having the same mission capability with minimum cost. In order to justify the effort, we present initial results demonstrating the value of modularity, with significant cost reductions at the expense of increased personnel requirements for a given initial modular concept.
14:00 Case Study on the Benefits of an Operational Concept Demo for GPS OCX
Sarah Law and Chuck Corwin (Raytheon, USA); Jabari Loving (Infinity Systems Engineering, USA); Steve Sorensen and Walid Al-Masyabi (Raytheon, USA)
The GPS Next Generation Operational Control System (OCX), currently under development by Raytheon, provides major improvements on current GPS system control capabilities, including greater accuracy, integrity and availability; information assurance to protect against current and emerging cyber threats; operational control of all new civil and military signals; automation to reduce operational crew size; and flexibility to meet evolving user needs. In order to validate the new operational concepts, Raytheon developed and employed a GPS operational concept (OPSCON) testbed. This testbed can be used to validate the system architecture before implementation; collapsing the systems engineering "V" to increase the confidence in the implementation earlier in the engineering life cycle. The OPSCON testbed combines experienced operators, emerging OCX software tools, and updated procedures to demonstrate how a smaller crew can efficiently and effectively manage the modernized GPS III mission using the new system capabilities. It is used to demonstrate GPS III mission threads including constellation management, military protection, positioning navigation and timing determination and users services. The threads increase understanding among stakeholders, operators, system developers and the acquisition community. By establishing a forum for soliciting early feedback about planned and future system capabilities the testbed helps to align end-state visions across the community. One of many hurdles in the development process for large complicated systems is how to articulate the usefulness of the systems capabilities for its acquirers and future operators. Often it is sufficient to demonstrate specific tools or applications; however these types of demonstrations may not be adequate to convey large conceptual changes. For example, one of the major benefits of OCX is the ability to automate day-to-day operations. This concept is difficult to demonstrate through small individual tools because the viewer cannot grasp the scope of the benefits of automation across all of the tools and crew positions simultaneously. To address some of these issues Raytheon took a new approach to customer demonstrations. Instead of demonstrating stand-alone tools we decided to demonstrate the OCX Operational Concept (OPSCON). According to Data Item Description DI-ISPC-81430A "an OPSCON description describes a proposed system in terms of the user needs it will fulfill, its relationship to existing systems or procedures, and the ways it will be used." Generally OPSCON descriptions are written documents that may contain diagrams, tables, matrices etc. To meet the intent of an OPSCON demonstration we have to show the various roles of the system operators, the suite of OCX tools, and actual GPS procedures. These three main areas and an understanding of the GPS mission from the operator's perspective are essential to our success. The associated paper describes how the OPSCON demonstration combined People, Tools, and Operational Procedures to bring OCX's OPSCON to life. The demonstrated combination of operators, tools, and procedures created an operationally realistic environment for showing individual operator as well as combined crew activities during a shift. Realistic scenarios and responses highlighted how OCX has made revolutionary improvements that allow operations to meet current and emerging needs for civilians and warfighters worldwide. A wide variety of audience members were able to benefit from the OPSCON demo. 
General Officers liked the mission focus of the demo and were able to grasp how OCX capabilities improved overall GPS operations. Current GPS operators could understand how OCX's automation would streamline their day-to-day operations and enable new capabilities to support warfighters. Systems and software engineers currently working on developing OCX were able to see how the tools they are developing fit within a larger context. The figure below shows how various audience types benefitted from the OPSCON demo and provided input for current and future demonstrations. After our initial demonstrations, we have received multiple requests to show the demo to GPS directorate engineers and leadership. Also, due to high interest and demand, Raytheon will be developing another series of mission threads using the OPSCON testbed to show the impact of various changes in ops tempo and how future capabilities can further improve GPS operations.

3D5: Sensors Integration and Applications I

Room: Palm DEF
Chair: Mahmoud Efatmaneshnik (University of New South Wales - Canberra & Australian Defence Force Academy, Australia)
13:40 Automated Stacker System Design and Development: Structural Support, End-Effector, and Control Subsystems
In this paper, the results of a research and development project sponsored by Nucor Steel Corporation, Marion, Ohio, are presented. The project was conducted at Michigan Technological University with the objective of designing and developing an automated robotic system for precise stacking of highway signposts, while complying with the required stacking pattern as well as time constraints. The design and development covers the three main subsystems: structural support, end-effector, and control. The completed stacker system was tested at Michigan Technological University and delivered to the Nucor Steel plant to be installed at Press #2 of the production line in the highway products division. The rapid growth of robotics and automation, and its current and predicted future impact on the economy, are very promising. Millions of industrial and domestic robots are already on the market worldwide to perform dirty, dangerous, or dull work, including tasks that people either may not want to perform, such as vacuuming and lawn mowing, or cannot do safely, such as dismantling bombs. Global competition, productivity demands, advances in technology, and affordability will push companies to increase the use of robots in the foreseeable future. While the automotive industry was the first to use robotics, other industries such as aerospace, oil and gas, food, and steel now also rely on robotic automation. Nucor Steel Corporation is a pioneer in the steel industry. The Nucor Steel Corporation bar mill relies heavily on a manual workforce in its highway products division. A highly manual process introduces many safety hazards as well as inefficiencies and inconsistencies. One hazardous position is the bundling of heavy signposts, which are manually raked into bundles before being manually banded, putting workers at risk of overuse injuries. Moreover, the signposts are randomly positioned within a bundle, and hence the disorganized bundle is much larger than an organized stack of the same count. Disorganized bundles also hinder further automation of processes downstream of the production line, such as banding and powder coating the signposts. In this industry-sponsored research and development project, a new automated robotic stacker for highway products is designed and developed utilizing Fanuc robot manipulators, custom-built end-effectors, and a programmable logic controller (PLC), resulting in smaller, organized stacks compared with the current disorganized bundles and the removal of a worker from a hazardous position in the process. Organized stacks will also allow for further automation of other processes downstream of the production line, such as banding and powder coating signposts. The automated robotic system is responsible for picking up signposts from a starting position, moving them, and stacking them in a predetermined pattern. A new post is available to be picked every three seconds according to the current product flow at Nucor Steel Corporation, which means that the system has to accomplish its tasks within a three-second cycle time. Given this time limit, the system is designed to pick up two posts simultaneously in order to provide a six-second cycle time that includes the following four main stages: (a) the two robots cooperatively grab two signposts on the conveyor, (b) carry them to the stacking cart and position them in the preprogrammed locations, (c) release the posts, and (d) return to the original position for the next cycle.
14:00 Automatic Sleep and Wake Classifier with Heart Rate and Pulse Oximetry: Derived Dynamic Time Warping Features and Logistic Model
Yanqing Ye, Kewei Yang, Jiang Jiang and Bingfeng Ge (National University of Defense Technology, P.R. China)
This paper presents a new sleep/wake classification method based on heart rate and pulse oximetry, using a logistic model with derived dynamic time warping and correlation features to classify sleep stages. 100 sleep recordings obtained from the publicly available Sleep Heart Health Study dataset were used to validate the proposed method. Using the extracted features, the classification performance of a linear discriminant (LD) classifier and a feedforward neural classifier was compared to the proposed logistic classifier. The classification accuracy and AUC of the logistic classifier were found to be better (83.8%, 0.924) than those of the two other classifiers (80.1%, 0.732 for the LD classifier and 64.0%, 0.801 for the neural classifier). The results demonstrate that the proposed logistic classifier, using the derived dynamic time warping and correlation features extracted from heart rate and pulse oximetry signals, can classify sleep stages efficiently and effectively, providing a novel way to carry out automatic sleep stage classification with a wearable device.
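The feature idea (a dynamic time warping distance and a correlation between per-epoch heart-rate and oximetry sequences, fed to a logistic model) can be sketched as below; the data are synthetic stand-ins for the Sleep Heart Health Study recordings, and the feature set and labels are assumptions.

# Minimal sketch of DTW-distance and correlation features per epoch, fed to a
# logistic-regression sleep/wake classifier. Data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance."""
    D = np.full((len(a) + 1, len(b) + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[-1, -1]

rng = np.random.default_rng(0)
labels, feats = [], []
for _ in range(200):                                  # 200 synthetic epochs
    wake = rng.random() < 0.5
    hr = 60 + (15 if wake else 5) * rng.random(30)    # heart-rate samples
    spo2 = 97 - (2 if wake else 0.5) * rng.random(30) # pulse-oximetry samples
    feats.append([dtw_distance(hr, spo2), np.corrcoef(hr, spo2)[0, 1],
                  hr.mean(), spo2.mean()])
    labels.append(int(wake))

X, y = np.array(feats), np.array(labels)
X = (X - X.mean(0)) / X.std(0)                        # standardize features
clf = LogisticRegression(max_iter=1000).fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))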
14:20 Development of a Tire Cavity Sound Measurement System for the Application of Field Operational Tests
Johannes Masino, Berthold Daubner, Michael Frey and Frank Gauterin (Karlsruhe Institute of Technology, Germany)
Tire road noise is considered the major source of traffic noise and even a health problem today. Existing methods to measure tire road noise suffer from several disadvantages, which led to the development of an alternative method, namely the tire cavity sound measurement method. Due to the high cost of the developed prototypes, this method has not been applicable for field operational tests or for conducting measurements comprehensively. In this paper we present a new tire cavity sound measurement system to fill this gap. To meet the requirement of a low-cost microphone measuring sound pressures of approximately 150 dB with low total harmonic distortion in the tire cavity, we show a modification of the sensor. We verify the quality of our system by conducting selected experimental examinations and comparing the results with a previously developed, highly accurate prototype. At approximately 200 USD, the cost of our tire cavity sound measurement system is 97.14% lower than that of former prototypes (7,000 USD), without significantly reducing quality. Our system can now be applied in field operational tests to distinguish different road surfaces and to measure the quality of the road infrastructure, in order to improve it efficiently and to reduce tire road noise.
14:40 Self Organized Scale-Free Distributed Classification Using a Distributed, Unreliable Set of Limited Capability Components
Paul Gaynor and Daniel N Coore (The University of the West Indies, Jamaica)
Bandwidth requirements for networks that monitor the environment often result in scale limitations. Such limitations can be avoided by applying in-network data classification to reduce the required bandwidth; however, traditional strategies that require a priori network configuration become expensive as networks begin to scale. We describe a self-organizing strategy that allows a scale-free set of independent sensing components to operate as a distributed classification system. A shared storage protocol is run over a set of simple, distributed physical components, which creates an abstraction of a reliable in-network storage device. The emergent storage device is used to record classification templates. Each component that encounters environmental information uses data in the network to perform data classification, and collaborates with neighbours to increase the information content. Consensus techniques are used to disambiguate between conflicting classifications. We apply the strategy to implement a distributed face recognition system. The strategy allows macro-level configuration of a self-organizing process that supports an application. New deployment paradigms that depend on an extremely large number of components, such as nano-technology, are well suited to benefit from the strategy.

3D6: System Architecture I

Room: Poinciana AB
Chair: Mohsen Mosleh (Stevens Institute of Technology, USA)
13:40 Enclaves for Operating System Protection
Brent Sherman (Intel Corporation, USA); Jacob Torrey (Assured Information Security, Inc, USA)
As networks become increasingly targeted by attackers in search of sensitive data, a new data protection model is arising, one in which data must be protected even on contested networks. In this new paradigm, a stronger isolation boundary is needed than the current process model of the status quo: hardware-enforced enclaves are a step towards true data protection on contested networks. This paper provides a background on enclaves through two example implementations, HARES and Intel SGX, followed by three case studies of well-known malware that could have been prevented through the deployment of enclave technologies. Finally, a discussion of the weaknesses of current enclave technologies is provided before concluding remarks.
14:00 Customer Individual Product Development - Methodology for Product Architecture Modification
Maik Ploetner, Immanuel Straub and Michael Roth (Technical University of Munich, Germany); Udo Lindemann (Technische Universitaet Muenchen, Germany)
In order to withstand increasingly global competition, companies try to differentiate themselves by fulfilling as many individual customer requirements as possible. Promising in this regard are so-called individualized products, where customers actively intervene in the development process using web-based toolkits. However, avoiding functional or safety-relevant feedback between individualizable and non-individualizable product components, while maximizing the geometric solution space available to customers, represents a major challenge for product development. In order to cope with this, a three-phase methodology for product architecture modification is presented in this paper. In conclusion, an evaluation is presented which illustrates the usability and traceability of the methodical approach.
14:20 The Internet: A System of Interconnected Autonomous Systems
Mehmet Engin Tozal (University of Louisiana at Lafayette, USA)
The Internet, a global system of interconnected networks, has already become a de-facto utility serving billions of people worldwide. Individuals, companies, educational institutions and government agencies use the Internet for communication, entertainment, marketing, administration, collaboration and citizen participation. On the other hand, the Internet is a highly engineered, globally scaled, complex system formed by thousands of autonomous networks operating independently. In this study, we first present a taxonomy of autonomous systems (ASes) which is definite and compatible with the current AS-level structure of the Internet. Then, we analyze different classes of ASes to shed light on the complex structure of the Internet. We believe that our approach and findings will help telecom practitioners gain more insight into the structural and operational characteristics of the Internet and enhance their network infrastructures.
14:40 Resource Allocation through Network Architecture in Systems of Systems: A Complex Networks Framework
Mohsen Mosleh, Peter Ludlow and Babak Heydari (Stevens Institute of Technology, USA)
Traditional Systems Engineering methods and theories are not sufficient for analyzing and explaining the dynamics of resource allocation in Systems of Systems (SoS) with autonomous parts. This paper introduces a framework, using complex network models, for studying the interaction of autonomous components and the design of the system connectivity structure in SoS, as well as their impact on resource management. The framework explicitly incorporates the costs of connection and the benefits received through direct and indirect access to resources, and provides measures of the optimality of connectivity structures. We discuss centralized and distributed schemes that, respectively, represent systems in which a central planner decides the connectivity structure and systems in which distributed components are allowed to add and sever connections to optimize their own resource access.

Wednesday, April 20, 15:00 - 15:40

Coffee Break

Room: Grand Cypress D

Wednesday, April 20, 15:40 - 17:20

3E1: Cyber Security Issues I

Room: Grand Cypress A
Chair: Laurent Njilla (Air Force Research Laboratory, USA)
15:40 A Game-Theoretic Approach on Resource Allocation With Colluding Nodes in MANETs
Laurent Njilla (Air Force Research Laboratory, USA); Patricia Echual (California State University at Long Beach, USA); Niki Pissinou (Florida International University, USA); Kia Makki (Technological University of America (TUA), USA)
Prevalent concerns with dynamic networks typically involve security. Especially under the resource constraints of dynamic networks such as mobile ad-hoc networks (MANETs), security needs particular consideration. In this paper, we first analyze the solution concept involved in optimizing resource allocation and data packet forwarding. In a MANET, data packet forwarding may be unreliable due to the presence of selfish nodes: nodes may not want to participate in the network in order to preserve their own resources. We propose a packet-forwarding problem model with a negotiation game, where an arbitrator acts as a cluster head and initiates a bargaining game. Thereafter, we consider the possibility of a group of nodes exhibiting malicious behavior and colluding to subvert the MANET. We investigate the problem by finding the optimal Nash Equilibrium (NE) strategies of the negotiation game. Then, we simulate the effect of the coalition of malicious nodes in a mobile environment. Simulation results support our model.
16:00 A Modbus Command and Control Channel
Antoine Lemay (École Polytechnique de Montréal, Canada); José M. Fernandez (Ecole Polytechnique de Montreal, Canada); Scott Knight (RMC, Canada)
Since the discovery of Stuxnet, it is no secret that skilled adversaries target industrial control systems. To defend against this threat, defenders increasingly rely on intrusion detection and segmentation. As the security posture improves, it is likely that the attackers will move to stealthier approaches, such as covert channels. This paper presents a command and control (C&C) covert channel over the Modbus/TCP protocol that represents the next logical step for the attackers and evaluates its suitability. The channel stores information in the least significant bits of holding registers to carry information using Modbus read and write methods. This offers an explicit tradeoff between the bandwidth and stealth of the channel that can be set by the attacker.
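The least-significant-bit encoding at the heart of the channel can be sketched as simple bit packing and unpacking over holding-register values; only this encoding step is shown, the Modbus/TCP read and write transport is omitted, and the register values below are invented.

# Sketch of the register least-significant-bit encoding idea described above.
# Only the bit packing/unpacking is shown; actual transport would use Modbus
# read/write holding-register operations, which are omitted here.
def embed(registers, payload_bits, bits_per_register=1):
    """Hide payload bits in the low bits of existing holding-register values."""
    out, k = list(registers), 0
    for i in range(len(out)):
        for b in range(bits_per_register):
            if k >= len(payload_bits):
                return out
            out[i] = (out[i] & ~(1 << b)) | (payload_bits[k] << b)
            k += 1
    return out

def extract(registers, n_bits, bits_per_register=1):
    bits = []
    for r in registers:
        for b in range(bits_per_register):
            if len(bits) == n_bits:
                return bits
            bits.append((r >> b) & 1)
    return bits

plant_values = [1203, 877, 40012, 5, 9981, 302, 118, 64000]  # plausible registers
secret = [1, 0, 1, 1, 0, 1, 0, 0]                            # one covert byte
carried = embed(plant_values, secret)
assert extract(carried, len(secret)) == secret
print(carried)   # each register value changes by at most one LSB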
16:20 A Methodology for Systematic Attack Trees Generation for Interoperable Medical Devices
Jian Xu and Krishna Kumar Venkatasubramanian (Worcester Polytechnic Institute, USA); Vasiliki Sfyrla (Unaffiliated)
Security for medical devices has gained some traction in the recent years following some well-publicized attacks on individual devices, such as pacemakers and insulin pumps. This has resulted in solutions being proposed for securing these devices, usually in stand-alone mode. Medical devices are however becoming increasingly interconnected and interoperable as a way to improve patient safety, decrease false alarms, and reduce clinician cognitive workload. Given the nature of interoperable medical devices (IMDs), attacks on IMDs can have devastating consequences. This work outlines our effort in understanding the threats faced by IMDs, an important first step in eventually designing secure interoperability architectures. A useful way of performing threat analysis of any system is to use attack trees. Attack trees are conceptual, multi-leveled diagrams showing how an asset, or target, might be attacked. They provide a formal, methodical way of describing the threats to a system. Developing attack trees for any system is however non-trivial and requires considerable expertise in identifying the various attack vectors. IMDs are typically deployed in hospitals by clinicians and clinical engineers who may not possess such expertise. We therefore develop a methodology that will enable the automated generation of attack trees for IMDs based on a description of the IMD operational workflow and list of safety hazards that need to be avoided during its operation. Both these pieces of information can be provided by the users of IMDs in a care facility. The contributions of this paper are: (1) a methodology for automated generation of attack trees for IMDs using process modeling and hazard analysis, and (2) a demonstration of the viability of the methodology for a specific IMD setup called Patient Controlled Analgesia (PCA-IMD), which is used for delivering pain medication to patients in hospitals.

3E2: Space and Communication Systems III

Room: Grand Cypress B
Chair: Henry Yeh (California State University Long Beach, USA)
15:40 Optimal Reliable Routing Path Identification in MANET With FTR-AHP Model
Sureddy R M krishna (JNTU-H, India); M N SeetaRama Nath (A U, India); V Kamakshi Prasad (JNTUH School of Information Technology, India)
Accurate and reliable prediction of the optimal paths between source and destination nodes is an essential characteristic of a Mobile Ad-hoc Network (MANET). It is a challenging task because of node mobility, the scarcity of network infrastructure, and the dynamic nature of MANETs. In the present study, an attempt is made to identify and rank reliable optimal paths from among the candidate routing paths with the proposed hybrid model, called the Fuzzy TOPSIS Rough Set Based Analytical Hierarchy Process (FTR-AHP). The model utilizes fuzzy sets and rough sets for classification, and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) for identifying the best and worst routes in the network. The resulting ranked routes help overcome the dynamic traffic routing problem in MANETs.
16:00 Beamforming Solvability in 3D Space using Active Scattering Devices
Kyle Ying (California State University, Long Beach, USA); Hen-Geul Yeh (California State University Long Beach, USA); Donald Chang (Spatial Digital Systems, USA); Joe Lee (SDS, Inc., USA)
Most multiple-input multiple-output (MIMO) wireless communication systems currently rely on probabilistic or stochastic channel models. In this paper, however, we describe a deterministic approach to the channel model in order to beamform an indoor signal from multiple transmitters to a single receiver in a group, with the non-receiving elements effectively receiving noise only. The use of Composited Transfer Functions (CTF) with the channel state information (CSI) in a deterministic approach allows for precise multipath path-loss calculations that permit the generation of orthogonal beams at specific receiving elements. Frequency reuse is possible such that only the specified device receives the signal, allowing multiple users to transmit and receive data on a single frequency. We show that, through the additional use of active-gain scattering devices, optimizer solvability increases, allowing for larger areas and more diverse geometries of transmitting and receiving elements. The capabilities and effectiveness of our deterministic beamforming approach with and without the use of active scatterers are discussed in this paper.
16:20 Conjugate ICI Cancellation Techniques for 2x1 and 4x1 Space-Time Transmit Diversity OFDM Systems
Hen-Geul Yeh (California State University Long Beach, USA); Samet Yildiz (California State University Long Beach, USA)
Orthogonal frequency division multiplexing (OFDM) systems require orthogonality among subcarriers. Unfortunately, conditions such as residual carrier frequency offset and time variations caused by Doppler shift or phase noise destroy this orthogonality at the receiver, resulting in inter-carrier interference (ICI) and degraded bit error rate (BER) performance. In this paper, conjugate cancellation (CC), which mitigates the ICI of OFDM systems, is combined with 2x1 and 4x1 space-time (ST) codes to form STCC systems. Even though a fully orthogonal transmission matrix is not available, the STCC-OFDM system with four transmit antennas not only significantly mitigates the effect of the non-orthogonality of the transmission matrix, but also provides a higher transmission rate compared with the fully orthogonal, rate-0.5 transmission under the same antenna configuration. Furthermore, the error floor of the 4x1 STCC-OFDM system with coding rate 1 is significantly lower than that of the regular CC, ST, and 4x1 ST-OFDM systems with coding rate 0.5. This STCC-OFDM system may serve the needs of fifth-generation (5G) MIMO-OFDM systems without increasing power, bandwidth, or computational load.

3E3: Air and Space Systems

Room: Grand Cypress C
Chair: Luis Daniel Otero (Florida Institute of Technology, USA)
15:40 Designed and Developed A Civil Airport Safety Management System
Ming Cheng and Leping Yuan (Civil Aviation University of China, P.R. China)
Objective: To develop an SMS tool that can manage airport safety efficiently. Methods: An airport safety management system was established based on a B/S (browser/server) structure, integrating the daily work of airport safety management with risk management. Results: A functional framework was designed, including predictive, proactive, reactive, and re-establishment subsystems, each involving several functional modules. The system provides daily safety management tools and emergency decision support tools. Conclusion: The tool achieves all functions required by ICAO and provides a solution for airport SMS.
16:00 Proposal of Hardware-in-the-loop control platform for small fixed-wing UAVs
Rauhe Abdulhamid (Technological Institute of Aeronautics, Brazil); Neusa Maria F. Oliveira (Instituto Tecnologico de Aeronautica, Brazil); Roberto Amore (Instituto Tecnológico de Aeronáutica, Brazil)
Unmanned Aerial Vehicles (UAVs) have received considerable attention from the academic community and technology solution companies, given their civilian and military applications. The autopilot system should be thoroughly tested in the lab because an accident may cause irreversible damage to a UAV. This article presents a proposal for a Hardware-in-the-Loop platform for the testing and validation of a small fixed-wing aircraft. In addition to the communication system, a deflection measurement platform for the aircraft control surfaces has been developed. This measurement platform has been validated, enabling future work on implementing embedded control algorithms.
16:20 Preliminary sUAV Component Evaluation for Inspecting Transportation Infrastructure Systems
Luis Daniel Otero (Florida Institute of Technology, USA)
In the transportation systems engineering field, there is fast-growing interest in using small unmanned aerial systems (UAS) for structural inspections. An application of particular importance is the inspection of bridges and high mast luminaires (HML). This paper describes an ongoing research effort, and the results obtained, to evaluate small unmanned aerial vehicles (sUAV) for various environmental conditions and mission objectives. Various propeller types, battery types, and configurations were used in the evaluation of this sUAV to estimate its maximum working altitude, motor power capabilities, battery life expectancy under different loads, and maneuverability constraints. The preliminary research results presented in this paper support the use of sUAS to assist in bridge and HML inspections. Altitude, payload, and maneuverability tests were conducted using quadcopters and hexacopters to understand the performance and limitation parameters that directly relate to the use of UAVs for transportation infrastructure inspections. Future research areas are identified to extend the work presented in this paper.

3E4: Systems Engineering Theory

Room: Palm ABC
Chair: Mahmoud Efatmaneshnik (University of New South Wales - Canberra & Australian Defence Force Academy, Australia)
15:40 Resilience of Initiatives to Shifting Management Priorities Under Emergent and Future Conditions
The attention of industry, government, and academia has been directed toward the design and implementation of resilient systems, with applications spanning environmental management, engineering, business and finance, and the social sciences. However, less attention has been paid to assessing the resilience of the strategic plans of which the systems under consideration are a result. While a particular infrastructure asset or technology may be resilient in a given scenario, the overriding strategic plan may be vulnerable to changes in decision-maker preferences, regulatory restrictions, or other emergent shifts in values and objectives. In this paper, we describe scenario planning and decision-making tools that are useful for assessing the resilience of strategic plans across a multitude of emergent conditions. Several case studies are presented, including two examples: 1) creating sustainable biofuel supply chains for the aviation industry, and 2) prioritization of infrastructure development opportunities in Afghanistan.
16:00 Defining an Architecture for the Systems Engineering Body of Knowledge
Richard Adcock (Cranfield University & BKCASE, United Kingdom); Nicole Hutchison (Stevens Institute of Technology, USA); Claus Nielsen (Cranfield University, United Kingdom)
Originally sponsored by the US Department of Defense (DoD) and managed by the Systems Engineering Research Center (SERC), the 3-year Body of Knowledge and Curriculum to Advance Systems Engineering (BKCASE) project produced a baseline Guide to the Systems Engineering Body of Knowledge (SEBoK) and associated Graduate Reference Curriculum (GRCSE). The primary aim was to bound, define, and document the knowledge related to systems engineering (SE) practice to support education, research, professional development, and practice. Given the size of the task, the SEBoK was never expected to be complete and definitive at the end of its initial 3 years of intensive activity. The BKCASE team's aims for the first version of the product were: 1. To create an overall structure or architecture for the SEBoK that is as complete and robust as possible. 2. To identify the baseline SE knowledge to be identified and discussed within that structure. 3. To define a process for ongoing evolution and maintenance that would allow maturing of the SEBoK content to continue. These aims were achieved in 2012 with the release of SEBoK version 1.0. Since then, BKCASE has been sponsored by INCOSE, IEEE-CS, and the SERC and is now run as a community-led resource under an Editorial Board. This Board continues to update SEBoK content twice a year to ensure the SEBoK remains current and relevant as SE knowledge evolves. Additional detail is available at www.BKCASE.org. This update process is intended both to build on the aims of the original project and to continue the task of following the evolution of SE knowledge. Of course, completing the SEBoK is an impossible task, as it should be in a rapidly growing and evolving discipline such as SE. This paper discusses the logic and success of our solution to aim 1 above. It outlines the ongoing SEBoK update process and discusses what we have learnt after 3 years of SEBoK stewardship.
16:20 Stability Analysis of Network of Similar-Plants via Feedback Network
Behzad Shahrasbi (University of Central Florida, USA)
Here, we study a networked control system with similar linear time-invariant plants. Using the master stability function method, we propose a network optimization method to minimize the feedback network order in the sense of the Frobenius norm. We then verify our results with a numerical example and show that this method outperforms known feedback network optimization methods, namely the matching condition.

3E5: Sensors Integration and Applications II

Room: Palm DEF
Chair: Kanungo Barada Mohanty (NIT Rourkela, India)
15:40 Low-cost Intelligent Static Gesture Recognition System
Harini Sekar and Rajagopalan Rajashekar (Solarillion Foundation, India); Gosakan Srinivasan (Easwari Engineering College & Solarillion Foundation, India); Priyanka Suresh (Sri Sivasubramaniya Nadar College of Engineering & Solarillion Foundation, India); Vineeth Vijayaraghavan (Solarillion Foundation, India)
This paper presents a prototype implementation of a low-cost, open-hardware, static-gesture recognition system. The implemented system has three major components: a Glove and Sensor Unit (GSU), consisting of a pair of gloves embedded with custom-made, low-cost flex and contact sensors; a Primary Supporting Hardware (PSH) that maps changes in input values from the GSU; and a Secondary Supporting Hardware (SSH) that processes the input values and recognizes the gesture accurately. When a gesture is signed, the GSU tracks the change in orientation of the fingers, which results in a change in the voltage levels of the sensors. This change is mapped by the PSH and passed on to the SSH, which comprises two ATmega328P microcontrollers, one connected to each glove. The two microcontrollers are connected in a master-slave configuration, and communication between them is facilitated through an XBee module. The performance of this gesture recognition system is evaluated using a data set comprising 36 unique gestures. These gestures represent a total of 120 gestures spanning five globally used sign languages. A gesture recognition engine that resides in the master microcontroller processes the input and identifies the gesture. The engine comprises a two-stage selection-elimination embedded intelligence algorithm that enhances the system efficiency from 83.1% to 94.5% without any additional hardware. The cost of the system is USD 30, which the authors believe could be brought under USD 9 upon commercialization.
16:00 RF-Localize: An RFID-based Localization Algorithm for Internet-of-Things
Saeed Manaffam and Amirhossein Jabalameli (University of Central Florida, USA)
In this paper an RFID-based localization algorithm for the Internet of Things (IoT) is proposed. The responses of RFID tags to different readers are employed to determine the locations of the IoT devices. The proposed algorithm does not rely on unreliable power-level readings or on sophisticated direction-of-arrival-based methods. First, we assume that the locations of the RFID readers are known a priori. Later in the paper, we extend our results to the scenario where neither the locations of the IoT devices nor those of the RFID readers are known. In addition, we provide analytical bounds on the number of readers required in the network to obtain reliable localization results. The results of the numerical experiments show highly accurate localization of IoT devices when the proposed RF-Localize algorithm is employed.
16:20 Force Control of Electrohydraulic Systems using Super-Twisting Algorithm
Suwat Kuntanapreeda (King Mongkut's University of Technology North Bangkok & Thailand, Thailand)
Control of complex dynamical systems such as electrohydraulic systems is challenging. This paper presents force control of an electrohydraulic actuator using the super-twisting algorithm (STA). The STA is one of the most promising high-order sliding-mode control algorithms. It guarantees robustness with respect to modeling errors, uncertainties and external disturbances while reducing the chattering phenomenon found in the conventional sliding-mode control (SMC) algorithm. In this paper, the STA was utilized to achieve a robust nonlinear controller for force control of an electrohydraulic actuator. The benefits offered by the STA-based control compared to the conventional SMC-based control were demonstrated through numerical simulations. Finally, experiments on a real-life electrohydraulic system were conducted to illustrate the success and effectiveness of the STA-based controller.
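For reference, the standard super-twisting control law takes the form u = -k1*sqrt(|s|)*sign(s) + v with dv/dt = -k2*sign(s). The sketch below shows that law driving a crude first-order stand-in plant; the gains, sliding variable, and plant are assumptions and do not reproduce the paper's electrohydraulic model.

# The standard super-twisting control law, shown for reference; gains, sliding
# variable, and the toy plant are placeholders.
#   u = -k1 * sqrt(|s|) * sign(s) + v,   dv/dt = -k2 * sign(s)
import math

class SuperTwisting:
    def __init__(self, k1=2.0, k2=1.0, dt=0.001):
        self.k1, self.k2, self.dt, self.v = k1, k2, dt, 0.0

    def update(self, s):
        """s is the sliding variable, e.g. the force tracking error."""
        sign_s = (s > 0) - (s < 0)
        self.v += -self.k2 * sign_s * self.dt       # integral (twisting) term
        return -self.k1 * math.sqrt(abs(s)) * sign_s + self.v

# Toy usage: drive a first-order "force" plant toward a 100 N setpoint.
ctrl, force = SuperTwisting(), 0.0
for _ in range(10000):
    u = ctrl.update(force - 100.0)                  # s = tracking error
    force += (u - 0.1 * force) * ctrl.dt            # crude plant stand-in
print(f"final force ~ {force:.1f} N")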
16:40 Fuzzy Logic Controller based STATCOM for Voltage Profile Improvement in a Micro-Grid
Kanungo Barada Mohanty (NIT Rourkela, India); Swagat Pati (SOA University, India)
Microgrids constitute a very essential part of today's power system for fulfilling the increasing power demand, but when operating under isolated conditions microgrids become very vulnerable, because they normally have small generation capacities and most of their generation sources are renewable or constant-power sources, which reduces the transient as well as voltage stability limits of an isolated microgrid. In this work a single synchronous-generator-based isolated microgrid system is studied. The microgrid feeds power to a concentrated time-varying load. Fluctuations in the load cause variations in the load bus voltage, which reduce the stability of the system. A STATCOM is used to improve the voltage profile of the load bus. Two different controllers, i.e., PI and fuzzy logic controllers, are used for the control of the STATCOM. The performance of the STATCOM with both controllers is evaluated under different load conditions, i.e., linear RL load, nonlinear load, and dynamic load, and the results are analyzed and compared. The whole simulation is done in Matlab/Simulink.
17:00 Controlled Flooding with Passive Anti-Flooding for Urgent Messages in Body Area Sensor Networks
Jinze Yang, Yan Sun and Jesus Requena Carrion (Queen Mary University of London, United Kingdom)
This paper proposes a novel routing scheme, named Controlled Flooding with Passive Anti-Flooding (CFPAF), aimed at handling Urgent Messages (UMs) efficiently in Body Area Sensor Networks (BASNs). Due to the low frequency of UMs, most existing routing schemes employ common flooding methods for forwarding UMs towards the destination, increasing the probability of successful delivery at the expense of consuming greater radio resources. To treat UMs in a cost-effective way, CFPAF provides a new controlled flooding solution with passive anti-flooding capabilities. In addition, gossiping is adopted in CFPAF to further reduce the UM flooding storm. Simulation results show that, compared to the classic Multi-Path Forwarding method, CFPAF dramatically reduces the number of packets without significantly affecting the average end-to-end delay.

3E6: System Architecture II

Room: Poinciana AB
Chair: Mehmet Engin Tozal (University of Louisiana at Lafayette, USA)
15:40 Integrating Object-Process Methodology with Attribute Driven Design
Nil Ergin (Penn State University, USA); Colin Neill (PSU, USA); Raghvinder Sangwan (Penn State University, USA)
System architecture is a key driver in defining a system's form and function. Object-process methodology (OPM) integrates system function, structure, and behavior in one model for the study of system architectures. While the methodology eliminates the challenges of managing multiple views of a system architecture, it is a descriptive approach and does not explicitly address the life cycle properties (-ilities) of a system. Thus, the quality of the architecture developed depends on the experience and skill level of its architect. Integrating OPM with other architecting methodologies can address this gap. In particular, Attribute Driven Design (ADD) is an approach widely used in software-centric applications for generating architectures with desirable life cycle properties. In this paper, we integrate OPM with ADD in order to leverage the advantages of each and demonstrate its use via an illustrative system development project. The integrated approach provides explicit guidance to the architect for capturing form, function, and life-cycle properties early in the conceptual architecting process.
16:00 Assessing Design Dependencies in Modular Systems
Henry Wong (UNSW Canberra, Australia); Sana Qaisar and Michael J Ryan (University of New South Wales, Australia)
Modular systems or products offer several benefits such as design flexibility, upgradability, extensibility, and interoperability across multiple vendors. System modularity is strongly driven by design interdependencies, and several academic models have been proposed for assessing such interdependencies. However, there is a lack of consistency across these models in terms of performance metrics and scope of application. In this paper, we compare three existing system-dependency assessment models to identify the elements required for developing a more comprehensive assessment methodology. The three models are applied to a computer-mouse architecture to conduct the assessments. We find that by including module composition, inter-module interface complexity, and functional dependencies, a more comprehensive measure of system dependency can be obtained. This work provides a foundation for further research and development of methodologies and tools for design-dependency assessment that determine the level of modularity needed to cope with future system changes.
16:20 A Process for DoDAF Based Systems Architecting
Matthew Amissah and Holly Handley (Old Dominion University, USA)
The Department of Defense Architecture Framework (DoDAF) is the DoD's mandated method for documenting system architectures. The current version, DoDAF v2.02, advocates a data-centric process focused on eliciting data to facilitate decision support. It prescribes a domain ontology, the DoDAF Meta-Model (DM2), aimed at ensuring conformance and interoperability of architecture models. Although DoDAF currently does not prescribe or proscribe specific methodologies or tools, the mismatch between DoDAF's underlying modeling approach and mainstream methodologies and supporting technology in Systems Engineering (SE) has been noted in the literature [1-3]. This paper proposes a process for architecting systems in line with DoDAF that is aimed at bridging this gap, and offers a discussion of supporting tools in this regard.
16:40 Dependability in Autonomous Maritime Vehicles: Building Resilience into Service-Oriented Agent Robots
Carlos C. Insaurralde (Teesside University, United Kingdom); Yvan Petillot (Heriot Watt University, United Kingdom)
Technologies for ocean engineering increasingly rely on autonomous solutions (mainly adaptation capabilities) able to tackle more complex maritime missions. This paper presents how an Intelligent Vehicle Control Architecture (IVCA) for marine robots provides fault-tolerant capabilities in order to build a robust approach. The IVCA moves away from fixed mission plans and very basic diagnostic schemes. It is able to handle unexpected faults at the vehicle, sensor and sensor-processing levels, whether caused by hardware failure or environmental changes. This paper provides details of techniques for onboard diagnosis and mitigation of faults. The operational context, the use cases, and experimental results from a particular scenario are presented, along with future research work.

Wednesday, April 20, 17:30 - 19:00

Intelligent Transportation Design Committee Meeting

Room: Poinciana AB

The meeting of the Intelligent Transportation Design (ITD) TC http://ieeesystemscouncil.org/content/intelligent-transportation-design-tecnical-committee of the IEEE Systems Council will be held at 5:30 pm on Wednesday, April 20, 2016. All SysCon 2016 conference attendees are invited to join for review and planning of the ITD Technical Committee activities in 2016-2017. The ITD TC aims to capture the essence of machine-enabled human and cargo transportation through all modes of travel. Issues associated with the safe implementation of electromechanical systems supporting autonomous and semi-autonomous transport will be identified and predicted, and mitigation strategies will be recommended. Particular attention will be paid to power processes, internal and external electronics, and the material integrity of electromechanical systems. The TC will also contribute to the global development of advanced technology in this area.

Thursday, April 21

Thursday, April 21, 07:00 - 11:50

Registration

Room: Registration Counter 1

Thursday, April 21, 08:00 - 09:40

4A1: Robotic Systems III

Room: Grand Cypress A
Chair: Ahmed Abdelhadi (Virginia Tech, USA)
08:00 Subsumption Model Implemented on ROS for Mobile Robots
Minglong Li (National University of Defense Technology, P.R. China); Xiaodong Yi (State Key Laboratory of High Performance Computing (HPCL), School of Computer, P.R. China); Yanzhen Wang and Zhongxuan Cai (National University of Defense Technology, P.R. China); Yongjun Zhang (National University of Defence Technology, P.R. China)
The agent-based subsumption model is widely used as a control architecture for mobile robots. In this model, incremental layers are stacked together by inhibitors and suppressors, which incrementally leads to complex, coordinated behaviors. ROS (Robot Operating System) is an open-source robot software platform that is gradually becoming the de facto standard for robot applications. ROS provides abundant reusable function units, which can be coupled through a distributed messaging mechanism. This paper describes a template based on ROS for implementing the subsumption model, such that one can develop control systems for mobile robots by leveraging ROS-provided software resources. The publish/subscribe messaging mechanism in ROS is used to connect the loosely coupled modules of each layer. The behaviors inside a module are formalized as easy-to-use ROS-based finite state machines. The inhibitors and suppressors among layers are represented as ROS nodes and implemented as templates, which can be easily instantiated. The work is demonstrated by two experiments. First, a three-layer autonomous wandering robot, originally designed by Brooks, is reproduced in the ROS simulation environment. Second, a six-layer security patrol robot application, controlled by the subsumption model, is constructed and tested in the real world.
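As a rough illustration of the kind of template the paper describes, the following sketch (not the authors' code) implements a subsumption-style suppressor as a ROS node using rospy publish/subscribe; the topic names and message type are assumptions.

```python
#!/usr/bin/env python
# Minimal suppressor sketch (illustrative assumptions: topics 'wander/cmd_vel',
# 'avoid/cmd_vel', 'cmd_vel' and geometry_msgs/Twist messages).
import rospy
from geometry_msgs.msg import Twist

class Suppressor(object):
    """Forwards lower-layer commands unless a higher layer has published recently."""
    def __init__(self, timeout=1.0):
        self.timeout = rospy.Duration(timeout)
        self.last_high = rospy.Time(0)
        self.pub = rospy.Publisher('cmd_vel', Twist, queue_size=10)
        rospy.Subscriber('wander/cmd_vel', Twist, self.low_cb)   # lower layer
        rospy.Subscriber('avoid/cmd_vel', Twist, self.high_cb)   # higher layer

    def high_cb(self, msg):
        self.last_high = rospy.Time.now()
        self.pub.publish(msg)            # higher layer suppresses the lower layer

    def low_cb(self, msg):
        if rospy.Time.now() - self.last_high > self.timeout:
            self.pub.publish(msg)        # pass through only when not suppressed

if __name__ == '__main__':
    rospy.init_node('suppressor')
    Suppressor()
    rospy.spin()
```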
08:20 Position Estimation of Robotic Mobile Nodes in Wireless Testbed using GENI
Ahmed Abdelhadi (Virginia Tech, USA); Felipe Rechia (Arizona State University, USA); Arvind Narayanan (University of Minnesota, USA); Thiago Teixeira (University of Massachusetts, USA); Ricardo Lent and Driss Benhaddou (University of Houston, USA); Hyunwoo Lee (Seoul National University, Korea); T. Charles Clancy (Virginia Tech, USA)
We present a low-complexity experimental RF-based indoor localization system based on the collection of WiFi RSSI measurements, which are processed with an RSS-based multilateration algorithm to determine a robotic mobile node's location. We use a real indoor wireless testbed called w-iLab.t that is deployed in Zwijnaarde, Ghent, Belgium. One of the unique attributes of this testbed is that it provides tools and interfaces through the Global Environment for Network Innovations (GENI) project to easily create reproducible wireless network experiments in a controlled environment. We provide a low-complexity algorithm to estimate the location of the mobile robots in the indoor environment. In addition, we compare some of our collected measurements and their corresponding location estimates with the actual robot locations.
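The core estimation step can be pictured with the following sketch (not the authors' implementation): RSSI values are converted to ranges with a log-distance path-loss model and the position is found by nonlinear least squares; the path-loss parameters and anchor positions are assumed values.

```python
# Minimal RSS multilateration sketch (illustrative only).
# Assumes a log-distance path-loss model: rssi = p0 - 10*n*log10(d).
import numpy as np
from scipy.optimize import least_squares

def rssi_to_distance(rssi, p0=-40.0, n=2.5):
    # p0: RSSI at 1 m, n: path-loss exponent (both assumed values)
    return 10.0 ** ((p0 - rssi) / (10.0 * n))

def multilaterate(anchors, rssi_values):
    # anchors: (k, 2) array of known access-point positions; rssi_values: k measurements
    dists = np.array([rssi_to_distance(r) for r in rssi_values])
    def residuals(p):
        return np.linalg.norm(anchors - p, axis=1) - dists
    guess = anchors.mean(axis=0)
    return least_squares(residuals, guess).x

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
print(multilaterate(anchors, [-55.0, -60.0, -62.0, -68.0]))
```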
08:40 Embedded System Design of a Real-time Parking Guidance System
Omkar Dokur, Noureddine Elmehraz and Srinivas Katkoori (University of South Florida, USA)
The primary objective of this work is to design a parking guidance system that reliably detects vehicles entering and exiting a parking garage in a cost-efficient manner. Existing solutions (inductive loops, RFID-based systems, and video image processors) at shopping malls, universities, airports, etc., are expensive due to high installation and maintenance costs. There is a need for a parking guidance system that is reliable, accurate, and cost-effective. The proposed parking guidance system is designed to optimize the use of parking spaces and to reduce wait times. Based on a literature review, we identify the ultrasonic sensor as suitable for detecting an entering/exiting vehicle. Initial experiments were performed to test the sensor using an Arduino-based embedded system. Detection logic was then developed to identify a car after analyzing the initial test results, and extended to trigger a camera that takes an image of the vehicle for validation purposes. The system consists of an Arduino, an ultrasonic sensor, and a temperature sensor. It was installed and tested in the Richard Beard Garage at the University of South Florida for five days. The test results of each trial are provided and the average error over all trials is calculated. The error cases arise from golf carts, cars straddling the entry/exit lanes, and people walking under the sensor. The average error of the system is 5.36% over five days (120 hrs). The estimated cost for one detector per lane is approximately $30.
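A minimal sketch of threshold-and-debounce detection logic over ultrasonic distance readings is given below; the threshold, sample count, and readings are assumptions for illustration (the deployed system runs on an Arduino).

```python
# Illustrative vehicle-detection logic over ultrasonic distance readings (cm).
# Threshold and debounce count are assumed, not the paper's calibrated values.
def detect_vehicles(distances_cm, vehicle_threshold_cm=150, min_samples=3):
    count = 0
    consecutive_below = 0
    occupied = False
    for d in distances_cm:
        if d < vehicle_threshold_cm:
            consecutive_below += 1
        else:
            consecutive_below = 0
            occupied = False
        if consecutive_below >= min_samples and not occupied:
            occupied = True   # a vehicle, not a brief occlusion, is under the sensor
            count += 1        # this is also where a validation camera could be triggered
    return count

readings = [240, 238, 120, 118, 115, 119, 235, 236, 90, 92, 95, 230]
print(detect_vehicles(readings))   # -> 2 detected vehicles
```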
09:00 Integrated Value Engineering - Increasing the value of a forklift subsystem
Sebastian Maisenbacher and Florian Behncke (Technische Universität München, Germany); Michael Roth (Technical University of Munich, Germany); Franz Fleckenstein and Reinhard Roos (Linde Material Handling GmbH, Germany)
National and international competition demands that companies sell products with maximum value for the customer, which is reflected in high functionality at low cost. Cost management approaches such as value engineering and target costing support practitioners in developing valuable products and reducing costs. The relatively new approach of integrated value engineering (IVE) uses matrices to combine target costing and value engineering in a structural model. The main objective of this work is to define milestones and deliverables for the IVE basic process for a better understanding of the process and for tracking its progress in application. The deliverables are explained based on an industrial use case, which shows the application of the IVE approach to optimize a forklift system to reach higher customer value at lower cost in its next release.

4A2: Modeling and Simulation V

Room: Grand Cypress B
Chair: Paul T Grogan (Stevens Institute of Technology, USA)
08:00 Research of Acceleration Algorithm in Power System Risk Assessment Based on Scattered Sampling and Heuristic Local Load Shedding
Yixin Zhuo, Chong Chen, Zhicheng Wang and Pengyi Liao (Huazhong University of Science and Technology, P.R. China); Xiangning Lin (Huazhong University of Science & Technology, P.R. China); Jingyou Xu (State Grid Hubei Electric Power Company, P.R. China)
Monte Carlo simulation is a practical and flexible method for power system risk assessment. However, when assessing a large system, Monte Carlo simulation needs more time to converge: high accuracy of the risk indices demands large-scale sampling calculations and considerable time. This paper proposes a new acceleration algorithm based on scattered sampling and heuristic local load shedding that maintains high calculation accuracy of the risk indices. The Roy Billinton Test System and the IEEE RTS-79 system are used to illustrate the methodology and present the results. Risk indices such as LOLP, EENS and EDNS are calculated strictly according to the proposed methodology, demonstrating its accuracy and efficiency.
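To make the risk indices concrete, the toy sketch below (not the paper's accelerated algorithm) estimates LOLP, EENS and EDNS by plain non-sequential Monte Carlo sampling of generator outages; the unit data and load level are invented.

```python
# Toy non-sequential Monte Carlo sketch for LOLP/EENS/EDNS (illustrative only).
import random

def sample_available_capacity(units):
    # units: list of (capacity_MW, forced_outage_rate)
    return sum(cap for cap, forr in units if random.random() > forr)

def risk_indices(units, load_mw, n_samples=100000, hours_per_year=8760):
    shortfalls = []
    for _ in range(n_samples):
        cap = sample_available_capacity(units)
        shortfalls.append(max(0.0, load_mw - cap))
    lolp = sum(1 for s in shortfalls if s > 0) / n_samples   # loss-of-load probability
    edns = sum(shortfalls) / n_samples                       # expected demand not supplied (MW)
    eens = edns * hours_per_year                             # expected energy not supplied (MWh/yr)
    return lolp, eens, edns

units = [(100, 0.05)] * 10 + [(50, 0.08)] * 6   # assumed toy generating system
print(risk_indices(units, load_mw=900))
```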
08:20 Reliability Block Diagram Extensions for Non-Parametric Probabilistic Analysis
Philip C Davis, Mitchell A Thornton and Theodore Manikas (Southern Methodist University, USA)
Multi-Valued Reliability Block Diagrams (MVRBD) are introduced as a generalization of the classical reliability block diagrams (RBD) commonly used in system analysis. MVRBD offer the advantage of allowing systems and subsystem components to be modeled with arbitrary hazard failure rate relationships. MVRBD are based upon a multiple-valued discrete switching algebra that is functionally complete with constants, thereby allowing a corresponding model to be formulated for any system that can be modeled in reliability block diagram form. The utility of this new model is that system failure and reliability analysis can be performed without restricting component hazard rate relationships to the binary case of either "failure" or "fully operational." The incorporation of any desired number of "degraded" states into the MVRBD model allows non-parametric probability mass functions (pmf) to be used. Any number of intermediate "degraded" system states may be incorporated into the MVRBD model without a significant increase in the complexity of the analysis methodologies.
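One simple way to picture multi-valued combination is shown below: a sketch (our own reading, not the authors' algebra) that combines non-parametric component pmfs over three states for a series arrangement, taking the system state as the minimum of the component states.

```python
# Illustrative multi-state series combination: system state = min of component states.
# States: 0 = failed, 1 = degraded, 2 = fully operational (an assumed 3-valued model).
from itertools import product

def series_pmf(component_pmfs):
    n_states = len(component_pmfs[0])
    system = [0.0] * n_states
    for states in product(range(n_states), repeat=len(component_pmfs)):
        prob = 1.0
        for pmf, s in zip(component_pmfs, states):
            prob *= pmf[s]
        system[min(states)] += prob   # series: the weakest component limits the system
    return system

pmf_a = [0.05, 0.15, 0.80]   # non-parametric pmf over {failed, degraded, operational}
pmf_b = [0.02, 0.10, 0.88]
print(series_pmf([pmf_a, pmf_b]))
```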
08:40 Bounding the Value of Collaboration in Federated Systems
Paul T Grogan (Stevens Institute of Technology, USA); Koki Ho (University of Illinois at Urbana-Champaign, USA); Alessandro Golkar (Skolkovo Institute for Science and Technology, Russia); Olivier de Weck (Massachusetts Institute of Technology, USA)
Design methods for federated systems must consider local incentives and interactive effects among independent decision-makers. This paper extends value-centric design methodology (VCDM) to multi-actor cases using game theoretic principles. Federated systems can be represented as a Stag Hunt game where players choose between risk-dominant (non-cooperative) and payoff-dominant (cooperative) strategies. An independent strategy is a lower bound to federated value and a centralized strategy controlled by a federation authority is an upper bound under special cases. An application case considers a stylized system value model (SVM) of a federated satellite system (FSS) with two players and a tradespace of 530 symmetric design decisions. A federated concept with opportunistic, fixed-cost communication services demonstrates the effect of lower and upper bounds on system value. Risk in federated systems arises from misaligned strategies between players and can be quantitatively assessed with a subjective estimate of cooperation.
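The Stag Hunt structure can be illustrated with a toy payoff table and a Nash-equilibrium check, as in the sketch below; the payoff numbers are assumptions and not the paper's federated satellite system value model.

```python
# Toy symmetric Stag Hunt sketch (payoffs are assumed for illustration).
# Strategies: 0 = independent (risk-dominant), 1 = cooperate/federate (payoff-dominant).
import itertools

payoff = {  # (row strategy, column strategy) -> (row value, column value)
    (0, 0): (3, 3),
    (0, 1): (3, 1),
    (1, 0): (1, 3),
    (1, 1): (5, 5),
}

def is_nash(profile):
    for player in (0, 1):
        for alt in (0, 1):
            deviated = list(profile)
            deviated[player] = alt
            if payoff[tuple(deviated)][player] > payoff[profile][player]:
                return False
    return True

for profile in itertools.product((0, 1), repeat=2):
    print(profile, payoff[profile], "Nash" if is_nash(profile) else "")
# (0, 0) corresponds to the independent lower bound; (1, 1) to the cooperative upper bound.
```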

4A3: Cyber Security Issues II

Room: Grand Cypress C
Chairs: Antoine Lemay (École Polytechnique de Montréal, Canada), Jacob Torrey (Assured Information Security, Inc, USA)
08:00 Modeling of Data Security in Cloud Computing
Zoltán Balogh (Constantine the Philosopher University in Nitra & Faculty of Natural Sciences, Slovakia)
The paper describes the modeling of data security in cloud computing. Data security can be defined as maintaining the confidentiality and integrity of data processed by an organization. The paper discusses data security across all layers of cloud computing. Standard cloud storage uses a three-level data security model, which can be extended by a fourth level responsible for data integrity checking. The proposed model of data security in cloud computing was designed as an expansion of the present standard data storage model in cloud computing. The paper presents the design of this four-level data security model and uses Petri nets to describe each part of data security in cloud computing.
08:20 A Novel Block Cipher Design Paradigm for Secured Communication
Robert Sparrow, A Adekunle, Robert Berry and Richard Farnish (University of Greenwich, United Kingdom)
Unmanned aerial vehicles (UAVs) are commonly used to conduct tasks (e.g. monitoring and surveillance) in various civilian applications from a remote location. Wireless communication (i.e. radio frequency) is often used to remotely pilot the UAV and stream data back to the operator. The characteristics of the wireless communication channel allow attackers to monitor and manipulate the operation of the UAV through passive and active attacks. Cryptography is selected as a countermeasure to mitigate these threats; however, a drawback of using cryptography is the impact on the real-time operation and performance of the UAV. This paper proposes the Permutation Substitution Network (PSN) design paradigm, presents an instance of it named the Alternative Advanced Encryption Standard (AAES), and analyses its performance against the standardised Substitution Permutation Network (SPN) design paradigm represented by the Advanced Encryption Standard (AES). Results indicate that the PSN paradigm is a feasible approach in comparison to the SPN design paradigm.
08:40 Assessing Prescriptive Improvements to a System's Cyber Security and Resilience
Scott Musman (MITRE, USA)
In the process of creating new operational capabilities and improving the efficiency of existing operational processes, we have, as a society, become dependent on information and communications technology (ICT). ICT is integral to almost every aspect of our daily activities. The detrimental impact of these ICT dependencies is that business operations become susceptible to impacts from cyber incidents. Criminals can steal and extort money or information, terrorists can disrupt society or cause loss of life, and the effectiveness of a military can be degraded, all as a result of an ability to create incident effects in cyberspace. Protecting ICT from cyber incident effects, or reducing their impacts on operational activities, has become a problem of national importance. Concomitantly, there is an escalating imperative to identify and minimize operational cyber risk. In this paper we describe a program that allows us to play a cybersecurity game with the objective of minimizing a system's cyber risk, given a mission context (i.e. a use case or mission thread). In essence, we have formulated an automated cyber-security game that plays out the actions of a cyber red team and allows a defender to assess methods to minimize the identified cyber risks. We call our software CSG, the Cyber Security Game. We describe the specifics of the game we have formulated: how one describes a mission system, how the ability of the system to achieve mission outcomes (i.e., the cyber defender's objective function) is computed, and how to represent the security and resilience methods that the defender may want to apply to the system to reduce risk. Once these elements are defined, CSG can be run, making it possible to compute cyber risk and to prescriptively identify the optimal set of security tools and resilience techniques that reduce the mission system's cyber risk for any given defender cost. We show that this approach is theoretically sound and practically useful, and demonstrate it by analyzing a reference architecture for healthcare ICT.

Thursday, April 21, 09:40 - 10:10

Coffee Break

Room: Grand Cypress D

Thursday, April 21, 10:10 - 11:50

4B1: Model-Based Systems Engineering V

Room: Grand Cypress A
Chair: Yaniv Mordecai (Technion - Israel Institute of Technology, Israel)
10:10 Requirement Analysis of Inspection Equipment for Integrative Mechatronic Product and Production System Development: Model-Based Systems Engineering Approach
Meinolf Lukei (Fraunhofer IEM & Mechatronic Systems Design, Germany); Bassem Hassan (Fraunhofer Institute for Production Technology, Germany); Roman Dumitrescu (Fraunhofer Institute for Production Technology IPT, Germany); Thorsten Sigges and Viktor Derksen (Karl E. Brinkmann GmbH, Germany)
Quality control is an essential part of the production of mechatronic systems. In particular, quality inspection of the overall system at the end of production is of extraordinary importance. The inspection equipment used for end-of-production inspection is itself a mechatronic system, and it is often designed, developed and manufactured explicitly for one product at a large expenditure of time. In order to have the needed inspection equipment ready by the start of production (SoP), and to ensure that the product requirements resulting from the inspection equipment concept are transferred into the product development, an integrative mechatronic product and inspection equipment development procedure is necessary. Nowadays the development of mechatronic systems is carried out with the help of model-based systems engineering (MBSE) methods. The first necessary step in developing inspection equipment is the requirements analysis. Therefore, this paper describes an approach for determining inspection equipment requirements in early development phases based on MBSE product models and further methods.
10:30 Model-Based Operational-Functional Unified Specification for Mission Systems
Yaniv Mordecai (Technion - Israel Institute of Technology, Israel); Dov Dori (Technion, Israel Institute of Technology, Israel)
Joint architecting, design, and simulation of operational-functional requirements and scenarios on the one hand and system functionality and technological requirements on the other hand is a common challenge in systems engineering. We propose a model-based systems engineering approach that addresses this challenge. The confusion between operational and functional behavior stems from systems engineers' inherent tendency to adopt a system-centric perspective and employ modeling techniques that are inadequate for the task at hand. Functional models provide a biased perspective on the overarching operational context, while business-process-oriented models provide partial coverage of the system's architecture—its structure-behavior combination. The problem intensifies when the operational scenario is indefinite and dependent on responsive system behavior and when the operational scenario consists of the utilization of functionality in multiple systems. Operational mission management in various domains utilizes planning, execution, and control functionality in several systems, and copes with the operational-functional distinction.
10:50 Towards a Seamless Requirements Management in System Design Using a Higraph-Based Model
Hycham Aboutaleb and Bruno Monsuez (ENSTA ParisTech, France)
The application of efficient requirement management processes in industrial environments faces many challenges. Most existing tools for requirement statement, editing and traceability are spreadsheet-like and text-based. Text-based tools are sequential and often suffer from being either ambiguous or hard to process, while graphical tools are multidimensional, with diagrams containing implicit information that can easily be inferred by the user. Moreover, requirement management activities are often almost disconnected from the other design and development activities. This makes it impossible to keep consistency between requirements themselves and between requirements and development steps such as functional architecture, physical architecture, and simulation. To address these issues, this paper presents a solution for efficient requirement modeling during the system design cycle using a higraph-based formalism. It explores several axes: system representation using metamodels and working/filter views. Through these axes, we focus on requirements from several perspectives: graphical representations, semantic representations, and the supporting formalism to keep efficient traceability throughout the development cycle.
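The traceability idea can be pictured with a small graph sketch such as the one below (a plain directed graph, not the paper's higraph formalism; the node names are invented).

```python
# Toy requirement-to-design traceability graph (illustrative node names only).
import networkx as nx

trace = nx.DiGraph()
trace.add_edge("REQ-001: max latency 50 ms", "FUNC: schedule commands")
trace.add_edge("FUNC: schedule commands", "COMP: onboard scheduler")
trace.add_edge("COMP: onboard scheduler", "SIM: latency test case")

# Impact of changing REQ-001: every downstream design/simulation artifact it reaches.
print(nx.descendants(trace, "REQ-001: max latency 50 ms"))

# Orphan check: design elements with no upstream requirement.
orphans = [n for n in trace.nodes if trace.in_degree(n) == 0 and not n.startswith("REQ")]
print(orphans)
```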

4B2: Modeling and Simulation VI

Room: Grand Cypress B
Chair: Kanungo Barada Mohanty (NIT Rourkela, India)
10:10 A Systematic Approach to Mission and Scenario Planning for UAVs
Niloofar Shadab (University of Maryland, USA)
As unmanned aerial vehicles (UAVs) are widely utilized in military and civil applications, concerns about mission safety and how to integrate the different parts of a mission design are growing significantly. One of the most important barriers to a cost-effective and timely safety certification process for UAVs is the lack of a systematic approach for bridging the gap between understanding high-level commander/pilot intent and implementing that intent through low-level UAV behaviors. In this paper we demonstrate an entire systems design process for a representative UAV mission, beginning from an operational concept and requirements and ending with a simulation framework for segments of the mission design, such as route (path) planning and decision making for intruder avoidance.
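For the route (path) planning segment, a grid-based A* search is one common building block; the sketch below is a generic illustration under assumed grid and cost models, not the paper's simulation framework.

```python
# Minimal grid route-planning sketch (A*) with a Manhattan-distance heuristic.
# The grid, obstacles, and unit step cost are assumptions for illustration.
import heapq

def astar(grid, start, goal):
    # grid: 2-D list, 0 = free cell, 1 = obstacle; start/goal: (row, col)
    rows, cols = len(grid), len(grid[0])
    open_set = [(0, start)]
    g = {start: 0}
    parent = {}
    while open_set:
        _, node = heapq.heappop(open_set)
        if node == goal:
            path = [node]
            while node in parent:
                node = parent[node]
                path.append(node)
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                new_g = g[node] + 1
                if new_g < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = new_g
                    parent[(nr, nc)] = node
                    h = abs(nr - goal[0]) + abs(nc - goal[1])   # admissible heuristic
                    heapq.heappush(open_set, (new_g + h, (nr, nc)))
    return None   # no feasible route

grid = [[0, 0, 0, 0], [1, 1, 0, 1], [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
```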
10:30 An Online Delay Efficient Packet Scheduler for M2M Traffic in Industrial Automation
Akshay Kumar (Virginia Polytechnic Institute and State University, USA); Ahmed Abdelhadi and T. Charles Clancy (Virginia Tech, USA)
Some Machine-to-Machine (M2M) communication links, particularly those in an industrial automation plant, have stringent latency requirements. In this paper, we study the delay performance of the M2M uplink from the sensors to a Programmable Logic Controller (PLC) in an industrial automation scenario. The uplink traffic can be broadly classified as either Periodic Update (PU) or Event Driven (ED). The PU arrivals from different sensors are periodic, synchronized by the PLC, and need to be processed by a prespecified firm latency deadline. On the other hand, the ED arrivals are random and have a low arrival rate, but may need to be processed quickly depending upon the criticality of the application. To accommodate these contrasting Quality-of-Service (QoS) requirements, we model the utility of PU and ED packets using step and sigmoidal functions of latency, respectively. Our goal is to maximize the overall system utility while being proportionally fair to both PU and ED data. To this end, we propose a novel online QoS-aware packet scheduler that gives priority to ED data as long as the latency deadline for PU data is still met. However, as the size of the network increases, we drop the PU packets that fail to meet the latency deadline, which reduces congestion and improves overall system utility. Using extensive simulations, we compare the performance of our scheme with scheduling policies such as First-Come-First-Serve (FCFS), Earliest-Due-Date (EDD) and (preemptive) priority, and show that our scheme outperforms the existing schemes for various simulation scenarios.
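The two utility shapes and the priority rule can be sketched as follows; the parameter values and the slack-based rule are assumptions for illustration, not the paper's scheduler.

```python
# Illustrative latency-utility functions and a simple priority rule (assumed parameters).
import math

def pu_utility(latency_ms, deadline_ms=10.0):
    # Step utility: full value if the firm deadline is met, zero otherwise.
    return 1.0 if latency_ms <= deadline_ms else 0.0

def ed_utility(latency_ms, a=0.8, b=8.0):
    # Sigmoidal utility: value decays smoothly with latency around the inflection point b.
    return 1.0 / (1.0 + math.exp(a * (latency_ms - b)))

def pick_next(ed_queue, pu_queue, pu_slack_ms):
    # Serve ED first as long as PU packets still have slack to meet their deadline.
    if ed_queue and pu_slack_ms > 0:
        return ed_queue.pop(0)
    if pu_queue:
        return pu_queue.pop(0)
    return ed_queue.pop(0) if ed_queue else None

ed, pu = ["ed1"], ["pu1", "pu2"]
print(pick_next(ed, pu, pu_slack_ms=4.0))   # -> 'ed1' (PU data still has slack)
```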
10:50 Survey of Automated Software Deployment for Computational and Engineering Research
John J Prevost (University of Texas at San Antonio, USA); James O Benson (The University of Texas at San Antonio, USA); Paul Rad (Rackspace, USA)
Software deployment is essential in today's modern cloud systems. With advances in cloud technology, on-demand cloud services offered by public providers are becoming increasingly powerful, anchoring the ecosystem of cloud services. Cloud infrastructure services are appealing in part because they enable customers to acquire and release infrastructure resources on demand for applications in response to load surges. This paper addresses the challenge of building an effective multi-cloud application deployment controller as a customer add-on outside of the cloud utility service itself. Such external controllers must function within the constraints of the cloud providers' APIs. We describe the different steps necessary to deploy applications using such an external controller and define a taxonomy and model for this purpose. We then use the taxonomy to survey several existing management tools, such as Chef, SaltStack, and Ansible, as candidates for external controllers for application automation on cloud computing services. We use the taxonomy and survey results not only to identify similarities and differences among the architectural approaches, but also to identify areas requiring further research.

4B3: Space and Communication Systems IV

Room: Grand Cypress C
Chair: Henry Yeh (California State University Long Beach, USA)
10:10 4x1 Space-Time MIMO-OFDM Parallel Cancellation Schemes for Mitigating ICI
Hen-Geul Yeh (California State University Long Beach, USA); Samet Yildiz (California State University Long Beach, USA); Ran Ren (California State University Long Beach, USA)
Based on the orthogonality of space-time (ST) codes, this paper concentrates on ST code transmission combined with an inter-carrier interference (ICI) parallel cancellation (PC) scheme to form STPC systems. With known channel state information (CSI) and orthogonal frequency division multiplexing (OFDM), we further develop this STPC-OFDM system with code rates 1 and 0.5 without increasing the power, bandwidth, or computational load for the OFDM access (OFDMA) downlink from the base station (BS) to mobile unit (MU) terminals. Simulation results show that the 4 × 1 STPC-OFDM system with code rate 1 provides better bit error rate (BER) performance than the conventional 4 × 1 ST block coded OFDM system with code rate 0.5 in COST 207 slow (typical urban) and fast (bad urban) frequency-selective fading channels. Moreover, STPC-OFDM can be employed as the fundamental building block for ST-MIMO-OFDM systems.
10:30 Impact of Interleaving Length on Satellite Communication Performance in Helicopter Environment
Miriam Ugarte Querejeta and Pavan Bhave (Inmarsat Global Ltd, United Kingdom); Panagiotis Fines (Wireless Intelligent Systems Ltd., United Kingdom)
Satellite communications are used to deliver data and voice services to various markets including, but not limited to, aeronautical, maritime, land vehicular and land portable. The applications of satellites are innumerable; however, one of the most important applications is aiding crisis management where there is no connectivity provided by a standard terrestrial network. In areas where there is no terrestrial coverage, satellites are the best means to deliver data and voice services, and geostationary satellites are of particular interest due to the ease of locating and pointing to them. One of the most challenging use cases is the delivery of voice and data to helicopters, which may be used heavily in coordination activities during a crisis. The particular challenge is to maintain data communications when the antenna is located below the rotor blades. Very little work has been done to study propagation and the effect of blockages on a helicopter channel model. The satellite helicopter propagation model consists of blockages from the helicopter rotor blades, reflections from the rotor blades and fuselage, and multipath reflections, typically from water bodies or snow-clad surfaces. The paper analyses the impact of the helicopter channel model on the physical layer performance of the forward link and explores designs in which the effect of blockage due to the rotor blades can be mitigated. A simulation tool was developed to model the satellite communications link and provide an overall system performance analysis. A generic physical layer waveform was considered for the forward channel (to mobile). The forward channel follows a Time Division Multiple Access (TDMA) scheme and consists of frames with a fixed time duration. The frames are further divided into sub-frames, which form a logical information block; there may be eight or more sub-frames in a frame. Each sub-frame carries a forward error correction (FEC) code and is termed a FEC block in this paper. The analysis was then repeated with a real-time implementation of the user terminal, using a Physical Layer Tester (PLT) to support realistic physical layer testing of different channel types. This paper studies techniques such as FEC block repetition and FEC interleaving, which are very effective against blockages. A crude approach to counter blockage is to simply repeat the information and use Maximum Ratio Combining to recover the lost information; however, the overall spectral efficiency of such a method is low. A better approach is to use an optimum interleaving length to counter blockage. The longer the interleaving length, the better the performance; however, a longer interleaver also adds to the system latency. It is therefore necessary to adapt the interleaving length based on feedback from the user terminal and the nature of the blockage anticipated. The paper proposes a novel approach of implementing adaptive interleavers based on the user terminal's environmental characteristics. The proposal can also improve performance in other operational scenarios which suffer from frequent deep periodic blockages of the line-of-sight (LOS) signal, such as vehicular (trees, rural, urban, etc.) and trains (power lines, bridges, etc.). The simulation results show a significant improvement in physical layer performance with the proposed design.
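The basic mechanism of FEC interleaving against periodic blade blockage can be illustrated with a simple row-column block interleaver sketch (the dimensions are assumed; this is not the waveform studied in the paper).

```python
# Simple row-column block interleaver sketch (illustrative dimensions).
def interleave(symbols, rows, cols):
    assert len(symbols) == rows * cols
    # Write row by row, read column by column.
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows, cols):
    assert len(symbols) == rows * cols
    return [symbols[c * rows + r] for r in range(rows) for c in range(cols)]

data = list(range(12))
tx = interleave(data, rows=3, cols=4)
assert deinterleave(tx, rows=3, cols=4) == data
print(tx)   # a burst erasure of consecutive tx symbols now hits scattered data positions
```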
10:50 Multiple Orthogonal Beams with Multi-User and Frequency Reuse Via Active Scattering Devices
Victoria Campana (California State University, Long Beach, USA); Hen-Geul Yeh (California State University Long Beach, USA); Donald Chang (Spatial Digital Systems, USA); Joe Lee (SDS, Inc., USA)
This paper explores a multi-user wireless deterministic channel in the presence of active scattering devices, or active scatterers. Optimization of weighting vectors is employed on the transmitter side to form multiple orthogonal beams that feature frequency reuse at the same time. We focus on the possibility of forming two orthogonal beams. With the use of MATLAB and CVX, the solvability of the optimal weighting vectors can be determined. Each orthogonal beam is dedicated to one specific receive antenna element, with at least 20 dBm suppression of the beam pattern at the other receive antenna elements; the peak of one beam is aligned with nulls (i.e. -20 dBm) of the other beams. We simulate the system with QPSK modulation and determine the performance of the proposed system transmitting signals via multiple orthogonal beams.
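A rough Python/cvxpy analogue of the MATLAB/CVX weight optimization is sketched below: transmit power is minimized subject to unit response at the intended receive element and small magnitudes at the others. The channel vectors, array size, and suppression level are placeholders, not a modeled scattering environment.

```python
# Illustrative convex beam-weight design (cvxpy stand-in for a CVX formulation).
# Channel vectors are random placeholders, not a measured scattering channel.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n_tx = 8
h_desired = rng.standard_normal(n_tx) + 1j * rng.standard_normal(n_tx)   # intended element
h_others = rng.standard_normal((2, n_tx)) + 1j * rng.standard_normal((2, n_tx))

w = cp.Variable(n_tx, complex=True)
constraints = [h_desired.conj() @ w == 1]                          # unit response at the target
constraints += [cp.abs(h.conj() @ w) <= 0.01 for h in h_others]    # deep suppression elsewhere
prob = cp.Problem(cp.Minimize(cp.norm(w)), constraints)
prob.solve()
print(np.round(w.value, 3))
```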