For full instructions, visit the IEEE ISSE website: http://ieeeisse.org
Program for 2016 IEEE International Symposium on Systems Engineering (ISSE)
Monday, October 3
Monday, October 3 7:00 - 17:00
Monday, October 3 9:00 - 12:00
Tutorial: Electric Flight, a Challenge for Systems Engineering
There is no substitute for rigorous engineering when designing advanced systems. Advanced systems generally deliver new capabilities through the application of new or evolving technologies that carry inherent, undocumented risks. Engineers must understand where the acceptable risks lie and how to methodically build redundancy into advanced systems.
The tutorial provides engineers with a perspective on how to solve technically challenging problems. A step-by-step process is discussed for enabling a specified outcome for this specific dynamic electromechanical problem. Fielding electric aircraft that can be immediately competitive with existing flight systems will not be trivial, but it is possible. The best engineers today are those who are attentive not only to research and development but also to the life-cycle issues associated with particular technologies.
The tutorial suggests design criteria derived from the stated goal. From the design concept, research steps are established and measurable objectives are defined that reflect a target unit price. Objectives considered include scalable flight, thrust controls, energy management, adaptable flight surfaces, pre-/post-/in-flight operations, and other mechanical considerations. Once metrics are in place for the objectives, the required resources are reconsidered in order to optimize the advanced system's capabilities. The tutorial will elucidate these processes. It will show that once the resources have been applied and scientific study has verified the advanced capabilities, the accomplished research objectives can be integrated and folded into the prototype system. Thorough testing of the prototype system leads not only to documented evidence of its airworthiness but also to its manufacturability. The tutorial demonstrates how analyses of alternatives define uncertainties that are in turn used to establish decision space throughout the research and development, fielding, logistics, and disposal processes. Tutorial participants will have an opportunity to exercise the systems engineering construct and expand upon the objectives, metrics, analysis of alternatives, and associated program impacts.
Experienced engineers have intuitive perspectives that are readily brought to bear on advanced systems problems. While necessary and appropriate during exploratory analysis, intuition is not enough to make engineering decisions. The tutorial promotes the engineering rigor required to field safe and reliable flight systems.
Monday, October 3 10:00 - 10:15
Monday, October 3 12:00 - 13:00
Monday, October 3 13:00 - 15:00
Tutorial: Modeling and Simulation for System Reliability Analysis: The RAMSAS Method
Reliability analysis of modern large-scale systems is a challenging task which can benefit from the joint exploitation of recent model-based approaches and simulation techniques to flexibly evaluate system reliability performance and compare different design choices. In this context, the tutorial presents RAMSAS, a model-based method that supports the reliability analysis of systems through simulation by combining the benefits of popular OMG modeling languages (SysML/UML) with widely adopted simulation and analysis environments (Simulink/Modelica). RAMSAS can be easily plugged into various phases of a typical system development process, from design to testing, so as to complement other well-known and widely adopted techniques for system reliability analysis (e.g., FMECA, FTA, RBD) by providing additional analysis capabilities. The present version of RAMSAS is the result of intensive experimentation in several application domains (aerospace, automotive, railway), which has improved the effectiveness of the method, especially in the modeling of both intended and dysfunctional system behavior. During the tutorial, a case study concerning the reliability analysis of an Attitude Determination and Control System (ADCS) of a satellite will be presented. The seminar will conclude with a discussion of the specific aspects of reliability analysis for Systems of Systems (SoS) and how RAMSAS can be further extended to support it effectively.
Monday, October 3 15:00 - 15:15
Monday, October 3 15:15 - 17:15
Model-Based Systems Engineering with Object-Process Methodology: ISO 19450
Model-Based Systems Engineering (MBSE) provides a framework for effective and consistent systems engineering teamwork. MBSE relies on modeling languages, such as Object-Process Methodology (OPM). The major advantage of MBSE with a formal language such as OPM is the integrated view of the system, allowing for structured and informed management, reasoning, decision-making, and identification of risks and opportunities. Object-Process Methodology, OPM, is a holistic MBSE paradigm and language for complex systems and processes, standardized as ISO 19450. OPM covers the structural, procedural, and functional aspects of a system in a unified manner, using only one diagram kind - the Object-Process Diagram (OPD). Complexity is managed via hierarchical organization, refinement, and abstraction of the OPDs in the model. OPM is founded on a minimal universal ontology of stateful objects, processes, and links - all the elements needed to describe any system in the universe, natural or artificial. OPM is bimodal: it has a formal textual representation alongside the graphical representation. Each graphical construct is specified by a formal sentence in natural language - Object-Process Language, OPL - a subset of English. OPM has a free open-source CASE tool, OPCAT, which covers almost all of the OPM notation and is presently being evolved into a cloud-based modeling studio called OPCloud. In this tutorial, participants will learn the basics of OPM and the conceptual modeling paradigm. We will also discuss the OPM-based modeling and design process, which includes a gradual transition from the problem-domain model and the capture of stakeholder requirements to the conceptual solution design model and the elaboration of functional requirements. We will demonstrate the methodology on a real-life case involving complex system modeling.
Tuesday, October 4
Tuesday, October 4 7:00 - 17:30
Tuesday, October 4 8:15 - 8:30
Tuesday, October 4 8:30 - 9:30
Keynote Speaker: Professor Saeid Nahavandi
Saeid Nahavandi received his BSc (Hons), MSc and PhD in Control Engineering from Durham University, UK. Saeid is an Alfred Deakin Professor, Pro Vice-Chancellor (Defence Technology) and the Director of the Institute for Intelligent Systems Research and Innovation at Deakin University in Australia. Professor Nahavandi is a Fellow of the IET and IEAust and a Senior Member of the IEEE, has published over 550 refereed papers, and has been awarded several competitive Australian Research Council (ARC) grants over the past 18 years. His talk will focus on the modelling, simulation and analysis of airport operations, providing greater understanding of airport security.
Tuesday, October 4 9:30 - 10:00
Tuesday, October 4 10:00 - 12:00
2B1: Special session on Theoretical Foundations of Systems Engineering - THEFOSE
- 10:00 Ontology Reconciliation for System Engineering
- In related work, the author developed an approach to support model-based system engineering (MBSE) which ensures conformance to a standard metamodel such as UML, SysML, or NAF, with dedicated project ontologies, in order to ease the understanding and creation of models. The proposed mixed approach brings flexibility and opens perspectives to support MBSE via predefined templates. However, the use of specific ontologies, handled by distinct stakeholders, gives rise to new consistency concerns. The present paper extends this approach to support the continuous alignment of those ontologies against reconciliation constraints defined between them. Such constraints deal with semantic relationships that allow understanding of how different perspectives overlap. The extension is presented from a user perspective and then theoretically formalised, pursuing a strong foundation that drives understanding of its limits and constraints in concrete applications.
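The reconciliation idea in the abstract can be illustrated with a toy consistency check, under a deliberately simplified representation we assume here (an ontology as a set of instance-concept assertions, a reconciliation constraint as a concept mapping); the names and mapping below are invented, not from the paper:

```python
def misaligned(onto_a, onto_b, concept_map):
    """Return instances violating the constraint that any instance
    classified under concept c in ontology A must also be classified
    under concept_map[c] in ontology B."""
    violations = []
    for inst, concept in onto_a:
        expected = concept_map.get(concept)
        if expected and (inst, expected) not in onto_b:
            violations.append((inst, concept, expected))
    return violations

# Two hypothetical stakeholder ontologies over shared instances
onto_a = {("engine-3", "Component"), ("uav-1", "System")}
onto_b = {("engine-3", "Part")}  # "uav-1" has no counterpart yet
cmap = {"Component": "Part", "System": "ConstituentSystem"}

print(misaligned(onto_a, onto_b, cmap))
```

Running the check reports `uav-1` as the single misaligned instance, which is the kind of continuous-alignment signal the paper's constraints are meant to provide.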
- 10:25 Requirement Hierarchy in the Responsive and Formal Design Process
- Requirements, stakeholders' visions, drive any systems engineering process. In design process practice, however, a gap is often present between stakeholders' visions and their representation as requirements. This limits the influence stakeholders can exert on the process, which may then end up with errors or deliver an unwanted product. We propose a design methodology called the Responsive and Formal Design (RFD) process that directly involves stakeholder input at each level of requirement elicitation, shows its effect on total system performance, integrates high-level requirements with domain-specific considerations, and verifies them formally. It consists of a set of levels of representation. Each level represents a set of requirements with its associated models and simulations, and the relationships between them. The levels of representation are related by refinement and abstraction relations. Refinement helps clarify the connection with parametric considerations. As the levels of RFD proceed towards refinement, the design process becomes a local, discipline-specific activity, though always with a global perspective. Following our framework, this paper presents the implementation of the refinement process. Each level of the RFD process has its own level of granularity. This is true for the model, the logical representation, and the simulation of the requirements. In this paper, we define the pair of functions, refinement and abstraction, that exist between two models (system and component models in table form, called classifications) and their logical representations (called theories) with different levels of granularity. We also show how high-level requirements are interpreted at the refined level. We use an example of data from three small satellites, whose goal is to image the auroral ovals around Earth's magnetic poles, to demonstrate our development.
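The refinement/abstraction pairing described above can be given a minimal sketch: interpret a high-level requirement at the refined level by abstracting each refined (component-level) state and checking the requirement there. The abstraction map, the requirement, and the power figures below are invented for illustration and are not the paper's classifications-and-theories formalism:

```python
def holds_refined(requirement, abstraction, refined_states):
    # A high-level requirement is interpreted at the refined level by
    # checking it on the abstraction of every refined state.
    return all(requirement(abstraction(s)) for s in refined_states)

# Abstract (system-level) requirement: imaging power budget stays under 10 W
req = lambda sys_state: sys_state["power_w"] <= 10.0

# Abstraction: collapse component draws into one system-level figure
abstraction = lambda comp: {"power_w": sum(comp.values())}

# Hypothetical refined trace of component-level power draws
refined = [
    {"camera_w": 4.0, "adcs_w": 3.0, "radio_w": 2.5},
    {"camera_w": 4.5, "adcs_w": 3.0, "radio_w": 2.0},
]
print(holds_refined(req, abstraction, refined))  # True for this trace
```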
- 10:50 Pixel matrices: An elementary technique for solving nonlinear systems
- A new technique for approximating the entire solution set, in any bounding box, for a nonlinear system of relations is presented. Such relations include equations and inequalities between functions that are algebraic, smooth, or merely continuous. The technique is to first plot each function as a pixel matrix---such as the one shown in black and white on a computer monitor---consisting of boolean values that indicate whether the function passes through a particular pixel or not. The method does not require that the relations be described by equations, however; one may input already-drawn graphs, including those collected from sensors, as pixel matrices. Thus one is freed from having to choose a model and fit the data to it. Once each relation in the system has been plotted as a pixel matrix, the various relations are stitched together based on how variables are shared between them. The stitching operations take on the form of basic matrix operations, such as matrix multiplication, Kronecker (tensor) product, and partial trace operations. Since the entries of the matrix are booleans, these operations consist of conjunctions and disjunctions (ANDs and ORs) that organize the search for solutions. After performing these matrix operations, the result is again a pixel matrix, which graphs the approximated simultaneous solution set for the system. One can replace the booleans in the original pixel matrices by natural numbers or densities (nonnegative real numbers), indicating the expected number of solutions in a given pixel for a given relation; in fact any semiring will do. The matrix operations for stitching the relations together will return values in the same semiring---e.g., natural numbers or densities---indicating the number of solutions in each pixel for the entire system. This research is in a very early stage of development, and many questions including the algorithmic complexity and the accuracy of this approach remain to be answered.
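The technique lends itself to a short sketch. Assuming a uniform grid and a pixel-sized tolerance (both our choices, not prescribed by the abstract), plotting each relation as a boolean matrix and stitching by conjunction might look like this for the system y = x^2, x^2 + y^2 = 1, whose two relations share both variables:

```python
import numpy as np

def pixel_matrix(rel, xs, ys, tol):
    # Boolean matrix: True where |rel(x, y)| falls below a pixel-sized tolerance,
    # i.e. where the relation's zero set passes through the pixel.
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    return np.abs(rel(X, Y)) < tol

n = 200
xs = np.linspace(-1.5, 1.5, n)
ys = np.linspace(-1.5, 1.5, n)
tol = 2 * (xs[1] - xs[0])  # tolerance of about two pixels

parabola = pixel_matrix(lambda x, y: y - x**2, xs, ys, tol)
circle = pixel_matrix(lambda x, y: x**2 + y**2 - 1, xs, ys, tol)

# Conjunction (elementwise AND) stitches the two relations; the surviving
# pixels approximate the simultaneous solution set in the bounding box.
system = parabola & circle
solutions = [(xs[i], ys[j]) for i, j in np.argwhere(system)]
```

For this system the surviving pixels cluster around the two true intersections at roughly (±0.786, 0.618). The abstract's semiring generalization would replace the boolean AND with semiring multiplication over counts or densities.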
2B2: Sensors and Systems I
- 10:00 Testing Wi-Fi and Bluetooth Low Energy technologies for a hybrid indoor positioning system
- Indoor positioning systems are becoming a required sub-system in many ambient assisted living scenarios. IoT (Internet of Things) device interaction would also greatly benefit from the enriched context of localization. However, at this moment there is no satisfactory technology or approach for precise indoor positioning. In our paper we test the accuracy of Wi-Fi and Bluetooth Low Energy RSSI measurements and we propose a prototype positioning system based on smartphones. Changing healthcare paradigms require new technology enablers for implementation. As typical healthcare systems are overwhelmed by the costs of ageing populations, ambulatory treatment and home-based care scenarios are proposed for efficiency. Such settings, falling under the umbrella of the patient empowerment paradigm, require many policy and technological breakthroughs to achieve their goal and become sustainable alternatives. While patient empowerment is supported by many emerging systems, such as Personal Health Records or healthcare-oriented social networks, we focus our paper on systems involving indoor tracking of patients and of medical equipment. Such systems can be independent or sub-systems of larger AAL complex systems. The current paper describes our experiment comparing positioning accuracy using Bluetooth and Wi-Fi in a real setting. In section 2 we present the state of the art for wireless indoor positioning technologies, in section 3 we present the methods for our experiment, and in section 4 we discuss the results. In section 5 we present a prototype for positioning using a smartphone and beacons, and in the last section we draw conclusions.
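As background for RSSI-based ranging, the log-distance path-loss model commonly used with BLE beacons and Wi-Fi access points can be sketched as follows; the reference power at 1 m and the path-loss exponent are assumed illustrative values, not figures from the paper:

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exp=2.0):
    # Log-distance path-loss model:
    #   rssi = tx_power - 10 * n * log10(d)
    # where tx_power is the RSSI measured at 1 m and n is the
    # path-loss exponent (2.0 in free space, higher indoors).
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))

print(rssi_to_distance(-59.0))  # 1.0 m at the reference RSSI
print(rssi_to_distance(-79.0))  # 10.0 m with n = 2
```

In practice indoor multipath makes n environment-dependent, which is one reason RSSI accuracy experiments like the one described here are needed before building a hybrid positioning system.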
- 10:25 Computationally Efficient Environmental Monitoring with Electronic Nose: A Potential Technology for Ambient Assisted Living
- Recently, ambient assisted living technologies have emerged to improve the quality of life of ageing populations. Identification of health-endangering indoor gases with a hardware-friendly solution may provide an early warning of unhealthy living conditions. Electronic nose technology, using an array of non-selective gas sensors, is a potential candidate to achieve this objective, but state-of-the-art gas classifiers hinder the development of low-cost and compact solutions. In this paper, we introduce a very simple classifier that transforms the multi-gas identification problem into pair-wise binary classification problems. This classifier is based on the sign of the difference between the values of the sensors' features for all possible pairs of sensors in each binary classification problem. A classifier qualification metric is defined to evaluate its suitability for given data of the target gases. As a case study, experimental data for four health-endangering gases, namely formaldehyde, carbon monoxide, nitrogen dioxide, and sulfur dioxide, is acquired in the laboratory using an array of commercially available gas sensors fabricated by Figaro Inc. and FIS Inc. A classification accuracy of 94.56% is achieved in distinguishing the target gases with our proposed classifier. This performance is comparable to the computationally intensive state-of-the-art gas classifiers, despite the simple implementation of our classifier.
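A hypothetical sketch of the pair-wise sign idea as we read it from the abstract: for each gas, store the majority sign of every sensor-pair feature difference, then classify a new reading by best sign-pattern agreement. The sensor values and gas labels below are invented, and this is our reconstruction, not the authors' exact classifier:

```python
import numpy as np
from itertools import combinations

class PairwiseSignClassifier:
    def fit(self, X, y):
        # For each class, store the majority (median) sign of the
        # feature difference for every pair of sensors.
        self.classes_ = np.unique(y)
        self.pairs_ = list(combinations(range(X.shape[1]), 2))
        self.signs_ = {}
        for c in self.classes_:
            Xc = X[y == c]
            diffs = np.array([Xc[:, i] - Xc[:, j] for i, j in self.pairs_]).T
            self.signs_[c] = np.sign(np.median(diffs, axis=0))
        return self

    def predict(self, X):
        out = []
        for x in X:
            d = np.sign([x[i] - x[j] for i, j in self.pairs_])
            # Pick the class whose stored sign pattern agrees most.
            out.append(max(self.classes_, key=lambda c: np.sum(d == self.signs_[c])))
        return np.array(out)

# Invented readings from a 4-sensor array for two gases
Xa = np.array([[1.0, 0.2, 0.5, 0.1], [0.9, 0.25, 0.45, 0.15], [1.1, 0.15, 0.55, 0.05]])
Xb = np.array([[0.1, 0.9, 0.3, 0.8], [0.15, 0.85, 0.35, 0.75], [0.05, 0.95, 0.25, 0.85]])
X = np.vstack([Xa, Xb])
y = np.array(["CO"] * 3 + ["NO2"] * 3)

clf = PairwiseSignClassifier().fit(X, y)
print(clf.predict(np.array([[1.05, 0.2, 0.5, 0.1]])))  # ['CO']
```

Only comparisons and sign lookups are needed at prediction time, which is what makes this style of classifier attractive for low-cost hardware.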
- 10:50 Quantification of Carcinogenic Odor of Formaldehyde with Electronic Nose Technology
- Formaldehyde is a strong-smelling chemical extensively used to make household products and building materials. Building residents are exposed to formaldehyde when it is emitted from products that contain this chemical. Typically, newly constructed homes have higher formaldehyde levels than old ones. Several experimental studies report the carcinogenicity of formaldehyde in humans after long-term exposure, and based on these findings it has been classified as a human carcinogen by the International Agency for Research on Cancer (IARC). Nasopharyngeal cancer is one of the most frequently reported cancers linked to long-term formaldehyde exposure. A short-term (15-20 minute) exposure to a formaldehyde level of 2 parts per million (ppm) may cause nasal and throat infections, and exposure beyond this limit may cause cancer. Its permissible exposure limit is set to 0.75 ppm over eight hours. Continuous monitoring of formaldehyde levels in residences and workplaces is important to support healthy living. A few commercial solutions based on spectro-fluorimetry and gas chromatography can be used for this purpose, but due to their test expenses and bulky equipment, these methods are not affordable for long-term monitoring. Providing a compact formaldehyde monitoring system at an affordable cost, with the features of long-term monitoring and easy deployment, is the key challenge in substituting the available solutions.
- 11:15 Wireless Sensor Network Architecture for Remote Non-invasive Museum Monitoring
- This paper describes a sensor network architecture developed for museum monitoring. The proposed solution is based on a distributed environment composed of small, nearly invisible measuring nodes that wirelessly connect to Arduino-based Wi-Fi routers to reach a distributed cloud. Users can connect to the measuring system using their smartphones and can receive alerts and measurements in real time. The entire architecture is based on the available $\mu Panel$ environment and can be deployed and adapted to the actual conditions in a timely and easy manner. The application described in this paper refers to the museum environment, but can be seamlessly adapted to several other situations. The problem of environmental monitoring inside museums, as well as inside many other buildings where a controlled environment is required, is attracting more and more interest. In most cases the constraints related to the invasiveness of the system prevent the use of off-the-shelf solutions, and specific solutions have to be developed and tailored to the particular application. This paper describes a flexible architecture based on an enhanced HTML syntax, referred to as HCTML, that permits extremely easy and fast deployment of complex monitoring systems. The solution provides real-time clients based on personal smartphones and processing clients that show the monitoring history. The real-time system is designed to work even on networks with limited throughput and supports data push to deliver alarms to registered users in a timely manner.
2B3: Systems Frameworks
- 10:00 Aligning Systems of Systems Engineering with Goal-Oriented Approaches Using the i* Framework
- Systems of Systems (SoS) are complex systems that result from the integration of a set of independent constituent systems - which are complex systems themselves - in order to achieve new functionalities and goals. The emerging interdisciplinary area of SoS and Systems of Systems Engineering (SoSE) is largely driven by stakeholders' goals and needs. Goal-Oriented Requirements Engineering (GORE) is therefore a promising approach in the SoS context for identifying, modelling, and managing the goals to be achieved by the overall SoS, and as a starting point for SoS requirements engineering. The i* goal-oriented approach has been used in the requirements specification of monolithic systems, but has not so far been applied to the derivation of requirements specifications for SoS. In this paper, we propose a novel approach that utilises the i* framework to develop a Goal-Oriented Requirements Engineering framework for Systems of Systems (SoSGORE). The approach drives the SoS Requirements Engineering (RE) process to explore, model and manage the goals of different stakeholders at two levels - the SoS-level goals and the constituent-systems-level goals - in order to derive purposeful SoS requirements that are well aligned with the users' goals and needs.
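Goal satisfaction propagation in an AND/OR decomposition is a common ingredient of goal-oriented approaches, and can be sketched as follows; the goal names and the simple boolean semantics are illustrative inventions, not the SoSGORE framework itself:

```python
def satisfied(goal, leaf_status):
    # A leaf goal reads its status directly; an AND-decomposed goal needs
    # all children satisfied, an OR-decomposed goal needs at least one.
    kind, children = goal.get("decomp", ("leaf", []))
    if kind == "leaf":
        return leaf_status[goal["name"]]
    results = [satisfied(c, leaf_status) for c in children]
    return all(results) if kind == "and" else any(results)

# Hypothetical two-level SoS goal model
sos_goal = {
    "name": "ProvideDisasterResponse",
    "decomp": ("and", [
        {"name": "LocateVictims", "decomp": ("or", [
            {"name": "UAVSearch"},
            {"name": "GroundSearch"},
        ])},
        {"name": "CoordinateRescue"},
    ]),
}
status = {"UAVSearch": True, "GroundSearch": False, "CoordinateRescue": True}
print(satisfied(sos_goal, status))  # True: the OR is met by UAVSearch
```

Full i*-style models add softgoals, actors, and contribution links on top of this skeleton, but the bottom-up propagation idea is the same.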
- 10:25 Human Executive Control of Autonomous Systems: A Conceptual Framework
- Autonomous systems are expected to be capable of performing many core functions with little or no human intervention. Many of the other functions required for overall success, however, still involve human supervision, manual control, or direct performance by a human. This paper presents the conceptual framework known as operator role theory, and specifically the concept of executive control (rather than supervisory control), in the context of autonomous systems. An autonomous ground transportation system (self-driving car) is used as an example. It is the functions associated with vehicle control and monitoring that are expected to be under executive control. In conventionally driven vehicles these would be performed under manual or supervisory control. The fact that these functions will be performed under human executive control is the reason for calling the system autonomous. Such vehicles appear to be capable of driving the programmed route, controlling speed and lateral position, while avoiding roadway obstacles and collisions. They can execute normal driving maneuvers, and monitor vehicle health and the status of the trip against the plan, both spatially and temporally. A human executive controller will be capable of enabling and disabling these capabilities as desired, at least in some vehicles. Operator role theory provides a conceptual framework that will help guide understanding of the systems engineering issues, especially associated with function allocation, in designing and developing autonomous systems.
- 10:50 Capella Based System Engineering Modelling and Multi-Objective Optimization of Avionics Systems
- Capella is a public-domain systems engineering tool recently released by THALES. It is a model-based systems engineering tool that implements the Architecture Analysis & Design Integrated Approach (ARCADIA) framework. This paper proposes a process for the specification, design, and optimization of a distributed avionics system. Capella is used as a design tool for Distributed Integrated Modular Avionics (DIMA). The DIMA architecture has attractive power, weight, and cost metrics, which are highly demanded by the aerospace industry. The main challenges faced by DIMA system architects are related to function allocation and physical device allocation: translating system functions into tasks and then allocating them to hardware. These problems are hard to solve manually due to the high number of functions in modern systems. The design and development of DIMA systems can be dramatically improved using optimization techniques. Moreover, allocation strategies based on different figures of merit can be evaluated at a smaller cost. In this paper we develop a simplified DIMA model using the Capella tool and the ARCADIA framework. The model is extended with purpose-built viewpoints that specify additional system constraints. Model parameters are extracted to specify a binary integer problem that automates the system allocation process. Different cost functions are evaluated for a simple case study.
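The binary-integer allocation problem mentioned above can be illustrated at toy scale with exhaustive search; the costs, loads, and capacities are invented, and a real DIMA instance would use an ILP solver rather than enumeration:

```python
from itertools import product

def allocate(cost, load, capacity):
    # cost[f][d]: cost of placing function f on device d
    # load[f]: resource demand of function f; capacity[d]: device budget
    # Exhaustively search all assignments (toy scale only) for the
    # minimum-cost one that respects every device's capacity.
    n_f, n_d = len(cost), len(cost[0])
    best, best_assign = float("inf"), None
    for assign in product(range(n_d), repeat=n_f):
        used = [0.0] * n_d
        for f, d in enumerate(assign):
            used[d] += load[f]
        if any(u > c for u, c in zip(used, capacity)):
            continue  # capacity constraint violated
        total = sum(cost[f][d] for f, d in enumerate(assign))
        if total < best:
            best, best_assign = total, assign
    return best, best_assign

# 3 functions, 2 devices
best, assign = allocate([[1, 3], [2, 1], [4, 2]], [1, 1, 2], [2, 3])
print(best, assign)  # 4 (0, 1, 1)
```

With x[f][d] as 0/1 decision variables, the same model scales to realistic instances via a binary integer programming solver, which is the approach the paper automates from extracted Capella model parameters.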
- 11:15 Persistent architecture for optimizing web service for e-government implementation
- Information in the government sector is increasing day by day. Governments provide citizens with e-services that are becoming cluttered and difficult to use and consume. Information redundancy is becoming a critical issue for all governmental transactions, overloading databases and the information pool. In this research we focus on creating an architecture that optimizes information flow within governmental database schemas, pulling and pushing information as and when required by the services. We design web services that act as tuners and transmitters, pulling and pushing data from the data warehouse and transmitting the required data to the requesting web services, thereby reducing information redundancy in the warehouse. We also create optimization web services that report on information redundancy in the e-government services, thus improving the throughput and efficiency of the e-government system. Various e-government models have been proposed in the literature, but they do not specify outcomes or meet the desired needs of e-government. In our research work we use several optimization algorithms, including randomized algorithms, randomized low-rank approximation, randomized k-means clustering, randomized least-squares regression, randomized classification (regression), randomized kernel methods, and a parallel selective algorithm for data optimization.
Tuesday, October 4 12:00 - 13:00
Tuesday, October 4 13:00 - 15:00
2C1: Systems of Systems I
- 13:00 Application of System Engineering by Armored Vehicles Manufacturers in Developing Countries
- The armor industry, along with the manufacturing of main battle tanks, has been progressively established in the industrialized countries. Vehicle protection capability has expanded over the years to a wide range of vehicle types, including small armored fighting vehicles and even commercial passenger vehicles used, for instance, by politicians and VIPs. The large armor manufacturers, mainly in the industrialized countries, have developed armor capabilities in parallel with the development of different types of bullet and blast threats. This has led them to accumulate considerable experience, both in their developers' minds and in technical and manufacturing know-how. For example, large enterprises mainly in the UK, the USA, and other well-industrialized countries have implemented comprehensive structures consisting of prime departments, which are responsible for efficiently implementing all requirements raised by customers in order to make the best product that fits their needs. Moreover, large enterprises annually explore and evaluate their internal and surrounding environments to ensure market continuity and, at the same time, improve the quality and range of the services they offer customers. Elsewhere, especially in emerging countries such as the UAE and Saudi Arabia, some entrepreneurs sensed the opportunity of the increasing demand for armored vehicles during, for instance, the Afghanistan conflict, and established armored-vehicle manufacturers of small and medium size (SMEs). In contrast to western companies, the latter are usually managed solely by an individual owner. Furthermore, such companies have a flat structure and are desperately short of a planning body.
As conflicts in Iraq and other surrounding countries increased during the last two decades, the number of such companies also grew in correlation with the demand for armored vehicles. This offered a good opportunity for even more investors to enter the market. For customers in these surrounding countries, all requests for armored-vehicle services were directed to local SME manufacturers only. In this regard, one question comes first to mind: are these SMEs capable of meeting the same standards of quality and service found in industrialized countries? Unlike large organizations, SMEs tend to perform their tasks without adequate planning, which results in uncertain outcomes. Therefore, AVMs in developing countries should imitate what large enterprises do in terms of evaluating their environments, and then plan for better practices and performance. The consequences would certainly be better products and services, higher customer satisfaction and sustainability, and continuity of the AVMs in the market. The objectives of this paper are as follows: 1. To examine the strategic evaluation and business plans of Small and Medium Enterprises (SMEs) in developing countries. 2. To focus on the performance of Armor Vehicles Manufacturers (AVMs) in developing countries, particularly in the UAE and Saudi Arabia; an investigation will check whether the AVMs in these countries follow the strategic thinking applied in large western enterprises or need recommendations in that domain. 3. To examine whether the recommended strategy would efficiently help firms better achieve customer satisfaction.
- 13:25 A Requirements Engineering and Management Process in Concept Phase of Complex Systems
- Defining requirements well early in the life cycle is one of the major challenges in engineering programs. Studies have shown that both technical and managerial factors influence requirements quality. However, despite the numerous requirements engineering techniques and methods, there is little knowledge of how to integrate them into a systematic requirements process that considers both technical and managerial activities. In this paper, a requirements engineering and management process for the concept phase of complex system life cycles is proposed. The process systematically integrates a set of methods and techniques from requirements engineering and project management. It was used successfully in the concept phase of a complex defense system. The business process approach is used for the process modeling. The effectiveness of the process is analyzed against a set of lean-enabler recommendations for dealing with unstable, unclear, and incomplete requirements in managing engineering programs. The involvement of organizations at strategic, tactical, and operational levels provided acceptance and commitment of stakeholders and ensured that the requirements were defined, understood, and interpreted in different contexts. The results suggest that the requirements engineering and management process enables an effective organization of requirements engineering work: creating an effective means of planning, coordination, and stakeholder engagement; producing requirements that meet customer needs; and defining the correct system.
- 13:50 AIMMS System Framework: Automatic Dental Pathologies Recognition from DICOM files
- This paper was produced within the context of research UEFISCDI project no. 31/2014, "AIMMS - Application for Using Image Data Mining and 3D Modeling in Dental Screening". The project is interdisciplinary and aims to build a system framework composed of several sub-systems: a dental pathology recognition part, a 3D printing part, and a user interface part. The main emphasis of this article is on the pathology recognition part, one of the core areas of our framework. Dental pathologies can be detected automatically from Cone Beam Computed Tomography (CBCT) data extracted from DICOM files. Dental CBCTs deliver a lower dose of radiation than Computerized Tomography (CT) scans, offering 3D information that supports diagnosis, treatment planning, and analysis of the patient's oral cavity. Our approach consists of novel procedures such as semantically annotating the tomography so that it is easier to process. The oral pathologies analyzed are edentation and dental cavities, detected using our own adaptive threshold filter and edge detection algorithm. An important role is played by the knowledge base of the mouth cavity during the overall processing of the DICOM input data. The cost of our solution is lower than that of existing systems on the market, while incorporating features that sustain its synergistic property.
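A generic adaptive (local-mean) threshold, in the spirit of, but not identical to, the authors' own filter, can be sketched as follows; the window size and offset are illustrative parameters, and a CBCT pipeline would apply this per slice:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def adaptive_threshold(img, win=15, offset=0.0):
    # Threshold each pixel against the mean of its local window
    # (win must be odd), so that the threshold adapts to regional
    # intensity variations instead of using one global cut-off.
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    local = sliding_window_view(padded, (win, win)).mean(axis=(2, 3))
    return img > (local + offset)

# Toy example: a single bright voxel stands out against its neighborhood
img = np.zeros((9, 9))
img[4, 4] = 10.0
mask = adaptive_threshold(img, win=3)
print(mask[4, 4], mask[0, 0])  # True False
```

Local thresholding of this kind is a common precursor to edge detection on tomography data, where global thresholds fail because tissue and bone intensities drift across the volume.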
- 14:15 Ontology based multi-system for SME knowledge workers
- The objective of this paper is to present a framework that helps small and medium enterprises exploit their available informational space. While large enterprises have dedicated information management departments and software, current software framework implementations are not oriented towards small and medium companies with fewer employees and smaller budgets. We propose a framework implementation that couples with the legacy data systems usually used by small and medium companies. Through the use of ontologies, this framework implementation adds semantically enabled information integration and provides employees with work-process-embedded, context-sensitive information services. In this paper we focus on the framework's main architecture and present advances related to the ontology generation framework. The research presented in this paper is carried out under the EU Eurostar EUR 8949 PrEmISES project - Multi-agent based middle-ware Providing Semantically-Enabled Information for small and medium enterprise (SmES) knowledge workers - by a consortium formed by members from academia and industry in Romania and Spain. The aim of the project is to help small and medium enterprises improve business performance and exploit knowledge without time-consuming and costly efforts; a company's capacity for adapting and innovating is critical in today's markets.
2C2: Systems Reliability and Testing I
- 13:00 Benchmarking MD systems simulations on the Graphics Processing Unit and Multi-Core Systems
- Molecular dynamics facilitates the simulation of a complex system so that it can be analyzed at the molecular and atomic levels. Simulations can last a long time, even months. For this reason, graphics processing units (GPUs) and multi-core systems are used to overcome this impediment. The current paper describes a comparison between these two kinds of systems. The first system uses the graphics processing unit, namely CUDA with the OpenMM molecular dynamics package and OpenCL, which allows the kernels to run on the GPU. This simulation uses a new thermostat that mixes the Berendsen thermostat with Langevin dynamics. The second comprises the molecular dynamics simulation and energy minimization package GROMACS, which is parallelized through MPI (Message Passing Interface) on multi-core systems. The second simulation uses another new thermostat algorithm, dissipative particle dynamics - isotropic type (DPD-ISO). Both thermostats are innovative and based on a new theory developed by us. Results show that parallelization on multi-core systems achieves performance up to 33 times greater than that obtained on the graphics processing unit. In both cases the temperature of the system was maintained close to the reference value. For the simulation using the CUDA GPU, the fastest runtime was obtained when the number of processors was equal to four, the simulation being 3.67 times faster than with only one processor.
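The hybrid thermostat itself is new, but its Berendsen component follows the well-known weak-coupling scheme; a minimal sketch in reduced units (kB = m = 1), which is the textbook form and not the paper's mixed Berendsen/Langevin algorithm, is:

```python
import numpy as np

def berendsen_step(v, target_T, dt, tau, kB=1.0, mass=1.0):
    """One Berendsen weak-coupling step: rescale all velocities by
    lambda = sqrt(1 + (dt/tau) * (T0/T - 1)) so that the instantaneous
    kinetic temperature relaxes toward target_T with time constant tau."""
    ndof = v.size
    T_inst = mass * np.sum(v * v) / (ndof * kB)
    lam = np.sqrt(1.0 + (dt / tau) * (target_T / T_inst - 1.0))
    return v * lam
```

Iterating this step drives the temperature geometrically toward the reference value, which is the behaviour the abstract reports for both simulations.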
- 13:25 Smart Home Simulation System
- Our paper addresses a real need of modern society, taking advantage of the Internet of Things to create smart environments that improve everyday life. We live in a connected world, an Internet of Things world, and this brings fundamental changes to society and to consumers. The IoT era brings new resources that can make life better. By sensing the surrounding environment, IoT devices will enable many practical improvements, increasing the health, safety and comfort of their users. Even though it is still an emerging domain, the Internet of Things is gaining more and more support from people everywhere. In 2013, it was estimated that there was 1 device connected to the internet for each person on earth. By 2020, this number is forecast to increase to 9 devices. One of the biggest problems when designing such complex environments is finding the best way to arrange the sensors, actuators and other smart devices. Moreover, these devices are not exactly cheap. We present a solution designed to allow people to test their own smart environments, without the actual equipment, in order to find the best configuration for transforming their houses into connected, intelligent ones.
- 13:50 Heuristics for Resilience - A Richer Metric Than Reliability
- Resiliency has been proposed as yet another needed capability for today's increasingly complex "smart" systems. Understandably, system architects and design engineers are hesitant to add yet another "ilities-like" requirement unless it is needed and has measurable results. What is resiliency, especially when applied to the engineering of complex hardware-software systems? Resilient systems have the capacity to survive, adapt and recover in the face of change and uncertainty. Sometimes this change is environmental, but more often it is caused by an adversary in the form of a physical or cyber attack. Smart recovery systems contain the capacity to evaluate and act on situational inputs via multi-discipline (EE, ME, CE, etc.) and often reconfigurable hardware, software and connectivity subsystems. What is the difference between resiliency and reliability, availability, maintainability and safety? How can engineers build resilient systems that measurably restore partial or full functionality over a specified period of time and in a specified environment? This paper will answer these questions. Further, it will propose that resilience is a richer metric than system reliability. A set of design principles and heuristics will be provided to help guide the creation of resilient systems, while cautiously acknowledging that the overarching nature of resiliency makes it difficult to follow a specific, formula-driven approach to resilience. Finally, several case studies will be presented illustrating how hardware-software systems can be designed for resilience. Outline: 1. What is resilience and why should it be added as yet another "ilities" design consideration? 2. Design principles and heuristics for the resilience of technical hardware-software systems. 3. Case studies: capacity; redundancy vs. resilience; diversity (in the Internet of Things (IoT) connected world).
- 14:15 Virtual Reality in Satellite Integration and Testing
- It is well known that an important part of the satellite validation process takes place in a visual environment that allows the test engineer to represent the satellite using a static graphical representation of the Unit Under Test, linked to telemetry and telecommand parameters that display the satellite status in real time. The graphical visualization is left to the satellite integrator, who can freely choose various layouts using the elementary objects included in the existing graphical tool to represent the Equipment Under Test. Yet the task of defining and implementing this kind of representation of the EUT is not always straightforward and may well imply a considerable effort as well as knowledge of the environment. Furthermore, different AIV test engineers may have different preferences about the way data are represented. Verification and test procedures, as well as operational satellites, generate a huge quantity of raw telemetry data that can be stored for offline analysis, necessary when failures that occurred in flight must be investigated or when test results must be compared across a complete production batch. The use of the latest Big Data technologies makes it possible to safely store and archive these data for later retrieval and mining. Visual analytics tools allow data to be analysed in a graphical, interactive and user-friendly way, but remain at a level of abstraction that is completely unrelated to the specific system. Before any assembly, integration and test activities, the engineering team defines the satellite mechanical layout (including harness) using a CAD program, and this representation, detailed at component level, is already used to support and de-risk satellite mechanical assembly and integration procedures by taking advantage of a virtual environment.
The purpose of this paper is to present an advanced Virtual Reality-based synoptic representation that links the parameter representation with the satellite physical layout, taking advantage of the strong coherence between the CAD model, virtual technology and the real satellite. In live tests, parameter figures can be associated with the physical point of measure, and a color coding is used to highlight the system devices/components depending on their status (e.g. Nominal, Redundant, Failure), so that in case of anomaly it is extremely straightforward to identify and inspect the location (equipment, connector, pin…) and to react quickly thanks to a virtual visual inspection. Moreover, the same virtual representation can be used to replay archived testing sessions or operational satellite telemetry, giving the user a global view of the equipment while allowing her to select the parameters of interest and to display their connection to the physical components. The design stage of the graphical representation is strongly reduced, and the visualization can be configured according to operator preferences: adding/removing visualized parameters at run-time, navigating the CAD model to focus on a subsystem to highlight tested paths vs. untested ones, and even presenting multiple displays of the same test to better cope with different emerging needs. The use of the same virtual satellite view can be conceived as a means to browse the huge amount of archived data by implementing a link between the data mining software and the virtual reality tool. By acting on sub-parts of the CAD model, the user could filter from the list of parameters those related to the selected component, obtaining the needed information easily, quickly and in a very intuitive way.
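The status-driven color coding described above amounts to a simple lookup from device state to highlight color; the status names and colors below are illustrative assumptions, not the tool's actual palette:

```python
# Hypothetical mapping from device status to highlight color in the
# virtual satellite view (names and colors are illustrative only).
STATUS_COLORS = {
    "NOMINAL": "green",
    "REDUNDANT": "orange",
    "FAILURE": "red",
}

def color_for(status):
    """Color used to highlight a device; unknown states fall back to grey."""
    return STATUS_COLORS.get(status.upper(), "grey")
```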
2C3: Special Session on Modeling and Simulation-based Systems Engineering
- 13:00 On formal cyber physical system properties modeling: a new temporal logic language and a Modelica-based solution
- Modeling and Simulation methods, tools and techniques aim at supporting the different phases of the lifecycle of modern systems, going from requirements analysis to system design and operation. However, their effective application requires investigating several aspects such as the formal modeling of system requirements and the binding and automated composition between heterogeneous models (e.g. requirements models, architectural models, behavioral models). In this context, the paper presents a new formal requirement modeling language based on temporal logic, called FORM-L, and a software library, based on the Modelica language, that implements the constructs provided by FORM-L so as to enable the visual modeling of system properties as well as their verification through simulation. The effectiveness of the proposal is shown on a real case study concerning an Intermediate Cooling System.
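FORM-L's actual syntax is not reproduced in the abstract; purely as an illustration of the kind of property such a temporal-logic language expresses, a bounded-response requirement ("whenever the trigger holds, the response must follow within N samples") can be checked over a finite simulation trace like this:

```python
def bounded_response(trigger, response, bound):
    """True iff for every i with trigger[i] true, response[j] holds for
    some i <= j <= i + bound. A toy monitor over sampled boolean
    traces, not the actual FORM-L semantics."""
    n = len(trigger)
    return all(
        any(response[i:min(n, i + bound + 1)])
        for i in range(n) if trigger[i]
    )
```

Verification through simulation, as in the paper, then amounts to evaluating such monitors over traces produced by the Modelica model.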
- 13:25 Integration of different MBSE approaches within the design of a control maintenance system applied to the aircraft fuel system
- The design of a control maintenance system (CMs) deals closely with the mission, the on-board system interfaces and the identification of their behaviour in operation. This paper describes how Model Based Systems Engineering (MBSE) was applied to an industrial test case to perform the functional design of an innovative CMs to be integrated with the aircraft fuel system (Fs). The impact on their integration of the different approaches applied when modelling the two systems in SysML was investigated. As the IBM Rational Rhapsody® tool was used, the Harmony® methodology was applied to the CMs, while a customized MBSE approach was implemented for the Fs, partly to cope with some differences in coupling an avionic system to a physical one.
- 13:50 Setting Systems and Simulation Life Cycle Processes Side by Side
- The long-lasting, close interaction between the modeling and simulation (M&S) and systems engineering disciplines is leading to a more integrative approach, namely M&S-based systems engineering. It emphasizes the extensive employment of modeling and simulation throughout the life cycle of systems engineering efforts. The success of M&S-based systems engineering depends on the quality of the simulations utilized. Further, simulations are themselves man-made systems, and thus necessitate a systems engineering approach. The utilization of systems engineering for the engineering of simulation systems is called simulation systems engineering. While ISO/IEC/IEEE 15288:2015 proposes process descriptions for the life cycle of systems, IEEE 1730-2010 recommends a life cycle process framework for simulations. This paper addresses the comparison, integration and augmentation of these two standards, thereby attempting to contribute towards an integrative life cycle process.
- 14:15 Solving Time-Dependent Coupled Systems Through FMI Co-Simulation and BPMN Process Orchestration
- In this work we present a synergistic integration of the Functional Mock-Up Interface (FMI) and Business Process Model and Notation (BPMN) standards aimed at managing coupled system simulations. The expressiveness of BPMN diagrams enables us to define the relationships between the involved systems and guarantees a one-to-one correspondence with an XML file, which is the starting point for the automation and the Functional Mock-Up Unit (FMU) orchestration. For that purpose we describe a typical (although non-standard) master algorithm governing the time-dependent simulation of a coupled system. The dependency diagram and the execution algorithm rely on a very limited set of BPMN extension elements, since the standard already offers a range of basic elements which facilitate the implementation of a specific execution environment for FMI co-simulation. This study explores the theoretical issues behind the FMI-BPMN integration and the practical implementation problems. The final result is the complete BPMN diagram for the master algorithm, fully interfaced with the FMI functions of the FMU execution blocks.
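The paper's master algorithm is expressed in BPMN; the generic fixed-step Jacobi scheme underlying typical FMI co-simulation masters can be sketched as follows, with a minimal assumed FMU interface (get/set/do_step) rather than the actual FMI C API:

```python
def jacobi_master(fmus, connections, t_end, h):
    """Fixed-step Jacobi co-simulation: at each macro step, exchange all
    connected outputs, then let every FMU advance by h with frozen
    inputs. `fmus` maps name -> object with get/set/do_step;
    `connections` is a list of ((src_fmu, src_var), (dst_fmu, dst_var))."""
    n_steps = round(t_end / h)
    for k in range(n_steps):
        t = k * h
        # sample all connected outputs at time t before anyone advances
        outputs = {src: fmus[src[0]].get(src[1]) for src, _ in connections}
        for src, (dst_fmu, dst_var) in connections:
            fmus[dst_fmu].set(dst_var, outputs[src])
        for fmu in fmus.values():
            fmu.do_step(t, h)   # every FMU advances one macro step
    return n_steps * h
```

A BPMN orchestration of this loop replaces the Python control flow with gateway and task elements, while the per-FMU calls map onto the FMI functions of the execution blocks.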
Tuesday, October 4 15:00 - 15:30
Tuesday, October 4 15:30 - 17:30
2D1: Model-based Systems Engineering I
- 15:30 A concept for managing information in early stages of product engineering by integrating MBSE and workflow management systems
- Searching for and handling information is one of the most time-consuming activities in the development of technical products. It is often unclear which information is available, and its quality is questionable due to document-based information handling. Inconsistencies, redundancies and discipline-specific documents are reasons for this. The problem increases in the early design stages, as information is mostly available in unstructured PowerPoint slides or in employees' minds. Several tools aim to solve this problem but have their strength in later stages of development or focus only on single activities - mainly technical issues. Model-Based Systems Engineering is a promising approach for improving interdisciplinary collaboration in product development. It suggests a system model as the single source of truth for product- and project-specific information, in line with the basic concept of Systems Engineering. Up to now, enterprises have struggled to implement holistic MBSE. It is often unclear which information is needed in which models, at what time, and by whom. This can cause 'modeling for its own sake'. MBSE may contribute to the improvement of information handling and information quality. To enable this, we propose a workflow-based concept to support information handling using MBSE within the early phase of product engineering. The concept offers the potential to significantly ease access to information and reduce the time spent searching for it, and it brings a breakthrough for MBSE as it connects engineering and process activities.
- 15:55 Holistic development of a full-active electric vehicle by means of a model-based systems engineering
- The present paper focuses on the model-based systems engineering of a battery electric vehicle called FReDy with intelligent chassis systems. This research vehicle is equipped with two electric drives on the rear axle that are positioned close to the wheels. FReDy combines electric traction with an intelligent chassis consisting of four independent active wheel modules, making it a fully active electric vehicle. Using a hierarchically structured vehicle management, the research vehicle FReDy enables driving that is optimized with regard to energy consumption as well as driving dynamics and driving safety. Conception and realization of the vehicle are performed by means of a structured procedure in a model-based, hierarchical systems engineering approach. The vehicle management of FReDy has to interpret, filter and correct the driver's demands concerning drivability and to communicate these demands to the subordinate information processing layers. These are a chassis management with underlying algorithms to influence the vehicle dynamics, such as active steering, active body control, active toe-in or torque vectoring, and an electric energy management (EEM) to manage electric power flows. It is also conceivable to connect several cars through an intelligent vehicle management. The focus of this paper lies on the vehicle management, with a detailed description of the model-based design approach of the EEM with the subordinate battery and battery management system. Furthermore, the active toe-in is described as an example of the chassis management, before the overall integration into the vehicle management is performed.
- 16:20 A hierarchical set of SysML Model-based objects for tolerance specification
- Modern engineering systems are becoming increasingly complex and integrate multi-physical objects. Model-Based Systems Engineering (MBSE) seems to be the best way to manage complex system design, and the Systems Modeling Language (SysML) may be considered one of the computer languages for performing the design of a complex system. MBSE also seems to be a valid solution for integrating tolerance specification into the design process. In particular, in the present work SysML is used to create a set of libraries containing simple and complex volumes, primary datums and tolerance zones, according to the ASME Y14.5M and ISO 1101 standards. The generation of these libraries is based on the Technologically and Topologically Related Surfaces (TTRS) model and uses the set of thirteen positioning constraints able to represent every condition between assembly features. The paper summarizes the characteristics of the created SysML objects, which are able to represent Datums, Datum Reference Frames (DRF) and tolerance zones. In particular, the Datums included in ASME Y14.5M are modelled. Then, all the tolerance zones included in both standards are modelled. Finally, a three-step procedure is summarized to provide a preliminary illustration of how to use the developed set of SysML objects.
- 16:45 Challenges in Integrating Requirements in Model Based Development Processes in the Machinery and Plant Building Industry
- For the development of modern mechatronic systems in the machine and plant manufacturing industry, various disciplines, e.g., mechanical, electrical/electronic, and software engineering, have to collaborate and exchange information along the development process. The different viewpoints and development steps are often dependent on each other, e.g., a requirement has to be fulfilled by a certain component in the system. Model-Driven Engineering is a currently growing approach in the machine and plant manufacturing industry to face the described challenge of interdisciplinary development of complex systems. The goal of Model-Driven approaches is to facilitate communication between the different involved disciplines and to achieve a more complete view of the system. Based on the Unified Modeling Language (UML), which is the de-facto standard modeling language in software engineering, the Systems Modeling Language (SysML) has been developed and specified by the Object Management Group. SysML is a semi-formal, graphical modeling language for various systems. It shall support all design and development phases through appropriate modeling elements and diagrams. However, especially in comparison to UML, SysML has still not reached a similar dissemination in industry. While behavioral and structural aspects are already covered in great detail in SysML or respective profiles, e.g. SysML4Mechatronics, the modeling of requirements is still predominantly text-based. This poses a significant disadvantage if a requirement shall be traced and it shall be checked whether a respective system component fulfills the specific properties of the requirement.
Thus, the main goal of this paper, based on the experience from an industrial case study of a complex mechatronic system, is to derive from the current engineering methodology how Model-Driven Engineering could support the development process, and to identify challenges, especially regarding requirements modeling, for interdisciplinary Model-Based development using SysML in the machine and plant manufacturing industry.
2D2: Modeling and Simulation I
- 15:30 Impact of the MBSE on the design of a mechatronic flywheel-based energy storage system
- The design of a flywheel system for energy storage is herein performed through MBSE as an example of mechatronic product development and innovation. Some relevant advantages of MBSE applied to a material mechatronic system are identified in activities such as requirement analysis, the identification of system capabilities, the definition of the architecture and the system validation. Moreover, the paper is aimed at investigating some critical issues arising when MBSE tools are applied to machine design, a domain still poorly inclined to exploit this holistic approach. That trend might be related to technical difficulties in assessing an interoperable framework of simulators aimed at dealing with both functional and physical modeling. In addition, technological scouting looks poorly connected within the standard SysML diagrams without introducing some additional charts. A demonstration of the fruitful impact of MBSE on machine design is proposed, to show some advantages provided when a mechatronic system is conceived and how a complete risk analysis could be based on a dysfunctional analysis supported by the MBSE modeling activity. As a result of the described activity, it was demonstrated that the trade-off analysis among different solutions can be effectively driven by these tools, which exhibited good interoperability, at least when the IBM DOORS, IBM Rhapsody and The MathWorks Simulink environments were interconnected for the implementation of the proposed test case.
- 15:55 Process Integration and Design Optimization technologies for modelling improvement
- This paper aims to present a methodology that helps improve system modelling by using Process Integration and Design Optimization technologies. A real application was carried out, and the methodology was exploited to improve the modelling of the TOYOTA PRIUS III hybrid vehicle. This work has been carried out in the context of the international research program PLACIS (PLAteforme Collaborative d'Ingénierie Système). It is the result of a collaboration between Institut Polytechnique Grand Paris and Esslingen University. The experimental data needed for the study were measured on a test bench, and a detailed model of the vehicle powertrain, developed in the DYMOLA-MODELICA environment, was provided by the company Bosch. Thereby, in order to develop and validate the model, an optimization tool was used to identify the unknown parameters of the system and to integrate the energy data management into the simulation. A methodology to overcome M&S issues is presented. Because complex systems are dominated by the interaction of several physical domains, a broad range of domain-specific modeling and simulation tools is available. Nevertheless, the most challenging task demanded of M&S tools is to provide one integrated environment that enables engineers to perform multi-domain and optimal design. This need is satisfied by Process Integration and Design Optimization (PIDO) tools, an emerging class of software with the ability to revolutionize product development. Since the beginnings of PIDO in the 1990s, accelerating software development together with practitioner achievements have solidified and expanded the reality of design space exploration for manufactured product development. PIDO tools help decision-makers assess the state of today's technology and discern and select among contemporary software offerings.
The main goals of a PIDO tool are:
- Automate and manage the setup and execution of digital simulation and analysis
- Integrate/coordinate analysis results from multiple disciplines and domains to produce a more holistic model of product performance
- Optimize one or more aspects of a design by iterating analyses across a range of parameter values toward specified target conditions
These products work together as a client-server environment where the client links data from one server to any other across the network. The first step is to encapsulate each simulation tool of the simulation workflow so that a common client can interface with the key parameters. Using an analysis server, components can be created from any disparate application such as Excel, Dymola, Matlab, SysML, STK, or applications developed in-house by any private vendor. The analysis server creates a "wrapper" that parses information and exposes it through a common API (Fig. 1: PIDO tool wrapper). In the wrapper, the useful parameters and variables are defined as inputs and outputs. The second step is to integrate the simulation workflow in a graphical environment and link the key data between each component. After that, the optimization phase can start. Simple tools, such as Design of Experiments (DOE), can be used to set up trade-space runs, while optimization routines are called upon to pinpoint the best design based on constraints. A number of different types of algorithms are included in the framework (gradient algorithms, genetic algorithms, algorithms based on response surface models, etc.). In some cases a wizard is available to help users choose the algorithms that are most appropriate for solving their problem. Users can add their own custom algorithms to the framework using the Algorithm Development Toolkit. Such tools can contribute greatly to improving the modelling process.
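The wrapper-and-workflow pattern described above can be sketched generically; the class and tool names below are illustrative stand-ins, not any vendor's actual API:

```python
class ToolWrapper:
    """Expose a tool's key parameters as named inputs/outputs behind a
    common interface, so a PIDO-style workflow engine can chain
    heterogeneous tools (illustrative sketch only)."""
    def __init__(self, name, inputs, run_fn):
        self.name = name
        self.inputs = dict(inputs)   # parameter name -> current value
        self.outputs = {}
        self._run_fn = run_fn        # stands in for the real tool call

    def set_input(self, key, value):
        self.inputs[key] = value

    def run(self):
        self.outputs = self._run_fn(**self.inputs)
        return self.outputs

# linking two wrapped "tools" into a minimal workflow
sim = ToolWrapper("simulation", {"x": 2.0}, lambda x: {"y": x * x})
post = ToolWrapper("post-processing", {"y": 0.0}, lambda y: {"z": y + 1.0})
sim.run()
post.set_input("y", sim.outputs["y"])   # link key data between components
result = post.run()
```

In a real PIDO environment the lambda bodies would be replaced by calls into Excel, Dymola, Matlab or an in-house code, while the surrounding interface stays the same.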
Firstly, the integration makes it possible to automate tedious, repetitive and time-consuming procedures, such as the integration of experimental data into the model or the evaluation of simulation data in post-processing tools, thus decreasing the chance of missing or corrupting model information. Meanwhile, PIDO tools also enable the identification of missing parameters in modelling, when measurements are not accessible or too costly. The current method consists of isolating the part of the model that contains the unknown parameters of either the physical system or the model. The experimental data are added to the same model, and the error between the simulation and the experiment is minimized using an optimization tool. Based on the type of parameters, we can decide whether to perform a multi-objective or a mono-objective optimization. If the unknown parameters depend on each other, a multi-objective optimization is essential, but this kind of analysis can be time- and resource-consuming. On the other hand, if the parameters are independent, a mono-objective analysis is enough to obtain reliable values.
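The mono-objective case described above amounts to minimizing a simulation-vs-experiment error over the unknown parameter; a minimal sketch with a synthetic one-parameter model (the model form, data and bounds are invented for illustration) is:

```python
import numpy as np
from scipy.optimize import minimize_scalar

t = np.linspace(0.0, 1.0, 20)
y_exp = 3.0 * t                    # stand-in for test-bench measurements

def error(k):
    """Squared error between the simulated model y = k * t and the data."""
    return float(np.sum((k * t - y_exp) ** 2))

# the "optimization tool" of the methodology, in miniature
res = minimize_scalar(error, bounds=(0.0, 10.0), method="bounded")
k_identified = res.x
```

With dependent parameters, `error` would return several objectives and a multi-objective algorithm (e.g. a genetic one) would replace the scalar minimizer, exactly the trade-off the text describes.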
- 16:20 A system-of-systems architecture-driven modeling method for combat system effectiveness simulation
- One typical characteristic of modern war is the confrontation between equipment systems-of-systems (SoS), which brings new challenges for effectiveness-simulation-based equipment acquisition. Against the background of SoS confrontation, it is increasingly apparent that the simulation modeling of equipment should be domain-specific, formal, automatic and composable. Current effectiveness simulation modeling approaches are mainly based on methods and techniques from the modeling and simulation (M&S) and software engineering communities, and they are not capable of satisfying these four requirements systematically. Thus the systems engineering architecture model and domain knowledge should be used to support effectiveness simulation modeling more effectively. This research follows the principle of architecture-driven development: it uses ontology techniques to build an equipment SoS architecture model and to build sub-domain ontologies under the constraints of this architecture model; performs architecture-driven simulation modeling to realize the transformation from the architecture model to a simulation model framework; employs ontological metamodeling to design domain-specific modeling languages (DSML) based on the combined use of the architecture model, sub-domain ontologies and M&S formalisms, supporting domain-specific modeling; and integrates domain-specific simulation models from various domains using the model framework, supporting the composable development of simulation applications. The key techniques of the method are discussed so that it can be fully put into practice.
- 16:45 Building A Virtual System of Systems Using Docker Swarm in Multiple Clouds
- The software industry has been embracing multi-cloud infrastructure for the design and adaptation of complex, distributed software systems. This new hybrid cloud infrastructure makes it possible to mix and match platforms and cloud providers for various software development activities. Multi-cloud infrastructure has several benefits, such as a lower level of vendor lock-in and a reduced risk of widespread data loss or downtime. However, it also has many challenges, such as non-standardization and inherent complexity due to differing technologies, interfaces, and services. Docker introduced a container-based software development approach in the past few years that is gaining popularity in the software industry. It has recently introduced its distributed system development tool called Swarm, which extends the Docker container-based software development process to multiple hosts in multiple clouds without interoperability issues. Docker Swarm-based distributed software development is a nascent approach for the cloud industry; nonetheless, it has huge potential to provide a multi-cloud development environment without the associated complexity. This paper presents the simulation of building a virtual system of systems (SoS) for the distributed software development process on multiple clouds. This simulation of a virtual SoS is based on Docker Swarm, VirtualBox, Mac OS X, nginx and redis. However, the same SoS can be created on any Docker-supported cloud by simply changing the driver name to the desired cloud, such as Amazon Web Services, Microsoft Azure, Digital Ocean, Google Compute Engine, Exoscale, Generic, OpenStack, Rackspace, IBM Softlayer, or VMware vCloud Air.
2D3: Network Architectures and Services
- 15:30 Dynamic Composition of Protocol Sub-Systems for Agile Network Services
- The paper describes a management model in which a service provider (SP) maintains multiple protocol modules to exercise the infrastructure resources (e.g., bandwidth and processing cycles) under various environmental conditions. Here, each protocol exhibits a different level of performance optimality and service resilience in distinct operating regions of the network infrastructure and the environment. At run-time, the SP selects the protocol module that can meet the client-requested Quality of Service (QoS) specs under the prevailing operating conditions: one size does not fit all. Our model allows dynamic switching from one protocol module to another at run-time based on changing environmental conditions. The paper describes a management-oriented case study to exemplify protocol switching as a foundation for building network services. We study a content distribution network (CDN) system from the standpoint of optimal protocol selection for caching content at intermediate sites in the network. The caching system is either a client-driven protocol that pulls the content from the most up-to-date site in the vicinity of clients or a server-driven protocol that pushes the content to all the caching sites. The tradeoffs between the push and pull protocols are used in dynamically configuring the content delivery system.
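A toy cost model (assumed here for illustration, not the paper's analysis) shows how the push/pull choice can be made at run-time from the prevailing operating conditions:

```python
def choose_cache_protocol(update_rate, read_rate, n_cache_sites):
    """Pick the cheaper caching protocol under a simple cost model:
    push propagates every content update to all caching sites, while
    pull pays at most one fetch per client read. Rates are events per
    unit time; ties go to push, which keeps caches fresher."""
    push_cost = update_rate * n_cache_sites
    pull_cost = read_rate
    return "push" if push_cost <= pull_cost else "pull"
```

A run-time manager would re-evaluate such a rule as the measured rates drift, triggering the dynamic protocol switching the model describes.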
- 15:55 Connecting Google Cloud System with Organizational Systems for Effortless Data Analysis by Anyone, Anytime, Anywhere
- The exigency of data analysis has been accelerating for routine operations in organizations. Every organization gathers a large amount of heterogeneous data every day. Subsequently, organizations develop their current and future strategies based on the analysis of the collected data. However, most small and medium organizations face two major issues in the field of data analysis: the requirement for several expensive analysis tools and IT infrastructure, and the IT skills of their staff. One of the most effective solutions for them would be the cost-effective and on-demand IT infrastructure and software resources of the cloud. Google Cloud System is one of the largest and most complex cloud systems, offering a variety of services including free ones such as Google Drive. This paper presents an economical and effortless approach for building a system of systems (SoS) based on Google Cloud System and SAML/OpenID Connect. In this approach, Google Drive is securely connected to the organizational system using the popular SAML or OpenID Connect framework; subsequently, data analysis can be performed using the complete set of Google Drive tools: Google Sheets, Google Refine, Google Fusion Tables, Google Charts, and Google Maps. This system of systems is not only a cost-effective and user-friendly solution but can also be used by anyone, anytime, anywhere. An experimental simulation demonstrates the effortlessness of the proposed data analysis approach using these Google Drive tools.
- 16:20 Agent-Based Modeling of an IoT Network
- As the Internet of Things (IoT) becomes more of a reality, an increasing number of wireless and wired devices are connected to the Internet. Future generations of telecommunications networks must evolve to deal with the anticipated high demand on the radio spectrum. As telecommunications networks become more complex, modeling them as complex systems that are flexible in the face of future advancements becomes increasingly important. As such, this study investigates the utility of agent-based modeling (ABM) as a method of modeling IoT networks. ABM models complex systems from the ground up, which allows a deeper investigation of the interactions that shape ultimate system performance. To demonstrate this utility, we model an IoT-based road traffic management system. Specifically, we investigate the impact of MAC protocol selection on communication performance in terms of spectrum utilization and accuracy of information. Additionally, we characterize the impact of MAC protocol selection on application performance in terms of vehicular waiting time. In doing so, we examine the factors that drive the performance of IoT-based systems and indicate the value that the focused consideration of system interactions enabled by ABM adds to communication system design.
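The bottom-up ABM style the abstract describes can be illustrated with a toy model. This is a generic sketch (not the paper's model): each agent is an IoT device that contends for a shared channel under a slotted, ALOHA-like random-access MAC, and channel-level outcomes (success vs. collision) emerge from the agents' interactions; all names and parameters are illustrative:

```python
import random

class IoTDevice:
    """Agent: a device that transmits in a slot with probability p_tx."""
    def __init__(self, p_tx):
        self.p_tx = p_tx
    def wants_to_transmit(self):
        return random.random() < self.p_tx

def simulate(n_devices, p_tx, n_slots, seed=42):
    """Count slots with exactly one transmitter (success) vs. collisions."""
    random.seed(seed)
    devices = [IoTDevice(p_tx) for _ in range(n_devices)]
    successes = collisions = 0
    for _ in range(n_slots):
        transmitters = sum(d.wants_to_transmit() for d in devices)
        if transmitters == 1:
            successes += 1
        elif transmitters > 1:
            collisions += 1      # emergent system-level behavior
    return successes, collisions
```

Swapping the per-agent transmission rule (e.g., carrier sensing instead of random access) is how such a model would compare MAC protocols, as the study does for spectrum utilization.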
- 16:45 Internet of Everything (IoE) Exploiting Organisational Inside Threats: Global Network of Smart Devices (GNSD)
- Various disruptive technologies have evolved into a paradigm for setting up the Internet of Everything (IoE), and the continuous evolution of technologies has demanded that these devices communicate and exchange information with each other. The marketing research company Gartner has estimated that around 30 billion devices will be interconnected and exchanging information by 2020. The IoE is accompanied by substantial opportunities in all fields of life, ranging from the energy sector to the healthcare industry. The trail of IoE development has, however, opened new attack vectors. The successful emergence of the IoE depends on moving from conventional mobile computing scenarios to intelligently embedding the existing objects in our environment. It is quite evident that the information and communication networks used to exchange information are enjoying continued growth through the presence of Wi-Fi on a global scale. Modern users are well informed about the technology they use and are conscious of using the IoE. This requires an understanding of the appliances and their users, the pervasive communication network and software architecture, and the smart and autonomous behavior of the IoE in relation to analytics. Context-aware computation and smart connectivity can be accomplished on these fundamental grounds. The evolution of the plethora of ubiquitous devices has raised several insider threats, generating security and privacy concerns within enterprises that have framed the use of smart devices through the IoE. This paper discusses a different perspective on security and privacy issues in the IoE by considering insiders bringing personal smart devices into the enterprise. The central theme of this research is to identify the challenges and explore the extent of the security and privacy issues that a global network of smart devices could exacerbate within the enterprise.
Tuesday, October 4 17:30 - 18:30
Wednesday, October 5
Wednesday, October 5 7:00 - 18:00
Wednesday, October 5 8:00 - 10:00
3A1: System Architecture
- 8:00 Reference Architecture and Maturity Levels for Cyber-Physical Systems in the Mechanical Engineering Industry
- Mechanical engineering products are nowadays changing from mechatronic systems to Cyber-Physical Systems (CPS). CPS are connected, embedded systems that directly record physical data using sensors and affect physical processes using actuators. They evaluate and save recorded data, use globally available services, and interact with operators via multimodal human-machine interfaces. In the context of industrial production, CPS are changing production processes radically. "Industry 4.0" refers to this radical change in production processes and stands for a new stage of organizing and controlling complex value-added processes; Cyber-Physical Systems are a main driver of Industry 4.0. Due to this change in technical systems, equipment suppliers, especially companies in the mechanical engineering industry, face the challenges of rising complexity and a nearly unmanageable amount of new solutions based on information and communication technology. The contribution at hand analyzes existing architectures of CPS as well as approaches for CPS maturity levels. Based on an analysis of the existing literature, we provide a reference architecture and maturity levels for CPS. The reference architecture serves as a universal blueprint to structure CPS and to visualize all components and relationships. Two sets of CPS maturity levels help companies to assess the status quo, determine the target state, and define concrete actions for improving their systems.
- 8:25 Using Internet of Things Technology to Create a Really Platform Independent Robotics Framework
- Control software systems for robots are very complicated. A variety of hardware and software parts must work together seamlessly, where each component on its own already poses challenges to developers due to the heterogeneity, dynamics, scalability, and real-time requirements of the system and its environment. The development of such complex software systems is a very cost- and time-intensive project and can hardly be done by a single developer or a small team alone. Therefore, a variety of platforms and frameworks have already been created to simplify the implementation of control software for robots. Robot control systems and platforms are typically distributed systems based on some communication middleware. They often include supporting robot control and simulation tools, development tools and robotics algorithms, drivers, and the like in different forms and to different extents. While most such robotic frameworks are built on a uniform middleware control system for all distributed nodes, which implies restrictions, e.g., on their operating systems, supported languages, and component design, we aim at a more flexible and open architecture of autonomous components in the style of the Internet of Things (IoT). In this paper, we introduce a platform-independent framework based on IoT technology. We illustrate our approach using examples of robots that are to explore their surroundings, and discuss the difficulties and advantages we encountered with this novel approach.
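The contrast drawn above, autonomous components coupled by messages rather than by a platform-specific middleware, can be made concrete with a tiny in-process publish/subscribe bus. This is a generic IoT-style sketch, not the paper's framework; the `MessageBus` class and the `robot/pose` topic are illustrative inventions:

```python
class MessageBus:
    """In-process stand-in for an IoT-style publish/subscribe middleware."""
    def __init__(self):
        self.subscribers = {}
    def subscribe(self, topic, handler):
        self.subscribers.setdefault(topic, []).append(handler)
    def publish(self, topic, message):
        # components never reference each other directly, only topics
        for handler in self.subscribers.get(topic, []):
            handler(message)

# Two autonomous components coupled only by a topic, not by a platform:
bus = MessageBus()
poses = []
bus.subscribe("robot/pose", poses.append)        # e.g. a mapping component
bus.publish("robot/pose", {"x": 1.0, "y": 2.0})  # e.g. a localization driver
```

Because each component sees only topics and message payloads, it can run on any operating system or language with access to the bus, which is the flexibility the abstract attributes to the IoT style.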
- 8:50 An Information-dominated and Capability-oriented Architecture Development Methodology for Net-Info Centric Systems
- Architecture development is critical for Net-Info Centric Systems (NICSs), as it can influence the whole lifecycle of an NICS. However, existing architecture design and development methods cannot be directly applied to NICSs because they lack an information-dominated and capability-oriented architecture description. In this paper, an information-dominated and capability-oriented architecture development methodology is proposed for NICSs. First, we give an overview of information-dominated and capability-oriented architecture development for NICSs, and analyze the concerns in information-dominated and in capability-oriented architecture development, respectively. Second, based on this analysis, we capture the key concepts and relationships involved in the architecture development of NICSs, define two architecture viewpoints (the Information Activity Viewpoint and the Capability Viewpoint) and eight models, and provide a process for developing an NICS architecture by applying our methodology. Finally, to evaluate the proposed methodology, a case study is conducted in which our methodology is applied to describe the architecture of battlefield situation cognition in NICSs. The results show that our methodology is applicable to the architecture description of NICSs from an information-dominated and capability-oriented perspective. Lastly, we discuss how to develop NICS architectures by combining our methodology with other architecture frameworks and architecture development methods, and propose future work on analyzing and evaluating NICS architectures described using our methodology.
3A2: Systems Reliability and Testing II
- 8:00 A Multivariate Statistical Approach for Improved and Automated Process Control and Anomaly Detection in Mechanical Systems
- Content: This paper will describe an approach for applying multivariate statistical process control techniques to improve and automate anomaly detection in mechanical systems. Examples of the implementation of the methodology will be drawn from jet engine trending, aircraft operation, cooling tower performance monitoring, and other projects where the technique has been successfully employed in real world applications. Conclusions: Multivariate process control techniques provide high sensitivity and low false alarm rate monitoring opportunities. Multivariate process control techniques can also be implemented to improve anomaly detection in environments where it is impossible or impractical for cost or performance reasons to employ additional sensors for fault detection. Significance: The approach and techniques outlined in this paper represent a method to improve and automate anomaly detection in systems where direct measurement and condition monitoring for known faults does not adequately cover all possible failure modes. The presentation will show that improved detection and diagnostic capabilities have been developed, automated, and implemented using advanced statistical methods to better determine the health and condition of expensive systems and for complex process monitoring. The automation of this method reduces the time and costs required to monitor and manage these systems. This method also reduces the incidence of undetected component and process anomalies which lead to unscheduled maintenance actions, reduced asset availability, and catastrophic asset failures.
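The multivariate monitoring idea above is commonly realized with Hotelling's T² statistic, which scores each new observation against the mean and covariance of in-control training data, so that correlated deviations invisible to single-sensor thresholds are flagged. The following is a minimal two-variable sketch (the paper's actual method and thresholds are not given here; function names are illustrative):

```python
def hotelling_t2(train, sample):
    """T-squared of a 2-variable `sample` vs. mean/covariance of `train`.

    train: list of (x, y) in-control observations; sample: (x, y) point.
    """
    n = len(train)
    mx = sum(x for x, _ in train) / n
    my = sum(y for _, y in train) / n
    sxx = sum((x - mx) ** 2 for x, _ in train) / (n - 1)
    syy = sum((y - my) ** 2 for _, y in train) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in train) / (n - 1)
    det = sxx * syy - sxy ** 2            # 2x2 covariance determinant
    dx, dy = sample[0] - mx, sample[1] - my
    # d' * inv(S) * d, with inv(S) written out for the 2x2 case
    return (syy * dx * dx - 2 * sxy * dx * dy + sxx * dy * dy) / det
```

A point that breaks the learned correlation between the two sensors scores far higher than a point near the training cloud, even if each of its coordinates is individually unremarkable, which is the low-false-alarm sensitivity the abstract claims for multivariate control.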
- 8:25 Reliability Allocation assessment using MEOWA method in complex redundant systems
- Reliability assessment represents a key issue in many advanced technology applications, in particular where the systems under analysis are required to satisfy high levels of reliability and to guarantee environment, personnel, and system safety. This paper deals with Reliability Allocation (RA), a top-down technique for apportioning the system reliability goal among its components. The apportionment of reliability values among the various items and subsystems can be made on the basis of estimated achievable reliability, criticality, complexity, or any other factors considered appropriate by the analyst making the allocation. In particular, this paper focuses on the MEOWA allocation technique, which turned out to be the most effective procedure for reliability allocation in the presence of redundant architectures. This RA method was tested on two case studies in order to extend its applicability to more complex redundant architectures. It is important to note that redundant structures are fundamental in several critical applications (e.g., avionics, automotive, military, and biomedical) because they allow a system to correctly perform its specified tasks in the presence of one or more failures. Therefore, the apportionment of reliability should be made with particular attention in order to optimize costs and reliability requirements.
- 8:50 A case study in the application of failure analysis techniques to Antarctic Systems: EDEN ISS
- This paper presents the application of the FMECA technique to Antarctic equipment development. Dependability techniques, which are more traditionally applied to aerospace systems, can also benefit Antarctic systems, improving them from the perspective of reliability, availability, maintainability, and safety. As a case study to demonstrate their utility, general failure analysis principles and the standard ECSS-Q-ST-30-02C are applied to the Antarctic space analogue project EDEN ISS. The EDEN ISS project intends to demonstrate plant cultivation technologies for safe food production in future space missions by deploying a greenhouse module to the German Neumayer III Antarctic station. The long-term operation of the EDEN ISS Mobile Test Facility will advance operational procedures and the technology readiness of numerous plant production system technologies for flight. The Mobile Test Facility was broken down into the following subsystems for failure analysis: air management, command and data handling, illumination, nutrient delivery, power control and distribution, plant health monitoring, and thermal control. These subsystems can additionally be decomposed into further subsystems and blocks down to component level for a better functional FMECA analysis, as described in the literature. The aim of this paper is to demonstrate the advantages of applying reliability techniques such as FMECA to Antarctic missions and systems, in order to minimize mission failure probability, reduce logistics requirements, and better comply with the Antarctic Treaty requirements.
The results of the FMECA have benefited the EDEN ISS Mobile Test Facility by refining the reliability block diagrams, improving the quality of the diagrams for further assembly, exploiting component and system block functions for extra safety provisions across different systems, decreasing the number of spare parts, and optimizing the maintenance tasks and procedures, thereby decreasing crew workload.
3A3: Modeling and Simulation II
- 8:00 Increased Intraocular Pressure Simulation and its Effect on Acuity and Field of Vision of the Human Eye
- This project presents a simulation of the human eye focused on the damage caused by increased intraocular pressure (IOP), the main cause of glaucoma, which has a strong impact as the second leading cause of blindness in Ecuador. The simulation is intended to support the early detection and diagnosis of that illness. It uses a visual-examination database of patients of the Santa Lucia Medical Clinic in Quito. Through the analyses performed, the IOP value, the condition of the retinal nerve fiber layer thickness, and the visual field are determined, so that relevant points can be characterized in the simulation for better findings and possible prevention. This project is part of a research program on the early detection of glaucoma; the correlation between intraocular pressure and central corneal thickness is based on these first results. Accurate measurement of IOP is vital for the diagnosis and treatment of glaucoma since, as demonstrated by all major glaucoma trials, it is the only variable that can be altered to prevent or delay the onset and/or progression of glaucoma. The biological plausibility of this hypothesis is supported by the fact that the compromised tissues (cornea and optic nerve) derive from similar structures and share structural alterations. The authors therefore intend to conduct studies in which more specific anatomical and biomedical characterization methods are used in patients with and without glaucoma.
- 8:25 Six-Port Interferometer for Direction-of-Arrival Detection System
- Phase-measurement DOA detection is commonly used nowadays. However, traditional approaches are based on conventional mixer architectures or expensive direct-conversion devices, which are complex and costly while compromising robustness. Therefore, a new interferometer DOA detection system based on a six-port network has lately been investigated and is showing several advantages with respect to traditional solutions. Phase measurement using a six-port interferometer for Direction-of-Arrival (DOA) detection in Ultrawideband (UWB) applications is presented. The presented six-port interferometer is formed by a Wilkinson power divider (WPD) and planar 3 dB/90° hybrid couplers. The WPD and hybrid couplers are fabricated, and their S-parameter performances are measured using a network analyzer. The measured results of the WPD and hybrid couplers are then modelled as a six-port interferometer in the Keysight Advanced Design System (ADS) simulation software. The six-port interferometer is modelled by combining the measured scattering parameters of all ports of the six-port network. The constructed scattering matrix is transformed into Touchstone file format. The DOA system comprises low-noise amplifiers (LNAs), the six-port interferometer, power detectors, and operational amplifiers (op-amps). Its performance is assessed via schematic simulation in ADS. The simulated results show that the presented six-port interferometer is able to discriminate the detected phase of the incoming wave.
- 8:50 From Simulation to Experimentable Digital Twins - Simulation-based Development and Operation of Complex Technical Systems
- Way beyond its industrial roots, robotics has evolved into a highly interdisciplinary field with a variety of applications in a smart world. The eRobotics methodology addresses this evolution by providing platforms where roboticists can exchange ideas and collaborate with experts from other disciplines to develop complex technical systems and automated solutions. Virtual Testbeds are the central method in eRobotics: complex technical systems and their interaction with prospective working environments are first designed, programmed, controlled, and optimized in simulation before the real system is commissioned. On the other hand, Industry 4.0 concepts promote the notion of "Digital Twins", virtual substitutes of real-world objects consisting of virtual representations and communication capabilities, making up smart objects that act as intelligent nodes inside the internet of things and services. Combining these two approaches, Virtual Testbeds and Digital Twins, leads to a new kind of "Experimentable Digital Twin", breaking new ground in the simulation-based development and operation of complex technical systems. In this contribution, we describe how such "Experimentable Digital Twins" can act as the very core of simulation-based development processes, streamlining development, enabling detailed simulations at system level, and realizing intelligent systems. Besides this, the multiple use of models and simulations in various scenarios significantly reduces the effort of using simulation technology throughout the life cycle of complex technical systems.
Wednesday, October 5 10:00 - 10:30
Wednesday, October 5 10:30 - 12:30
3B1: Model-based Systems Engineering II
- 10:30 Feature Model Based Interface Design for Development of Mechatronic Systems
- In the field of mechatronics, simultaneous and concurrent engineering is generally encouraged. The products are described by models belonging to different domains, such as mechanical, electrical, and electromechanical, and these models interact with each other in complex ways. To analyze and describe the behavior and interaction patterns of a product's subsystems and their components, cooperation and coordination of the involved domains are required that fit the varying modeling objectives and analysis goals. Mechatronic dynamic models are traditionally developed independently in every involved discipline, each of which provides and expands certain aspects; they are then integrated to analyze the complex interactions and dependencies between them. However, a common understanding of the objective and of the combined artifacts of the different disciplines cannot be achieved without simultaneously considering all involved interfaces (system boundaries) between models in different disciplines. An approach for an integrated model-based design process, the so-called Multifunctional Model Client (MMC), was presented in previous papers. Here, we focus on interface handling within the presented approach, which facilitates the integration of system models and simulation models and reduces the huge effort of building, maintaining, and synchronizing the required interfaces for simulation models. To illustrate this, we use the application example of two cooperating delta robots.
- 10:55 A Complementary domain specific design environment aiding SysML
- In systems engineering, it is common practice to start exploring the solution space using design mechanisms such as SysML, a modeling language for expressing system design. Such methodologies allow creating designs of target solutions through the identification of use cases, components, their interfaces, interaction data models, and so on. Since SysML is a generic language, its concepts require manual mapping to the concepts of the problem domain so that the target design can be expressed in domain terms. This makes its usage effort-intensive, since the process of binding to the problem domain depends heavily on domain experts and their understanding. SysML mitigates this by allowing itself to be extended through reusable profiles incorporating problem-domain concepts and patterns; however, the process supporting this activity is not well defined. To address the creation of design solutions specific to a domain, we propose an approach that defines a design environment that is domain aware, referred to as a Domain Specific Design Environment (DSDE). The DSDE supports the design creation process more holistically by providing support for the various systems engineering life-cycle phases beyond design. The developed DSDE is based on the Model-Driven Engineering (MDE) paradigm, which enables it to be integrated and viewed in SysML terminology. The environment supports a Domain Specific Modeling Language (DSML) aided with suitable graphical representations corresponding to SysML standards, keeping it intuitive for classical SysML users. The DSDE is created as a plug-in for the Eclipse platform. This paper discusses the DSDE.
- 11:20 MDDP: A Pragmatic Approach to Managing Complex and Complicated MBSE Models
- A central promise of Model-Based Systems Engineering (MBSE) is to provide engineers and other members of the development team with the right tools to manage all lifecycle information. The key to delivering on this promise is a pragmatic, concise, consistent, intuitive, and traceable methodology for applying Systems Engineering (SE) without introducing new overheads, steep learning curves, or the need to buy expensive software. However, the practical use of MBSE is currently impeded by a general lack of experience, best practice, and integration across development phases and cycles. As problems are diverse and solutions can vary widely, no unambiguous, tried-and-tested body of best practices has been established yet. SysML is rapidly becoming the universal language of choice, but its definition and tool support are changing frequently, and there is still room for improvement in its implementation over the whole process chain. One practical implementation of MBSE is the Model-Driven Development Process (MDDP). The process has been devised to develop large and complex systems with a particular focus on supporting the concept phase. These systems are often part of research projects with low technology readiness levels and a wide mix of domain experts collaborating across multiple sites. The main objective of the MDDP is to provide a common engineering framework and a correct semantic model at the same time. The model comprises all Engineering Items (EIs), related information objects, and artefacts over the whole system life cycle. This paper illustrates the MDDP using a real-world example of developing a steam engine. The case study is deliberately kept simple to help concentrate on the process and its modelling steps.
- 11:45 Reducing the Cost and Complexity of Variant Exploration with MBSE & LSP
- Variant exploration is often construed as synonymous with a significant increase in work and a decrease in clarity. Model-Based Systems Engineering (MBSE) has been heralded as a potential solution to streamline the process by supporting the exploration of numerous parallel variants within a common multidimensional model. Successful decoupling of variants from their surroundings supports the parallel development of interfacing systems without delay or impediment, thereby permitting the deferment of a final decision on the preferred technology choice. Still, a number of challenges remain. Some have been addressed by previous research, though without focusing on the effect of variant technology choices in the logical tier, or on embedding variant management as a native part of the core MBSE model. This paper addresses those challenges via the application of Liskov's Substitution Principle (LSP). It defines an MBSE-based approach that explicitly documents variance points, identifying their source and scope, and integrates their management into the standard Systems Engineering (SE) workflow. Emphasis is placed on providing a pragmatic approach that promotes consistency, visibility, and simplicity. The result is a model that embeds the required information artifacts without adding significant complexity. The proposed strategy does not prescribe excessive variant-specific activities, with the majority of the effort already being part of standard Requirement, Interface, and Logical SE processes. It is tolerant of an iterative approach and proactively supports concurrent engineering through decoupling.
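The LSP idea behind the paper, that any variant honouring a variance point's contract can substitute for any other without affecting interfacing systems, is easiest to see in code. The following is a generic software analogy, not the paper's MBSE model; the power-source contract and its two variants are invented for illustration:

```python
from abc import ABC, abstractmethod

class PowerSource(ABC):
    """A documented variance point: every variant must honour this contract."""
    @abstractmethod
    def max_supply_w(self) -> float:
        ...

class BatteryVariant(PowerSource):
    def max_supply_w(self) -> float:
        return 500.0

class FuelCellVariant(PowerSource):
    def max_supply_w(self) -> float:
        return 800.0

def usable_load_w(source: PowerSource, margin: float = 0.8) -> float:
    """An interfacing system sees only the contract, so either variant
    substitutes cleanly (Liskov) and the technology choice can be deferred."""
    return source.max_supply_w() * margin
```

Because `usable_load_w` depends only on the abstract contract, interfacing systems can be developed in parallel while the battery-vs-fuel-cell decision stays open, which mirrors the deferment of technology choices the abstract describes.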
3B2: Systems Engineering Education
- 10:30 An investigation of the effectiveness of mandatory training among the U.S. defense acquisition workforce
- There is an ongoing debate among defense acquisition leaders regarding the effectiveness of costly workforce training initiatives. In particular, the Defense Acquisition Workforce Improvement Act (DAWIA) career field training has been labeled a government-wide high risk item by the Government Accountability Office (GAO) due to the lack of empirical evidence to demonstrate whether or not the training is effective. Over the past 15 years, the GAO has issued numerous letters to Congress calling for a comprehensive assessment of the training imposed on the defense acquisition workforce. This study responded to the GAO request by proposing and completing an investigation of the effectiveness of training among a sub-population of the defense acquisition workforce. Background research was performed to identify the dominant theories relating to outcome-based learning and workforce competency. This study proposed a state-of-the-art training evaluation model which is both grounded in robust theory and suitable for formal inquiry relating to the efficacy of systems engineering training. The resultant model is comprised of critical competencies of successful systems engineers across three categories: cognitive, skill and knowledge based, and affective/behavioral. As part of the study, all proposed competencies were validated by practicing experts in the field of systems engineering and were proven to be relevant for practical application. A pilot study was conducted on a sub-population of defense acquisition workforce members in the Systems Planning, Research, Development, and Engineering career field. The result of this study offers empirical evidence to demonstrate the effectiveness of DAWIA training on the sub-population under investigation. While the model was tailored in this study for the evaluation of systems engineering training, the methodology is suitably general for export to other fields of practice.
- 10:55 Outcome-based competency model for systems engineering training
- This paper describes how dominant theories relating to outcome-based learning and workforce competency were synthesized into a singular outcome-based competency model to address a general problem of ineffectual training evaluation. A baseline model was developed using leading theories from the academic literature pertaining to competencies for systems engineers across three categories: cognitive, skill and knowledge based, and affective/behavioral. The model was further refined via qualitative and quantitative analysis of formal interviews from subject matter experts in the field of systems engineering workforce management. The resultant theoretical model is both grounded in robust theory and validated by subject matter experts, and is suitable to drive practical evaluations of the efficacy of systems engineering training across multiple contexts. The model classifies the competencies into three tiers of workforce functionality: foundational, specialized, leadership.
- 11:20 Considering Society and Technology in Systems: A proposal for systems engineers' education
- The traditional engineering way of facing obstacles has encountered and overcome countless problems. Although those problems were hard and intricate, they were simple. Simple does not mean easy: even if the solution requires years of hard work, numerous trials, intuitive and tested models, the reflections of trained minds, sophisticated experiments, and the collaboration of several experienced and keen scientists, once the path, or even just the input, has been established, it is possible to foresee the output with reasonable confidence. Systems engineers build systems that affect society in many ways. However, few engineers are able to develop systems in which the diversity of human and societal factors is considered. In Brazil, engineering schools are not prepared to teach their systems engineering students to develop systems that consider the complexity present in the relationship between society and technology. This happens because engineering education is based on classical science and, as a consequence, on a reductionist view of the problems to be engineered. This paper presents a proposal to change engineering education so as to prepare systems engineers to deal with the knowledge necessary to represent and consider the society-technology relationship in engineered systems. The authors developed this proposal as a response to the complex activity of coping with the effects of the systems created by engineering upon people and society in general.
3B3: Sensors and Systems II
- 10:30 A simple magnetic signature vehicles detection and classification system for Smart Cities
- Vehicle recognition is one of the main challenges in Intelligent Transportation Systems (ITS). The ability to recognize the vehicle type can help insurance companies, public safety organizations, infomobility services, and policy-makers in general. In this paper, we propose a vehicle recognition system based on speed estimation, vehicle length estimation, and classification of the vehicle type. We developed a real-time system for vehicle recognition based on four steps: storage of the magnetic signature of the vehicle, speed estimation, estimation of the length of the vehicle, and vehicle recognition. The latter is realized by matching the measured waveform against a database containing magnetic signatures of vehicles, using the Dynamic Time Warping (DTW) method. Experimental results involving 10 vehicles and 50 trials show successful identification of approximately 98% of the considered vehicles. The realized system is able to estimate speed and magnetic length and to classify the type of vehicle with an error rate of 2%. The DTW algorithm proved very flexible and simple to implement. Results show that the RMS length estimation error is 0.65 m and that, in 95% of the acquired records, the error is lower than 1.25 m.
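The DTW matching step described above can be sketched in a few lines. The reference signatures, class labels, and measurement below are hypothetical illustrations; the paper's actual feature extraction and signature database are not shown.

```python
def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D sample sequences."""
    n, m = len(a), len(b)
    inf = float("inf")
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping moves
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def classify(signature, database):
    """Return the label whose stored signature is nearest to the measurement under DTW."""
    return min(database, key=lambda label: dtw_distance(signature, database[label]))

# hypothetical reference signatures (normalized magnetometer samples)
db = {
    "car":   [0.0, 1.0, 3.0, 1.0, 0.0],
    "truck": [0.0, 2.0, 5.0, 5.0, 2.0, 0.0],
}
print(classify([0.0, 1.0, 2.0, 3.0, 1.0, 0.0], db))  # a car-like measurement
```

Because DTW warps the time axis, the same vehicle passing at different speeds still matches its stored signature, which is why the paper pairs it with a separate speed estimate.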
- 10:55 The AYO! Project for Air Quality Monitoring
- School dropout is a non-negligible phenomenon that, if not effectively countered, puts the future of young people at risk and exposes them to greater risks of unemployment, poverty, and social exclusion compared to peers with a good level of education. In this paper we present AYO!, an extracurricular laboratory aimed at countering high-school dropout. AYO! is an acronym for Assemble Your Objects, and it was chosen, in addition to its literal meaning, for its assonance with the Sardinian dialect expression Ajo!, an exhortation that can be translated into English as "come on!" or "get a move on!". AYO! engages students in the design, realization, and use of sensor and actuator systems with the aim of analyzing, monitoring, and sharing data on their living area using a social IoT platform. AYO! was offered, in the spring semester, to two different classes of students, in North Sardinia and Central Sardinia. The practical lab activities provided know-how relevant to attractive profiles for the labour market, as well as awareness of how technological innovation can be used to understand phenomena such as pollution and energy consumption. At the end of the experience, the students presented and discussed their results with the developers of a local company active in the IoT scenario and with researchers of the university department.
- 11:20 Wideband Slotted Antenna for Microwave Imaging System in Ground Penetrating Radar Applications
- Ground Penetrating Radar (GPR) is a non-destructive method that employs electromagnetic waves to map buried features in the ground or in man-made structures. Basically, a GPR consists of three main components: antennas, a control unit, and a display. In this paper, a wideband slotted antenna for a microwave imaging system in Ground Penetrating Radar (GPR) applications is presented. The design of the proposed antenna is aided by Computer Simulation Technology (CST) 2014. A Taconic TLY-5 substrate with a dielectric constant of 2.2 and a loss tangent of 0.0009 is employed in the design. The slotted antenna demonstrates an operational bandwidth of 24% across the 1.2 GHz to 1.86 GHz frequency range. Characteristics of the proposed antenna such as return loss, radiation pattern, and gain are observed for both simulated and measured results. The wideband slotted antenna shows good agreement between simulated and measured return loss and gives a directional pattern. The antenna gain improved by up to 9 dB after a reflector was placed at the back of the antenna. The integration of the wideband slotted antenna with sandy soil and a buried metal object was also performed. The scattering data from the simulation were collected and a microwave imaging technique was used to construct the images. The constructed image for the antenna and its integration, with and without clutter removal, is also presented in this paper.
Wednesday, October 5 12:30 - 13:30
Wednesday, October 5 13:30 - 15:30
3C1: Model-based Systems Engineering III
- 13:30 Interdisciplinary Specification of Functional Structures for Machine Design
- The development process of complex machines requires intense collaboration between different technical disciplines. In this context, the interdisciplinary development team often lacks a common and continuously updated system model, which leads to high communication effort and delays the overall development process. For a more efficient and effective machine design process, an interdisciplinary modeling language is required that follows a generic design approach. We therefore analyzed existing modeling languages and identified their drawbacks. Based on our findings, we developed the Interdisciplinary Modeling Language (IML), combining the benefits of existing languages with further extensions. IML includes three diagram types: Functional Structure (FS), Interaction Structure (IS) and Sequential Behavior Diagram (SBD). For a creative and generic machine design, IML supports a functional approach involving all interdisciplinary teams while decreasing communication and documentation effort. For this purpose, IML allows, on the one hand, the reuse of former machine designs and, on the other, a direct visualization of each modification, by establishing consistency between different views. IML improves the clarity and comprehension of the machine design and thus offers the possibility of identifying potential process errors early in the concept phase. Furthermore, we describe how these diagrams are embedded into the general development process and demonstrate the utilization of IML on the basis of a use case.
- 13:55 Evaluating and Comparing MBSE Methodologies for Practitioners
- The practical use of Model-Based Systems Engineering (MBSE) is currently impeded by a widespread lack of experience, best practice, and integration across development cycles. This paper therefore compares two MBSE modelling approaches that have been specifically developed to address these issues: the Systems Modelling Toolbox (SYSMOD) in conjunction with Functional Architectures for Systems (FAS), and the Model-Driven Development Process (MDDP). In this paper, both approaches are applied to model the same real-world example: a construction-kit steam engine. The limited engineering scope of this challenge makes it possible to concentrate on understanding and evaluating the approaches. The focus is on how well they support practical, effective, and intuitive modelling of a product over all architectural layers. SYSMOD spans the whole development life-cycle to create major development artefacts such as requirements, use cases, and different architecture types. Part of SYSMOD is a variant modelling method to create a multidimensional model with variants of the development artefacts and product configurations. The FAS method supplements SYSMOD by deriving a functional architecture from SYSMOD artefacts. The FAS and SYSMOD methods were applied to the case study to demonstrate a typical execution sequence of the methods. The result is a model description of the steam engine. The MDDP is an integrated recursive (over the levels of the product breakdown) and iterative (over the steps) process combining best practices from SE and information modelling. It strictly separates Engineering Items (EIs) from their representations in diagrams, making it well suited to managing complexity, variants, and interfaces and to generating unambiguous, dynamic, and customizable perspectives for all specialists' needs. The MDDP spans the whole development life-cycle and integrates all cross-cutting management processes by including their artefacts and elements in the model.
The MDDP was applied to the case study and produced a complete and consistent model with all its SE artefacts with little modelling effort. It manages the EIs with zero redundancy and affords high levels of object reuse and cross-checking capabilities. Both approaches proved to be intuitive, easy to learn, and effective. The main difference is that SYSMOD complies with OMG standards and can be implemented using commercially available tools, while the MDDP defines a proprietary modelling language and is being developed in parallel with a suitable tool. Both models can be used in simulations and support requirements verification and design validation efforts. The limited scope of the study prevented an evaluation of how well the approaches handle iteration and recursion. This paper is the first in a series of comparison studies; it is planned to continue comparing practical approaches to applying MBSE in real-world projects. The next step will be an all-day workshop at the forthcoming German INCOSE conference, where participants with no prior experience will apply and evaluate several candidate methods independently.
- 14:20 Model-Based Object-Oriented Systems Engineering methodology for the conceptual design of a hypersonic transportation system
- This article suggests a Model-Based Object-Oriented approach for the design of a very complex aerospace product. In particular, the proposed application concerns the conceptual design of a hypersonic transportation system aimed at performing suborbital parabolic flights. At first, the theoretical approach is explained. Starting from a detailed analysis of the stakeholders, of the market, and of the operative environment in which the system is supposed to operate, mission and system analysis is carried out, considering the product first from the functional point of view and then from the physical perspective. Due to the high level of uncertainty that characterizes the design of complex systems in general, and in this specific case due to the very limited number of real aerospace products able to reach hypersonic flight, the use of simulation from the very beginning of the process is suggested. After the methodology is theoretically enunciated, a proper tool chain is suggested. It consists of both commercial and ad-hoc tools and aims at covering all the activities of the design methodology, reducing development time and cost. Eventually, the results of applying the methodology and the supporting tool chain to the design of an aerial transportation system able to perform suborbital flight while guaranteeing Vertical Take-Off and Landing capabilities are presented. In conclusion, the main benefits of the approach are highlighted, and future developments and applications are proposed.
3C2: Systems of Systems II
- 13:30 Design as the Best Differentiator of System-of-Systems
- A System of Systems (SoS) is defined as a collection of component systems that produces results not achievable by the individual systems alone. SoS serves to distinguish true multi-domain and multi-discipline systems from others that have taken up the systems mantle, most noticeably in the software and IT spaces. But SoS is more than just a higher-level focus on configuration management (CM), requirements management (RM), data management strategies (CMS), and integrated logistics support (ILS). These factors are primarily concerned with the mechanics of data handling. SoS deals with the process of work-in-progress and how the design drives the implementation. The design is the guiding element for the system or product. This paper will distill the process, design, and implementation issues that distinguish Systems of Systems from subsystems. These differentiating factors will be illustrated in case studies that focus on safety-critical processes common in aviation, rail, road, and automotive development. For example, one common requirement is the need to gauge process maturity as found in CMMI, Automotive SPICE, the aviation ARP4754A and, more recently, the automotive ISO 26262 "Fit-for-Purpose" certifications. All of these strive to measure the worthiness of hardware-software systems. All such systems are in fact assembled from other systems, thus creating true 'system of systems' final implemented products. Without consideration of the system-of-systems aspects from the initial inception throughout the design flow, such systems inevitably experience numerous iterations, regulatory challenges, and delivery delays. By paying appropriate attention to the full, broader set of issues - effectively applying systems engineering approaches and relevant standards from concept through design and into implementation and delivery - critical improvements in process and product lead to successful results. Outline: 1. What distinguishes a System of Systems from other major systems, e.g.
in the software and IT spaces? 2. Why should SoS focus on design and work-in-progress metrics over just CM, RM, CMS, and ILS factors? 3. Examples and case studies: > CMMI and Automotive SPICE - standards to drive fitness for function. > Safety analysis for aviation: ARP4754A. In this example, disparate disciplines covering modeling, requirements management, fault tree analysis, and system design come together to deliver an efficient flow with connected information, leading to a more complete initial safety plan and design recommendations across the integrated system of systems. > Reliability data integration into future safety and reliability design: systematically relating past information to the requirements of new designs helps to drive design considerations for longer-lived products comprised of systems of systems.
- 13:55 Towards Modelling of Modelling in SE
- The engineering of complex systems and systems of systems (SoS) often leads to complex and very time-consuming modelling tasks (MTs). MTs can be distributed over several autonomous and heterogeneous places, a place being a set of stakeholders and their practices. One issue is mastering MTs while taking into account the constraints of their large complex engineering environments (LCEE). A step toward resolving this issue is to deeply understand and model MTs in LCEEs. In this paper, we propose to characterize an LCEE as a federation of places, in order to keep the capability and autonomy of each involved place, and to apply model-based systems principles to the modelling of MTs. The overall goal is to design a formal holistic support for the operation, continuous analysis, and improvement/optimization of MTs.
- 14:20 OntoSoS.CM: A Business Process Architecture driven and Semantically Enriched Change Management Framework for Systems of Systems Engineering
- The emergence of Systems of Systems (SoS) arrangements, with their high level of complexity, mainly due to the different characteristics of the individual systems and their integration into the respective SoS, has brought about new challenges in terms of Configuration Management (CM) in general, and change management in particular. Novel change management frameworks are needed to appropriately address these challenges simultaneously both at the level of individual systems and at the SoS level. To address these challenges, there has been a call for a major paradigm shift, proposing state-of-the-art approaches that target the investigation of new frameworks aligning various CM activities with newly proposed Systems Engineering (SE) models (i.e. SoSE models). So far, existing change management frameworks have been applied to managing changes of software and IT services in complex monolithic system environments only; they have not been widely proposed for application to entire SoS arrangements. This paper introduces on-going research that aims to propose a novel approach, by investigating the potential of using ontology-driven models combined with a formal Business Process Architecture (BPA) approach, in particular Riva, to drive the development of a generic, semantically enriched change management framework for the software engineering life cycle of SoS, namely the OntoSoS.CM framework.
3C3: Tools and Methods
- 13:30 Systems engineering analysis approach based on interoperability for Reconfigurable Manufacturing Systems
- In this paper, we propose a systems engineering approach for the analysis of reconfigurable manufacturing systems. This work contributes to implementing the systems engineering perspective within the Industry 4.0 research framework. The approach is based on the concept of interoperability, in order to correlate diverse requirements as input for the analysis and to generate a result based on reconfigurability parameters. Besides the approach itself, this paper presents an application to a reconfigurable machine tool that demonstrates the applicability of the developed method. The paper contains a literature review of research and studies related to Industry 4.0, reconfigurable manufacturing systems, and systems engineering. Particular attention is paid to distinguishing modularity from reconfigurability. The general framework of our approach is then described and its implementation detailed. In this approach, interoperability between a SysML-based modeling tool and a 3D CAD modeling tool is achieved by converting the system specifications into a JSON format. Analysis metrics were developed and implemented in a Python application that uses the JSON data as input parameters. The application generates the possible configurations and calculates the degree of reconfigurability of every solution. Our approach was applied to a reconfigurable machine tool to illustrate how the search for suitable configurations is simplified in an efficient and rapid way.
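The JSON-to-Python pipeline described above can be caricatured as follows. The JSON schema, the module names, and the reconfigurability metric (valid configurations over all non-empty module subsets) are all assumptions for illustration, not the paper's actual exchange format or formula.

```python
import json
from itertools import combinations

# hypothetical JSON export of the SysML system specification
spec = json.loads("""
{
  "modules": [
    {"name": "spindle_a", "functions": ["drill", "mill"]},
    {"name": "spindle_b", "functions": ["drill"]},
    {"name": "turret",    "functions": ["turn"]}
  ],
  "required_functions": ["drill", "turn"]
}
""")

def configurations(spec):
    """Enumerate module subsets that together cover all required functions."""
    mods, req = spec["modules"], set(spec["required_functions"])
    found = []
    for r in range(1, len(mods) + 1):
        for subset in combinations(mods, r):
            covered = set().union(*(m["functions"] for m in subset))
            if req <= covered:
                found.append([m["name"] for m in subset])
    return found

def degree_of_reconfigurability(spec):
    """Assumed metric: valid configurations divided by all non-empty module subsets."""
    total = 2 ** len(spec["modules"]) - 1
    return len(configurations(spec)) / total

print(configurations(spec))
print(f"{degree_of_reconfigurability(spec):.2f}")
```

Using a neutral JSON intermediate, as the paper does, lets the SysML tool and the CAD tool evolve independently of the analysis code.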
- 13:55 A holonic-based method for design process of cyber-physical reconfigurable systems
- Recent advances in the manufacturing industry have paved the way for one of the most frequently discussed topics among practitioners and academics: cyber-physical systems (CPSs). The design process of cyber-physical reconfigurable systems (CPRSs) operating in a dynamic environment is a significant challenge when meaningful coordination of tasks between human and machine, and between machine and machine, is required. This work presents a method based on the systems engineering "V" model and the holonic concept to deal with the design process of CPRSs; we also propose complete steps explaining how to carry a configuration from customization through to software implementation in the operational resources. To illustrate this method, an example of a CPRS is provided to show the interactions of autonomous holonic agents and to assess the impact of coordinated tasks in the holonic multi-agent platform in terms of code reusability and modularity. This methodology offers substantive advantages over traditional process-design methods, which are often non-integrated and require large amounts of iteration and long development times.
Wednesday, October 5 15:30 - 16:00
Wednesday, October 5 16:00 - 18:00
3D1: Model-based Systems Engineering IV
- 16:00 Computational Design Synthesis for Conceptual Design of Robotic Assembly Cells
- Design synthesis is a fundamental engineering task that encompasses mapping a functional specification to a set of physical components and their topological relationships, where the mappings are typically non-unique. Even for moderately complex systems the consideration of all possible mappings is typically not feasible, making a manual exploration of the full design space impossible. In practice, this limitation often leads to sub-optimal designs. In this paper, we introduce a computational approach to design synthesis in conceptual design. Our approach is based around representing a design problem formally using SysML, and transforming this representation to a mixed-integer linear program. This separates the representation of the design problem from its solving method. The generated mathematical optimization problem is then solved, and the mathematically optimal solution is transformed back to a SysML-based representation. Linear programs are used in order to minimize the computational cost for solving the optimization problem. As part of the paper, we introduce a language for computational design synthesis in conceptual design and its embedding in SysML. Models conforming to this language represent specific design problems. We also introduce mappings from these concepts to a mixed-integer linear programming problem that is represented in the YALMIP syntax. We demonstrate our approach using the conceptual design of a robotic manufacturing cell as an example.
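The mapping-to-optimization idea can be illustrated with a toy, brute-force equivalent of the mixed-integer formulation. The component catalogue, costs, and compatibility rule below are invented for illustration; the paper's actual pipeline goes through SysML and YALMIP and is not reproduced here.

```python
from itertools import product

# hypothetical catalogue: each required function lists candidate components (name, cost)
candidates = {
    "grasp":   [("gripper_2f", 3.0), ("gripper_vac", 2.0)],
    "move":    [("robot_6axis", 10.0), ("gantry", 7.0)],
    "inspect": [("camera_2d", 1.5), ("camera_3d", 4.0)],
}

def feasible(mapping):
    """Invented compatibility constraint: the vacuum gripper needs the 6-axis robot."""
    return not (mapping["grasp"] == "gripper_vac" and mapping["move"] != "robot_6axis")

def synthesize(candidates):
    """Enumerate every function-to-component mapping, keep feasible ones, minimize cost."""
    funcs = list(candidates)
    best = None
    for combo in product(*(candidates[f] for f in funcs)):
        mapping = {f: name for f, (name, _) in zip(funcs, combo)}
        cost = sum(c for _, c in combo)
        if feasible(mapping) and (best is None or cost < best[1]):
            best = (mapping, cost)
    return best

mapping, cost = synthesize(candidates)
print(mapping, cost)
```

A real MILP solver replaces this exponential enumeration with binary selection variables and linear constraints, which is exactly what makes larger design spaces tractable in the approach described above.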
- 16:25 Linking Relational Concept Analysis and Variability Model within Context Modeling of Context-Aware Applications
- The development of context-aware applications is a complex process that involves context management. A context life cycle comprises four essential steps: context acquisition, context modeling, context reasoning, and context dissemination. In our previous work, we tackled the issues of context modeling and reasoning by proposing approaches based on Relational Concept Analysis (RCA) and Description Logic (DL), respectively. A main aspect neglected in context models is context variability, which we define as the range or set of context values of an environment along which that environment changes. Context variability can benefit from software product lines (SPLs). For this reason, this paper proposes an approach describing the relationship between an RCA-based context model and a feature model, in order to describe the variability of the contexts in which the software of a software product line is used. Thus, we represent a feature model as an ontology to obtain a semantic model with the Protégé ontology editor. We also define context rules, derived from the context model and based on SWRL (Semantic Web Rule Language), that we apply to SPL configurations. We explain the inferred reasoning justifications obtained from the Pellet reasoner and detail the implementation of our proposed approach.
- 16:50 Model-based Engineering of Autonomous Systems using Ontologies and Metamodels
- Our research focuses on engineering processes for the construction of autonomous intelligent systems with a holistic life-cycle view, by means of a model-based framework. The conceptual core of the framework is ontologically driven. Our ontological approach consists of two elements. The first is a domain Ontology for Autonomous Systems (OASys) to capture the structure, function, and behaviour of autonomous systems. The second is an Ontology-driven Engineering Methodology (ODEM) to develop the target autonomous system. This methodology is based on Model-Based Systems Engineering and produces models of the system as core assets. These models are used throughout the whole system life-cycle, from implementation and validation to operation and maintenance. On the application side, the ontological framework has been used to develop a metacontrol engineering technology for autonomous systems, the OM Engineering Process (OMEP), to improve their runtime adaptivity and resilience. OMEP has been applied to a mobile robot in the form of a metacontroller built on top of the robot's control architecture. It exploits a functional model of the robot (the TOMASys model) to reconfigure its control if required by the situation at runtime. The functional model is based on a metamodel of controller function and structure using concepts from the ontology. The metacontroller was developed using the ontology-driven methodology and a robot control reference architecture.
3D2: Sensors and Systems III
- 16:00 Flooded Streets - A Crowdsourced Sensing System for Disaster Response: A Case Study
- This study concerns the disastrous flooding of the Indian metropolitan area of Chennai, when rainfall nearly broke a 100-year record: 374 mm of rain fell on December 1, 2015, approaching the November monthly average of 407.4 mm in a single day. This city, with a population of 6.7 million people, came to a standstill. Astonishingly, one of the biggest software development hubs in India had virtually no data available to identify which parts of the city were most affected and vulnerable to such climate phenomena. Three software engineers - Arun Ganesh, Sanjay Bhangar, and S Aruna - instantly came up with the idea of a crowdsourced Flood Map tool to help the citizens of Chennai and prevent further casualties. They developed a map-based tool to report flooded streets using OpenStreetMap (OSM) data. Using this Flood Map, anyone in the crowd could click on a street they knew to be flooded and update the map information. Within the next 24 hours, over 2,500 streets had been reported as flooded by the citizens of Chennai using the Flood Map tool. An ordinary citizen could zoom into a locality, visualize which streets were reported as flooded, and decide their next course of action. The map was also a valuable tool for relief and aid workers, allowing them to track flooded paths and provide appropriate aid in those areas. The map consists of a base layer of low-lying areas created using elevation models from ISRO and NASA, and flooded areas from UNITAR. The map interactivity was built using Mapbox GL, and the tool is hosted on GitHub. This crowdsourced sensing system is an extraordinary example of disaster response using the crowdsourcing concept, which potentially saved many lives with minimal time and resources but great crowd contributions from both experts and non-experts.
- 16:25 Introduction of Driver's Delay into "Model Checking" for Verification of Safe Interactions Between a Driver and an Automated Driving System
- This study introduces human drivers' delay behavior into model checking to verify the interaction between a driver and an automated driving system (ADS). An ADS interacts with humans, including other drivers and pedestrians, to ensure safety on roads. In particular, human drivers of Level 3 ADSs must take control of the vehicle when safety may be compromised. Since human driver behavior is sometimes unpredictable, it may cause accidents with other automated vehicles on the same roads, even though automated vehicles have been introduced to decrease accidents. Hence, the interaction of automated vehicles with human drivers and their behavior should be taken into consideration to better understand when the interactions might fail. We build a model using communicating sequential processes (CSP) to analyze those interactions, based on an architecture of the system of systems related to automated vehicles expressed in SysML (Systems Modeling Language). By using CSP, which is designed for analyzing the processes of a concurrent system, the interactions between the ADS and the driver's behavior during driving can be analyzed. Wait processes and atomic processes in CSP are used to express the driver's processes with or without delay. We demonstrate that the CSP model can express the driver's delay and that model checking can verify the interactions between the ADS and the driver's behavior.
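CSP model checking explores all interleavings exhaustively; as a rough, non-exhaustive illustration of why the delay matters, here is a small Monte-Carlo sketch. The 10 s takeover deadline and 3 s mean reaction delay are invented figures, and this is a simulation, not the authors' CSP model.

```python
import random

def takeover_outcome(request_time, deadline, delay):
    """The driver regains control at request_time + delay; safe iff within the deadline."""
    return "safe" if request_time + delay <= deadline else "unsafe"

def simulate(trials=10_000, deadline=10.0, mean_delay=3.0, seed=42):
    """Monte-Carlo estimate of late takeovers for exponentially distributed reaction delays."""
    rng = random.Random(seed)
    late = sum(
        takeover_outcome(0.0, deadline, rng.expovariate(1.0 / mean_delay)) == "unsafe"
        for _ in range(trials)
    )
    return late / trials

print(f"estimated probability of a late takeover: {simulate():.3f}")
```

Where this sampling only estimates the failure rate, the CSP wait/atomic processes in the paper let a model checker prove whether an unsafe interleaving exists at all.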
- 16:50 Modeling and global MPPT for PV system under partial shading conditions using modified artificial fish swarm algorithm
- Due to the non-linear I-V characteristics of the photovoltaic (PV) curve, tracking the maximum power point (MPP) under partial shading conditions (PSCs) can be a challenging task. This paper presents a global MPPT (GMPPT) technique for PV systems under PSCs using a modified artificial fish swarm algorithm (MAFSA). MAFSA first introduces the velocity inertia, the memory capacity of each individual, and the learning and communication capacity of PSO into the AFSA; as a result, MAFSA has five kinds of behavior pattern: swarming, following, remembering, communicating, and searching. Furthermore, according to the average distance between each artificial fish and the five other artificial fish in its neighborhood, the visual range and step size of each artificial fish are adaptively recalculated before each iteration to improve the convergence of the AFSA. Combining the searching capabilities of PSO with the self-learning ability of the adaptive visual range and step size of AFSA, a GMPPT technique based on MAFSA is developed. To validate the effectiveness of the novel GMPPT technique, the PV system under PSCs, along with the proposed technique, is simulated using the Matlab/Simulink Simscape toolbox. Experimental results show that the proposed technique outperforms other GMPPT methods for PV systems under PSCs.
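The core difficulty, a P-V curve with several peaks, and the swarm remedy can be sketched as follows. The two-peak curve is synthetic, and the update rule is a greatly simplified fish-swarm caricature (greedy follow/search only), not the authors' MAFSA with adaptive visual range and step.

```python
import math, random

def pv_power(v):
    """Synthetic P-V curve under partial shading: local peak near 12 V, global near 30 V."""
    return 40.0 * math.exp(-((v - 12.0) ** 2) / 8.0) + 65.0 * math.exp(-((v - 30.0) ** 2) / 18.0)

def swarm_mppt(n_fish=20, visual=6.0, step=2.0, iters=60, seed=1):
    """Greatly simplified fish-swarm search for the global maximum power point."""
    rng = random.Random(seed)
    fish = [rng.uniform(0.0, 40.0) for _ in range(n_fish)]
    best_v = max(fish, key=pv_power)
    for _ in range(iters):
        for i, v in enumerate(fish):
            mates = [u for u in fish if u != v and abs(u - v) <= visual]
            # follow the best mate in the visual range, else search randomly
            target = max(mates, key=pv_power) if mates else v + rng.uniform(-step, step)
            trial = v + step * (1.0 if target > v else -1.0) * rng.random() if target != v else v
            if pv_power(trial) > pv_power(v):  # greedy acceptance
                fish[i] = trial
        best_v = max(best_v, max(fish, key=pv_power), key=pv_power)
    return best_v

print(f"estimated global MPP voltage: {swarm_mppt():.1f} V")
```

A conventional hill-climbing (perturb-and-observe) tracker started near 12 V would stay on the local peak; the population spread over the whole voltage range is what lets the swarm find the global one.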
- 17:15 Controlled Self-Organization in Smart Grids
- With the development of energy systems into so-called Smart Grids comprising substantial amounts of distributed and renewable energy systems (DER), the coordination tasks of all relevant actors - grid operators, system operators, and energy aggregators acting as trading parties at the market - change tremendously. This results in new requirements on the control systems used for the management of operating resources and DER. First, the integration of a huge number of these units into control systems incurs costs due to engineering effort, raising the need for an efficient integration concept. Second, with the enlarged number of energy units, the complexity of the optimization problems increases to such an extent that given time limits (e.g. for grid operation tasks) might not be met. Distributed algorithms and multi-agent systems have been studied in Smart Grid research for several years to address both issues: distributed algorithms, often inspired by self-organization theory, show the needed scalability and good applicability for several use cases relevant to DER management and control. Agent-based generalization of energy units has been shown to be a good means of abstracting from technological specifics and thus reducing engineering overhead, allowing for fast and cheap enlargement of existing DER aggregations such as Virtual Power Plants (VPPs). When stepping further from research projects to application in the field, though, some legitimate concerns are raised by potential system users such as grid operators or VPP aggregators: self-organizing systems can show effects of strong emergence, i.e. unintended behavior that cannot be foreseen during system design and manifests itself during runtime only. Especially in the energy system, a critical infrastructure, these effects might be unacceptable. For some use cases, such systems might not even be permitted, due to prequalification processes involving the definition of guaranteed behavior.
To overcome these issues, self-organizing systems might be embedded or even encapsulated in a context that fulfils the requirements regarding full controllability. In the research area of organic computing, the so-called observer/controller (O/C) architecture has been developed and evaluated for different application areas. With the system ISAAC, developed by Particon GmbH, this idea has been applied to distributed energy unit control. For this, we developed a system based on the fully distributed heuristic COHDA, used for the scheduling of DER. We implemented this heuristic using software agents. The resulting multi-agent system has been embedded in an O/C architecture, thus combining the scalability and generalizability of the distributed multi-agent system with the benefits of conventional control systems. To our knowledge, the resulting system ISAAC is the first commercially available software capable of conjointly optimizing hundreds of DER units while fulfilling the application area's requirements. In the full contribution we will first give an introduction to the use cases addressed by ISAAC and the underlying optimization problem. Then we show how we address this problem algorithmically. In the next step we will discuss the requirements in more detail and point out how we handle them by embedding the aforementioned algorithm in an O/C architecture.
REFERENCES
- H.-J. Appelrath, H. Kagermann, and C. Mayer, Eds., "Future Energy Grid. Migration to the Internet of Energy", acatech STUDY, Munich, 2012.
- N. A. Lynch, "Distributed Algorithms", Morgan Kaufmann, 1996.
- A. Nieße and M. Sonnenschein, "A Fully Distributed Continuous Planning Approach for Decentralized Energy Units", in GI-Jahrestagung, D. W. Cunningham, P. Hofstedt, K. Meer, and I. Schmitt, Eds., 2015, pp. 151-165.
- D. K. Hitchins, "Systems Engineering", John Wiley & Sons, 2007.
- H. Schmeck, C. Müller-Schloer, E. Cakar, M. Mnif, and U. Richter, "Adaptivity and Self-Organisation in Organic Computing Systems", ACM Transactions on Autonomous and Adaptive Systems (TAAS), 5(3), 2010.
- C. Hinrichs, S. Lehnhoff, and M. Sonnenschein, "COHDA: A Combinatorial Optimization Heuristic for Distributed Agents", in Agents and Artificial Intelligence, J. Filipe and A. Fred, Eds., Springer, 2014, vol. 449, pp. 23-39.
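The abstract describes agents that jointly schedule DER units toward a common goal. The published COHDA heuristic is fully asynchronous and message-based; as a minimal, centralized illustration of the underlying scheduling problem only, the following sketch lets simulated agents take turns re-selecting the candidate schedule that best complements the others' current choices so that the aggregate power profile approaches a target profile. The function name `schedule_der` and all data are hypothetical, not taken from ISAAC or the paper.

```python
import random

def schedule_der(candidates, target, rounds=10, seed=0):
    """Round-based sketch of distributed DER scheduling.

    candidates[a] is agent a's list of feasible power schedules
    (one value per time step); target is the desired aggregate
    profile. Each agent greedily picks the candidate minimizing
    the squared deviation of the aggregate from the target,
    given the other agents' current choices.
    """
    rng = random.Random(seed)
    # start from a random candidate per agent
    choice = [rng.randrange(len(c)) for c in candidates]

    def aggregate(ch):
        return [sum(candidates[a][ch[a]][t] for a in range(len(candidates)))
                for t in range(len(target))]

    def deviation(agg):
        return sum((a - g) ** 2 for a, g in zip(agg, target))

    for _ in range(rounds):
        changed = False
        for a in range(len(candidates)):
            agg = aggregate(choice)
            # residual profile this agent should try to fill
            rest = [agg[t] - candidates[a][choice[a]][t]
                    for t in range(len(target))]
            best = min(range(len(candidates[a])),
                       key=lambda i: deviation(
                           [rest[t] + candidates[a][i][t]
                            for t in range(len(target))]))
            if best != choice[a]:
                choice[a] = best
                changed = True
        if not changed:  # no agent wants to deviate: converged
            break
    return choice, deviation(aggregate(choice))
```

Unlike this sequential sketch, COHDA exchanges working-memory messages between neighboring agents over an arbitrary communication topology, which is what gives the approach its scalability.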
3D3: Systems Thinking
- 16:00 Data Driven Strategy Implementation in Engineering
- Companies in the Engineer-To-Order business face many challenges and thus launch numerous improvement projects, often driven from an isolated point of view that focuses only on individual improvement potentials. Strategic alignment and prioritization of these improvement projects therefore pose a great challenge for management. As a result, the intended improvement targets often are not achieved and improvement projects are terminated prematurely. This paper presents a pilot application of a data-driven strategy implementation (DDSIM) approach at a Siemens business unit. Such a data-driven approach defines how tasks can be optimally supported by technical data. We provide a tool for management to prioritize and control the implementation of an overall improvement landscape in an iterative and incremental way, based on strategic imperatives for engineering. The pilot application shows that DDSIM provides synergies in the analysis phases when several projects are pursued. Further benefits are methodical guidance and improved possibilities for strategic alignment. DDSIM is an appropriate approach for retaining individual business needs and approaches while still following a harmonized improvement approach.
- 16:25 A Holacratic Socio-Technical System Architecture
- A Holacratic Socio-Technical System Architecture (HSTSA) introduces Holacratic Engineering Management (HEM), a proposed new systems engineering and engineering management process model. The purpose of this research study is to determine whether HEM, arising out of the agile software and agile systems engineering disciplines, delivers on the promise of self-managing, self-organizing, adaptable, resilient, and more efficient organizations. By answering the question, "What is the effect of a holacratic engineering management architecture on socio-technical systems' performance?", the utility of HEM is investigated. The Holacratic Business Architecture Measurement Instrument (HBAMI) is applied to companies in multiple industries, grouped by Standard Industrial Classification (SIC), to determine HEM architecture and organizational holacracy levels. Correlation analysis between the HBAMI index and dynamic organizational input structure is explored to evaluate value-added work production. It is expected that more holacratic enterprise engineering architectures yield higher revenue and intellectual property per employee.
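The abstract does not state which statistical tooling the study uses; as a minimal sketch only, the correlation analysis it describes typically reduces to computing a Pearson coefficient between the architecture index and a performance measure. The sample values below are invented purely for illustration.

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical data: HBAMI index vs. revenue per employee (made-up values)
hbami = [0.2, 0.4, 0.5, 0.7, 0.9]
revenue_per_employee = [180, 210, 260, 300, 340]  # kUSD, invented
r = pearson_r(hbami, revenue_per_employee)
```

An r close to +1 for such data would be consistent with the study's expectation that more holacratic architectures accompany higher revenue per employee, though correlation alone would not establish causation.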
- 16:50 Target-oriented Implementation of Systems Engineering in Medium-Sized Enterprises
- The machinery and plant engineering industry and related industries are undergoing a massive shift from classic mechanics-centered products to mechatronics. Technical systems of tomorrow will go beyond current mechatronics by incorporating inherent intelligence. Information technology and non-technical disciplines such as cognitive science, neurobiology, and linguistics are developing a variety of methods, technologies, and procedures that integrate sensors, actuators, and cognitive functions into technical systems. Such systems are called Intelligent Technical Systems. Intelligent technical systems make products and production systems more user-friendly, reliable, and resource-efficient, with the benefit stemming from interaction between different components and technologies. This places high demands on the product development process, such as the need for a comprehensive understanding of the system and consideration of the full product life cycle. The development of these systems can no longer be analyzed from the perspective of an individual specialist discipline; the established discipline-specific methodology reaches its limits as it does not consider the interaction of the disciplines involved. Furthermore, interdisciplinary approaches for the design of mechatronic systems, such as the VDI guideline 2206 "Design methodology for mechatronic systems", do not meet the challenges of future systems: they do not consider all aspects of complex technical systems collectively. Systems engineering (SE) is an approach with the potential to fulfill these requirements. However, until now it could not be applied across a wide range of industries and segments, especially in small and medium-sized enterprises (SMEs). The cross-section project "Systems Engineering it's OWL" of the Leading-Edge Cluster Intelligent Technical Systems OstWestfalenLippe (it's OWL) addresses these challenges.
The aim is to support a performance improvement by using systems engineering, particularly for machinery and plant engineering enterprises. Within the Leading-Edge Cluster it's OWL, transfer projects allow small and medium-sized enterprises to take advantage of the it's OWL technology platform and gain access to methods, tools, software modules, and prototype solutions from the technology platform. In the paper we describe one transfer project with the objective of defining a target-oriented systems engineering process. Based on identified fields of action, relevant SE approaches were derived and tailored.