For full submission instructions, visit the AUTOTESTCON 2016 website: http://autotestcon.com/

Program for 2016 IEEE AUTOTESTCON

Schedule at a glance (rooms are listed in the detailed program below):

Sunday, September 11

02:00 pm-05:00 pm: Registration

Monday, September 12

07:00 am-06:00 pm: Registration; Coffee Break
08:00 am-12:00 pm: Diagnostics and Design for Built-In Test; Automatic Testing from A to Z
09:45 am-10:00 am: Coffee Break
12:00 pm-01:00 pm: Lunch
01:00 pm-05:00 pm: VXI, PXI, IVI, LXI and AXIe Standards; ATE and TPS Management
02:45 pm-03:00 pm: Coffee Break
06:00 pm-07:30 pm: Wine & Cheese Welcome Reception
07:00 pm-11:00 pm: NFL Football Party

Tuesday, September 13

07:00 am-05:00 pm: Speaker Breakfast; Registration
07:30 am-08:30 am: Coffee Break
08:00 am-09:30 am: Keynote: An Executive Perspective of the Test & Maintenance Industry
09:30 am-10:00 am: Coffee Break
10:00 am-12:00 pm: DoD Executive Plenary Panel
12:00 pm-01:30 pm: Lunch
03:30 pm-05:00 pm: 1C4: Test Tools; 1A4: TPS Technical Approaches; 1B4: Test Cost Drivers and Optimization Approaches; 1D4: Novel ATE Approaches
06:00 pm-08:00 pm: Exhibitors' Reception

Wednesday, September 14

07:00 am-05:00 pm: Coffee Break; Speaker Breakfast; Registration
08:00 am-09:30 am: 2C1: Software and Simulation Testing; 2A1: Advances in ATE Technology; 2B1: Virtual Instrumentation & Switching; 2D1: Prognostics and Health Monitoring 1
09:30 am-10:00 am: Coffee Break
10:00 am-11:30 am: 2C2: Component-level Testing; 2A2: Management Topics; 2B2: Networking Approaches for Test; 2D2: Prognostics and Health Monitoring 2
12:00 pm-01:30 pm: Awards Luncheon
01:30 pm-03:00 pm: 2C3: Test Techniques 1; 2A3: Software Advances in ATE; 2B3: Design for Testability Panel; 2D3: Advanced Instrumentation Approaches
03:00 pm-03:30 pm: Ice Cream Break
03:30 pm-05:00 pm: 2C4: Test Interface Solutions; 2A4: Design For Testability; 2B4: Panel: 2016 Outlook of Modular Instrumentation in the T&M Industry; 2D4: Interesting Instrumentation Techniques
06:00 pm-10:30 pm: Networking Dinner

Thursday, September 15

07:00 am-12:00 pm: Speaker Breakfast; Registration; Coffee Break
08:00 am-09:30 am: 3C1: Electro-Mechanical Test; 3B1: Test Data Management & Security; 3D1: High Frequency Testing
09:30 am-10:30 am: Coffee Break
10:30 am-12:00 pm: 3C2: Test Techniques 2; 3D2: Life Cycle Management Topics

Sunday, September 11

Sunday, September 11, 14:00 - 17:00

Registration

Room: Registration Counter

Monday, September 12

Monday, September 12, 07:00 - 08:00

Coffee Break

Room: TBD

Monday, September 12, 07:00 - 18:00

Registration

Room: Registration Counter

Monday, September 12, 08:00 - 12:00

Automatic Testing from A to Z

Room: Monorail AB

This Tutorial provides a complete overview of the world of ATE from a practical engineering and management viewpoint. Beginning by examining the ATE interfaces and their limitations, it offers managers and project engineers a quick and purposeful insight into the probable sources and causes of potential technical and management problems. Working from the interfaces, the Tutorial explores analog and digital test methods, examines the impact of new instrument technologies and covers the basics of switching systems and pin electronics.

The Tutorial will explore the elements of ATE SW, examining the role of each and the limitations that they impose at the system level. ATE languages will also be discussed, and the different language types analyzed to determine their effect on ATE and TPS performance.

Software now makes up over 50% of almost all military systems, so no discussion of Automated Testing would be complete without exploring the need to consider SW testing as an integral part of the ATS environment. The Tutorial will discuss the impact of the growth in SW, look at some catastrophic examples of what happens when software is inadequately tested, and discuss test requirements and methods.

The Tutorial will conclude with a discussion of recent changes in DoD acquisition strategies and their potential impact on the future of ATE. Interoperability, net-centric operations, nanotechnology and smart sensors are high on OSD's wish-list for new systems and will become an inherent part of the test and maintenance process. Explore DoD's vision of the next generation of systems, where Test & Evaluation, Condition Based Maintenance, Training and Battle Damage Assessment become by-products of a distributed hierarchical, real-time information network. The future may be closer than you think!

Diagnostics and Design for Built-In Test

Room: Castle AB

This tutorial combines materials from two previous tutorials taught at IEEE AUTOTESTCON for many years. It provides attendees with a comprehensive overview of the challenges for Diagnostics and for implementation solutions through built-in [self] test (BI[S]T), often called embedded test.

The diagnostics section of this Tutorial provides an overview of traditional and more recent approaches to system-level diagnosis and prognosis. The emphasis is placed on different system modeling approaches and the algorithms that can be applied using resulting models. The Tutorial will review the basic issues and challenges in system diagnosis and prognosis. Fundamental terms and concepts of fault diagnosis will be presented with focus being given to historical approaches and the needs from the perspectives of the Department of Defense. Recent initiatives such as DoD ATS Framework, ARGCS, and ATML will also be introduced.

Central to this part of the Tutorial will be a continuing discussion of how one handles uncertainty in the diagnostic and prognostic process. It will include recent developments in applying Bayesian techniques and extensions such as hidden Markov models and dynamic Bayesian networks to fault diagnosis and prognosis. Prognosis will be related to the diagnosis problem in the context of "predictive" classification, and Bayesian extensions will be discussed.

Health Management Information Integration will also be addressed and will focus on using formal models, called ontologies, to define the semantics of the required information and then focus on processes for maturing diagnostic applications as maintenance information is collected. Throughout the discussion, the Tutorial will draw upon experiences of the instructors and participants to highlight issues related to diagnostic development within defense and commercial environments.

With increased circuit and system complexity in recent years, almost every test approach has had to settle for lower fault coverage and more difficult diagnoses, all at greater cost. The notable exceptions are BIT, BIST and embedded test, techniques that are combined in our discussions. BIST is a phenomenon that capitalizes on greater circuit complexity (intelligence) and better fault isolation from a hierarchical allocation of tests, and does not rely on costly external automatic test equipment (ATE) and test program sets (TPS). Can ATE be eliminated from servicing and repairing units under test (UUTs) in the field? Testability features introduced through the JTAG/IEEE-1149.1 boundary scan enable in situ non-interfering observability of signals even as the system is performing its normal mission (i.e., while the airplane is flying). With internal BIST in many memories and processors, one can assess system-level health and achieve the diagnostic resolutions discussed in the first part of this tutorial. The recently introduced IEEE-1687 provides hierarchical test capabilities allowing system-level access not only to board signals but to registers within the ICs themselves. Those registers can store health status of the chip, thus enabling not only IC test at the system level but also fault isolation to the IC itself. If the faulty IC is a large part of the board or subsystem cost, we can decide to forgo repairing the replaced item altogether.

While the technology is arguably available to eliminate ATE and TPS while achieving the same or better levels of fault detection and fault isolation for increasingly more complex circuits, it does not come without costs or considerations. Embedded test or BIT requires purposeful design activities at the earliest possible development stages. Design for testability (DFT) has to be considered at the conceptual design stage, even before specifications are finalized. The test and design engineers must work together to achieve a supportable system, which may include commercial off the shelf (COTS) as well as custom designs. A new management paradigm needs to be implemented in which test is part of the design. The tutorial will discuss the management aspects of such a paradigm, which incidentally were outlined years ago in MIL-STD-2165 (now MIL-HDBK-2165).

This Tutorial is aimed at professionals in all areas of support, including reliability, maintainability and logistics, as well as engineers and managers from design, test, and quality assurance.

Monday, September 12, 09:45 - 10:00

Coffee Break

Room: TBD

Monday, September 12, 12:00 - 13:00

Lunch

Room: Sleeping Beauty

Monday, September 12, 13:00 - 17:00

ATE and TPS Management

Room: Monorail AB

This four-part Tutorial is designed to cover the controversial and challenging issues of managing ATE and TPS development. This session is a must for all industry and government ATE/TPS managers. As with the morning ATE session, it focuses on real world situations and explores areas of frequent problems.

Part I - TPS Acquisition - Tony Conard, US Navy This part explores processes and challenges facing the government Acquisition Manager, providing in-depth insight on acquisition topics and processes, from Acquisition Planning (including the implementation of Systems Engineering) and RFP development through Acquisition oversight, testing and fielding. The NAVAIR Generic OTPS RFP (NGOR) is a crucial component of the NAVAIR acquisition process that provides a standard tailorable RFP for the procurement of Operational Test Program Sets (OTPSs). This session covers requirements and issues faced by the military in acquiring TPSs from a Navy perspective; however, the acquisition topics and challenges covered focus on areas that are common to DOD OTPS procurements and management.

Part II - TPS Development Management - Craig Stoldt, BAE Systems This session will discuss the various challenges that a TPS development program must overcome to be successful. We will define the measurable objectives to be obtained in the technical, schedule and quality arenas. These objectives can only be met through the management of resource availability. Each discipline involved requires timely access to documentation, physical assets and various support personnel. We will outline the flow of this development process from contract inception through the phases of TRD design, review cycles, ATE acquisition, ITA fabrication, software coding, personnel scheduling and acceptance. As each contract or internal project is different, this modular approach should help the user assess those areas that are pertinent to their needs and apply the "lessons learned" presented to facilitate a successful TPS development project.

By planning a TPS program as if it were its own design and development project, an organization can avoid the loosely controlled notion that test development is just the tail end of the "real" development effort for the prime hardware. A TPS requires all of the same disciplines that are managed in the development of an avionics box, and often brings additional challenges imposed by the availability of key assets and documentation. This session will highlight the phases of development and the points in the process that can be assessed for review to prevent false starts and major cost and schedule impacts.

Part III - Managing in a Dynamic Environment - Rick Foyt, US Marine Corps This part discusses the various reasons a TPS Engineer, Quality Engineer and Management will encounter changes with the development platform and environment during TPS Development. It will focus on the various management tools, programmatic actions and options that are available to the TPS & ATE developers and managers. Each of the ATE change categories from relatively simple to a system in Engineering Development will be reviewed and discussed along with evaluating the corresponding impact to TPS management and product acceptance of both the ATE and the TPS baseline.

It will provide a candid discussion of the magnitude of the challenges from each type of change category, along with the options of how to best overcome them with minimal impact to the project. Real world examples experienced by the US Marine Corps will be shared, highlighting the final results. Options and examples of TPS product acceptance and lessons learned will also be shared.

Part IV - Depot TPS/ATE Management - Mark J. Cain, US Air Force This session covers the tasks and challenges faced by the USAF in managing its Depot ATE and associated TPSs. It will lead attendees through the roles and responsibilities of a variety of stakeholders involved in the management of these two interdependent commodities. Included will be the distinct paths available to the Depots and their customers for replacement or acquisition of Depot ATE and TPSs. One avenue to be explored will be that of the Capital Improvement Program (CIP) process. The CIP is frequently used in USAF Depots when replacing obsolete ATE and re-hosting the associated TPSs. We will also explore how Depot ATS capability may be acquired using the Depot Maintenance Activation Planning (DMAP) process, used when transitioning or starting up a Depot repair workload from a weapon system OEM or Prime. For weapon system, supply chain, Depot, and product group managers, this session will seek to provide information on how Depot ATE and TPSs are managed throughout their lifecycle.

In addition to the four major parts covered, the tutorial instructors represent four major players in TPS and ATE, namely, the US Navy, a civilian contractor, the US Marine Corps and the US Air Force. Questions peculiar to any of these entities can be addressed by someone close to the issue.

VXI, PXI, IVI, LXI and AXIe Standards

Room: Castle AB

The VXIbus architecture was introduced 27 years ago, and is currently a well-established architecture used extensively in military, aerospace and commercial applications. However, many test engineers have no personal experience with it, or would like to brush up on its basics, as it will be around for another 10-20 years. We will cover the approval in 2004 of the VXI-1 Rev 3.0 spec, which again doubles the backplane speed to 160MB/s. And we will cover the approval of VXI 4.0 and its improvements in speed and flexibility. VXIplug&play standards are the software equivalent to the VXI hardware specifications, and are the definition to which all VXI drivers are now written. This software standard has formed the bedrock for many other software developments, such as Interchangeable Virtual Instrument (IVI) drivers.

PXI is a newer, more compact, faster hardware standard based on CompactPCI. It applies the same extensions to CPCI that VXI did to VME. This modular instrument standard rapidly gained acceptance and can be viewed as a companion standard to VXI (or by some as a replacement). This 16-year-old hardware standard will be discussed in detail, as will its expected impact on the market. An update will be provided on Enhanced PXI specifications and their implementation, including Low Power Chassis. PXI Express and PXI MultiComputing will be explained with a review of PXI Express products and their potential applications.

The Interchangeable Virtual Instrument (IVI) software standard, which has been extensively revised and expanded, will be covered with the latest information available. The IVI Foundation was founded in 1998 and incorporated in 2001. The purpose of the IVI Foundation is to promote specifications for programming test instruments that simplify interchangeability, provide better performance, and reduce the cost of program development and maintenance. IVI instrument drivers have been available for about 13 years. New specifications for Digital Test, Counter/Timer, and Signal Oriented test plus LXI triggering and sync will also be discussed.

The LXI Consortium is 12 years old now, and was formed to standardize the way instruments can be connected and controlled via the Internet in a Local Area Network. Extensions for discovery, triggering and synchronization, a browser interface, initialization, and programming are all being considered in this standardization effort. We will introduce the latest release of the LXI Specification as well as new LXI-compliant products that are now available. The LXI Consortium is the first T&M standards organization to release a reference design, LXI Reference Design V1.0.
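
As a concrete illustration of the LAN-based instrument control the paragraph above describes, here is a minimal sketch using the PyVISA library. The IP address and SCPI commands are placeholders chosen for illustration; the exact command set depends on the instrument, and this is not an excerpt from the tutorial itself.

```python
# Minimal sketch of controlling a LAN-connected (LXI-style) instrument with PyVISA.
# The IP address and SCPI commands below are illustrative placeholders.
import pyvisa

rm = pyvisa.ResourceManager()
# VISA resource string for a LAN-connected instrument (address is hypothetical).
dmm = rm.open_resource("TCPIP0::192.168.1.50::INSTR")
dmm.timeout = 5000  # milliseconds

print(dmm.query("*IDN?"))            # identify the instrument
dmm.write("CONF:VOLT:DC 10,0.001")   # configure a DC voltage measurement (SCPI varies by vendor)
reading = float(dmm.query("READ?"))  # trigger and fetch one reading
print(f"Measured {reading:.6f} V")

dmm.close()
rm.close()
```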

An emerging test and measurement standard called AXIe, AdvancedTCA eXtensions for Instrumentation, is expected to find wide acceptance within the Automatic Test Equipment community as it offers many key benefits. It is expected that a large number of COTS (commercial off-the-shelf) signal conditioning, acquisition and processing modules will become available from a range of different suppliers. AXIe uses AdvancedTCA® as its base standard, but then leverages test and measurement industry standards such as PXI, IVI, and LXI, which were designed to facilitate cooperation and plug-and-play interoperability between COTS instrument suppliers. This enables AXIe systems to easily integrate with other test and measurement equipment. AXIe's large board footprint, available power and efficient cooling to the module payload allow high density in a 19" rack space, enabling the development of high-performance instrumentation in a density unmatched by other instrumentation form factors. Channel synchronization between modules is flexible and provided by AXIe's dual triggering structures: a parallel trigger bus, and radially-distributed, time-matched point-to-point trigger lines. Inter-module communication is also provided with a local bus between adjacent modules allowing data transfer rates up to 10 Gbits/s in each direction, for example between front-end digitizer modules and DSP banks. AXIe is a next-generation, open standard that extends AdvancedTCA® for general purpose and semiconductor test. First specifications were released in June 2010, and a 12-bit, 8-channel AXIe digitizer was selected as the 2013 TM Best in Test winner in the signal analyzer category.

This comprehensive update on the development of commercial standards for the ATE community should not be missed by anyone concerned with current and future ATE systems design and integration.

Monday, September 12, 14:45 - 15:00

Coffee Break

Monday, September 12, 18:00 - 19:30

Wine & Cheese Welcome Reception

Room: Sleeping Beauty Pavilion

The 2016 IEEE AUTOTESTCON Committee is pleased to announce that Teradyne Corporation is sponsoring a Conference-wide Wine and Cheese Welcome Reception on Monday evening September 12, from 6:00 PM to 7:30 PM in the Sleeping Beauty Pavilion lounge area.

Open to all Attendees, Authors and Exhibitors, the Welcome Reception is a great opportunity to rekindle old relationships, network for new relationships, and generally set a great start for IEEE AUTOTESTCON -- the premier military automated test systems Conference.

With over 100 presentations and over 50 exhibitors across the ATE spectrum, the week promises an opportunity for all to gain knowledge and create mutually beneficial relationships.

IEEE AUTOTESTCON thanks Teradyne for their sponsorship of this start-the-week event and hopes everyone has a great 2016 conference.

Monday, September 12, 19:00 - 23:00

NFL Football Party

Rooms: Magic Kingdom 1, Magic Kingdom 4

Monday Night Football at IEEE AUTOTESTCON! An AUTOTESTCON first, thanks to sponsor Astronics Test Systems. Monday, September 12, 7:00 PM to 11:00 PM, Magic Kingdom 1&4 Ballroom. Watch the San Francisco 49ers play the Los Angeles Rams on the big screen! Full open bar! Game starts at 7:20 PM. Philly Cheese Steak buffet bar plus an assortment of delightful snacks, AND a 2-piece band for extra entertainment during timeouts. Attire: Sports casual.

Tuesday, September 13

Tuesday, September 13, 07:00 - 17:00

Registration

Room: Registration Counter

Tuesday, September 13, 07:00 - 07:30

Speaker Breakfast

Room: Monorail C

Tuesday, September 13, 07:30 - 08:30

Coffee Break

Tuesday, September 13, 08:00 - 09:30

Keynote: An Executive Perspective of the Test & Maintenance Industry

Rooms: Magic Kingdom 1, Magic Kingdom 4

What is the current status of our industry and its customer base and how is it expected to change from a business and technology viewpoint in the near future? Top industry executives discuss their perspectives and welcome your questions.

• "Introduction - A View From the Bleachers" - Mike Ellis, AUTOTESTCON Technical Program Chair • "The Future of Test and Maintenance is Distributed. What are the Consequences?" - Dr. Fred Blӧnnigen CEO Bustec • "The Future Direction of Test, Maintenance and Instrumentation - A Supplier's Perspective " - Michael Dewey, Director of Marketing, Marvin Test Solutions • "ATE Trends Circa 2016" - David J. Salisbury, Director, Business Development, Northrop Grumman Electronic Systems • "The Future of Advanced ATS Sustainment" - Christopher Geiger, Technical Director, Integrated Test and Logistics, Lockheed Martin

Tuesday, September 13, 09:30 - 10:00

Coffee Break

Room: Exhibit Hall

Tuesday, September 13, 10:00 - 12:00

DoD Executive Plenary Panel

Rooms: Magic Kingdom 1, Magic Kingdom 4

A panel of automatic test system technical experts representing the multiple military services will offer vision and insights focused on Test Program Set development and sustainment environments, requirements, and current and future challenges. The Panel will address TPS development trends, commercial standards needed and will provide TPS program summaries for each of the military Services.

Moderator: Bill Ross, Eagle Systems, NAVAIR ATS Support

Panelists:
• Tony Conard - US Navy, TPS Acquisition Team Lead, NAVAIR Jacksonville
• Mike Malesich - US Navy, Head, ATE Software Branch, Support Equipment Division, NAVAIR Lakehurst
• Joseph Francis - US Air Force, ACS Directorate, ATS Engineering Division, Warner Robins, GA
• Rick Foyt - US Marine Corps, APS Technical Program Officer, Automatic Test Equipment Program (ATEP), MDMC, Albany, GA

Tuesday, September 13, 12:00 - 13:30

Lunch

Room: Exhibit Hall

Tuesday, September 13, 15:30 - 17:00

1A4: TPS Technical Approaches

Room: Magic Kingdom 1
Chair: Noah De La Hunt (NAVAIR, USA)
The Practical Aspects of TPS Resource Data Discovery
Larry V. Kirkland (WesTest Engineering, USA)
Performing a complete and accurate desktop analysis of a Test Program Set (TPS) with all the supporting data can be an extremely exhaustive experience. The lack of true TPS transparency has plagued the world of test and diagnosis for decades. Program managers and users have a need to know exactly how the TPS works and how the Automatic Test Equipment (ATE) resources are allocated. It is fundamental to automatically make available a total envelope of TPS instrument usage and determine or make suggestions about TPS resource allocation considerations or facts. Exposing TPS facts which are somewhat hidden and providing guidance to aid in the determination of planning and support is important for process improvement. The evaluation of ATE resource allocation for a group of TPSs will aid in ATE design engineering. TPS resource transparency needs to be made available to all high level users and managers. Those who use a TPS and those who manage or oversee TPSs should have the resource metadata readily available to evaluate the TPS to know things like resource allocation usage and how the resources are used to expose TPS instrument requirements for future development and support. There are many pertinent and critical aspects which pertain to instrument settings and usage. Instrument or resource evaluation for a TPS is a much needed notion to judge test program performance and long term support. ATE resource utilization, selection, and recurrent problems of specific instruments, programming techniques or instrument settings can be revealed. There is a potential to refine the way a unit is tested, how resources are allocated and if resources can be optimized. Optimal resource allocation can potentially lower test time, solve TPS weaknesses, and keep current with technology to reduce long term support costs. An emulator can reveal run-time inefficiencies, range settings, limit levels, check program flow, allow assigning values to TPS variables, etc. The comprehensive information contained in the TPS and supporting data can serve to expose under and over utilized test equipment, proper resource selection, and many other issues which determine the quality of the TPS and ATE resources. Software programmable algorithms could expose facts automatically. A TPS developed by different engineers can and probably will utilize different instruments and/or instrument settings to perform some tests. The optimal use of instrumentation can be seen by RTOK rates, diagnostics, optimal measurements and glitches. There will always be some similarities in a TPS developed by different engineers but optimizing resource allocation is vital. An automated analysis of TPS resource usage data does provide valuable information, but there can be questions about whether or not the TPS developer allocated the ATE resources properly or optimally. It is a fact that TPS developers vary in skill level, and there can be profound differences in how resources are allocated. Relying on improperly allocated resources can produce superfluous results. This paper will cover the practical aspects of TPS resource metadata. Also discussed is the availability of metadata and how to derive this metadata.
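
To make the idea of mining TPS resource metadata concrete, here is a small sketch that rolls up instrument usage counts and settings from per-test-step records. The record format, instrument names, and settings are invented for illustration; this is not the tooling described in the paper.

```python
# Illustrative roll-up of TPS instrument-usage metadata (record format is hypothetical).
from collections import defaultdict

# Each record: (test_step, instrument, function, range_setting)
steps = [
    ("T010", "DMM1",  "DCV",  "10V"),
    ("T010", "PSU1",  "SRC",  "28V"),
    ("T020", "DMM1",  "DCV",  "100mV"),
    ("T030", "SCOPE", "MEAS", "5V/div"),
    ("T040", "DMM1",  "RES",  "10k"),
]

usage = defaultdict(lambda: {"count": 0, "settings": set()})
for _, instrument, function, rng in steps:
    usage[instrument]["count"] += 1
    usage[instrument]["settings"].add(f"{function}:{rng}")

for instrument, info in sorted(usage.items()):
    print(f"{instrument}: {info['count']} uses, settings={sorted(info['settings'])}")
# Instruments that never appear are candidates for removal from the ATE complement;
# instruments with many distinct settings drive calibration and long-term support cost.
```
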
Testing GPS Systems and Devices with M-Code
Lisa Perdue and Tim Klimasewski (Spectracom, USA)
A major component of GPS modernization, M-Code offers further improvement to the anti-jamming and secure access of radio-navigation signals to the armed forces. M-Code is required for all US DOD applications after FY17. This paper provides an update on the introduction of M-code and the availability of its signal and compatible receivers. It also describes testing M-Code compatible systems by RF simulation for integrators and testers of navigation systems.
Logging: Gaining Access to the Inner Workings of Your TPS
Michael McGoldrick (Teradyne, Inc., USA)
The conversion of test requirements into actual test program sets (TPSs) can be a difficult task in and of itself. When a TPS developer's initial efforts fail, he or she will need to draw upon the debugging tools provided by the instrument vendors and/or the test application development environment in an attempt to gain insight into the cause of the failure. After a TPS is fielded, the resolution of a latent defect in the TPS is made much more difficult: the problem may only exhibit itself on particular test systems; lack of control over the execution of the TPS may make it difficult to bring instrument vendor tools on line at the moment of failure; and the lack of an application development environment on deployed test systems precludes the use of its tools in resolving the problem. The TPS developer's task may also be complicated by the need to include support in the TPS for code unrelated to specific test requirements or the need to perform test validation tasks such as test margin analysis. This paper describes an enhanced test system architecture that includes a general use facility for passing test program data from an executing TPS to one or more support applications running in parallel with the TPS. These support applications may include tools to assist the TPS developer in debugging the TPS, characterizing the unit under test, archiving test results data, and other applications that a test organization may find relevant and helpful. The same test system architecture can also be used by instrument vendors in the design and implementation of the debugging tools that accompany their instrument drivers and other software. The paper also includes a description of a possible implementation of such an architecture, and demonstrates how it can be used to simplify the development of multiple and diverse types of support applications with little to no impact on the development of the TPS itself.
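
The architecture described above, an executing TPS publishing data to support applications running in parallel, can be pictured with a simple in-process publish/subscribe channel. This is only a toy illustration of the concept under stated assumptions (in-process queue, invented step names and limits), not the architecture presented in the paper.

```python
# Toy publish/subscribe channel between a running TPS and a support application.
import queue
import threading

log_channel = queue.Queue()

def tps_execution():
    """Stand-in for an executing TPS that publishes test data as it runs."""
    for step, value, limit in [("VREG_5V", 5.02, 5.25), ("VREG_3V3", 3.41, 3.45)]:
        log_channel.put({"step": step, "value": value, "limit": limit})
    log_channel.put(None)  # end-of-run marker

def margin_monitor():
    """Support application consuming the stream, e.g. for test margin analysis."""
    while (msg := log_channel.get()) is not None:
        margin = msg["limit"] - msg["value"]
        flag = "LOW MARGIN" if margin < 0.1 else "ok"
        print(f"{msg['step']}: value={msg['value']} margin={margin:.2f} [{flag}]")

monitor = threading.Thread(target=margin_monitor)
monitor.start()
tps_execution()
monitor.join()
```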

1B4: Test Cost Drivers and Optimization Approaches

Room: Magic Kingdom 4
Chair: Louis Y. Ungar (A. T. E. Solutions, Inc., USA)
Valuation and Optimization for Performance Based Logistics Using Continuous Time Bayesian Networks
Logan J Perreault, Monica Thornton and John W. Sheppard (Montana State University, USA)
When awarding contracts in the private sector, there are a number of logistical concerns that agencies such as the Department of Defense (DoD) must address. In an effort to maximize the operational effectiveness of the resources provided by these contracts, the DoD and other government agencies have altered their approach to contracting through the adoption of a performance based logistics (PBL) strategy. PBL contracts allow the client to purchase specific levels of performance, rather than providing the contractor with the details of the desired solution in advance. For both parties, the difficulty in developing and adhering to a PBL contract lies in the quantification of performance, which is typically done using one or more easily evaluated objectives. In this work, we address the problem of evaluating PBL performance objectives through the use of continuous time Bayesian networks (CTBNs). The CTBN framework allows for the representation of complex performance objectives, which can be evaluated quickly using a mathematically sound approach. Additionally, the method introduced here can be used in conjunction with an optimization algorithm to aid in the process of selecting a design alternative that will best meet the needs of the contract, and the goals of the contracting agency. Finally, the CTBN models used to evaluate PBL objectives can also be used to predict likely system behavior, making this approach extremely useful for PHM as well.
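
As a much-simplified stand-in for the CTBN evaluation the abstract describes, the sketch below computes steady-state availability for a single two-state (up/down) continuous-time Markov model and compares it to a hypothetical PBL availability target. The rates and target are invented for illustration; the paper's CTBN framework handles far richer, multi-component objectives.

```python
# Two-state continuous-time Markov sketch of a PBL availability objective.
# Failure/repair rates and the contract target are hypothetical illustration values.
failure_rate = 1 / 500.0   # lambda: failures per operating hour (MTBF = 500 h)
repair_rate  = 1 / 12.0    # mu: repairs per hour (MTTR = 12 h)

# Steady-state availability of a two-state up/down process: A = mu / (lambda + mu)
availability = repair_rate / (failure_rate + repair_rate)

pbl_target = 0.97          # hypothetical contracted operational availability
print(f"Predicted availability: {availability:.4f}")
print("Meets PBL target" if availability >= pbl_target else "Falls short of PBL target")
```
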
Reducing the Cost of Test through Strategic Asset Management
Duane Lowenstein (Keysight Technologies, USA); Charlie Slater (Keysight, USA)
For most aerospace and defense companies, test and measurement equipment is one of the largest, if not the largest, capital expenses on their balance sheets. With that said, few companies have a comprehensive, corporate wide program to maximize the utilization and management of test and measurement equipment. Other industries, such as power generation, airlines and foundries, have been able to master optimization and utilization of their capital to maximize their return on investment. This paper will explore the balance of the three aspects that make up asset management and will focus on how to implement strategies to lower the total cost of ownership for test. The three areas of asset management addressed in this paper are: 1. The management of the "real" total number of assets, not only in a lab but across an enterprise. 2. The ability to maximize the optimization and utilization of the assets on a continuous basis. 3. Schemes to develop and implement life cycle strategies for test and measurement assets. The implementation and usage of an asset management program can have huge positive implications, not only on reducing capital costs, but on faster throughput, lower operational expenses, shorter time to market, and even better quality; all of these allow a company to be more competitive in the new firm fixed contract world.
Cost Model for Verifying Requirements
Edward Dou (Raytheon, USA)
Testable requirements are the foundation to any development program. The number of requirements and the technical difficulty of satisfying those requirements are factors that drive program cost and schedule. Being able to quickly assess the scope of requirement verification and costing that activity is essential to the proposal process. For awarded programs, controlling and costing requirements volatility is critical to ensuring sufficient resources to execute the program and meet customer need dates. When considering requirements verification, to include regression testing, a balance is often needed between the cost and the coverage provided. These challenges are commonly encountered during program startup and execution. This paper presents a cost model, Cost Model for Verifying Requirements (CMVR), to assist program managers in quickly assessing the financial impact of verifying requirements as a result of changing (e.g. adding, modifying, and deleting) requirements. Of note, this paper focuses on more formal testing and verification activities, but does not address development and integration aspects. For the CMVR model to provide accurate results, the test team should first fully map requirements to test events. In doing so, requirements should be traced from the stakeholder (e.g. customer requirements) through derived requirements to test objectives and ultimately to test scripts/procedures. Each test script and procedure will need to be assessed to determine the cost (man-hours and duration) to complete the test objective. With the linkage between requirements and test events established, programs can then use the cost model for bidding, evaluating requirements volatility, and developing test sets that optimize the cost-benefit ratio. Bidding: During bidding, requirements are often not fully developed. The CMVR model addresses these ambiguities by providing a portfolio mix (easy, moderate, difficult) based on historical data, enabling program managers to select or alter - similar to tailoring one's 401K plan. Requirements Volatility: Evaluating the impact of requirements volatility on test costs requires assessing development, test setup, execution, and analysis of potential efficiencies that can be leveraged from overlapping tests. Developing Test Sets: With limited time and resources, programs may need to identify a subset of tests to execute (such as for regression testing). Programs will need to determine the focus areas of requirements (depth), the test requirement coverage (breadth), and the critical must-test requirements. This paper concludes by providing a practical example of utilizing the CMVR model and demonstrating how this capability enables quickly assessing cost and schedule impacts due to a change in requirements. In summary, the CMVR cost model provides program managers with an important tool to quickly assess the testing cost of requirements.
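
A minimal sketch of the bookkeeping a CMVR-style assessment relies on, assuming a simple requirement-to-test mapping with per-test hour estimates, is shown below. All requirement IDs, test IDs, and hour figures are hypothetical; the point is only that changing a requirement re-triggers the cost of every test that verifies it, with shared tests counted once.

```python
# Hypothetical requirement-to-test mapping with per-test effort estimates (man-hours).
# Illustrates the cost roll-up idea behind a CMVR-style assessment, not the actual model.
req_to_tests = {
    "REQ-001": ["TP-10", "TP-11"],
    "REQ-002": ["TP-11", "TP-20"],
    "REQ-003": ["TP-30"],
}
test_cost_hours = {"TP-10": 16, "TP-11": 40, "TP-20": 8, "TP-30": 24}

def verification_cost(changed_requirements):
    """Hours to re-verify after the given requirements are added or modified.
    Shared tests are counted once, capturing overlap efficiencies."""
    impacted_tests = set()
    for req in changed_requirements:
        impacted_tests.update(req_to_tests.get(req, []))
    return sum(test_cost_hours[t] for t in impacted_tests)

print(verification_cost(["REQ-001"]))             # 56 hours (TP-10 + TP-11)
print(verification_cost(["REQ-001", "REQ-002"]))  # 64 hours (TP-11 shared, counted once)
```
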
An Economic Analysis of False Alarms and No Fault Found Event in Air Vehicles
Mustafa Ilarslan (None, Turkey); Louis Y. Ungar (A. T. E. Solutions, Inc., USA); Kenan Ilarslan (Afyon Kocatepe University, Turkey)
False Alarms (FAs) that occur in a fielded system and No Fault Found (NFF) events that are discovered after line replaceable units (LRUs) have been returned to repair are costly situations whose full impact is difficult to put into monetary terms. For that reason, pragmatic economic models of NFFs are difficult to find. In this paper, we deal with the problem of having to differentiate between NFFs of good units under test (UUTs) and of faulty UUTs. While we cannot tell which UUT is good and which is faulty, we can determine using probabilities what percentage of the NFFs are faulty and what percentage are good. Based on these probabilities, we can evaluate various strategies. Assigning cost factors that are knowable, such as the cost of testing a UUT, the cost we incur for good UUTs vs. costs we incur for faulty UUTs and various test and repair costs, we can calculate the performance of various strategies and assumptions. In this paper, we formulate three strategies: 1) we assume all NFF UUTs are good and are willing to endure the cost of bad actors (i.e. faulty UUTs) sent back to the aircraft; 2) we environmentally stress all NFF UUTs, hoping to fix some and avoid bad actors; and 3) we rely on the technician to reasonably select some NFF UUTs and perform appropriate repair. We formulate each of these strategies for a case when NFF is 70%. The formulation is similar with any NFF distribution, but the coefficients in each formula will be different. With proper cost data, we can actually decide which strategy works best. We conclude by tabulating the formulas and calculating NFF costs for an example situation. The numbers we picked for this example may be appropriate for some operations, but not for others. As a follow-up to this paper we would like to validate the model with data. Such data may be available in some military and commercial maintenance departments.
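
The kind of expected-cost comparison the abstract describes can be sketched numerically. The probabilities, catch rates, and cost figures below are invented for illustration and are not the paper's data or formulas; they only show how per-UUT expected cost follows once such factors are assigned.

```python
# Expected per-UUT cost of three NFF handling strategies (all numbers hypothetical).
p_good   = 0.70   # fraction of NFF units that are actually good
p_faulty = 0.30   # fraction of NFF units that are actually faulty ("bad actors")

cost_retest        = 200    # cost to test a returned UUT
cost_stress_screen = 800    # environmental stress screening per UUT
cost_tech_repair   = 1500   # technician-directed inspection/repair per selected UUT
cost_bad_actor     = 20000  # downstream cost of returning a faulty UUT to the aircraft

# Strategy 1: declare all NFF units good; absorb the cost of bad actors.
s1 = cost_retest + p_faulty * cost_bad_actor

# Strategy 2: stress-screen every NFF unit; assume (hypothetically) it catches 60% of faults.
catch_rate = 0.60
s2 = cost_retest + cost_stress_screen + p_faulty * (1 - catch_rate) * cost_bad_actor

# Strategy 3: technician selects 40% of units for repair and fixes 80% of faults among them.
select, fix = 0.40, 0.80
s3 = cost_retest + select * cost_tech_repair + p_faulty * (1 - select * fix) * cost_bad_actor

for name, cost in [("All good", s1), ("Stress all", s2), ("Technician select", s3)]:
    print(f"{name}: expected cost per NFF UUT = ${cost:,.0f}")
```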

1C4: Test Tools

Room: Castle AB
Chair: Alberto Tungcab (NAVAIR, USA)
KORAT - A Platform Independent Test Automation Tool by Emulating Keyboard/Mouse Hardware Signal
Yung-Pin Cheng (National Central University, Taiwan); Deron Liang and Wei-Jen Wang (National Central University, Taiwan)
Software, ranging from firmware, BIOS, and embedded software to complex software products, can only be tested by designing test cases to go through code and then verifying the results with expected outcomes. When code is changed frequently, regression testing is critical to ensure that changes do not introduce new faults. However, depending on the input types of the system under test (SUT), regression tests often require testers to drive the SUT manually, mainly by keyboard and mouse. In the meantime, testers play an important role as test oracle to determine the correctness of a test run by observing if the SUT behaves abnormally. Regression tests can be automated by programming or adopting testing tools. The most cost-effective approach supported by some commercial testing tools is capturing the testing behaviors of a human tester and then replaying the tests to assert the correctness. Unfortunately, most capture/replay tools are designed for testing the software which must be executed under a general-purpose O.S. They are inapplicable to many software systems, such as embedded software, BIOS, etc. In this paper, a capture/replay testing tool called KORAT is proposed. KORAT adopts a hardware component to intercept and emulate keyboard/mouse signals to drive an SUT as if the SUT is interacting with a human. A tester can design and operate a test case on a correct SUT to record the behaviors into a KORAT script, in which no programming skills are required. In a regression run, the test case is replayed and the correctness is asserted automatically by analyzing SUT's video output (aka, images) and sending keyboard and mouse signals smartly. The correctness of a replay run can be asserted by image recognition, optical character recognition (OCR), and ASCII string matching via networking. Since KORAT only interfaces the video output of a SUT, it is platform independent and non-intrusive; meaning there is no performance interference caused by KORAT's capture and replay. A real application of KORAT to BIOS regression testing of industrial computer (militarized computers) manufacturing is described.
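
The image-based assertion that a capture/replay oracle like KORAT relies on can be approximated with OpenCV template matching. The sketch below is a generic illustration; the file names and match threshold are placeholders, and it is not the KORAT implementation.

```python
# Generic image-based pass/fail check, similar in spirit to a capture/replay oracle.
# File names and the match threshold are placeholders; this is not the KORAT tool itself.
import cv2

screen   = cv2.imread("captured_video_frame.png", cv2.IMREAD_GRAYSCALE)  # SUT video output
expected = cv2.imread("expected_dialog.png", cv2.IMREAD_GRAYSCALE)       # reference patch

result = cv2.matchTemplate(screen, expected, cv2.TM_CCOEFF_NORMED)
_, max_score, _, max_loc = cv2.minMaxLoc(result)

if max_score >= 0.95:   # threshold chosen for illustration
    print(f"Expected screen found at {max_loc}; test step PASSES")
else:
    print(f"Best match only {max_score:.2f}; test step FAILS")
```
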
Anomaly Detection in Analog Circuits using Support Vector Data Description
Yang Yu and Yueming Jiang (Harbin Institute of Technology, P.R. China); Yanmei Lv (Mechanical Engineering College, P.R. China); Yongxue Ma and Xiyuan Peng (Harbin Institute of Technology, P.R. China)
Electronic circuits have brought great benefits to our lives due to advancements in electronic technologies, and diagnosis as well as test is an effective method to guarantee the quality of electronic products. In recent years, with more and more electronic circuits being used in high-reliability applications, such as aerospace and transportation, the probability of circuit failure and anomaly has increased greatly due to adverse and complicated working conditions, and most of the faults are unknown because we lack the necessary knowledge about those adverse conditions. This brings great obstacles to circuit test and diagnosis. Therefore, how to detect and recognize anomalies and faults in circuits working under adverse conditions has become a hot research topic. It is well known that analog circuits are more complex than digital circuits, and the task of test and diagnosis is even harder. In analog circuit diagnosis, most of the existing intelligent fault diagnosis methods focus on building classification models with labeled history data, capturing and analyzing the circuit output data according to the models, and then recognizing the possible fault. However, due to the unpredictable working conditions, the labeled history data is hard to obtain, especially the anomaly data. This makes the traditional intelligent fault diagnosis methods unsuitable for anomaly detection in analog circuits working under unpredictable conditions. To solve the above problem, this paper presents an anomaly detection method based on Support Vector Data Description (SVDD) for analog circuits. The novelty of this method is that only normal data is required to build the classification model, avoiding the difficulty of abnormal sample acquisition. In the proposed method, the first step is to extract features from the unit impulse response sequence with the Wavelet Transform (WT) to reduce the dimension of the samples. Then SVDD is used to build the anomaly detection model. SVDD is a one-class classification method based on the Support Vector Machine (SVM) and statistical theory, which is very suitable for dealing with small-size, high-dimensional and non-linear samples; meanwhile, it also has good generalization ability. To prove the effectiveness of the proposed method, simulations and experiments are performed on a Leapfrog filter and a Four op-amp biquad high-pass filter. In simulation, the training data are from the Pspice simulator. In the hardware experiment, the training data are obtained from the filters' PCB boards. All the data are used to build the classification model with the SVDD method after feature extraction. Then anomaly detection is implemented according to the model. For normal samples, the detection accuracy is 93%. For abnormal samples, the detection accuracy is 96%, which is 17% higher than the detection accuracy of the traditional fault diagnosis method based on SVM classification. Overall, the proposed SVDD method can be more effective in anomaly detection for analog circuits.
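
A rough, publicly available approximation of the pipeline described above is wavelet features from the impulse response followed by one-class classification. scikit-learn does not ship SVDD itself, so this sketch uses the closely related OneClassSVM; the synthetic signals, wavelet choice, and parameters are illustrative assumptions, not the paper's experimental setup.

```python
# One-class anomaly detection sketch in the spirit of the SVDD approach described above.
# OneClassSVM is a related boundary method, not SVDD itself; data are synthetic placeholders.
import numpy as np
import pywt
from sklearn.svm import OneClassSVM

def wavelet_features(signal, wavelet="db4", level=3):
    """Reduce an impulse-response sequence to the energies of its wavelet coefficient bands."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
normal = [np.exp(-5 * t) * np.sin(40 * t) + 0.01 * rng.standard_normal(256) for _ in range(50)]
faulty = [np.exp(-2 * t) * np.sin(25 * t) + 0.01 * rng.standard_normal(256) for _ in range(5)]

X_train = np.array([wavelet_features(s) for s in normal])          # normal data only
X_test  = np.array([wavelet_features(s) for s in normal[:5] + faulty])

clf = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(X_train)
print(clf.predict(X_test))   # +1 = normal, -1 = anomaly
```
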
Automated Maintenance Path Generation with Bayesian Networks, Influence Diagrams, and Timed Failure Propagation Graphs
Stephen Oonk and Francisco J. Maldonado (American GNC Corporation, USA)
Large and complex systems often have a set of alarms that monitor key parameters (e.g. high or low temperature, voltage out-of-tolerance, power loss, etc.) which are correlated to failure modes, but not necessarily in a direct way. When monitoring alarms occur, support personnel either have to rely on their own knowledge or quickly consult with a handbook on failure mode effects and analyses (FMEA) to diagnose the cause. In this paper, we present a plurality of advanced graph-based techniques which are combined for automated analysis of alarms or other discrepancies in a system and to determine, in response, the most appropriate maintenance path. Specifically: (i) Timed Failure Propagation Graphs (TFPG) and/or Bayesian Networks (BN) use a set of alarms as evidence for backward root-cause diagnosis and forward failure effects analysis and (ii) Influence Diagrams (ID) select an optimal maintenance path considering the most likely failure causes and effects combined with the utility of available maintenance operations. This support system can operate as hardware-in-the-loop, where alarms are diagnosed, an optimal maintenance procedure is suggested, maintenance is performed in the field by support personnel, alarms are updated according to whether this fixed the problem, and if alarms persist, the next best procedure is offered. Key technologies include: • Timed Failure Propagation Graphs. These are causal models that describe discrete and hybrid behavior of a system in the presence of faults (even multiple ones occurring simultaneously) while capturing timing constraints and multi-mode system switching dynamics. Within the TFPG, a fault originates at a source and its impact propagates through a system, setting off a sequence of alarms. We have developed an algorithm that analyzes each path (branch) that exists from a given alarm (node) to a given failure mode (source) and accumulates evidence to point towards a particular fault (either in the system's components, or possibly the alarm itself). This algorithm analyzes temporal and spatial consistency of edges that connect alarms in the system. In this paper, we present advantages compared to approaches in the literature. • Bayesian Networks. These networks are based on probability theory and contain a random set of nodes, edges connecting nodes, and conditional probability tables (CPTs) that quantify the effects a parent node has on a child. Bayesian Networks accommodate stochastic reasoning, whereas TFPGs are deterministic. In this paper, we present a methodology for readily setting up a BN topology from an available FMEA, and building the CPTs based on Expectation Maximization learning. Both the TFPGs and BNs perform forward and backward reasoning to determine the most likely causes and effects. • Influence Diagrams. This technique determines the optimal maintenance path to follow by weighing together the system's node states (e.g. the diagnosed fault causes), utility of certain repairs, and constraints against repairs (e.g. time to perform, associated costs, likelihood of no success). In this way, the influence diagram can guide personnel in repairs and maintenance. Finally, a ground robotic vehicle is used to verify, with simulations and hardware tests, the diagnostic reasoning and maintenance troubleshooting support tool.
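
As a tiny, self-contained illustration of the backward (alarm-to-cause) reasoning described above, the sketch below computes a Bayesian posterior over hypothetical failure modes given one observed alarm. The priors and likelihoods are invented, and the paper's TFPG, BN, and influence-diagram machinery is far richer than this single-evidence example.

```python
# Toy backward reasoning: posterior over failure modes given an observed alarm.
# Priors and alarm likelihoods are invented for illustration only.
priors = {"pump_failure": 0.02, "sensor_drift": 0.05, "no_fault": 0.93}

# P(high-temperature alarm | cause) -- hypothetical conditional probabilities
alarm_likelihood = {"pump_failure": 0.90, "sensor_drift": 0.60, "no_fault": 0.01}

evidence = sum(priors[c] * alarm_likelihood[c] for c in priors)
posterior = {c: priors[c] * alarm_likelihood[c] / evidence for c in priors}

# Rank causes to suggest the first maintenance action (the role an influence
# diagram would play once repair utilities and constraints are folded in).
for cause, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{cause}: {p:.2f}")
```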

1D4: Novel ATE Approaches

Room: Monorail AB
Chair: Guy Newton (NAVAIR, USA)
Virtual Machines and Automated Test Equipment
Eric A Bean (National Nuclear Security Administration's Kansas City National Security Campus & Honeywell, USA)
Supporting legacy automated test equipment (ATE) has always been a challenge while also maintaining configuration control. Often, it is the computer that fails before any other instrumentation or circuitry. With the rapid changes in computing technology, both hardware and software, it can be particularly difficult to replace the computer in a legacy test system. Likewise, it can be challenging to maintain configuration control on a released test system for production when additional capability is required and development is being performed on the production system. This paper examines the capabilities and limitations of the use of Virtual Machines to mitigate the issues surrounding support of legacy ATE as well as its application to future development on production test systems. In the author's experimentation, communication with GPIB, LAN, and PXI instruments was considered. Furthermore, a test case was developed in which a virtual machine of a legacy tester computer was created and tested with the existing ATE instrumentation. Additionally, virtual machines were considered for use as a configuration management tool during tester development and after a tester is released to production. The Department of Energy's National Security Campus is operated and managed by Honeywell Federal Manufacturing & Technologies, LLC under contract number DE-NA0002839.
Diskless Clients in Modern ATE
Joe Headrick and Scott Jennings (Lockheed Martin, USA)
For modern Automatic Test Equipment it is not unusual to have multiple computers making up a system. One issue with having multiple computers is the requirement for each computer to have a disk drive and associated operating system. This becomes an issue in environments which require periods processing, as each system must be cleared prior to moving from one operating state to another. This process can become even more complicated by the fact that some of the computers may be installed in places that are not readily accessible for quick turnaround. In addition, these computers require additional disk drives to support the operation in each mode. In order to reduce the complexity of periods processing, a scheme of diskless operation can be used to allow multiple computers to share a single disk drive off a master computer. This paper will describe a method to handle just such a complex scenario. In this particular instance several computers are hosted running multiple operating systems off of one main computer's hard drive. This allows for quick and reliable periods processing to be performed while maintaining acceptable performance. With the advent of cheap Solid State Disk drives and fast intelligent managed switches, the performance of the overall system is actually not impacted much at all. Modern operating systems and tools provide the capability to implement diskless nodes in a fairly straightforward manner. While this technology is not new by any means (older ATE used Sun computers in a diskless configuration), modern technology makes this easy to implement and provides performance close enough to a disk-based system to be an effective solution. In addition, by removing the extra disks, not only is the periods processing time reduced, but the margin of error in completing the conversion is reduced to the one drive.
An Ethernet and USB-Based Compact Automated Field Test Equipment for Mobile Surveillance and Reconnaissance Systems
Onder Unver and Cem Çiğdemoğlu (ASELSAN INC., Turkey)
Military surveillance and reconnaissance systems are generally used in critical areas like borders, military bases, etc. for security purposes. Availability and maintainability are the most critical requirements for these systems. Therefore, when the system fails in the field, the fault must be determined as soon as possible to get the system back into use. In this paper, the design of an Ethernet and USB-based compact automated field test equipment (FTE) that finds the faulty line replaceable unit (LRU) for a mobile surveillance system is discussed. The design of the proposed field test equipment is done by considering its demanding features, i.e. built-in test capability, portability, simplicity and ruggedness. As a result, one notebook computer, one test hardware unit, one test software package and test cables in a rugged hard-case small bag constitute the field test equipment. The proposed field test equipment performs a built-in test of itself before starting a test run and then continues to check the full functionality of the system automatically. The system under test (SUT) is modeled as a combination of subsystems based on their functionality, i.e. thermal imaging subsystem, radar subsystem, C2C subsystem, power subsystem, etc. For each subsystem, a fault tree diagram is designed to find the faulty line replaceable unit by testing specific features, communication and power of the subsystem. Hence, one faulty LRU is reported at the end of a full system test run unless no faulty LRU exists. If all the faults are resolved, then the system status is reported as ready for use. Consequently, a failure in the system is identified automatically within minutes at the level of a line replaceable unit by using the compact and rugged field test equipment.
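
A minimal sketch of the subsystem-by-subsystem fault isolation flow described above, with hypothetical check functions standing in for the real power, communication, and functional tests of each subsystem.

```python
# Hypothetical fault-isolation walk: check each subsystem and report the first faulty LRU.
# Check functions and LRU names are placeholders for the real power/comms/functional tests.
def check_power():    return True            # stand-in: power subsystem OK
def check_radar():    return False           # stand-in: radar subsystem fails its test
def check_thermal():  return True
def check_c2c():      return True

subsystem_tests = [
    ("Power LRU",           check_power),
    ("Radar LRU",           check_radar),
    ("Thermal imaging LRU", check_thermal),
    ("C2C LRU",             check_c2c),
]

def run_field_test():
    for lru, test in subsystem_tests:
        if not test():
            return f"Faulty LRU isolated: {lru}"
    return "System status: ready for use"

print(run_field_test())
```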

Tuesday, September 13, 18:00 - 20:00

Exhibitors' Reception

Room: Exhibit Hall

Wednesday, September 14

Wednesday, September 14, 07:00 - 08:00

Coffee Break

Room: Exhibit Hall

Wednesday, September 14, 07:00 - 17:00

Registration

Room: Registration Counter

Wednesday, September 14, 07:00 - 07:30

Speaker Breakfast

Room: Monorail C & Castle C

Wednesday, September 14, 08:00 - 09:30

2A1: Advances in ATE Technology

Room: Magic Kingdom 1
Chair: Matthew Morgan (NAVAIR, USA)
Modern technologies are biting off more than our Test Systems can chew
Neil Baliga (Verifide Technologies, Inc., USA)
Despite billions spent on software in testing aerospace and high technology products, companies are still struggling to keep up with Moore's law and shorter schedules. This results in excessive software development, hard to analyze data, manual labor that slows the disposition cycle time, lower production rates, and costly errors in high-value DUTs. Modern technology has delivered a one-two punch for aerospace manufacturing organizations. The first blow is the increased complexity of devices; existing test systems are stretched beyond their capabilities and require major rework and iteration. The second fatal blow is the pressure on test organizations to release on extremely short schedules and budgets due to rising competition in the industry. This paper presents two key concepts to mitigate the effects of the new technology revolution on your test systems - Modularity and Scalability. These are not new concepts; this paper will, however, discuss what these concepts mean for test systems, and also cover how to implement and combine them to reduce recurring costs and technical risk in your test organizations.
Scalable and Adaptive Design Test System for Ground-based to Airborne Platforms
Tranchau Nguyen (US Air Force 309 SMXG, USA); Ty Ung (Government, USA); Mark Reimann, Scott Rawlings and Heather Holmquist (309 SMXG, USA)
This abstract focuses on the scalable and adaptive design of building a test system to support multiple United States Air Force systems while preserving the legacy capabilities (requirements) and initial investments (software/hardware) of those systems. From the Intercontinental Ballistic Missile (ICBM) Minuteman (MM) III ground-based launch control support system, such as the Ground Minuteman Automatic Test System (GMATS), to the ICBM MM III telemetry wafer processing effort like the Radio Frequency Test Set (RFTS), to airborne projects such as the F-16 Radar Transmitter Test System (RTTS), to the original equipment manufacturers' (OEMs) hardware testing for the F-16 Common Configuration Implementation Program (CCIP), each of the previously mentioned systems required a unique test station with specific hardware, test executive interface and software programming languages to accomplish its tasks. Due to the rapid obsolescence of hardware and additional requirements from the end customers, the new replacement test system for current systems such as GMATS/RFTS/RTTS/CCIP must be adaptive in its hardware and agile and flexible in its software design to satisfy the requirements of the above systems, and architecturally scalable for other systems in the future with minimal impact to the legacy hardware interface adapters and software architecture, for the new system to be cost effective, manageable and successful over the next 20+ years in a typical Department of Defense weapon system.
"U-TEST™ - An Innovative Test Platform to Accelerate the Entry-into-Service of Military and Civil Electronic Systems."
John Ardussi (Spherea Test & Services, USA); Loup Foussereau and Gerard Delfour (Spherea Test & Services, France)
A potential problem area in the deployment of complex aerospace electronic systems is the integration phase, where multiple units or modules require test and verification as a complete system. The integrator may be an OEM or a military engineering unit. It is not uncommon for system integration, and vehicle entry-into-service, to be delayed due to design or delivery problems of one or more modules. This can result in unforeseen program delays and cost. Spherea Test & Services has developed an innovative new test platform to address this specific need of the aerospace industry. U-TEST™ is a flexible simulation and testing environment focused on achieving a seamless transition between the design, integration, and qualification phases of embedded systems. U-TEST™ has been implemented with open-system and COTS components, both hardware and software. More than 50 systems have been successfully deployed internationally. In this presentation, Spherea would like to describe the technical architecture underlying the U-TEST™ product family and highlight the cost and time saving benefits it can offer to OEMs and military engineering teams.
Recommended Practice for Insuring Reuse and Integrity of ATS Data by the Application of IEEE SCC20/ATML Standards
Patrick Verbovsky (NAVAIR, USA); Joseph J Stanco (TPSAssociates, USA); Mukund Modi (NAVAIR, USA)
The IEEE SCC20 / Automatic Test Markup Language (ATML) standards are currently being used to describe a host of ATE-related documents. These standards cover test descriptions, requirements and specifications of ATE instruments and UUTs in an all-encompassing test environment, and they provide the elements needed to reduce the logistic footprint associated with complex system testing. However, to achieve the full benefits of these standards, one must recognize the tasks involved in implementing them so that they provide the information necessary to reduce support equipment proliferation and cost. While these standards go a long way toward these objectives, a number of issues must be addressed. The IEEE SCC20/ATML standards provide several ways to develop IEEE-compliant documents; without a set of comprehensive procedures and supporting tools, however, the optimum reuse and data integrity of these products may not be achieved. This situation is caused by the scope of the testing environment, which integrates many elements and events occurring over a product's life cycle [1], and it leads to a data provenance issue resulting in data that may be inconsistent with someone else's IEEE SCC20/ATML documents. This paper discusses how to handle these data issues by describing an approach and methodology for addressing them. The recommended methods focus on ensuring that IEEE SCC20/ATML-developed products yield the highest degree of reuse, interchangeability and data integrity across the different use cases of both government and industry. Applying these methods starts with the source of the data, in this case a semantic taxonomy that describes how the IEEE SCC20/ATML documents should be structured to support the data required by the use cases. Given the large scope of this effort, the paper concentrates on a specific example use case, utilizing selected standards and tools to aid in producing compliant ATML/SCC20 products that support reuse and interoperability. It focuses on the data needed to test a UUT and how that data is defined and utilized in the resulting documentation. The activities requiring this data, and the events and resources acting on it, are covered. The intent is to maintain the integrity and validity of the data throughout the product's (UUT's) testing life cycle and to lead to improved use and enhancement of these standards. This information is intended to support a recommended practice for using these standards in the acquisition of test products required during a product's life cycle. Reference: [1] Modi, Mukund, Joe Stanco, and Patrick Verbovsky. "Supporting a Product's Life Cycle Utilizing Reusable ATML Compliant Test Documentation," IEEE AUTOTESTCON, 2015.

2B1: Virtual Instrumentation & Switching

Room: Magic Kingdom 4
Chair: Teresa P Lopes (Teradyne, Inc., USA)
Benefits of Universal Switching in ATE
Robert Waldeck (Astronics Inc, USA)
The commercial offerings in ATE over the last 20-30 years have shown a strong disparity between solutions available to the integrator and solutions offered by the large turnkey ATE manufacturers. This disparity is primarily in the area of switching. For system integrators, the choice has been a wide variety of switching products, primarily in formats such as VXI or, more recently, PXI. These offerings are various unrelated switches in matrix, tree and SPDT or SPST formats, and there has been minimal effort among the card providers to offer a set of cards that work together to create a unified switching system. The turnkey system providers, on the other hand, have primarily focused on providing systems with highly integrated universal switching architectures. The reason for the disparity is puzzling: the turnkey system providers clearly see a strong advantage in the universal switching architecture, strong enough to spend significant resources developing proprietary switching systems of their own. Surprisingly, the commercial card providers have not jumped on this bandwagon and developed commercial alternatives for system integrators. Recently we have been spending more time with customers wrestling with legacy ATE challenges, and it has been our experience that universal switching can be both a way forward and a future platform better suited to TPS transportability. This paper explores the functionality and merits of universal switching systems in an effort to help the reader make an informed decision. Considering that fielding a group of TPSs on a system platform frequently exceeds the cost of the test system itself, sometimes by a large margin, reducing the cost of TPS development is a significant driver in reducing the overall cost of ownership of the program. We will also explore how universal switching can be used to replace non-universal switching in fielded ATE systems, how it more readily supports new technology insertions, and how it creates a next-generation test platform that better supports TPS transportability across platforms.
Digital Radio Frequency Memory Synthetic Instrument Enhancing US Navy Automated Test Equipment Mission
Christopher P Heagney (US Naval Air Systems Command, USA)
This research project aims to expand the capability of current US Navy Automated Test Equipment (ATE) family of testers known as the Consolidated Automated Support System (CASS). Industry research is now focused on breaking the historical construct of test equipment. Advances in the field of synthetic instruments have opened the door to test avionics in new ways. Every year new capabilities are developed using core hardware and increasingly capable software modules to create complex waveforms. This research creates a Digital Radio Frequency Memory (DRFM) Synthetic Instrument that can be programmed to perform a wide array of low latency Radio Frequency (RF) tests. Synthetic Instruments are defined as a concatenation of hardware and software modules used in combination to emulate a traditional piece of electronic instrumentation. This Synthetic Instrument couples high speed Analog-to-Digital Converters (ADC) to high speed Digital-to-Analog Converters (DAC) with Field Programmable Gate Arrays (FPGA) in between for digital signal processing. An RF front end is used to down convert the RF to baseband where it is sampled, modified, and up converted back to RF. The FPGA performs Digital Signal Processing (DSP) on the signal to achieve the desired output. Application of this DRFM in automated testing is demonstrated using a Reconfigurable Transportable Consolidated Automated Support System (RTCASS) tester at Naval Air Systems Command (NAVAIR) Jacksonville, FL. The Unit Under Test (UUT) is an ALQ-162 Defensive Electronic Countermeasures (DECM) receiver-transmitter. Ultra-low latency signals are generated to simulate enemy jamming stimulus. As the ALQ-162 detects and responds to the input, the DRFM switches to a new frequency. The time taken by the ALQ-162 to acquire, respond, and re-acquire is measured. This test confirms the internal Yttrium Iron Garnet (YIG) oscillator meets slew specifications. Currently Navy ATE can only test RF units using high latency steady state tests. This research project developed a supplemental unit that can be added to the VXI chassis in the CASS family of testers and conduct ultra-low latency active tests. The instrument acts as hardware-in-the-loop to perform real-time tests including a new capability to measure jamming response time from DECM avionics. Demonstrated performance capabilities include: latency < 100 ns, output Spurious-Free Dynamic Range (SFDR) > 80 dBc, input SFDR > 60 dBc, frequency tuning resolution < 2 Hz, and frequency settling time < 0.5 ns. New RF capabilities developed by this effort parallel similar research ongoing for digital test instruments like the Teradyne High Speed Subsystem. Incorporating this Digital RF Memory synthetic instrument into current and future ATE will improve readiness and supportability of the fleet. Improvements demonstrated by this research project will expand the type and quantity of assets able to be tested by current and future ATE.
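As a rough illustration of the kind of latency measurement described above, the following Python sketch estimates a UUT's re-acquisition time from a digitized capture of its output after a frequency hop. The sample rate, frequencies and window size are invented for the example and are not taken from the CASS/RTCASS implementation.

import numpy as np

def reacquisition_latency(samples, fs, f_new, tol_hz, win=1024):
    """Seconds until the dominant tone in the capture lands within tol_hz of f_new.
    Assumes the capture starts at the instant the DRFM hops frequency."""
    for start in range(0, len(samples) - win, win):
        seg = samples[start:start + win] * np.hanning(win)
        spectrum = np.abs(np.fft.rfft(seg))
        f_peak = np.fft.rfftfreq(win, d=1.0 / fs)[np.argmax(spectrum)]
        if abs(f_peak - f_new) <= tol_hz:
            return start / fs
    return None  # the UUT never re-acquired within the capture

# Synthetic check: a UUT that settles on 12 MHz about 3 us after the hop.
fs = 200e6
t = np.arange(int(20e-6 * fs)) / fs
sig = np.where(t < 3e-6, np.sin(2*np.pi*5e6*t), np.sin(2*np.pi*12e6*t))
print(reacquisition_latency(sig, fs, 12e6, tol_hz=0.5e6))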
Testing High Speed Ethernet & Fibre Channel Avionics Switches
Troy Troshynski (Avionics Interface Technologies, USA)
Modern avionics systems increasingly employ high speed serial data networks. High capacity Ethernet and Fibre Channel switch fabrics are commonly found at the core of these avionics networks, and it is typical to find a mix of both copper and optical media interfaces as well as multiple data link bit rates within a single aircraft system. As these switching fabrics become integral pieces of avionics suites, functional test systems must be developed with the capacity to replicate the combined data streams of multiple avionics end points when the fabric is off the aircraft and becomes the Unit Under Test (UUT). This paper provides a brief technical overview of the common principles of high speed avionics Ethernet and Fibre Channel networks and switch fabrics, and addresses several key items that must be considered when designing a test and simulation system targeted to support high speed switch UUTs.

2C1: Software and Simulation Testing

Room: Castle AB
Chair: Larry Attkisson (Northrop Grumman, USA)
A Modular, Extendible and Reusable Test Configuration for Design Verification Testing of Mission Computers
Mehmet Turkuzan (Aselsan Inc. and Gazi University, Turkey); Hayati Atakan and Yusuf Yıldırım (DEICO Engineering Inc., Turkey); Mert Değerli (Middle East Technical University & DEICO Inc., Turkey)
This study presents a different approach to mission computer testing. The standard procedure for testing mission computers involves designing a wiring set and an interface adaptor. The overall system can be partitioned into three blocks: the system computer, the interface adaptor, and the test computer. To test different mission computers with different wiring interfaces, the interface adaptor and the wiring between these blocks must be updated for each mission computer. In this study, a generic, computer-aided and modular test configuration is proposed as a solution for the engineering validation tests of mission computers. The test computer is built on a PXI chassis with all required communication ports (RS-422, RS-232, Ethernet, CAN, 1553). In addition, required modules such as a relay module, a multiplexer module, and a DMM module are added. A VPC G18 receiver is also coupled to the test computer through mechanical parts, and a custom-designed interface box is the last piece of hardware. The software is developed in C# and is used mainly for automated tests, although it can also be used for manually configured, operator-driven tests. The proposed test configuration provides a number of improvements over existing test configurations: reduced test preparation time, reduced engineering support for test preparation, reduced cost, and reduced failure rate. This configuration has been implemented and used on various mission computers deployed on various systems on different platforms.
Modeling & Simulation accelerates complex system design, test, and verification process
Uma S Jha (Raytheon Co, USA); Erik Chowdhury (Raytheon SAS, USA)
Modeling and Simulation (M&S) has been a key contributor to the validation of system concepts, trade-off analysis, and test/verification of system performance before a system can be realized in a cost-effective and timely manner. Leading system developers harness M&S capabilities for competitive advantage and operational/business efficiency by leveraging advances in VLSI technologies (processing horsepower, memory, storage, high throughput buses), graphics, display, and networking technologies. Its utility spans not only concept development and trade-off analysis but also system design and development, test and evaluation, failure analysis and, to some extent, the identification of production issues. M&S facilitates investigation of the performance conformance of a new system and offers design alternatives and a test harness before costly and time-consuming physical prototypes are built; it provides a cost-effective means of investigating feasibility and interoperability in a complex operating environment before fielding systems or upgrades; and it is sometimes the only way to study a technical problem in a repeatable and deterministic manner and to analyze the benefits and shortcomings of certain technical, operational, environmental or performance assumptions. Additionally, with a high-fidelity model, virtually any system's end-to-end performance can be analyzed and evaluated.
Testing the Operational Control of a Creative 3D Assembly in Simulated Settings
Sanguk Noh, Chulpyo Kim, Jisu Ha and Sukgen Hwang (The Catholic University of Korea, Korea)
This paper presents the testing and evaluation of a simulator for the operational control of a creative 3D assembly. We model the 3D assembly as a moving agent, which generates a sequence of actions according to graphical programming block scripts previously defined at any given domain. To guide or control the operation of the 3D assembly, we accumulate crisp if-then rules into the knowledge base, and we also provide a fuzzy control system which consists of a set of fuzzy variables, membership functions for fuzzy variables, and a set of rules specifying the relationship between fuzzy input and output variables. We develop an interpreter that generates standard XML scripts from graphical programming blocks, which can then be converted to a C# object. The simulator that we implement processes both crisp logical rules to determine an exact action at a discrete condition and fuzzy logical rules to control the fuzzy actions of the 3D assembly given continuous input values of a situation. We experiment with our simulator and a variety of graphical programming blocks to test and evaluate the operational control of the creative and movable 3D assembly in simulated settings. In the experiment, the assessment includes (1) the definition of three types of motion, control, and data blocks, (2) the generation of XML scripts, and (3) the execution of both crisp and fuzzy logical rules, respectively.
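As a purely illustrative sketch of how crisp and fuzzy rules can coexist in such a controller (the membership functions, thresholds and speed values below are invented and are not the authors' rule base), consider the following Python fragment:

def triangular(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def control_speed(distance_to_obstacle):
    # Crisp rule: an exact action at a discrete condition.
    if distance_to_obstacle <= 0.1:
        return 0.0  # hard stop

    # Fuzzy rules: IF distance is NEAR THEN speed is SLOW,
    #              IF distance is FAR  THEN speed is FAST.
    mu_near = triangular(distance_to_obstacle, 0.0, 0.5, 2.0)
    mu_far = triangular(distance_to_obstacle, 0.5, 2.0, 4.0)
    slow, fast = 0.2, 1.0
    total = mu_near + mu_far
    # Weighted-average (Sugeno-style) defuzzification of the two rule outputs.
    return (mu_near * slow + mu_far * fast) / total if total else fast

for d in (0.05, 0.6, 1.5, 3.5):
    print(d, round(control_speed(d), 3))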
Development and Validation of Production Electronic Test Procedures through the Virtual Airplane: A Case Study on an EMBRAER Aircraft
ABSTRACT: Validation and verification (V&V) practices are currently being rethought due to great progress in product development processes and the incorporation of innovative techniques into the V&V process. A prime example is the use of modeling and simulation in the development of complex systems, itself considered an innovation in systems development. At Embraer, the Virtual Aircraft project developed methods, processes and tools that use the cutting edge of these technologies. With the ability to exercise system development through models, the number of possible and desirable tests that can be performed during the various stages of development, from requirements validation through final tests, enabled test development in parallel with the design and development of the system. The traditional process for developing EMBRAER production test procedures is based on written, non-interactive documentation, which provides low maturity for use on the aircraft and makes early validation impossible. Given this problem and the new modeling technologies available, an opportunity was identified for early validation of electrical and electronic production test procedures through simulation models (the Virtual Airplane). Realizing this idea required structuring a process for applying production tests to simulation models in an integrated manner; creating and validating the use of modeling and simulation tools through two case studies; and, finally, demonstrating the efficiency and effectiveness of applying Model-Based Design (MBD) and Model-Based Testing (MBT) to the development and validation of electrical and electronic production tests. This provides a continuous gain in maturity throughout all stages of the test procedure development process and anticipates many problems that would otherwise only be found during test execution on the aircraft, when the prototype and equipment are already at an advanced stage of development and remediation costs are higher. COMMENTS AND CONCLUSIONS: The transition from the traditional development model for electrical and electronic test procedures (from a single "V" to multiple "V"s), evidenced in the case studies, proved feasible using Model-Based Testing techniques. It enables the involvement of the test engineer from the beginning of product development, anticipates problem detection, and shortens the procedure development cycle, reducing many costs. Embraer's new aircraft adopt the concepts and tools used in this work.

2D1: Prognostics and Health Monitoring 1

Room: Monorail AB
Chair: John W. Sheppard (Montana State University, USA)
Machine Learning Anomaly Detection in Large Systems
Jerry Murphree (DRS Technologies, USA)
We have a need for methods to efficiently determine the health of a system. Diagnostics and prognostics determine system health through analysis of data from sensors, and anomalies in the data can help us determine if there is a failure or a pending failure. There are common statistical methods to detect anomalies in individual measurements; for systems with many measurements, however, anomalies may occur as specific combinations of values. Large systems have various associated states and modes which define the valid measurements, and the amount of data to analyze grows very quickly as the system becomes more complex. In recent years, techniques have been developed to address large-scale data analysis. Machine learning encompasses a broad selection of tools to optimize a statistical model of the data. These tools include supervised learning techniques, such as linear regression and logistic regression, in which training data exists to tune the model. Unsupervised learning, such as clustering, is used to explore data which does not have a defined output label associated with the input data. Standard approaches to training supervised learning systems require a large sample of positive and negative outcome data, but some uses of machine learning involve data with very few cases of negative outcomes. Machine learning algorithms classed as anomaly detection are designed to deal with this type of data. Simple algorithms include Gaussian distribution analysis, which assumes independence in the distributions of the data. Large systems with anomalies defined in dependent combinations of data require either manual creation of combinations of independent variables, or multivariate Gaussian distribution analysis, which does not scale well for large systems. A further complication is the mixture of linear and discrete data. Neural networks are a type of learning system which has been applied to each of the individual needs addressed above. This paper describes an approach to anomaly detection using neural networks for these specific problems in large systems, to efficiently determine system health.
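To make the Gaussian-distribution baseline concrete, the following Python sketch fits independent per-feature Gaussians to nominal data and flags low-density points. This is the simple method the abstract contrasts with neural-network approaches, not the paper's own algorithm; the feature values and threshold are invented.

import numpy as np

def fit_gaussian(X):
    """Per-feature mean and variance from nominal training data (rows = samples)."""
    return X.mean(axis=0), X.var(axis=0)

def anomaly_score(x, mu, var):
    """Joint density under independent Gaussians; low values flag anomalies."""
    p = np.exp(-((x - mu) ** 2) / (2 * var)) / np.sqrt(2 * np.pi * var)
    return np.prod(p)

rng = np.random.default_rng(0)
X_train = rng.normal([5.0, 12.0], [0.2, 0.5], size=(1000, 2))  # healthy measurements
mu, var = fit_gaussian(X_train)
epsilon = 1e-4  # illustrative density threshold
for x in ([5.1, 11.8], [6.5, 9.0]):
    verdict = "anomaly" if anomaly_score(np.array(x), mu, var) < epsilon else "normal"
    print(x, verdict)

The multivariate Gaussian variant the abstract mentions would replace the per-feature densities with a full covariance matrix, which is where the scaling problem arises.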
Fault Diagnosis and Augmented Reality-based Troubleshooting of HVAC Systems
Rajeev Ghimire, Krishna R Pattipati and Peter Luh (University of Connecticut, USA)
Rapid advances in electronics, computational capabilities and communication technology have increased the complexity and integration of cyber-physical systems. The increased complexity, cross-subsystem fault propagation, and associated delays make fault diagnosis and conventional maintenance strategies challenging, which motivates us to employ emerging technologies for efficient fault diagnosis and troubleshooting in such systems. This paper presents an integrated framework to monitor a cyber-physical system in real time, access system health information remotely and act on that information proactively to prevent or minimize system downtime. We model a heating, ventilating and air conditioning (HVAC) system comprised of two air handling units (AHUs) serving a floor of a large academic building. The HVAC system is comprised of multiple interconnected subsystems, including heat exchangers, AHUs, variable air volume (VAV) boxes, etc., with sensors to monitor signals such as temperature, pressure, carbon dioxide (CO2), humidity and air flow rate. The system is represented hierarchically in terms of subsystems, components and failure modes. We characterize the failure sources of the HVAC system and fault propagation across the different subsystems, and find the corresponding signals that are affected and the tests that can detect these failures. Based on this information, we develop the HVAC model in TEAMS-Designer, a multilevel graphical modeling tool for describing the structure, failure behavior, and failure modes of the system [1]. TEAMS-Designer is used to obtain the cause-effect dependency matrix (variously referred to as the diagnostic matrix, fault dictionary, or D-matrix) and a diagnostic tree. We also obtain testability figures of merit from this modeling exercise, such as fault detection, fault isolation, test point recommendations, ambiguity groups and unused tests. The TEAMS toolset [1] is integrated with augmented reality (AR) using smart glasses for integrated fault diagnosis, real-time monitoring and guided troubleshooting of the HVAC system. There are in excess of 277 failure sources in the components associated with the HVAC system and 50 monitoring points. The diagnostics and testability information is obtained using the TEAMS-Designer model; assuming we can measure all the signals at the output, the existing system has a 100% fault detection rate but only a 41% fault isolation rate. We explore AR devices (Google glasses [4], Epson glasses [5]) as a smarter way of interacting with the remote diagnostic server (RDS), a tool that collects measurements, technician-conducted test results and diagnostic information from the on-board computer, and processes them for tele-diagnosis based on the diagnostic model of the system. The smart glasses are used interactively to retrieve and relay the necessary diagnostic and repair instructions through the AR display, audio and built-in sensors. This approach can be applied to aerospace and automotive systems, among others. References: [1] TEAMS-Designer, Software from Qualtech Systems, http://www.teamqsi.com/TEAMS.html [2] TEAMS-RT, Software from Qualtech Systems, http://www.teamqsi.com/products/teams-rt/ [3] TEAMS-RDS, Software from Qualtech Systems, http://www.teamqsi.com/products/teams-rds/ [4] Google Glass, smart glass from Google Inc., https://developers.google.com/glass/ [5] Epson Moverio BT-200, smart glass from Epson, http://www.epson.com/cgi-bin/Store/jsp/Product.do?sku=V11H560020
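For readers unfamiliar with D-matrix reasoning, the following hedged Python sketch shows single-fault isolation against a toy dependency matrix. The HVAC failure sources and tests are made up, and the logic assumes every test is run with perfect detection, which is far simpler than what the TEAMS toolset does.

import numpy as np

# Rows = failure sources, columns = tests; 1 means the test detects the fault.
d_matrix = np.array([
    [1, 1, 0, 0],   # hypothetical AHU supply-fan fault
    [1, 0, 1, 0],   # hypothetical chilled-water valve fault
    [0, 0, 1, 1],   # hypothetical VAV damper fault
])
faults = ["AHU supply fan", "CHW valve", "VAV damper"]

def isolate(observed):
    """Single-fault isolation: return faults whose test signature matches the
    observed pass(0)/fail(1) outcomes. Faults with identical rows would be
    returned together, forming an ambiguity group."""
    obs = np.array(observed)
    return [f for f, row in zip(faults, d_matrix) if np.array_equal(row, obs)]

print(isolate([1, 0, 1, 0]))   # -> ['CHW valve'] when only tests 1 and 3 fail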
Measurement and analysis of arc tracking characteristics in the high frequency band
Virginie Degardin (University of Lille & IEMN, France); Lamine Kone (University of Lille, France); Flavien Valensi (University Paul Sabatier, France); Pierre Laly, Martine Liénard and Pierre Degauque (University of Lille, France)
The concept of More Electric Aircraft (MEA), leading to a growing demand in power needs, is driving the development of new power distribution architectures with High Voltage Direct Current (HVDC) networks. Major issues are related not only to power generation and distribution but also to cabling, which must be highly reliable. Indeed, intermittent arc faults may appear if the wires are degraded by mechanical stresses, and the probability of initiating continuous arcs becomes higher in an HVDC configuration. New technologies for fault detection and circuit interruption have been studied, mainly based on an analysis of the current variation in the low frequency band, typically below 100 kHz. Despite these technological advances, an arc may occur, sustain itself and propagate along the wires owing to degradation of the insulation. This gives rise to what is known as arc tracking, which may cause serious damage. Furthermore, since the current pulses cover a wide frequency band, electromagnetic coupling to other cables may disturb control/command or communication systems. The objective of this contribution is thus to measure and analyze the electrical characteristics of arc tracking in a high frequency (HF) band, from 1 to 30 MHz. We put greater emphasis on the current and voltage power spectral densities (denoted CSD and VSD respectively) generated by this arc; to our knowledge, this subject has never been covered. Guillotine and wet arc tracking tests are standardized procedures to generate a parallel arc fault, and a slightly modified version of the wet arc test is used in our experiments. Both a droplet of salted water and a small copper wire are put between the two partly stripped wires before applying high voltage, 100 V in our case. The test setup was specially designed for measuring the HF components of the arc. Particular attention was paid to deducing the intrinsic parameters of the arc from the measurement results, taking into account the different devices introduced in the measurement chain. During the arc tracking phase, the average DC current is adjustable up to 100 A. It appears that the average CSD and VSD, calculated on a sliding time window of 20 µs, are nearly constant during the arc tracking phase and do not depend on the value of the regulated DC current. Surprisingly, results are quite reproducible from one experiment to another, i.e. for different arcs. It was shown that the arc behaves as a voltage generator whose VSD, expressed in dBµV/kHz, is a logarithmically decreasing function of frequency. The VSD varies from 74 dBµV/kHz at 1 MHz to 47 dBµV/kHz at 30 MHz for copper wires, and other trials have been made to study the influence of the material. These results can be applied either for predicting the resulting disturbing noise at the input of electronic equipment or for designing an arc tracking detection system based on its HF content. This work was performed under contract with the R&T Department of the EWIS Eurasia Division of Labinal Power Systems.
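As a small worked example of the reported trend (the interpolation below is our own reading of the two quoted end points, not a model published by the authors), the log-frequency behaviour implied by 74 dBµV/kHz at 1 MHz and 47 dBµV/kHz at 30 MHz can be evaluated as follows:

import math

def vsd_dbuv_per_khz(f_hz, v1=74.0, f1=1e6, v2=47.0, f2=30e6):
    """Log-frequency interpolation of the arc's voltage spectral density."""
    slope = (v2 - v1) / math.log10(f2 / f1)
    return v1 + slope * math.log10(f_hz / f1)

for f in (1e6, 10e6, 30e6):
    print(f"{f/1e6:>4.0f} MHz: {vsd_dbuv_per_khz(f):5.1f} dBuV/kHz")
# -> about 55.7 dBuV/kHz at 10 MHz under this assumed log-linear trend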
Fatigue Damage Detection for Advanced Military Aircraft Structures
Paul Braden (US Air Force, USA)
Modern military aircraft are evolving into more sophisticated structures, with exotic new materials and stealthy designs. But in all of these advances, what is the implication for overhaul procedures and tooling? Looking at the implementation of new technologies employed by the Air Force for the repair of aging F-16s, A-10s, KC-135s and C-130s, we can see how the new fleet of F-35s, F-22s and KC-46s will face certain unexpected challenges that deserve proper review and analysis. One primary concern is the widespread use of composite skins on the wings of fighter planes. There are several key advantages, but few manufacturers have understood the complications of repairing these materials. For instance, on the F-16, the horizontal tail is made of carbon fiber riveted to an aluminum substructure. Besides the difficulties in finding the fractures, there are relatively few repair procedures for mitigating these problems, unlike in classic sheet metal work. In this presentation, we analyze the most recent advances to address the overhaul concerns arising from composite skins in military aircraft. A cost analysis is presented to show the various reasons why composite skins may cause a headache for the military as the technology of detection and repair tries to catch up to these advanced new materials. Some computations will also be performed to show the reduction in strength over time for carbon fiber composites as compared to 7475 series aluminum. Simulations that focus on the growth of expected cracks that may escape NDI will be presented to show the difference in damage and fatigue life between the two materials and how current inspections will need to be improved to solve this difficult problem.

Wednesday, September 14, 09:30 - 10:00

Coffee Break

Room: Exhibit Hall

Wednesday, September 14, 10:00 - 11:30

2A2: Management Topics

Room: Magic Kingdom 1
Chair: Mukund Modi (NAVAIR, USA)
Merging ATE: An Interesting Possibility
Larry V. Kirkland (WesTest Engineering, USA)
There have been great strides in the open system plug and play concept for Automatic Test Equipment (ATE) in the DoD. An ideal test system can be thought of as the sum of its parts: measurement and stimulus hardware, signal switching, power supplies, cabling and interconnect system (Interface Test Adapter - ITA), external PC or embedded controller, Operating System (OS), control and support software, and the programming environment. Each part is selected based on parameters such as Unit Under Test (UUT) test parameters, physical dimensions, test times, and cost. UUT test requirements are the crucial aspect of instrument selection and functionality. The open system plug and play concept gives rise to the possibility of running a test program on a different ATE, that is, taking your ITA and your Test Program Set (TPS) from its programmed ATE and running the TPS on a different ATE utilizing the same ITA. The main components for running a TPS on a different ATE are an ITA transition adapter and translation software to convert or compile the test program to run on the other ATE. The ITA hardware configuration and the Interface Connection Assembly (ICA) variations between different ATE are critical factors. If instruments have compatible features, then UUT test requirements might not require examination; however, if there are distinct differences in instrument capability between ATEs, then UUT test requirements become a critical factor. There will also be switching variances between ATE designs, so this is a prime consideration. In pursuit of moving TPSs from their programmed ATE to a different ATE, an ITA transition adapter can be developed. The transition adapter is the hardware between the ITA and the different ATE's ICA, wired to route signals from one ICA configuration to another. The transition adapter design requires an ICA-to-ITA evaluation consisting of a pin-to-pin comparison between each ATE. Each ICA connection must be traced to the instrument or instruments which can be connected to that pin. In addition, instrument specifications must be evaluated and compared between the different ATE. Instrument driver compatibility is of critical importance. The translation software will convert an existing TPS from one ATE to another; that is, it must compile the existing test program to run on a different ATE test executive. At this point, many factors come into focus: the re-compiled test program must be analyzed for its capability to run on the new platform or ATE. Remember, instruments from different manufacturers don't always perform completely interchangeably, and quirks between instruments which theoretically have the same specifications can be a major setback. During ITA hardware development, signal integrity is analyzed not only by signal evaluation and noise measurement but also by actual test program execution. Every design detail is important to minimize signal integrity problems, and analysis of signal degradation factors is vital to signal health and proper UUT testing. This paper will cover many aspects associated with running a TPS on a different ATE utilizing the same ITA.
Acquisition of Out Of Production ATE Test Program Sets-A DoD Perspective
Lyle Beck (NAVAIR, USA)
A discussion of the highs and lows related to procurement of ATE Operational Test Program Sets (OTPSs) that have been out of production for any length of time, as seen from a Department of Defense viewpoint (as opposed to a private industry viewpoint). Focus is on procurement strategies, Technical Data Package development, and how to overcome component obsolescence and "minimum buy" requirements. These procurement requests tend to come from FMS customers due to changes in their preferred support postures and dependence on failing legacy ATE.
Legacy ATE Upgrade Lessons Learned
Donald Gardner (DRS Technologies & DRS C4ISR, USA)
Replacing or updating a legacy ATE system presents unique challenges to engineers. These include alleviating ATE hardware obsolescence, working with limited or inaccurate design documentation (both hardware and software), and the need to update legacy code and software to more modern applications and languages. Each of these items presents its own challenges, but when they are combined, the task can be compared to completing a jigsaw puzzle with missing pieces: addressing one issue impacts the decisions made to resolve the others. In this paper, we look at a recent project to replace an obsolete tester for a complex RF system on a major DoD platform with a modern equivalent. The solution uses commercial off-the-shelf hardware from various vendors to replace custom legacy hardware, and the original test program, written in FORTRAN, was redeveloped using LabVIEW™. We look at some of the challenges faced and examine the decisions made to meet them. One issue we examine is how duplicating the legacy communication interface, while allowing for backwards compatibility, increased complexity and integration time. We also examine how external processes and procedures played a role in the decision-making process and how they contributed to challenges during development and integration. We show how our standard internal processes resulted in greater confidence in the decisions made and in our understanding of the problems; the combined effect drove an examination of decision-making throughout the project. Finally, we take a look at the final results and suggest improvements to the final product. The goal is not to fault or praise any one individual or group, but to examine the decisions made during the project and to learn from the experience. By creating and utilizing these "lessons learned," we can avoid repeating similar mistakes and make better decisions that lead to better products in the future.
TPS Metrics Extraction Software for Resource Management
Joel Luna (Frontier Technology, Inc., USA); Christopher Geiger (Lockheed Martin, USA); Matthew Morgan (NAVAIR, USA)
The Consolidated Automated Support System (CASS) family of testers currently hosts more than 1,500 Test Program Sets (TPSs) in support of the testing and repair of avionics and weapon system units under test, spanning numerous aircraft platforms. Several hundred additional TPSs are also slated for development. This has resulted in a large pool of TPS code and associated data that is an untapped resource in Automatic Test System (ATS) planning and support. The ability to relate test instrument capabilities to TPS source data and ATS usage data would provide a comprehensive look at how avionics maintenance is performed and could identify economic targets of opportunity for the deployment of new and innovative test techniques. Currently, the technology to tap into the available TPS code and associated data does not exist. The objective of this effort was to develop a software toolset that provides an innovative capability to extract usage metrics from TPS source code and ATS log data, and to build companion TPS simulations for more refined analysis. The effort focused on defining and developing a complete data metrics generation concept for the aggregation and analysis of ATE and TPS data. This is a new capability that parses TPS source code and extracts metrics, leveraging other sources of data such as log data and UUT maintenance data. The result of the process is a set of TPS and ATS component usage metrics that can be used to support usage analyses. As a result of this effort, we successfully demonstrated concept software for metrics extraction and analysis from TPS source code and TPS log files. The concept software parses and processes TPS (ATLAS) source code and TPS log data, computes count and percentage metrics for any combination of action, resource, or capability, and computes execution time metrics from the TPS log data. We also show how resource settings can be extracted from the TPS source code and how metrics can be computed for the profile of settings used versus the ranges specified for those resources. The ability to view parsed data, compute metrics, and compare the metrics between one or more sets of input sources was also demonstrated. Several recommendations are made to advance the concept. The first is to show how standard Navy maintenance data for WRA and SRA testing can be merged with TPS data to determine enterprise utilization and to predict future utilization based on predicted maintenance demand. In addition, the software should extend the capability for metrics that compare resource settings, provide statistics on record data to give an immediate overall view, and develop a glossary of standardized test names to support enterprise-level comparison at the test level.
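A hedged Python sketch of the flavour of metric extraction involved is shown below; the ATLAS-like statements and resource names are simplified stand-ins rather than real CASS TPS source, and the actual toolset does far more (log parsing, execution times, setting profiles).

import re
from collections import Counter

RESOURCES = ("DC SIGNAL", "AC SIGNAL", "DIGITAL TEST", "WAVEFORM")

def resource_metrics(source_lines):
    """Count how often each resource keyword appears and compute usage shares."""
    counts = Counter()
    for line in source_lines:
        for res in RESOURCES:
            if re.search(res, line, flags=re.IGNORECASE):
                counts[res] += 1
    total = sum(counts.values()) or 1
    return {res: (counts[res], 100.0 * counts[res] / total) for res in RESOURCES}

tps = [   # invented ATLAS-like statements for illustration only
    "APPLY, DC SIGNAL, VOLTAGE 5 V, CNX HI J1-12 LO J1-13 $",
    "MEASURE, (VOLTAGE INTO 'V1'), AC SIGNAL, CNX HI J1-20 LO J1-21 $",
    "APPLY, DC SIGNAL, VOLTAGE 28 V, CNX HI J1-30 LO J1-31 $",
]
for res, (n, pct) in resource_metrics(tps).items():
    print(f"{res:12s} count={n} share={pct:.0f}%")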

2B2: Networking Approaches for Test

Room: Magic Kingdom 4
Chair: Andrea Roderick (NAVAIR, USA)
Distributed wireless sensor network system for electric field measurement
Deng Dawei and Haiwen Yuan (Beihang University, P.R. China)
The area covered by high voltage direct current (HVDC) transmission lines is very large, so using cables for an electric field monitoring system is very inconvenient. A wireless sensor network (WSN) can solve this problem. Compared with traditional communication networks, a WSN has the advantages of small volume, high flexibility and strong self-organization, so it is better suited to the construction of a distributed electric field monitoring system covering long distances and requiring high mobility. On the other hand, optical E-field sensors are passive devices with advantages that mechanical sensors lack, such as compact structure, wide-band response and wide measuring range. A distributed wireless system with optical E-field sensors is designed for collecting and monitoring the electric field under HVDC transmission lines. This measurement system has been used in China's State Grid HVDC test base and power transmission projects. The experimental results demonstrate that the system can adapt to the complex electromagnetic environment under the transmission lines and can meet the accuracy, flexibility, and stability demands of electric field measurement.
A Universal Method for Implementing IEEE 1588 with the 1000M Ethernet Interface
Zhaoqing Liu (Harbin Inst. of Tech., P.R. China); Dongxing Zhao, Min Huang and Yigang Zhang (Harbin Institute of Technology, P.R. China)
IEEE 1588 (IEEE Standard for a Precision Clock Synchronization Protocol) has been widely adopted by networked measurement and control systems. However, as demand grows for both the scalability and the complexity of test tasks, the 100M Ethernet interface intrinsically limits the transmission of test data. That limitation can be eliminated by the 1000M Ethernet interface, which has already become a defining feature of future networked test equipment, so an implementation of IEEE 1588 compatible with the 1000M Ethernet interface is imperative for the test system. This paper proposes a universal method for implementing PTP (Precision Time Protocol) in test systems with the 1000M Ethernet interface. To achieve sub-microsecond synchronization accuracy, the configurable real-time clock and the timestamp module were realized in programmable logic, which frees the PHY and MAC from timestamping functions in the communication link. PTPd (Precision Time Protocol daemon, an open source implementation) was modified and ported to the embedded Linux system to realize the PTP state machine, while an IEEE 1588 IP core device driver was developed to give the application layer access to the accurate timestamps obtained in the link layer by the IEEE 1588 IP core. This project structure lets the porting effort concentrate on the time adjustment algorithm in the application layer, independent of how precise timestamps are obtained in hardware. The proposed method was evaluated on the Xilinx Zynq-7000 SoC platform by outputting a PPS (Pulse Per Second) signal, which verifies the synchronization accuracy of all nodes (master and slaves) in the network. After quantifying the accuracy and stability of the synchronization offset, we concluded that clock frequency offset and network transmission delay are the main factors influencing synchronization, and we demonstrated the feasibility of maintaining sub-microsecond synchronization accuracy within a multi-level switch topology.
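For context, the slave-side arithmetic at the heart of any IEEE 1588 implementation, hardware-timestamped or not, is the standard offset and path-delay computation sketched below in Python; the timestamp values are invented nanosecond counts, and a symmetric network path is assumed.

def ptp_offset_and_delay(t1, t2, t3, t4):
    """Standard PTP equations from the four timestamps:
    t1 Sync sent (master), t2 Sync received (slave),
    t3 Delay_Req sent (slave), t4 Delay_Req received (master)."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, delay

t1, t2 = 1_000_000, 1_000_730   # Sync: master send, slave receive (ns, invented)
t3, t4 = 1_002_000, 1_002_530   # Delay_Req: slave send, master receive (ns, invented)
offset, delay = ptp_offset_and_delay(t1, t2, t3, t4)
print(f"offset = {offset} ns, path delay = {delay} ns")
# The slave would then correct its clock by the offset (here 100 ns); hardware
# timestamping in the programmable logic is what keeps t2 and t3 accurate.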
Utilization of the 5th Generation Mobile Networks for Automated Tests
Aydin Guney (ASELSAN Inc. & METU, Turkey); Halil Tongul (Aselsan Inc., Turkey)
The Internet of Things is one of the most important achievements that 5th Generation Mobile Networks (5G) are expected to bring. Today's LTE and LTE-A networks support a limited number of connected users, and the latency introduced in these networks varies over a wide range. 5G is envisioned not only to provide faster data rates, but also to support the connection of a huge number of devices and to reduce latency significantly. Researchers estimate that 50 billion devices will be connected to mobile networks within 5 years. Most of these devices will perform tasks controlled from a remote host, which requires very low latency; 5G is therefore considered a near-perfect medium for the Internet of Things concept. However, 5G is still a research topic and there is a long way to go before it is realized. ASELSAN has devoted a highly skilled team to the development of 5G base stations. As test engineers, we will take part in the development of 5G; in this paper, however, we are not discussing tests of 5G networks. Rather, we examine the exploitation of 5G for global test scenarios. One possible Internet of Things application can be imagined as follows: multiple central or field test stations connected to the internet through 5G networks with low end-to-end latency, running test sequences from a cloud database and writing the results to the same database. Moreover, individual devices can connect to the mobile networks and report built-in test results to the same database, which exponentially increases the number of connected devices. In this paper, we take the first step towards realizing an internet of test stations. Since the number of test stations can be very high, it is intractable to use local test sequences and update core test files manually. We have developed a new test software set that stores all test sequences in a database and automatically updates the core setup files as needed. With this scheme, it will be very easy to move to a cloud infrastructure and thus realize an internet of test stations.

2C2: Component-level Testing

Room: Castle AB
Chair: David R Carey (Wilkes University & Four Hound Solutions LLC, USA)
Comparison of the Efficiency of Two Methods on RF/MW Power Amplifier Gain Compression Test
Esra Nurgun (Aselsan Inc., Turkey)
It is a well-known fact to everyone involved in the RF/microwave testing world that handling nonlinearities on the RF path is of crucial importance. As in other applications, and especially in radar and EW applications, power is not inexpensive; the nonlinearities in RF PAs therefore need a closer look and demand meticulously designed testing procedures. In this paper, the Network Analyzer GCA (Gain Compression Application) method is proposed for gain compression measurement as the more cost-effective and accurate option. A comparison between the proposed method and the commonly used method is made with respect to optimality criteria in which test times and measurement accuracies are the primary concerns. To be specific, the two traditional measurement methods are as follows. The first method is to sweep the input power with a signal generator (SG), monitor the output power with a spectrum analyzer (SA) or power meter (PM), and measure the difference between them until it reaches the expected value. The second method is to sweep the input power with a network analyzer, measure the insertion loss, and calculate the output power of the DUT directly. In our study, the proposed Network Analyzer GCA method showed considerable improvement in manufacturing throughput over the classical SA/SG sweep method. Test time decreased by several orders of magnitude, depending on the total number of sweep points, and a similarly considerable improvement in measurement accuracy was observed. We also noted that the effectiveness of the proposed method becomes dominant as the number of measurement points increases. To conclude, the newly proposed GCA option for gain compression measurements has been shown to possess the following advantages over classical sweep methods: • highly accurate results with a guided calibration, removing mismatch and power errors, • elimination of lengthy test times, • removal of the overhead of inconvenient setups by providing a single-connection solution.
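Independently of which instrument performs the sweep, both methods ultimately reduce to locating the compression point in swept data. The following Python sketch (with a toy amplifier model rather than measured data) shows one way to interpolate the 1 dB input-referred compression point; the sweep range and amplifier behaviour are invented for illustration.

import numpy as np

def p1db_input(p_in_dbm, p_out_dbm, compression_db=1.0):
    """Input power at which gain drops `compression_db` below the small-signal gain."""
    gain = np.asarray(p_out_dbm) - np.asarray(p_in_dbm)
    ref_gain = gain[:3].mean()            # small-signal gain from the first sweep points
    target = ref_gain - compression_db
    idx = np.argmax(gain <= target)       # first swept point at or below the target
    if gain[idx] > target:
        return None                       # never compressed within the sweep
    # Linear interpolation between the bracketing sweep points.
    x0, x1 = p_in_dbm[idx - 1], p_in_dbm[idx]
    y0, y1 = gain[idx - 1], gain[idx]
    return x0 + (target - y0) * (x1 - x0) / (y1 - y0)

p_in = np.arange(-10, 6, 1.0)                   # dBm sweep (invented)
p_out = p_in + 20 - 0.02 * (p_in + 10) ** 2     # toy amplifier whose gain sags
print(f"P1dB (input-referred) ~ {p1db_input(p_in, p_out):.2f} dBm")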
High-Speed FPGA Configuration and Testing Through JTAG
Ammon Gruwell, Peter Zabriskie and Michael Wirthlin (Brigham Young University, USA)
Since most FPGAs use the universal JTAG port to support configuration memory access, hardware and software tools are needed to maximize the speed of FPGA configuration management over JTAG. This paper introduces a tool called the JTAG Configuration Manager (JCM) that enables high-speed programmable access to the configuration memory of FPGAs through JTAG. This tool consists of a Linux-based software library running on an embedded ARM processor paired with a hardware JTAG controller module implemented in programmable logic. This JTAG controller optimizes the speed and timing of JTAG transactions over cables of any length using an automatic speed calibration process, and it enables custom configuration sequences to be sent at high speeds. The JCM also has access to all JTAG interfaces of the FPGA, including temperature monitoring and internal boundary scan, making it useful for many testing and verification applications.
Optimization of Core-based SOC Test Scheduling based on Modified Differential Evolution Algorithm
Libao Deng (Harbin Institute of Technology at Weihai, P.R. China); Debao Wei, Liyan Qiao, Xiaolong Bian and Baoquan Zhang (Harbin Institute of Technology, P.R. China)
System on a chip (SOC) design based on reusable IP cores has become the mainstream method in integrated circuit (IC) development because of its many advantages, such as high performance, low power, small size and short development period. However, testing these embedded cores efficiently at the system level is a bottleneck problem. Test scheduling can increase test parallelism to minimize test application time. Test scheduling for SOCs is an NP-complete combinatorial optimization problem equivalent to the 2-D bin-packing problem: each core's test solution is generated as a wrapper design represented by a rectangle whose width equals the test time and whose height equals the number of test access mechanism (TAM) lines. Differential evolution (DE) is a simple yet efficient global optimization algorithm, but standard DE and its variants are ill-suited to operating in a discrete space. To better tackle discrete problems, a novel modified binary differential evolution algorithm (NMBDE) is proposed in this paper, in which a probability estimation operator is developed. A new hybrid mutation mechanism based on the probability estimation operator is introduced for better handling of test scheduling, where the variation coefficient is not a constant but changes with the iteration count. If the fitness value of a candidate solution is not superior to its parents, a random individual is selected as the base to enhance population diversity; otherwise the optimal strategy probabilistically chooses the best individual as the base to accelerate convergence. The advantage of the proposed algorithm is that, when the search stagnates near a minimum, the probability operators are still able to reach the global solution. The experimental results on the ITC'02 SOC benchmarks are very encouraging: compared with similar algorithms, the proposed algorithm achieves better solutions with fewer iterations.
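To make the rectangle-packing view concrete, the following Python sketch scores a candidate set of core tests by greedily packing the test "rectangles" onto a fixed number of TAM lines and returning the resulting test application time. It is a stand-in fitness evaluation for illustration only, not the NMBDE algorithm proposed in the paper; the core parameters are invented.

def schedule_makespan(cores, total_tam):
    """cores: list of (test_time, tam_lines); returns the total test time of a
    simple greedy, non-preemptive schedule on `total_tam` TAM lines."""
    free_at = [0.0] * total_tam              # when each TAM line becomes free
    for test_time, tam_lines in sorted(cores, key=lambda c: -c[0]):
        # Assign the tam_lines lines that free up earliest.
        lines = sorted(range(total_tam), key=lambda i: free_at[i])[:tam_lines]
        start = max(free_at[i] for i in lines)
        for i in lines:
            free_at[i] = start + test_time
    return max(free_at)

cores = [(120, 2), (80, 1), (60, 3), (40, 1)]   # (time units, TAM lines), invented
print(schedule_makespan(cores, total_tam=4))    # -> 180 for this toy example

An optimizer such as the paper's NMBDE would search over the binary encoding of wrapper/TAM assignments, using a makespan evaluation of this kind as the objective.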

2D2: Prognostics and Health Monitoring 2

Room: Monorail AB
Chair: John W. Sheppard (Montana State University, USA)
Automated Test Data Monitoring: Enabling Preemptive Response
Joseph Bosas, Jr (Honeywell Federal Manufacturing and Technology, USA)
The National Security Campus (NSC) collects a large amount of test data used to accept high value and high rigor product. Historically, the data has been used to support root cause analysis when anomalies are detected in downstream processes; the opportunity to use the data for predictive failure analysis, however, had never been exploited. The primary goal of the Test Data Monitor (TDM) software is to provide automated capabilities to analyze data in near real time and report trends that foreshadow actual product failures. To date, the aerospace industry as a whole is challenged at utilizing collected data to the degree that modern technology allows. As a result of the innovation behind TDM, Honeywell is able to monitor millions of data points through a multitude of SPC algorithms continuously and autonomously, so that our personnel resources can more efficiently and accurately direct their attention to suspect processes or features. TDM's capabilities have been recognized by our U.S. Department of Energy National Nuclear Security Administration (NNSA) sponsor for potential use at other sites within the NNSA. This activity supports multiple initiatives, including expectations of the NNSA and broader corporate goals that center on data-based quality controls on production. The Department of Energy's National Security Campus is operated and managed by Honeywell Federal Manufacturing & Technologies, LLC under contract number DE-NA0002839.
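As an illustration of the kind of SPC check such a monitor can run autonomously (the rules, limits and data below are invented, not NSC data or the TDM algorithms), a minimal Python sketch:

import statistics

def spc_flags(history, new_points, run_length=7):
    """Flag 3-sigma excursions and runs of `run_length`+ points on one side of the mean."""
    mu = statistics.mean(history)
    sigma = statistics.stdev(history)
    flags, above, below = [], 0, 0
    for i, x in enumerate(new_points):
        if abs(x - mu) > 3 * sigma:
            flags.append((i, "beyond 3-sigma control limit"))
        above = above + 1 if x > mu else 0
        below = below + 1 if x < mu else 0
        if above >= run_length or below >= run_length:
            flags.append((i, f"run of {run_length}+ points on one side of the mean"))
    return flags

baseline = [10.02, 9.98, 10.01, 9.97, 10.03, 10.00, 9.99, 10.01, 9.96, 10.02]  # stable history
drifting = [10.02, 10.04, 10.05, 10.06, 10.05, 10.07, 10.06, 10.08, 10.30]     # slow drift + spike
for idx, reason in spc_flags(baseline, drifting):
    print(f"point {idx}: {reason}")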
Complex System Health Analysis by the Graphical Evolutionary Hybrid Neuro-Observer (GNeuroObs)
Francisco J. Maldonado and Stephen Oonk (American GNC Corporation, USA); Rastko R. Selmic (Louisiana Tech University, USA)
Obtaining methodologies that enable predictive health monitoring of component degradation and the propagation of related effects across the overall system is a need when designing complex systems (such as autonomous vehicles, robotic systems, and aerospace platforms). In this paper, a current software development is presented for workflow generation and visualization to evaluate how component degradation will impact an entire system. This "Graphical Evolutionary Hybrid Neuro-Observer" (GNeuroObs) is a general-purpose complex-system analysis software tool that integrates, within a graphical environment, methods to set up a complex system, implement design changes, inject failures and degradation in elements, components, or subsystems (collectively, entities), conduct simulations, and enable workflow data visualization (with graphics to track system conditions during failure evolutions and display entity health at selectable levels of detail). This generalized software tool was designed under the philosophy of NASA's Systems of Systems, and provides a strategy for analyzing design changes and susceptibility to faults and degradation at multiple levels. This flexible system expedites designs by providing an Entity Model and Background Algorithm (EM&BA) Library, where entity models contain both healthy and degraded mathematical representations; the library can be expanded with custom user-defined models. The GNeuroObs building blocks are Workflow Data Generators (WF-DataGen), which are instantiated (through graphical aids) in the environment and contain entity models, health monitoring, and input/output software interfaces. Relevant technical aspects of GNeuroObs include: • Highly Accurate System Modeling. Instead of modeling a system as a single black box, the framework builds a model by decomposing the system into subsystems and components. Accordingly, models (which capture entity physics) for components are defined to introduce their contribution within the subsystem; at a higher level, subsystem models are linked for simulation of the overall system. To build the system model in this way, the WF-DataGen building block is used as a data structure, where capabilities include: (a) obtaining a high resemblance to the actual system; (b) accounting for degradation in components; and (c) automated analysis through the direct coupling of health management into entity models. • Approach Considers High Level Analysis Techniques. In addition to low level health analysis within WF-DataGens, high level analysis techniques include: (1) a hybrid Neuro-Bayesian scheme fusing both subsystem and low level entity information, where a stochastic inference mechanism provides fault root cause analysis, and (2) infusion of PHM techniques. • Health Monitoring and Root Cause Analysis by a Powerful Library of PHM Algorithms. This aspect is based on artificial neural network paradigms, stochastic methods (e.g. Bayesian networks and probabilistic theory), model-based methods, and novel hybrid methods, which are available for instantiation in a WF-DataGen. By using our embedded Collaborative Learning Engine, adaptation is enabled for handling highly dynamic operational conditions, system design inaccuracies, and degradation. In this paper, GNeuroObs is described through its application to a fuel subsystem, where the methodology allows interrelations among a complex set of heterogeneous sensors to be described, and mathematical correlations are used to analyze failures in entities and the propagation of effects across the system.
Reliable Health Monitoring and Fault Management Infrastructure based on Embedded Instrumentation and IEEE 1687
Artur Jutman and Konstantin Shibin (Testonica Lab, Estonia); Sergei Devadze (Testonica Lab OÜ, Estonia)
The rapid emergence of embedded instrumentation as an industrial paradigm, and the adoption of the respective IEEE 1687 standard by key players in the semiconductor industry, open up new horizons for developing efficient on-line health monitoring frameworks for prognostics and fault management. The cross-layer framework that we describe is capable of handling soft and hard faults as well as system degradation, and is based on the following four major components. Embedded monitors and sensors, Built-In Self-Test (BIST) facilities and various checkers, collectively called embedded instrumentation, form the foundation of the framework and are responsible for collecting service information. The next layer, based on IEEE 1687 networks appended with a special-purpose asynchronous emergency signaling infrastructure, is responsible for efficient data transportation, mainly from the instruments towards the monitoring and fault management software. An important property of our approach is that data exchange happens only on rare occasions, based on thresholds or fault detection conditions; as a result, simple mechanisms allow the vast majority of irrelevant data to be disregarded. We employ a set of flags asynchronously transported through the hierarchy of modules, resulting in a carefully prioritized interrupt reaching the system software. This is used both to ensure an emergency reaction to an uncorrected fault event and for routine background transportation of system health information. Fault management software is the third major component of the proposed framework. It is responsible for proper reaction to interrupts, handling received information and initiating follow-up service actions, e.g. scheduling diagnostic procedures in case of faults, scheduling tasks for re-execution, and updating health-map data. The health map is the fourth component of the framework: it reflects the actual status of the system, marking some blocks as unusable while holding error-occurrence statistics for the others. We describe the way we organize the health map. A few research groups have recently recognized the value of the IEEE 1687-based approach to health monitoring and published several well-received works on this topic in 2016. In the full paper we review the previous work in this direction, along with the current status and important results achieved so far, and point out promising directions for future work. This paper specifically addresses the reliability of the health monitoring service infrastructure itself. We demonstrate that monitoring, diagnostic and fault management functions based on the underlying IEEE 1687 infrastructure and instrumentation can have a negligible impact on system performance while being more reliable than the system under on-line monitoring and fault management itself. Since the approach is largely based on reuse of the DFT infrastructure implemented in semiconductor devices for manufacturing test and diagnostic purposes, the additional overhead is minimal and is mainly caused by the need to protect this service infrastructure during on-line operation. We demonstrate that the value of the proposed framework is especially evident for modern high-complexity devices and advanced-node digital SoCs, which are increasingly prone to defects and wear-out.
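A minimal, hedged Python sketch of the flag-aggregation idea described above; the module names and severity levels are illustrative only and are not defined by IEEE 1687 or taken from the authors' framework.

from enum import IntEnum

class Severity(IntEnum):
    OK = 0
    DEGRADED = 1          # threshold crossed; background reporting is enough
    CORRECTED_FAULT = 2   # fault detected and corrected; log and continue
    UNCORRECTED = 3       # emergency reaction required

def aggregate(module_tree):
    """Depth-first reduction of a {module: flags-or-submodules} hierarchy to the
    highest severity seen, i.e. the interrupt priority to raise to software."""
    worst = Severity.OK
    for value in module_tree.values():
        found = aggregate(value) if isinstance(value, dict) else max(value, default=Severity.OK)
        worst = max(worst, found)
    return worst

soc = {   # invented hierarchy of instrument flags
    "cpu_cluster": {"core0": [Severity.OK], "core1": [Severity.CORRECTED_FAULT]},
    "memory":      {"ddr_ctrl": [Severity.DEGRADED]},
    "io":          {"serdes": [Severity.OK]},
}
print(aggregate(soc))   # -> Severity.CORRECTED_FAULT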

Wednesday, September 14, 12:00 - 13:30

Awards Luncheon

Room: MK Ballroom 2-3

Wednesday, September 14, 13:30 - 15:00

2A3: Software Advances in ATE

Room: Magic Kingdom 1
Chair: Michael Seavey (Northrop Grumman, USA)
Signal Oriented ATS Software Platform Based on STD and ATML Standards
Wang Cheng and Meng Chen (Shijiazhuang Mechanical Engineering College, P.R. China); Chen Peng (Institute of Mechanical Technology, P.R. China)
To address the openness, interoperability and portability of automatic test systems (ATS), a signal-oriented ATS software platform based on the STD and ATML standards is established in this paper. First, through object-oriented analysis, the formation and operation mode of the signal-oriented ATS software platform is studied from multiple perspectives, such as the organization view, function view and information view. The static and dynamic modeling of the software platform is completed based on the STD standard, and the key signal components in the STD standard are built with COM technology. Then, based on the actual demand for general test information description of electronic equipment, the schema files are tailored by referencing the schema definitions in the ATML standards, and a standardized method of describing the test adapter and test results is presented. At the same time, configuration files conforming to the ATML standards can be automatically generated with the configuration tool developed in this paper. Finally, the signal-oriented ATS software platform management program is developed to realize the seamless connection of the STD-based runtime system and the ATML-based standardized descriptions.
Integrating Software Data Loaders into ATE Systems
Troy Troshynski (Avionics Interface Technologies, USA)
The Integrated Modular Avionics (IMA) concept has been adopted into several new military and commercial aircraft programs such as the F-22, F-35, Airbus A380, and Boeing 787. The goal of the IMA concept is to reduce the number and varieties of hardware computing modules and to increase the portability of avionics software. The IMA concept is currently being driven by industry initiatives such as the Future Airborne Capability Environment (FACE™) Consortium. It can also be seen in new industry standards such as ARINC 653, which defines an avionics application standard software interface. As a result of the adoption of the IMA concept, new avionics hardware modules are becoming increasingly generic, multipurpose, and reconfigurable based on the loaded software applications. Therefore, the automated test equipment (ATE) used to support maintenance and service of these new avionics systems must consider the use of standard approaches for handling loadable avionics software such as ARINC 615 and 615A software data loaders. This paper provides a brief technical overview of IMA systems and ARINC software data load protocols. It also explores strategies for integrating software data loaders into ATE systems that are required to support IMA-based Units Under Test.
Unlocking the Potential of "big data" and Advanced Analytics in ATE
Carlos Hernandez (Global Strategic Solutions LLC, USA); Mukund Modi (NAVAIR, USA); Luis Hernandez (Global Strategic Solutions LLC, USA); Anne Dlugosz (NAVAIR, USA); Dave Miller (Global Strategic Solutions LLC, USA)
Big Data and advanced analytics capabilities are delivering value in many commercial sectors. The motivation for implementing this new technology is the ability to analyze big data to achieve cost reductions, business process improvements, faster and better decisions, and new offerings for customers. These key business objectives also apply to the domain of automatic test equipment (ATE). It is clear that big data and advanced analytics technologies have the potential to bring dramatic improvements to the DoD ATE community of interest (COI). However, in order to unlock the potential of Big Data and advanced analytics in ATE, we have to deal with some fundamental issues that impede their implementation. For example, currently there is no connectivity or integration of UUT health monitoring data produced by the system itself with the troubleshooting, test and repair data produced throughout the maintenance process or the test data produced by the ATE. Also, there is no standard format or interface employed for capturing, storing, managing and accessing the health state data produced by the ATE. Data collected across operational maintenance activities is in numerous non-standard formats, making it difficult to correlate and aggregate to support advanced analytics. This paper discusses the fundamental shift in business practice required to address these critical issues and the specific benefits that can result from the integration of Big Data and advanced analytics in ATE, including enabling Prognostics and Health Management (PHM). The paper also provides an overview of a specific case study, the application of ATML standards in the approach, and some critical design and implementation issues based on current (actual) development efforts.
Software-Driven Technology Insertion Strategies for Automatic Test Systems
Michael Watts (National Instruments, USA)
As Automatic Test Systems continue to adopt architectures based on synthetic instrumentation and modular I/O platforms, software is eclipsing hardware as the primary input for determining technology insertion cadence and scope. While abstracting Test Program Sets from specific hardware is commonly referenced as a valuable tactic to reduce the risks of I/O obsolescence, it also requires significant up-front investment with a return that is later determined by the frequency of change. As systems become increasingly software-centric, a cost-optimized development strategy requires bounding technology insertion options, evaluating the costs associated with developing driver and measurement layers across those options, and managing the costs of migrating across application software and operating systems as a function of time. This paper will discuss the evolving solution space for software-dominated technology insertion strategies through an examination of the underlying compatibility of the COTS components at play.

2B3: Design for Testability Panel

Room: Magic Kingdom 4
Chair: Louis Y. Ungar (A. T. E. Solutions, Inc., USA)

Many designs lack the necessary features to detect, isolate and easily repair failed circuits. While DFT techniques, such as JTAG/IEEE-1149.1 boundary scan, have assisted in board manufacturing test, their system-level applications have been lagging. For example, commercial-off-the-shelf (COTS) boards and systems increasingly used in military systems reduce hardware costs, but have they made test and diagnosis of the system easier or more difficult? In many cases the JTAG port is not accessible at the system level, which raises the question: what can we do to get testable COTS? Some COTS are equipped with Built-In Test (BIT), but these test the individual subsystem - not the entire system - and that may not detect or isolate system faults. System-level DFT and DFD (diagnosability) are essential for cost-effective support. We know that testability has to be implemented early in the design, but can test and design work together cost-effectively? Whom do we ask to implement it? How? New techniques, such as IEEE-1687, support hierarchical test, but can we get management backing to design testable systems? These are some of the issues that the Panelists and the Audience will tackle and, as in previous panels, make pragmatic and useful recommendations to bring back to managers. Interestingly, at the IC level DFT is widely supported by managers, and at the board level it also finds some support, but at the system level testability is hard to come by. Why?

2C3: Test Techniques 1

Room: Castle AB
Chair: Jeff Murrill (Northrop Grumman, USA)
BER Test Time Optimization
Suhas Shinde (Intel, Germany); Jan Knudsen (Intel Deutschland, Germany)
In commerce, time to market (TTM) is defined as the length of time from when a product is conceived until it is available for sale. There is no standard for measuring TTM, and it varies from product to product. The product life cycle of PCs is 2-3 years, whereas for mobile products it is 1-2 years. A major part of this life cycle is taken up by the testing phase. Hence it becomes essential to reduce the number of tests to the most critical ones and to reduce test time. For high-speed serial interfaces, one quality measure of digital transmission is the bit error ratio (BER). BER is defined as the ratio of the number of received bits in error to the total number of bits transmitted. BER testing for high-speed serial interfaces requires a long string of bits to be sent and hence very long test times, usually minutes or hours. This test time ultimately translates to money and hence should be shortened. This paper explains how hypothesis definition and testing can reduce BER test time and achieve cost savings. Further work addresses the verification phase of this test methodology with a test lab exercise.
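As background for the hypothesis-testing idea mentioned above, a standard confidence-level calculation (not necessarily the authors' exact method) shows how the required number of transmitted bits, and hence the test time, follows from a target BER; the sketch below assumes SciPy is available and treats bit errors as a Poisson process.

    # Illustrative only: a common hypothesis-testing formulation for BER test
    # length (null hypothesis: BER >= target). Not the authors' method.
    from scipy.stats import chi2

    def bits_required(ber_target: float, confidence: float = 0.95, errors: int = 0) -> float:
        """Bits needed to reject BER >= ber_target at the given confidence
        when at most `errors` bit errors are observed (Poisson approximation)."""
        return chi2.ppf(confidence, 2 * (errors + 1)) / (2 * ber_target)

    # Example: demonstrating BER < 1e-12 at 95% confidence with zero observed errors
    n_bits = bits_required(1e-12, 0.95, 0)    # roughly 3.0e12 bits
    test_time_s = n_bits / 10e9               # roughly 300 s on a 10 Gb/s link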
Solar Probe Plus (SPP) Wrap Around Automated Testing
Anthony Parker (JHUAPL, USA)
The Solar Probe Plus (SPP) mission, under NASA's Living with a Star program, will fly a spacecraft (S/C) through the sun's outer corona. The mission will gather data on the processes of coronal heating, solar wind acceleration, and the production, evolution and transport of solar energetic particles. The spacecraft has an Electrical Power System (EPS) that has to undergo testing before delivery to the spacecraft for integration and test. The specific unit to be delivered is called the Power System Electronics box (PSE). The PSE has a novel solar array (S/A) control algorithm which autonomously positions the wings to optimize the thermal load while maintaining adequate electrical power. A Wrap Around Automated Testbed (WAAT) containing various Ground Support Equipment (GSE) has been designed to test the PSE in real time. The major components of the Testbed are a dynamic solar array simulator (SAS); a battery simulator to emulate the spacecraft flight battery; and an EPS emulator to control command and telemetry to the PSE. The EPS emulator also provides a fast serial link to the SAS carrying the incident radiant flux on the S/A wings, for any wing angle, S/C attitude and shadow condition at any point in the mission, computed via a Simulink model based on PSE telemetry. The EPS emulator also provides any necessary loads, power supplies and temperatures. The system has a script engine that allows communication between all GSE and provides complete automation. The WAAT gives the user the ability to test the PSE or any other Unit Under Test, including board- or slice-level designs, by having command and telemetry specific to that particular GSE. Each GSE then becomes part of the entire Testbed, completely under automated control, by tying it to a single script engine on a server. With the exception of the script engine, each GSE has a PXI controller and uses PXI modules to accomplish various tasks, including controlling and communicating with multiple Digital Signal Processors (DSPs). The WAAT can travel with the PSE or the UUT to automate the process for environmental testing before delivery to the spacecraft. The WAAT can also be used for backup testing in the event any anomalies occur during spacecraft integration or flight.
Diagnostic Engineering in the 21st Century: How do I optimize my analyses to solve real-world problems?
Jack L Amsell (DSI International, Inc., USA)
In the early days of Systems Engineering, several methods were developed in industry to verify or validate designs, ensure that products could meet requirements and specifications, and later support those products. Each of those needs drew upon techniques and methodologies that were state-of-the-art. Sometimes, those approaches could not resolve all issues. For that reason, newer methods were developed to analyze and answer questions, but those newer methods did not really bring forth the necessary paradigm shift to supply optimal results. This paper will address that gap: Is the goal Design Assessment or an Implemented Solution?
• Modeling provides full system knowledge
  o Modeling tools permit quick capture of topology and dependencies
  o Test development efficiency to ensure capture of diagnostic knowledge
  o Comprehensive diagnostic studies for flexibility in analyses
• Interconnectivity allows for sharing specialized analytical resources
• Broadband run-time environment and tools
  o Ability to merge diagnostic analyses with data resources
  o Easy transfer of formatted data to run-time access tools
  o User-oriented interfacing for more efficient maintenance and support
  o Database to provide knowledge-based enhancement of support options
Diagnostic Engineering Paradigm for Today
• Relevant terminology
  o Metrology
  o Testability
  o Diagnosability
• Modeling concepts
  o Object-oriented modeling
  o Functional dependency modeling
  o Use of dependency net linking
  o Encapsulation of data
  o Hierarchical modeling
• Diagnostic analysis concepts
  o Hybrid diagnostics: function-based / failure-based
  o Testability metrics
  o FMECA / FTA
  o Fault codes / fault insertion
  o Simulators
Automated Testing and Quality Assurance of 3D Printing / 3D Printed Hardware
Jeremy Straub (North Dakota State University, USA)
A significant proliferation of additive manufacturing (or, as it is commonly known, 3D printing) use has occurred, a departure from its rapid prototyping origins. Aerospace and military applications represent an area of growing use, an area providing significant benefit and a set of applications where produced-item quality is critical. This paper presents an overview of an automated visible light (or other pixel-based sensing) data technology to detect, classify and prospectively respond to 3D printing anomalies and failures. The paper begins with a discussion of additive manufacturing. Then, a review of common additive manufacturing problems (ranging from those faced by consumer-grade printers to the higher-end and more reliable systems that are used to produce aircraft and spacecraft parts) is presented. The paper then continues with a discussion of the problems that the aforementioned defects can create and an assessment of the magnitude of these problems. From here, the paper turns to a presentation of the assessment technology. It provides an overview of the use of the comparison of projected and sensed imagery to autonomously identify defects in a 3D printed object (both during printing and after printing is completed). The use of the technology with multiple materials is discussed. The importance of both in-process assessment (for detecting internal structural defects, defects which might later be covered and defects that might otherwise be difficult to image) and post-completion assessment is discussed. The efficacy of early defect detection for prospective corrective action is also discussed, and the correction approaches possible based on in-process detection are considered. The algorithms used for automated detection and correction are presented and their efficacy for problem-solving is discussed. Real-world work demonstrating error detection is presented and assessed. Finally, focus turns to discussing the efficacy of the technology for multiple application areas. Specifically, its utility for multiple types of additive manufacturing, including fused deposition modeling (FDM) and sintering, is considered. Based on this assessment and a discussion of the needs of various applications, the efficacy of the technology is assessed for the quality assessment of multiple types of printing. The needs of multiple application areas (including aircraft, spacecraft and terrestrial motor vehicles) are presented and, from this, the technology's utility for these applications is determined. The benefits that each application area could receive (and the likelihood of these benefits ensuing) are discussed. The paper concludes with a discussion of the future work that is required to advance this technology to be ready for use, and of relevant current work to this end.

2D3: Advanced Instrumentation Approaches

Room: Monorail AB
Chair: Robert R Fox (US Navy, USA)
A New Category of Software-defined Instrumentation for Wireless Test
Tarek Helaly (ThinkRF Corporation, Canada); Nikhil Adnani (ThinkRF Corp., Canada)
The past decade has seen an exponential proliferation of wideband radio communication technologies. The drive toward wider bandwidths and increasingly complex modulations presents unique challenges from a test and measurement perspective. At the same time, device manufacturers have to contend with reduced or shrinking test equipment budgets. This trend is going to accelerate with the rapid proliferation of newer Internet of Things devices and their unique test requirements. This paper describes a new category of cost-effective, software-defined, headless signal analyzer platform called the WSA6000. This product, distinguished by its cost-effectiveness, small form factor and enhanced performance specifications, enables a range of new test applications that could not be meaningfully addressed by legacy equipment. The software-defined aspect enables the user to take advantage of an external host processor such as that in a laptop or desktop. Additionally, software modules for specific modulation formats can be utilized with a common hardware platform. This paper describes the key attributes of the WSA6000, its architecture, and how it differs from PXI, rack-and-stack and handheld products, with example applications.
Automating Atmospheric Neutron Testing: Minimizing Cost and Improving Statistical Certainty
Matthew Smith and Lawrence Kent (Draper Laboratory, USA); James Montante (Draper, USA)
Accurate radiation-induced event-rate predictions are critical in establishing expected electronic equipment performance in natural space and terrestrial avionic environments. Some sources of uncertainty, like environmental software models, cannot easily be improved. However, the statistical certainty of collected data, facility rental expense, and engineering man-power can be optimized through high fidelity automated test. The investment in automated test development results in a lower overall cost of a test campaign, while simultaneously maximizing statistical certainty of the collected data. This is achieved by tailoring the automated test to expedite the event data capture per unit of real time at atmospheric neutron test facilities.
Method and Device for Hot Air Leak Detection in Aircraft Installations by Wire Diagnosis
In most aircraft, hot air leak detection loops are formed by thermosensitive cables having temperature-dependent characteristics. These wires are installed along air ducts in order to react to temperature changes induced by leaks. This reaction consists of the local melting of a eutectic salt, which results in a short circuit from an electrical point of view. Hence, an alert is sent to the cockpit. However, with old configurations, this alert does not include leak localization information. Classic methods based on load measurement that allow defect localization are not accurate enough, as they do not take into account cable aging and junction degradation. This may cause false alerts. Reflectometry-based methods are among the most common for cable diagnosis. Reflectometry is a non-destructive method based on the radar principle: it measures the signal reflected by characteristic impedance variations. The proposed method uses multi-carrier reflectometry (MCTDR, Multi-Carrier Time Domain Reflectometry). Advantageously, the MCTDR measurements allow our device to be superimposed on already installed systems without interfering with their signals. The reflectometer measures the received signal and compares the amplitudes of this reflectogram to a given reference. A hot point is detected when the amplitudes of a given number of successive reflectograms are increasingly greater than the reference, which is caused by a decrease in the local value of the impedance. To identify fast drifts caused by the appearance of hot points, the device uses a floating reference that is modified over time. This floating reference in particular makes it possible to ignore slow drifts, caused by cable aging and environmental variation, and eliminates many sources of false alarms. The device allows the detection and localization of defects with good accuracy. Moreover, we deduce the temperature at the hot area by computing its impedance. This tool can be used in maintenance mode or embedded mode, allowing preventive maintenance and avoiding aircraft-on-ground (AOG) situations.
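To make the floating-reference idea concrete, the following Python sketch shows one plausible way to compare successive reflectograms against a slowly adapting reference; the margin, window length and update rate are illustrative assumptions, not values from the paper.

    # Sketch of floating-reference hot-point detection (illustrative only).
    import numpy as np

    class HotPointDetector:
        def __init__(self, reference, margin=0.05, consecutive=5, alpha=0.001):
            self.reference = np.asarray(reference, dtype=float)
            self.margin = margin          # amplitude excess treated as significant
            self.consecutive = consecutive
            self.alpha = alpha            # slow update rate of the floating reference
            self.count = 0

        def update(self, reflectogram):
            excess = np.asarray(reflectogram) - self.reference
            if excess.max() > self.margin:
                self.count += 1           # fast drift: candidate hot point
            else:
                self.count = 0
                # slow drifts (aging, environment) are folded into the reference
                self.reference += self.alpha * excess
            if self.count >= self.consecutive:
                return int(excess.argmax())   # index along the cable (localization)
            return None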
Lost Bus: Solving the obsolete PC Bus standards problem in ATE
Pavel Gilenberg (Teradyne, Inc., USA)
Every 6 to 11 years a new PC peripheral interface standard is developed. As the new interface gains in popularity, the old interfaces that are replaced become obsolete. This poses a challenge for ATE and UUT equipment that rely on the PC peripheral interfaces for testing. The solution for the obsolescence problem in PXI (and similar) systems is to move the interfaces away from the PC and into an instrument. A peripheral instrument would need to include several standard PC interfaces such as Ethernet, USB, SATA, UART, and I2C. By moving the interfaces to an instrument, when the PC and the associated interfaces become obsolete, the instrument can continue to be produced allowing the continued testing of the legacy UUTs. Furthermore, the instrument can maintain software compatibility through multiple OS releases preventing costly rehosts of TPS (Test Program Set).

Wednesday, September 14, 15:00 - 15:30

Ice Cream Break

Room: Exhibit Hall

Custom Systems Integration is sponsoring an ice cream break on Wednesday afternoon in the Exhibit Hall, Center Lounge area. The break will be held from 3:00 PM to 3:30 PM on Wednesday, September 14.

IEEE AUTOTESTCON heartily thanks CSI for their sponsorship of this great mid-week event and looks forward to everyone having a great 2016 conference.

Wednesday, September 14, 15:30 - 17:00

2A4: Design For Testability

Room: Magic Kingdom 1
Chair: Craig D Stoldt (BAE Systems, USA)
Tapping Into Boundary Scan Resources for Vehicle Health Management
Louis Y. Ungar (A. T. E. Solutions, Inc., USA); Michael D Sudolsky (Boeing, USA)
Design for testability (DFT) should go beyond simply assisting manufacturing test or even beyond fielded unit troubleshooting. Boundary scanned components can be controlled to collect real time snapshots of signals capable of assessing circuit health in situ without interfering with normal operation or flight. Vehicle Health Management (VHM) frameworks can utilize the information gained from SAMPLE instructions gathered by JTAG/IEEE-1149.1 boundary scan compatible ICs using Boeing On-Line Diagnostic Reporting (BOLDR®) techniques to enable this data collection for assessing what maintenance actions should be taken. Boundary scan data at or around the time that failures take place can be collected as historical information and retained as "evidence" during a call for line replaceable unit (LRU) maintenance actions. First, it can help assess whether built-in test (BIT) or embedded test indications are persistent, continuous for certain operational modes, intermittent, or simply spurious. In other words, it can help determine false alarms (FAs). Second, once LRUs are in the repair facility and a No Fault Found (NFF) situation is encountered, the historical evidence can help determine the root cause and direct repair actions. Distributed and Centralized BIT for VHM data acquisition can be enhanced by the information boundary scan data provides. Many LRUs that already have boundary scan hardware can utilize embedded software updates coupled with BOLDR® VHM techniques with minimal if any hardware changes to take advantage of this added information source. This paper details how SAMPLE and BOLDR® can be used without major changes in legacy or new avionics.
Inhibiting Factors in Design for Testability Higher Education
David R Carey (Wilkes University & Four Hound Solutions LLC, USA); Russell A Shannon (NAVAIR, USA)
Engineering students are not graduating with the necessary knowledge or experience in design for testability (DFT), automatic test equipment (ATE), or diagnostics in order to work in the field. Many of these engineers will join systems development teams. As a result, they typically do not demonstrate a consistent understanding of integrated diagnostics, or have an appreciation of the need. They appear to lack the experience needed in this area. These same "fresh out" engineers will ultimately derive the low-level requirements for developing diagnostic systems, and this lack of knowledge about testing environments will have a significant impact. Failure to adequately address the integrated diagnostics and testing needs of a system greatly impacts its supportability and, consequently, the cost of that system throughout its life cycle. Integrated diagnostics is a career field for which there currently exists no standard set of basic qualifications, few educational opportunities to study at the university level, no clear processes within most organizations for practicing integrated diagnostics as a systems engineering activity, and no uniform method of sharing techniques and lessons learned with new employees. Studies have found that the majority of test engineer training is on-the-job, rather than knowledge acquired as part of a higher education degree program, or a formal training process. As a result, it requires two to three years for any recent graduate to become competent in the field of test engineering. There are three main inhibiting factors to teaching design for testability as part of post-secondary education. The first factor is cost. The high cost, and quick obsolescence, of many ATE systems is a barrier to entry to any small- or medium-sized college's engineering department budget. Even accounting for corporate donations, there are hidden costs, such as facilities and equipment maintenance, which make ATE prohibitively expensive. Moreover, in the United States, all engineering curricula must be accredited by the Accreditation Board for Engineering and Technology (ABET). It is an arduous process, even for such well-worn topics as electrical engineering or mechanical engineering. A department chair is unlikely to risk the department's accreditation, or prolong the accreditation process, by including an exotic topic such as DFT, or diagnostics. Finally, it is the goal of most institutions that their students will obtain employment upon graduation. To that end, curricula are often tailored to the demands of local employers. If surrounding industry is not asking for skilled diagnostic or DFT engineers, then there is no incentive for an engineering department to include it in a degree curriculum. This paper explores each of these factors in depth, and provides mitigations for overcoming the challenges that each presents.
Design for Testability concepts applied to product development: An Embraer aircraft
As new product designs become more complex, checks and efficient validations become more difficult and require more resources, which implies a greater expenditure of time; the importance of the DFT concept therefore keeps increasing. At EMBRAER, the aircraft development cycle requires a number of checks and validations throughout the design phase. Validation and verification are applied to the product or components through a variety of tests. This paper discusses the tests performed during the aircraft manufacturing phase, which consist of functional and operational tests intended to verify the integration of systems, that is, to ensure that during the manufacture of the aircraft all parts have been assembled correctly according to the design specifications. Manufacturing tests can be divided into mechanical tests and electrical integration tests: the first checks the correct assembly of the mechanical components, while the second checks the correct functioning of the electrical interconnections and systems integration, mitigating failures such as open connections, shorts and/or inverted signals. The object of this work is the creation of DFT requirements to optimize electrical integration manufacturing tests. To enable the creation of requirements, the need was identified to characterize the standard features of the various aircraft systems. These systems have different technologies and functions; however, from the viewpoint of electrical integration testing, all systems share some similarities, which allows the same test application methodology to be used in production. The arrangement of the system architecture makes it possible to identify electrical similarities according to the division of the different components into fundamental test elements: the sensor element / primary transducer, the primary actuator element, the data concentrator element and the data presenter element. The decomposition of systems into key test elements makes it possible to apply the DFT concept and, consequently, to write high-level testability requirements applicable to the electrical integration tests of all systems. Essentially, all DFT requirements consist of concentrating the relevant information on a main bus of the aircraft. Thus, with the use of a data acquisition platform, it is possible to access all information through one service port and compare it with the information expected for a given scenario. The main contribution of this work is the development of standard testability requirements for the propulsion, hydraulic, electrical, avionics, interior, mission, environmental and flight control systems, which can be extended to future programs. Testability requirements contribute to reducing the execution time and the cost of the support tools needed to test aircraft during manufacture and maintenance. The KC-390 program was the pioneer at EMBRAER in adopting the concept presented in this paper.
Design and Implementation of Shared BISR for RAMs: A Case Study
Gang Wang (University of Connecticut, USA); Chengjuan Chang (Institute of Computing Technology, Chinese Academy of Sciences, P.R. China)
As the transistor sizes of embedded memories continue to shrink and silicon area becomes an increasingly scarce resource, multi-memory structures have become the prevalent trend in current SoC designs to achieve better performance. Due to imperfect manufacturing processes, faults may be introduced into these designs. Built-In Self-Test (BIST) and Built-In Self-Repair (BISR) are well-suited test and repair methods for an individual embedded memory; however, dedicating a BIST and BISR instance to every memory is unacceptable in a multi-memory design, and the redundancy resources that memory manufacturers provide are very limited. It is therefore inefficient to use traditional redundancy-allocation algorithms; instead, a more precise shared BISR structure is needed to improve both the repair rate of the RAMs and the utilization of the redundancy resources, as well as to reduce the silicon area overhead of the BISR circuits. To this end, this paper proposes a shared self-repair design that uses a CAM as the operational unit for fault information. The paper presents the special components of the design and the corresponding working principles. We implement this structure in real industrial microprocessors. Experimental results demonstrate the effectiveness of the proposed structure.
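As a purely hypothetical illustration of the CAM-style bookkeeping behind shared memory self-repair, the sketch below records failing addresses reported by BIST in a small table shared by several RAMs and remaps accesses to spare rows; the sizes, naming and allocation policy are assumptions, not the authors' design.

    # Hypothetical sketch of a shared repair CAM (illustrative only).
    class SharedRepairCAM:
        def __init__(self, spare_rows: int):
            self.entries = {}            # (ram_id, failing_row) -> spare index
            self.spare_rows = spare_rows

        def allocate(self, ram_id: int, failing_row: int) -> bool:
            """Record a faulty row; returns False when redundancy is exhausted."""
            key = (ram_id, failing_row)
            if key in self.entries:
                return True
            if len(self.entries) >= self.spare_rows:
                return False             # unrepairable with the shared spares
            self.entries[key] = len(self.entries)
            return True

        def remap(self, ram_id: int, row: int):
            """Return the spare-row index for a repaired address, if any."""
            return self.entries.get((ram_id, row))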

2B4: Panel: 2016 Outlook of Modular Instrumentation in the T&M Industry

Room: Magic Kingdom 4
Chair: Bob Helsel (LXI Consortium & PXI Systems Alliance, AXIe Consortium, VXIbus Consortium, IVI Foundation, USA)

What is the status and outlook for VXI, PXI, LXI, and AXIe instrumentation? Has modular instrumentation become the de facto standard of automated test? In what applications? What does this mean for Mil/Aero applications in particular? Four industry experts will give brief presentations on these topics followed by an interactive panel discussion.

2C4: Test Interface Solutions

Room: Castle AB
Chair: James Lytle (NAVAIR, USA)
IEEE-1505.3-2015 Std BAE Manufacturing Test Interface Implementation
George Isabella (BAE Systems, USA); William L Adams, Jr. (United States Air Force, USA); Robert Spinner (Advanced Testing Technologies, Inc. & ATTI, USA); Stephen Mann (BCO, Inc, USA); Michael J Stora (System Interconnect Technologies, USA); David Droste (CGI Federal, USA)
This paper provides an overview of a manufacturing test implementation by aerospace company BAE Systems utilizing the "IEEE-1505.3-2015 Universal Test Interface Pin Map Configuration for Portable and Bench Top Test Requirements Utilizing IEEE Std 1505-2010". The 1505.3 standard specifies requirements for a test interface system configuration input/output (I/O) framework and a physical pin map, to enable the interoperability of compliant interface fixtures (also known as interface test adapters, interface devices, or interconnection devices) on multiple scalable ATE systems. The paper describes how the features and capabilities of the IEEE-1505.3-2015 standard were applied by BAE to a manufacturing test application. This application is designed to validate the open standard as a high-performance, multi-signal-connector, scalable architecture that could be further standardized across all of its manufacturing applications. As a fundamental interface element of any Test Program Set (TPS) I/O configuration (receiver/fixture structure), the 1505.3 standard implementation at the factory, and its subsequent migration to the government customer for depot use, can have significant benefits. These value-added benefits for the US Air Force are discussed regarding the vertical integration of an IEEE-1505.3-2015 TPS.
A Case Study Dividing Tests of Interdependent Units Into Independent Test Systems
Volkan Özdemir (ASELSAN Inc., Turkey)
There are many units in weapon systems that communicate with each other and supply necessary signals to related units. The complicated hardware and software interfaces of these systems make designing tests for production more challenging. In order to test a unit individually, without the other units present in the test system, some serious test design work must be done to simulate the functions of the other units. To design a well-balanced test application, commercially available test equipment must be used, since maintenance of the test system becomes harder if unique test boards are designed instead of using general test equipment. This paper presents a case study involving two main units and four electronic boards. These UUTs are highly dependent on each other, and the unique output of one unit is the unique input of the other units. Some signals cannot even be produced if the designated input is not wired to the UUT. These units and electronic cards communicate with each other continuously over RS-422 and CANopen during operation. The communication rules are so strict that if even a single message is missing, the communication of the system fails. Designing a test system that simulates the functions of these units and simultaneously communicates with the UUTs over serial channels and CANopen takes considerable time, and when the test designer has limited time the design process must be flawless and systematic. With years of experience in the test design area, ASELSAN has refined its own test development process in order to work under pressure and limited time. Dividing the test design process into different parts and building a closed-loop control system around it dramatically lowers the time needed to develop tests. Throughout this paper, ASELSAN's way of designing a test system is clearly explained using a challenging weapon system.
A universal structure model for switches and its application to automatic test system
Xiuhai Cui (Harbin Institute of Technology & Cornell University, USA); Shaojun Wang and Ning Ma (Harbin Institute of Technology, P.R. China); Yu Peng (Harbin Institute of Technology, HIT, P.R. China)
Various types of switches are widely used in the aerospace automatic test industry, such as toggle switches, multiplexers, matrix switches, etc. Different types of switches have different drivers and development methods. To simplify the implementation of drivers and reduce the cost and time associated with developing new test systems, this paper proposes a universal model to define different types of switches. Based on this model, we develop a satellite test system targeting various types of switches. In this system, we use a backtracking search algorithm to find the path between the ends of a switch, so that the different switches can share one kind of driver. To improve instrument interchangeability, the software architecture was built on the IVI (Interchangeable Virtual Instrumentation) COM (Component Object Model) driver architecture. Our test system has been verified in an actual PXI system and a VXI system. More specifically, we make the following contributions. First, a universal model is designed for different types of switches. We describe a switch with an X×Y matrix, where X and Y represent the different ends of the switch. Four kinds of numbers are used to represent the different connection relationships: direct connection, no connection, configuration channel and occupied channel. Through this method, the different switches are described in one uniform way. Based on this model, a backtracking search algorithm is used to find paths. This search method can quickly find the path between the ends of a switch and handles the case in which a configuration channel is occupied by another path. The model allows a single driver to be designed that suits different switches, reducing the workload of instrument manufacturers. Second, using this model, we design a test system for a satellite. Our test system contains different switch modules, data acquisition modules, and communication modules. More specifically, this system consists of three levels. At the top level, we provide a user-friendly graphical interface, databases for the test program set, a procedure editor and a procedure executor. The middle level is the driver layer of the test system; we adopt the industry open-standard IVI-COM architecture, and to facilitate use, the IVI Configuration Store for this system was developed based on an XML file. The bottom level is composed of the different switches, data acquisition boards, and communication boards. Third, we verified the universal model and the driver based on our switch model in an actual PXI system and a VXI system. Both work correctly in ground satellite testing.
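As an illustration of the matrix-based switch description and the backtracking path search summarized above, the short Python sketch below uses assumed numeric encodings for the four connection relationships; it is not the authors' driver code.

    # Illustrative sketch of the X-by-Y connectivity matrix and a backtracking
    # path search; the encodings and helper names are assumptions.
    NO_CONN, DIRECT, CONFIG, OCCUPIED = 0, 1, 2, 3

    def find_path(matrix, src, dst, path=None):
        """Backtracking search for a routable path from src to dst.
        matrix[a][b] holds one of the four relationship codes above."""
        path = path or [src]
        if src == dst:
            return path
        for nxt, code in enumerate(matrix[src]):
            if nxt in path or code in (NO_CONN, OCCUPIED):
                continue                     # dead end or already-used channel
            result = find_path(matrix, nxt, dst, path + [nxt])
            if result:
                return result                # first feasible route; backtracks otherwise
        return None

    # Example: a tiny 4-end switch where end 0 reaches end 3 via configuration channel 2
    m = [[0, 1, 2, 0],
         [1, 0, 0, 0],
         [2, 0, 0, 1],
         [0, 0, 1, 0]]
    print(find_path(m, 0, 3))   # -> [0, 2, 3]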
Loopback Test Unit Benefits
Failure in the Production Test Set can halt production and lead to extended downtime spent isolating the failure. Identifying a failure in a test set often relies on using a known good unit, or "Golden Unit". Use of a Loopback Test Unit in place of a Golden Unit addresses issues with safety, cost, signal accessibility and fault isolation. A Loopback Test Unit is used in place of the Golden Unit for test set fault detection and isolation. Therefore, it is designed to simulate the Production Unit's physical form factor and interface in order to be compatible with the Production Test Set. It receives the power forms and signal inputs or outputs from the Production Test Set and, as applicable, provides a response, loops the signal back or generates a test signal. Maximum utility is achieved by implementing the Loopback Test Unit at the lowest level of assembly. Commonly this is at a circuit card assembly (CCA) with test connections made by spring probes or bed-of-nails fixtures, which have a higher rate of connectivity failure. Designing with JTAG-compatible test interfaces provides an automated connectivity test. This allows verification, via automated testing, of the individual test probe connections to a common power plane. Furthermore, a well-designed CCA Loopback Test Unit can be reused and incorporated into the next "Module" level of assembly, providing design and test verticality. Designed for the Production Test interface, the Loopback Test Unit can use the same or modified test automation software. A Loopback Test Unit with automated testing provides an easy way of verifying the proper function of the Production Test Set and can be integrated as a standard operation into the factory test flow, as appropriate. If a failure, or especially repetitive failures, occurs with a Production Test Set, the technician can insert a Loopback Test Unit, run an automated test and begin failure analysis. By design, the Loopback Test Unit can provide signal accessibility and isolation, greatly facilitating fault identification. Signal isolation also allows signal insertion, which may be necessary for more complex failures. In this paper, the author will present the general concept of a Loopback Test Unit, the time-consuming troubleshooting deficiencies it resolves, and the financial benefits of implementing this in production testing.

2D4: Interesting Instrumentation Techniques

Room: Monorail AB
Chair: Jesse Zapata (US Navy, Point Mugu, USA)
Developments in Instrumentation and Measurement: Advancements in Power Source Technology: Advancements in the use of digital technology in programmable power sources
Herman van Eijkelenburg (Pacific Power Source, Inc., USA)
Programmable AC power sources have been widely used to implement and support a wide range of Test Procedure Specifications (TPSs), as they provide the test engineer with the ability to fully control voltage, frequency and current to the unit under test. Products like these are used to simulate various power conditions and anomalies that are likely to occur in actual use of AC and DC powered products. They are also essential for providing the requisite 400 Hz AC power to military and avionics subsystems. These types of instruments provide the following features and benefits to the test engineer:
• Safety isolation between the AC grid input and the output to the unit under test.
• Conversion of any grid voltage found around the world to a specific desired output frequency for the unit under test.
• Precise control over output voltage and load regulation.
• Output power immune to AC line input fluctuations or momentary voltage drops.
• Phase conversion from single phase to three phase, single phase to split phase, or three phase to single phase.
Conventional AC Source Topologies and Design: The vast majority of available AC power source designs are based on pulse-width modulated (PWM) control circuits and the use of low-frequency transformers to provide isolation between the input and output of the AC power source. These PWM designs generally use analog control circuits to provide output regulation, current limiting and frequency conversion functions. While this is a proven design dating back to the early 80's, it is fraught with a series of drawbacks. To list a few:
• The use of line-frequency AC input transformers to provide galvanic isolation adds significant size and weight to the product, especially as power levels increase. For example, a 15 kVA AC source will require an 18 kVA three-phase input transformer weighing approximately 180 lbs. Power sources using this design can weigh well over 350 lbs in total.
• Alternatively, the use of output transformers to provide galvanic isolation causes similar increases in size and weight and also prevents DC output capability. Furthermore, the output transformer must support the wide frequency range typically associated with programmable AC power sources, typically up to 1000 Hz or higher, requiring more complex and costly transformer designs.
Digital Power Conversion: The higher PWM switching speeds required to support the wide output frequency range of an AC source, often higher than 30 kHz, have made it difficult to use digital signal processors to provide all control functions. With recent advances in DSP technologies, a fully digital implementation of an AC power source design supporting these switching frequencies is now feasible. A good example of this is the new Pacific Power AFX series, which uses a three-stage, all-digital power converter design that eliminates both AC input and output transformers and allows a power density five times higher than similar contemporary products and a fourfold reduction in weight. This presentation will illustrate the topology used in this new design to highlight the approach used to obtain this significantly improved packaging density for AC and DC power sources. Specific sections will be:
• Why Digital
• Converter Topologies
• Efficiency, Size and Weight
• Unique capabilities made possible by the use of all-digital controls
• Summary
Run-Time Reconfigurable Instruments for Advanced Board-Level Testing
Igor Aleksejev (Tallinn University of Technology & Testonica Lab OÜ, Estonia); Artur Jutman (Testonica Lab, Estonia); Sergei Devadze (Testonica Lab OÜ, Estonia)
FPGAs are often used as a platform for embedded instrumentation. Whether already part of a system or part of the DFT, the on-board FPGA can be exploited to carry out necessary manufacturing tests. Up to now, the FPGA-based instruments proposed in the literature have been soft-core IP based, which assumes that a full compilation process has to be done for each product and test pair. In this work, we propose run-time reconfigurable (RTR) FPGA-based embedded instruments which are distributed as pre-compiled ready-to-use bitstreams. These instruments are designed in a special way that allows on-the-fly adaptation of the instrument to test a particular product. The key advantage of the proposed RTR approach is that such instruments can be used instantly in production and do not require recompilation for a new product or after a test specification change. For these purposes, all proposed instruments have a special run-time reconfigurable architecture that maintains all possible configurations of the interconnections between an instrument (i.e. the FPGA) and a device under test (DUT) on a board. This refers both to DUTs that are always connected to dedicated pins of the FPGA (e.g. high-speed serial transceivers, boot SPI flash, ADC or CLK ICs) and to others that might be attached to any GPIO location (e.g. RAM, flash, Ethernet PHY). This is a fundamental difference from other formats of instruments, where the exact pin connections must be specified before compilation. This paper also studies the properties of the proposed RTR instruments in comparison to the known soft-core IP based instruments. The analysis of features includes automation and ease of use for manufacturing tests, how the instruments can be distributed among end-users, the robustness and reliability of the counterparts, as well as the logistics behind the development of the instruments themselves. The feasibility of the proposed methodology was proven by implementing RTR instruments targeting board-level test and measurement tasks. The obtained real-life experimental results proved the efficiency of the developed instruments over state-of-the-art test technologies. With the help of these instruments one can considerably improve the quality of tests for printed circuit board assemblies as well as reduce test time. Integrated into the test setup, the proposed RTR instruments represent an automated and low-cost complementary solution for testing complex high-performance boards and systems.
Design of High Precision Analog Modules and System Architectures to Maximize Testability and Long Term System Reliability
Andrew Cunningham (Draper Laboratory, USA); Andrew Goldfarb and Brigid Angelini (Draper, USA)
To achieve cutting edge performance in measurement and control systems, it is often necessary to execute many high accuracy and high speed system functions in the analog domain. However, including custom analog hardware in complex long lifecycle defense or aerospace systems presents challenges in integration, reliability and maintainability of the system. Examples of these issues include: sub-system test of analog modules, system calibration, drift of analog components over time, and the effect of temperature on analog measurements. This paper presents best practices and novel methods to increase the practicality of designing custom analog system components, and discusses trade-offs between designing custom analog modules vs procuring equivalent commercial off the shelf components. In addition, this paper will touch upon the design of system architectures to minimize the risks and costs associated with the use of custom analog components.

Wednesday, September 14, 18:00 - 22:30

Networking Dinner

Thursday, September 15

Thursday, September 15, 07:00 - 08:00

Coffee Break

Room: TBD

Thursday, September 15, 07:00 - 12:00

Registration

Room: Registration Counter

Thursday, September 15, 07:00 - 07:30

Speaker Breakfast

Room: Monorail C

Thursday, September 15, 08:00 - 09:30

3B1: Test Data Management & Security

Room: Magic Kingdom 4
Chair: Larry V. Kirkland (WesTest Engineering, USA)
Increasing the Security on Non-Networked Ground Support Equipment: Analyzing the Implementation of Whitelisting Protection
Seth DeCato (United States Air Force & 309 Software Maintenance Group, USA)
Within the United States Air Force (USAF), dedicated non-networked computer systems are used to maintain aircraft electronic systems. Traditional security practices like anti-virus (AV) software have been used to protect the maintenance equipment from malware and exploitation by adversaries. Malware sophistication and prevalence from well financed digital adversaries is rising. New layers of digital security must be applied to these computer systems so that both maintenance equipment and aircraft are protected. This paper will focus on implementing application whitelisting software (AWS).
A Method for Storing Semiconductor Test Data to Simplify Data Analysis
Jeremy W Webb (University of California, Davis & J. Webb Consulting, USA)
The automated testing of semiconductor wafers, integrated circuits (IC), and surface mount devices generates a large volume of data. These devices are tested in stages and their test data is typically collected in log files, spreadsheets, and comma separated value files that are stored at both the foundry and on-site on file shares. Building a cohesive picture of the quality and performance of these devices can be difficult since the data is scattered throughout many files. In addition, tracking any deviations in performance from wafer to wafer by analyzing historical process control monitor (PCM) test data is a manual, laborious task. Collecting the test data from multiple sources (e.g., foundries and contract manufacturers) can be cumbersome when manual intervention is required. Automating the transfer of test data from third-party servers to on-site file shares is crucial to providing a consistent method of accessing the data for analysis. This paper provides a method for implementing a database for storing test data collected at the various stages of IC manufacturing, as well as for automating the retrieval and import of test data into the database from all relevant sources.
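By way of illustration, a minimal relational store of the kind the paper advocates might look like the following SQLite sketch; the table and column names are assumptions for illustration only, not the paper's actual schema.

    # Minimal sketch of a relational store for per-stage semiconductor test data.
    import sqlite3

    conn = sqlite3.connect("test_data.db")
    conn.executescript("""
    CREATE TABLE IF NOT EXISTS wafer (
        wafer_id   TEXT PRIMARY KEY,
        lot_id     TEXT,
        foundry    TEXT
    );
    CREATE TABLE IF NOT EXISTS measurement (
        id         INTEGER PRIMARY KEY AUTOINCREMENT,
        wafer_id   TEXT REFERENCES wafer(wafer_id),
        stage      TEXT,        -- e.g. PCM, wafer sort, final test
        parameter  TEXT,
        value      REAL,
        units      TEXT,
        tested_at  TEXT
    );
    """)

    # Once imported from log/CSV files, historical PCM data can be queried
    # uniformly, e.g. wafer-to-wafer drift of one (hypothetical) parameter:
    rows = conn.execute("""
        SELECT wafer_id, AVG(value) FROM measurement
        WHERE stage = 'PCM' AND parameter = 'Vth_nmos'
        GROUP BY wafer_id ORDER BY wafer_id
    """).fetchall()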
Simple Tips for Future-Proofing Your Test Data
Tom Armes (Engineer & IntraStage, USA)
The complexity and volume of manufacturing and engineering test data in modern electronics manufacturing is growing every year. The proliferation of formats and structures of data means that test data storage, retrieval and analysis become increasingly difficult. Question: if you were asked to choose a way to store your data for the next 30 years and make it usable and integrated with enterprise data, how would you do it? In this paper, we'll talk about the available technologies and file formats that test engineers should consider when preparing to write out test data from complex manufacturing and engineering test beds. By storing and structuring your test data with the best practices discussed in this track, you'll be able to efficiently store, quickly analyze, and future-proof your test output to work with other enterprise data. IntraStage's experience with over 50 Mil/Aero and medical device companies and their complex data needs has given us unique insight into best practices for storing test data, structuring test data formats (XML, CSV, TXT, MDB, STDF, MDF, etc.), and designing new test outputs to integrate with legacy data and legacy systems. We'll share our insights on these best practices so that test engineers can future-proof their manufacturing and engineering output. Some primary benefits of this paper include:
• Best practices in storing test data
• Rapid prototyping of your test data: be able to quickly prototype and deploy your test data output
• Standards and data formats that allow integration of current test data with upcoming data silos
Integrating Cybersecurity into NAVAIR OTPS Acquisition
Arthur R Shilling, Jr (Naval Air Systems Command); Thomas Combass (CIV NAVAIR ISSC JAXS, USA)
Assessment of cybersecurity vulnerabilities and associated risks is a prevalent and escalating requirement for the OTPS acquisition and development communities. In August of 1992, the Defense Information Systems Agency (DISA) developed the Department of Defense Information Technology Security Certification and Accreditation Process (DITSCAP), an assessment process for all Department of Defense (DoD) information systems. The accreditation and requirements process was service-specific and system-centric. In July 2006, the DoD Information Assurance Certification and Accreditation Process (DIACAP) was distributed. DIACAP implemented enterprise-wide Information Assurance (IA) through a standardized set of IA controls with continuous monitoring and annual reviews of the system's security posture. The current process, implemented in May 2014, is the Risk Management Framework (RMF). RMF is a more dynamic and integrated process than its predecessors. Instead of using DoD-defined security controls, RMF uses the Committee on National Security Systems Instructions (CNSSI) and National Institute of Standards and Technology (NIST) publications for its risk assessment guidelines and security control references, respectively. Under RMF, all Information Technology (IT) is placed into four broad categories: Information Systems (IS), Platform IT (PIT), IT services and IT products. Fundamentally, all DoD IT assets must be categorized, and security controls must be tailored and implemented for the specific asset. Operational Test Program Sets (OTPS) mainly fall into the category of PIT. However, there may be circumstances where OTPSs fall into the category of an IS or any number of ambiguous areas. Since only generic high-level guidance exists on how to evaluate PIT, guidelines for evaluating PIT OTPSs will be summarized. Also, since not all OTPSs are PIT, or it is not immediately clear into which system category an OTPS falls, guidelines will be created to define these systems for proper evaluation. For the majority of OTPSs, risk categorization, control selection, and assessment will occur during the acquisition lifecycle. Case studies of OTPSs will be analyzed and discussed: OTPS PIT, OTPS IS, and ambiguous examples. In each of these cases, the question of task dependence versus the definition of what makes a particular OTPS a PIT or an IS will be explored.

3C1: Electro-Mechanical Test

Room: Castle AB
Chair: Timothy W Davis (NAVAIR & Fleet Readiness Center Southeast, USA)
Noncontact Sensors and Nonintrusive Load Monitoring (NILM) Aboard the USCGC SPENCER
Peter Lindahl (MIT, USA); Greg Bredariol (MIT / US Coast Guard, USA); John Donnal (United States Naval Academy, USA); Steven Leeb (MIT, USA)
Modernization in the U.S. Navy and U.S. Coast Guard includes an emphasis on automation systems to help replace manual tasks and reduce crew sizes. This places a high reliance on monitoring systems to ensure proper operation of equipment and maintain safety at sea. Nonintrusive Load Monitors (NILM) provide low-cost, rugged, and easily installed options for electrical system monitoring. This paper describes a real-world case study of newly developed noncontact NILM sensors installed aboard the USCGC SPENCER, a Famous class (270 ft) cutter. These sensors require no ohmic contacts for voltage measurements and can measure individual currents inside a multi-phase cable bundle. Aboard the SPENCER, these sensors were used to investigate automated testing applications including power system metric reporting, watchstander log generation, and machinery condition monitoring.
Rotating machine fault detection using principal component analysis of vibration signal
Tristan Plante, Lucas Stanley, Ashkan Nejadpak and Cai Xia Yang (University of North Dakota, USA)
Vibration analysis is widely applied to rotating machinery fault detection and diagnostics. Once a fault occurs, the signature of the measured vibration signal changes. Disk imbalance and shaft misalignment are the most common faults in a rotating machine. To prevent machine failure and unexpected production costs, detecting faults at an early stage plays a vital role in highly reliable operations. In this paper, a method for detecting disk unbalance-mass and shaft misalignment faults of a rotary machine based on the vibration signal is presented. Several experiments were conducted on a machinery fault simulator to generate a database of healthy and faulty conditions. In addition to the healthy condition, with a perfectly balanced disk and aligned shaft, unbalanced-disk and shaft-misalignment faults were simulated. The motor speed is controlled from 0 to 2800 rpm by a variable frequency drive and measured using a tachometer. The vibration signals were measured using accelerometers mounted on both bearing housings and then analyzed in both the time and frequency domains. The severity and type of each fault condition can be assessed from the amplitudes of the corresponding peaks as well as their locations on the frequency spectrum. The vibration severity is calculated and compared with standard severity levels to determine the health condition of the machine. The measured signal is also analyzed using spectrum analysis software and a MATLAB program. The specific natural frequency corresponding to each fault or failure mode is identified. By comparing each fault case to the healthy case, the patterns corresponding to each fault were outlined and explained. Principal component analysis is applied to extract the essential features of the measured vibration data and identify the fault source. This method has the potential to reduce the size of the measured vibration data, remove noise from the data, and enable early detection of changes in the vibration signature. The proposed methodology can accurately detect different faults based on measured vibration signals, which will result in reduced maintenance time and cost for the system.
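A minimal sketch of the signal path described above, using synthetic accelerometer data rather than the paper's measurements: windowed FFT spectra are stacked per record and reduced with principal component analysis via an SVD. The sampling rate, fault frequency, and amplitudes are assumed for illustration.

    # Sketch: frequency-domain features from vibration records, reduced by PCA.
    import numpy as np

    FS = 5000                         # assumed accelerometer sampling rate, Hz
    rng = np.random.default_rng(0)

    def spectrum(x: np.ndarray) -> np.ndarray:
        """Magnitude spectrum of one windowed vibration record."""
        return np.abs(np.fft.rfft(x * np.hanning(len(x))))

    # Synthetic records: two healthy, two with a growing 1x (30 Hz) unbalance peak.
    t = np.arange(FS) / FS
    records = [a * np.sin(2 * np.pi * 30 * t) + 0.1 * rng.standard_normal(FS)
               for a in (0.2, 0.2, 1.0, 1.0)]
    X = np.vstack([spectrum(r) for r in records])

    # PCA via SVD on mean-centered spectra; scores separate the two conditions.
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:2].T            # project onto the first two principal components
    print(np.round(scores[:, 0], 2))  # healthy vs. unbalanced records separate on PC1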
A Nonintrusive Magnetically Self-powered Vibration Sensor for Automated Condition Monitoring of Electromechanical Machines
Jinyeong Moon and Peter Lindahl (MIT, USA); John Donnal (United States Naval Academy, USA); Steven Leeb (MIT, USA); Ryan Zachar (US Navy, USA); Christopher Schantz (Infratek Solutions); William Cotta (United States Coast Guard, USA)
This paper presents a nonintrusive and electromagnetically self-powered embedded system with a vibration sensor for condition monitoring of electromechanical machinery. The system can be installed inside the terminal block of a motor or generator and supports wireless communication for transferring data to a mobile device or computer for subsequent performance analysis. As an initial application, the sensor package is configured for automated condition monitoring of resiliently mounted machines. Upon detecting a spin-down event, e.g., a motor turn-off, the system collects and transmits vibration and residual back-EMF data as the rotor decreases in rotational speed. These data are then processed to generate an empirical vibrational transfer function (eVTF) rich in condition information for detecting and differentiating machinery and vibration-mount pathologies. The utility of the system is demonstrated via lab-based tests of a resiliently mounted 1.1 kW three-phase induction motor, with results showcasing the usefulness of the embedded system for condition monitoring.
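The abstract does not give the eVTF formulation; a common way to estimate an empirical transfer function between an excitation (here, the residual back-EMF) and a response (the vibration) is the H1 ratio of cross- to auto-spectral densities. The sketch below assumes that approach and uses synthetic stand-in data.

    # Hedged sketch of an empirical transfer function estimate (H1 estimator),
    # treating back-EMF as input and vibration as output; data are synthetic.
    import numpy as np
    from scipy.signal import csd, welch

    FS = 2000
    rng = np.random.default_rng(1)
    emf = rng.standard_normal(20 * FS)                       # stand-in excitation
    vib = np.convolve(emf, [0.5, 0.3, 0.2], mode="same") \
          + 0.05 * rng.standard_normal(20 * FS)              # stand-in response

    f, Pxy = csd(emf, vib, fs=FS, nperseg=1024)              # cross-spectral density
    _, Pxx = welch(emf, fs=FS, nperseg=1024)                 # input auto-spectrum
    eVTF = Pxy / Pxx                                         # H1 transfer function estimate
    print(np.abs(eVTF[:5]))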

3D1: High Frequency Testing

Room: Monorail AB
Chair: Iram Weinstein (Leidos, USA)
Radio Frequency Test Set: An Ethernet-Based RF Test System Design to Test the Minuteman III MOD 7LW Wafer
Ty Ung (Government, USA); Tranchau Nguyen (US Air Force 309 SMXG, USA)
The LGM-30G Minuteman III Inter-Continental Ballistic Missile (ICBM) System relies on legacy test systems that are becoming unsustainable due to aging and obsolescent test equipment. The Test Set Group, Electronic System (TSGES) is an example of a test system approaching the end of its life cycle with outdated test instrumentation. The TSGES is a legacy test system that requires operators to manually configure the test instruments to perform the MOD 7LW wafer checkout. The Radio Frequency Test Set (RFTS) is an RF test system currently in development to replace the TSGES. One of the main RFTS functions is to increase test automation of the MOD 7LW wafer at the subsystem and system levels. Other RFTS goals are increased reliability and availability and reduced maintenance expense. Designed and developed from the ground up by the 516th Software Maintenance Squadron, 309th Software Maintenance Group (SMXG), at the Ogden Air Logistics Center (OO-ALC), the RFTS is a growth-capable Ethernet-based RF test system that performs DC and RF tests using developer-designed test program sets (TPSs). The test instruments are mainly built from commercial-off-the-shelf (COTS) hardware enclosed in a custom electromagnetic interference and electromagnetic compatibility (EMI/EMC) rack-mount enclosure. The software, developed by the 309th, contains a test executive that executes modular TPSs for specific unit testing. The main feature of the test executive is a graphical user interface (GUI) that monitors the status of the equipment in the test station and runs TPSs loaded by the operator into its execution buffer. The RFTS includes the latest COTS instruments, which provide stimulus frequencies from 250 kHz to 20 GHz, RF signal analysis up to 13.6 GHz, and an S-band telemetry receiver. This paper gives an overview of the RFTS hardware and software engineering design, the design of the interface test adapters (ITAs), the development of the TPSs, and the safety and environmental testing. In addition, this paper reports the outcome of the engineering design effort on the RFTS program.
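As a toy illustration of the execution-buffer behavior the abstract describes (not the 309th's actual software), a test executive can be reduced to a queue of TPS callables that the operator loads and the executive runs in order; all names below are hypothetical placeholders.

    # Hypothetical test-executive core: operators load TPSs into an execution
    # buffer and the executive runs them sequentially, reporting pass/fail.
    from collections import deque

    class TestExecutive:
        def __init__(self):
            self.buffer = deque()                    # the execution buffer

        def load(self, name, tps):
            self.buffer.append((name, tps))          # operator loads a TPS callable

        def run_all(self):
            while self.buffer:
                name, tps = self.buffer.popleft()
                print(f"{name}: {'PASS' if tps() else 'FAIL'}")

    exe = TestExecutive()
    exe.load("MOD7LW_dc_checkout", lambda: True)     # placeholder TPS stubs
    exe.load("MOD7LW_rf_checkout", lambda: False)
    exe.run_all()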
Dedicated Engineering Test Equipment Design for Multi-Function Radar Hybrid T/R Modules
Izzet Serbest (METU & ASELSAN Inc., Turkey); Muharrem Arik (Koc University & ASELSAN Inc., Turkey)
The demand for complicated sensor (multi-function) technologies on radar platforms has grown while mission and efficiency constraints have become more stringent. The development of Active Electronically Scanned Array (AESA) and Digital Array Radar (DAR) technologies has created a strong demand to combine the functions of several digital and analog implementations into a single unit. Combining the functionality of several modules into one special package also makes the testing process more challenging because of its comprehensive capabilities. It has also changed the way radars are characterized, from component-level tests through complete system verification. Therefore, these complicated tasks require a special and unique test solution dedicated to hybrid T/R modules. Hybrid beam-forming techniques and multichannel systems impose challenges on testing T/R modules operating in a variety of modes. Therefore, T/R modules require a broad array of measurements, and all interfaces must be driven by the Dedicated Engineering Test Equipment Design (DETED) platform. Beyond the basic challenges of testing, the test system relies not only on conventional microwave instruments but also on analog and digital test blocks optimized for the specific requirements of the radar system. Hence, DETED sits between synthetic and rack-and-stack test solutions. The user can modify the test system for various purposes to cover the verification requirements of both test approaches. Moreover, RF tests can be run much faster, with more accurate and repeatable results, during the design process and on the production line. In this paper, our goal is to put forward a special and dedicated design and verification test solution developed by ASELSAN Inc. for X-band compact hybrid T/R modules. The paper describes the system-level architecture, the test solution's capabilities, and customizable test programs tailored to specific T/R module problems. Additionally, the design methods for the hardware and software architecture are presented.
Trends in Radar Systems Drive the Need for a Smarter Test System
Abhay Samant (National Instruments, USA)
Active Electronically Scanned Array (AESA) technology will enable next-generation radars to achieve better jamming resistance and a low probability of intercept by spreading their emissions over a wide frequency range. These radars consist of a large number of transmit/receive modules (TRMs) which are electronically scanned in a tightly time-synchronized manner, causing digital control to move closer to the radio front end on the antennas. Other emerging technologies, such as cognitive radars and MIMO radars, will continue to drive the need for complex timing, synchronization, and high-mix RF and digital measurement requirements. To meet these challenges, radar engineers will need a platform-based test system that delivers capabilities such as multi-channel phase-aligned measurements over wide bandwidths and high-throughput streaming. This paper discusses the fundamentals of AESA radars and trends in radar systems. It analyzes the impact of these trends on test system architecture and explains how advances in PXI modular instrumentation can meet these challenging requirements.
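As a small, generic illustration of one such requirement (not tied to any particular PXI instrument), the relative phase between two digitized channels at a known tone frequency can be estimated from their FFT bins; the sampling rate, tone, and channel data below are synthetic.

    # Sketch: inter-channel phase offset estimate at a known tone frequency.
    import numpy as np

    FS, F0, N = 1.0e6, 10.0e3, 1000           # assumed rate, tone, record length
    t = np.arange(N) / FS
    ch0 = np.cos(2 * np.pi * F0 * t)
    ch1 = np.cos(2 * np.pi * F0 * t - 0.35)   # 0.35 rad lag to recover

    k = int(round(F0 * N / FS))               # FFT bin of the tone (integer cycles)
    phase = np.angle(np.fft.rfft(ch1)[k]) - np.angle(np.fft.rfft(ch0)[k])
    print(round(float(phase), 3))             # ~ -0.35 rad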

Thursday, September 15, 09:30 - 10:30

Coffee Break

Room: Exhibit Hall

Thursday, September 15, 10:30 - 12:00

3C2: Test Techniques 2

Room: Castle AB
Chair: John P Chapman (Boeing, USA)
A Novel Approach of Test and Fault Isolation of High Speed Digital Circuit Modules
Du Shuming (Nanjing Research Institute of Electronics Technology, P.R. China); Zijian Cao (Nanjing Research Institute of Electronics Technology); Yan Wang (Nanjing Research Institute of Electronics Technology, P.R. China)
Along with the increasing digitization and intelligence of radar and other electronic equipment, high-speed digital circuit (HSDC) modules built around CPUs, DSPs, FPGAs, and similar devices are widely used in such equipment. The highest bit rate of HSDC modules can reach several Gbit/s or higher, and their external interfaces adopt high-speed standards such as RapidIO 2.0, PCI Express 2.0, and 10G Ethernet. The chips used on these modules are usually packaged in BGA; the pins are hidden beneath the chip, so they are difficult to reach with a test probe. The application of these new technologies brings great challenges to the test and fault isolation of HSDC modules. An automatic test system (ATS) based on traditional I/O modules cannot meet the test requirements of HSDC modules. This article analyzes the test requirements of HSDC modules based on the VPX bus, including a) generating multi-channel high-speed digital signals, b) collecting multi-channel high-speed digital signals, and c) a high-speed adapter. The article proposes an HSDC module test system architecture based on the VPX bus and introduces the function and specifications of its key components: a high-speed digital I/O module, a high-speed interface module, and a high-speed digital-signal interface adapter. The high-speed digital I/O module is used to generate and collect RapidIO, RocketIO, and other high-speed digital signals. The high-speed interface module handles high-speed optical fiber and high-speed Ethernet signals. The high-speed digital-signal interface adapter connects the high-speed digital test modules to the UUT. The article also proposes a fault isolation method for HSDC modules that combines boundary scan and embedded test. Boundary scan is used to isolate open-circuit and short-circuit faults of HSDC modules; no test probe is required by this method. The embedded test includes a method based on a test IP core and a method based on module BIT. Using the test-IP-core method, signals at internal test points of the FPGA and at the pins of other chips (e.g., RAM, DSP) can be tested. Using the module-BIT method, chip faults and interface faults can be detected. The article also describes the differences between HSDC modules and traditional digital circuit modules in test program development. The test and diagnosis methods presented in this article have already been used on several kinds of HSDC modules.
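Boundary-scan interconnect testing, as referenced above, amounts to driving known patterns on one chip's boundary cells and comparing what the connected cells capture. The toy sketch below illustrates open/short isolation on a small net list with hypothetical names; it is a generic walking-ones check, not the authors' system.

    # Toy boundary-scan interconnect check: drive walking-one patterns on
    # source nets and compare captured values to isolate opens and shorts.
    def interconnect_test(nets, drive, capture):
        """nets: net names; drive/capture: dicts of per-net bit strings."""
        faults = []
        for net in nets:
            if capture[net] == "0" * len(drive[net]):
                faults.append((net, "open (stuck-at-0 observed)"))
            elif capture[net] != drive[net]:
                shorted = [m for m in nets if m != net and capture[net] == drive[m]]
                faults.append((net, f"short, likely to {shorted or 'unknown net'}"))
        return faults

    nets = ["D0", "D1", "D2"]
    drive   = {"D0": "100", "D1": "010", "D2": "001"}   # walking-one patterns
    capture = {"D0": "100", "D1": "000", "D2": "010"}   # D1 open, D2 shorted to D1
    print(interconnect_test(nets, drive, capture))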
Construction of Chaotic Sensing Matrix for Fractional Bandlimited Signal Associated by Fractional Fourier Transform
Haoran Zhao and Liyan Qiao (Harbin Institute of Technology, P.R. China); Libao Deng (Harbin Institute of Technology at Weihai, P.R. China); YangQuan Chen (University of California, Merced, USA)
The fractional Fourier transform (FrFT) is a powerful tool for non-stationary signals because of its additional degree of freedom in the time-frequency plane. Owing to the importance of the FrFT in signal processing, the classical bandlimited sampling theorem in the Fourier transform domain has been extended to fractional Fourier bandlimited signals based on the relationship between the FrFT and the regular integer-order Fourier transform. However, the implementations of these existing extensions are not efficient because of the high sampling rate, which is tied to the signal's fractional Fourier frequency. Compressed sensing is a powerful tool for collecting information directly, reducing the sampling burden and the computational load while saving storage space. The construction of the sensing matrix is the basic issue. Most implementations demand that the received signal already be fully discretized, and a sensing matrix constructed by random under-sampling is uncontrollable and hard to realize in hardware. This paper proposes a deterministic construction of a sensing matrix for multiband signals in the fractional Fourier transform domain. We derive the sensing matrix based on analog-to-information conversion technology. The sensing matrix is constructed from a chirp signal with random time delays, and sub-sampling is used to obtain the structured signal. Theoretically, the matrix satisfies the restricted isometry property (RIP) condition, and the entire system structure can be realized in hardware. We show in this paper that the sampling rate is much lower than the Nyquist rate. Signal reconstruction is studied within the framework of compressed sensing. The performance of the proposed sampling method is verified by simulation, and the probability of successful reconstruction and the mean squared error (MSE) are analyzed. Extensive numerical results suggest that the proposed system is effective for spectrum-blind sparse multiband signals in the fractional Fourier transform domain and demonstrate its promising potential.
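The exact construction is given in the paper; as a hedged sketch of the general idea (chirp modulation with random time delays, evaluated through a standard proxy for RIP-style guarantees), the snippet below builds a small chirp-with-delays measurement matrix and reports the mutual coherence of its effective dictionary. The chirp rate, delays, sizes, and choice of the ordinary Fourier sparsifying basis are illustrative only.

    # Illustrative chirp-with-random-delay sensing matrix and its coherence.
    import numpy as np

    rng = np.random.default_rng(2)
    N, M = 256, 64                                    # signal length, measurements
    n = np.arange(N)
    chirp = np.exp(1j * np.pi * 0.4 * n ** 2 / N)     # assumed chirp rate

    # Rows are the same chirp with fixed random circular time delays
    # (deterministic once the delays are chosen), a stand-in construction.
    delays = rng.integers(0, N, M)
    Phi = np.vstack([np.roll(chirp, int(d)) for d in delays]) / np.sqrt(N)

    # Mutual coherence of the effective dictionary for signals sparse in the
    # (here, ordinary) Fourier basis -- a common, computable RIP proxy.
    F = np.fft.fft(np.eye(N)) / np.sqrt(N)
    A = Phi @ F.conj().T
    A = A / np.linalg.norm(A, axis=0, keepdims=True)
    mu = np.max(np.abs(A.conj().T @ A - np.eye(N)))
    print(round(float(mu), 3))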
Modulation Identification for Cognitive Aeronautical Air-Ground Communications
Hayder Al-Hraishawi and Lalit Gupta (Southern Illinois University, USA)
The main objectives of deploying Cognitive Radio (CR) technology in aeronautical communications are to achieve reliable communications and to avoid the congestion that is bound to occur in the near future. CR allows a very flexible and dynamic radio management system, leading to improved radio spectrum utilization. Realizing these features in cognitive radio requires the ability to identify the modulation type of the received signal. In this paper, we introduce an automatic modulation identification algorithm to distinguish aeronautical communication signals from other communication signals based on wavelet transform (WT) analysis. The proposed identification algorithm is able to recognize multicarrier and single-carrier modulated signals and can also discriminate linear and nonlinear modulations using the following sequence of steps: (i) analyze the incoming unknown signal using the Haar WT, (ii) determine the magnitude of the wavelet coefficients and remove undesired peaks through median filtering, (iii) conduct variance analysis on the median filter outputs to decide whether the signal is multicarrier or single-carrier, (iv) distinguish between linear and nonlinear modulations of a single-carrier waveform through kurtosis analysis, and (v) separate MSK and FSK modulation by analyzing the variances of the Morlet WT of the received signal. Interestingly, the proposed algorithm has lower computational complexity than other identification algorithms proposed for aeronautical systems because it requires fewer features to be extracted. Further, from a practical standpoint, distinguishing between the L-DACS2 signal and FSK modulation is necessary in aeronautical communications since they are both nonlinearly modulated and commonly used in different applications. Unlike existing methods, our algorithm is able to identify the L-DACS2 signal and FSK modulation with high success rates. Simulation results demonstrate the performance of the identification algorithm for an aeronautical channel modeled as a Ricean fading channel in the presence of Gaussian noise.
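A minimal sketch of steps (i)-(iii) on synthetic signals, assuming the pywt and scipy packages; the median-filter kernel and the interpretation of the variance statistic are illustrative, and the kurtosis and Morlet stages (iv)-(v) are omitted.

    # Steps (i)-(iii) of the identification chain on synthetic signals:
    # Haar DWT -> coefficient magnitude -> median filter -> variance statistic.
    import numpy as np
    import pywt
    from scipy.signal import medfilt

    rng = np.random.default_rng(3)
    N = 4096
    t = np.arange(N) / N

    single_carrier = np.cos(2 * np.pi * 200 * t)                  # tone-like single carrier
    multicarrier = sum(np.cos(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
                       for f in range(100, 300, 10))              # OFDM-like sum of carriers

    def wavelet_variance(x):
        _, detail = pywt.dwt(x, "haar")          # (i) Haar wavelet transform
        mag = medfilt(np.abs(detail), 9)         # (ii) magnitude + median filtering
        return np.var(mag)                       # (iii) variance statistic

    for name, sig in [("single-carrier", single_carrier), ("multicarrier", multicarrier)]:
        print(name, round(float(wavelet_variance(sig)), 6))
    # A markedly larger variance of the filtered wavelet magnitude suggests multicarrier.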
A Simplified Overview of Handling Faults in Systems around Us
Hussain Al-Asaad (University of California, Davis, USA)
Faults cause the observed behavior of a system to deviate from the expected behavior. They occur throughout the lifetime of a typical system. They may occur in the design phase of the system, the manufacturing phase, or during normal operation. In this paper, we present a simplified overview of handling faults in electronic as well as non-electronic systems such as systems in engineering, science, biology, etc. The paper first defines the concepts of faults, errors, and failures. It then demonstrates the high cost of failure via several examples. The goal of this paper is to present a detailed and simplified discussion of the methods of handling faults in systems around us including fault avoidance; fault dismissal; error detection; fault location; error correction; fault masking; fault tolerance; and reconfiguration. Faults in a system can be divided into three categories: Faults that can be avoided (at a substantial cost), faults that can be dismissed due to various reasons, and faults that must be handled correctly before they become errors and ultimately lead to the overall failure of the system. The paper discusses the various techniques that prevent the faults in the system from leading the system to failure. The paper also discusses the requirements for fault tolerance and the methods used to achieve the desired fault tolerant capabilities in a typical system. The paper also presents two case studies to illustrate the concepts described above. The first system is the personal computer and the second system is the human digestive system. The case studies demonstrate the significant similarities between handling faults in electronic and non-electronic systems.

3D2: Life Cycle Management Topics

Room: Monorail AB
Chair: Dean Matsuura (Teradyne Inc., USA)
Recurrent TPS Development Issues or Ascertaining the Excellence of an Automated Unit Test
Larry V. Kirkland (WesTest Engineering, USA); Cori N McConkey (Weber State University)
Although Test Program Set (TPS) development is a science, it can be construed as an art if a precise software environment and exacting test program development practices are used. The ability to build an effective and precise software development environment for TPS development is not a matter of off-the-wall guesswork; it can only be accomplished by true test engineers who work in the test and diagnosis environment and understand the requirements for developing a group of measurements that covers all aspects of a TPS. An advocate of true test technology who wants complete and comprehensive fault coverage of a Unit Under Test (UUT) will not be satisfied with mundane TPSs that are not capable of scoping out UUT failures in a precise, timely, and factual manner. Scoping out UUT problems requires attention to many factors that focus on TPS quality, robust software tools, powerful test hardware, and the inclusion of state-of-the-art hardware with interactive diagnostics. Major TPS weaknesses continue to be diagnostics, manual probing, real-life trends, time to repair, cross-referencing, weak test equipment, test time, etc. About 70% of the TPS development effort can go to diagnostics, and enhanced diagnostics can result in substantial life-cycle cost savings. In fact, during TPS development and TPS support, fault localization should be the first step and is always the most critical step. Ideally, there should be no more than 3-4 probes (no probing is best) and fault isolation to 2 or fewer components with very high accuracy. UUT accessibility and throughput complicate TPS design and UUT repair. Complicated state-transition sequences and edge changes can be a setback when trying to control the UUT circuits. We should always focus on what is really happening at each internal circuit element. The time to repair can be hindered by fan-out, No-Fault-Found (NFF) results, intermittents, UUT source and sink circuits, noise in the Automatic Test Equipment (ATE) or in the Interface Test Adapter (ITA), component weakness, mismatched replacement parts, poor connections (solder/pins), component variations, etc. All of these elements are critical, and time to repair can run into days and even weeks. There are clues to TPS weaknesses that dictate re-examination and reconsideration, including glitches, limits, percent detect, test time, impedance, instrument selection, signal routing, ITA design, diagnostic issues, excessive code or software routines, ambiguity groups, signal degradation from various sources, signal amplification or reduction, lack of test requirements or data, improper grounding, noise, etc. TPS weaknesses need attention and improvement as soon as they are observed; support costs tend to compound when weaknesses are not corrected promptly. This paper covers many aspects of a TPS that fulfills the customer's needs and expectations, including the ATE, with a focus on diagnostic issues.
A Maintenance Production Tool for Support Equipment Life Cycle Management
Christopher J Guerra, Anh Trung Dinh and Christopher E Camargo (Southwest Research Institute, USA)
Aircraft maintenance managers encounter significant pressure to maintain the operational readiness of their aircraft fleet. In the commercial domain, the demands result from financial pressure to remain competitive with peers; in the military domain, maintenance managers must meet operational targets to achieve mission success. For daily operations, many managers use printed tabular sheets or manually updated spreadsheets to track aircraft and support equipment status. While this affords expediency to the maintenance managers, the approach limits the immediate communication of status changes to other levels of supervision and to others in the organization who have an interest in the information. The status sheet runs the risk of being lost or marked inadvertently. This tracking method adds time to the overall maintenance production process because subordinate staff have to exchange the information with the maintenance manager, and it discards information because the documentation is destroyed daily or as needed. Improvements to maintenance production could benefit from these data, or the maintenance manager could use the information to identify trends in the fleet. This research describes the initial considerations in developing a maintenance production tool for tracking the status of support equipment. The tool uses a web service architecture to enable either a closed or networked system topology. The system tracks individual items by their part numbers. Reported information for the support equipment includes quantity status (availability and required amount), problem reports, safety violations, etc. The tool provides the ability to identify obsolescence of the items and to plan future investments to mitigate deficiencies in the equipment. A method to numerically aggregate the issues allows the maintenance manager and management to analytically rank the support equipment that most severely affects maintenance production. The flexible framework with which the tool was developed will allow extensions to support other facets of maintenance production. Future work could include integration with the tracking process for individual aircraft to monitor configuration and status. As the data for the support equipment will be consolidated in one location, trend-based and predictive health maintenance analysis of the assets will be possible.
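The aggregation method itself is not specified in the abstract; as a purely hypothetical sketch, support-equipment items could be ranked by a weighted sum of their reported issues, as below (the field names and weights are invented for illustration).

    # Hypothetical issue-aggregation score for ranking support equipment.
    WEIGHTS = {"shortfall": 3.0, "problem_reports": 2.0, "safety_violations": 5.0}

    def impact_score(item: dict) -> float:
        """Weighted sum of reported issues; higher = bigger drag on production."""
        shortfall = max(0, item["required"] - item["available"])
        return (WEIGHTS["shortfall"] * shortfall
                + WEIGHTS["problem_reports"] * item["problem_reports"]
                + WEIGHTS["safety_violations"] * item["safety_violations"])

    fleet = [
        {"part_no": "SE-1001", "required": 4, "available": 4, "problem_reports": 1, "safety_violations": 0},
        {"part_no": "SE-2002", "required": 6, "available": 3, "problem_reports": 2, "safety_violations": 1},
    ]
    for item in sorted(fleet, key=impact_score, reverse=True):
        print(item["part_no"], impact_score(item))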
Legacy Test Systems - Replace or Maintain
Michael J Dewey and Jim Kent (Marvin Test Solutions, USA)
Test platforms age, and the components within test systems degrade, become obsolete, and wear out over time. Manufacturing companies must continuously evaluate the expected lifespan of their test equipment and determine the risks and tradeoffs associated with replacing versus maintaining the equipment. Both industry and government entities continually struggle with how best to evaluate and address the issues associated with aging test equipment and systems. This paper reviews the various options available to test engineers when faced with replacing or maintaining a test system. Specifically, the manufacturing/test community must evaluate the consequences, risks, and costs associated with several options: 1. Do nothing and continue to maintain the equipment until it fails. 2. Rejuvenate the equipment by replacing components/instruments. 3. Replace the existing equipment with modern automated test equipment. 4. Outsource manufacturing and test of the product to a supplier. To help quantify the decision-making process, an evaluation tool can be used to analyze the factors that influence the "replace or maintain" question. Each of the options listed above carries its own list of questions that must be addressed. These questions are encoded into the tool, with responses then interpreted and results collated with the user's historical data, providing the test engineer with quantifiable and meaningful data for evaluating the cost of replacing or maintaining factory test equipment. The final paper expands on the methodology used to develop this tool and reviews in more detail the various options and tradeoffs associated with evaluating the "replace or maintain" question.
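The paper's tool encodes its own questions and historical data; as a purely illustrative stand-in, the sketch below scores a handful of yes/no responses against weights and compares the total to a threshold. The questions, weights, and threshold are hypothetical.

    # Hypothetical replace-or-maintain screening score from questionnaire answers.
    QUESTIONS = {                       # weight of each "yes" toward "replace"
        "instruments_obsolete": 3,
        "failure_rate_rising": 2,
        "spares_unavailable": 3,
        "tps_rehost_feasible": 1,
    }
    REPLACE_THRESHOLD = 5

    def recommend(answers: dict) -> str:
        score = sum(w for q, w in QUESTIONS.items() if answers.get(q, False))
        verdict = "evaluate replacement" if score >= REPLACE_THRESHOLD else "continue maintaining"
        return f"score={score}: {verdict}"

    print(recommend({"instruments_obsolete": True, "spares_unavailable": True}))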