Program for 2018 21st International Conference on Information Fusion (FUSION)

Rooms: LR0, LR1, LR2, LR5, LR6, LR11, LR12, JDB-Seminar Room, JDB-Teaching Room

Tuesday, July 10

08:00-08:30 Registration
08:30-11:30 T12: Multitarget Multisensor Tracking: from Traditional to Modern Distributed Approach (LR5) | T8: Information Quality in Information Fusion and Decision Making (LR6) | T5: Extended Object Tracking: Theory and Applications (LR11) | T14: Noise Covariance Matrices in State Space Models: Overview, Algorithms, and Comparison of Estimation Methods (LR12) | T11: Multisensor-Multitarget Tracker/Fusion Engine Development and Performance Evaluation for Realistic Scenarios (JDB-Seminar Room)
11:30-12:30 Lunch
12:30-15:30 T9: Machine and Deep learning for Data Fusion (LR5) | T10: Multi Source and Multi Modal Sensor Fusion Strategies and Implementations in the world of Autonomous Driving (LR6) | T7: Implementations of Random-Finite-Set-Based Multi-Target Filters (LR12) | T13: Multitarget Tracking and Multisensor Information Fusion (JDB-Seminar Room)
15:30-16:00 Refreshment Break
16:00-19:00 T16: Overview of High-Level Information Fusion Theory, Models, and Representations (LR5) | T15: Object Tracking, Sensor Fusion and Situational Awareness for Assisted- and Self-Driving Vehicles: Problems, Solutions and Directions (LR6) | T17: Statistical Methods for Information Fusion System Design and Performance Evaluation (LR11) | T3: Analytic Combinatorics for Multi-Object Tracking and Higher Level Fusion (LR12) | T2: An Introduction to Track-to-Track Fusion and the Distributed Kalman Filter (JDB-Seminar Room)
19:00-20:30 Icebreaker

Wednesday, July 11

08:30-09:00 Opening Session
09:00-10:00 Plenary: Variational Inference and Gaussian Processes
10:00-10:30 Refreshment Break
10:30-12:10 1a - Tracking Algorithms | 1b - Gaussian Processes | 1c - SS: Advances in Distributed Kalman Filtering and Fusion 1 | 1d - SS: Advanced Nonlinear Filters 1 | 1e - SS: Lower Bounds for Parameter Estimation and Beyond | 1f - Point Process Methods | 1g - Modelling, Simulation and Evaluation | 1h - Extended Object/Group Tracking | 1i - Track Before Detect
12:10-13:10 Lunch
13:10-14:50 2a - Localisation 1 | 2b - SS: Multi-sensor Data Fusion for Navigation and Localisation 1 | 2c - SS: Situational Understanding Through Equivocal Sources | 2d - Deep Learning 1 | 2e - SS: Forty Years of Multiple Hypothesis Tracking 1 | 2f - Applications of Information Fusion | 2g - Sonar, Radar, Video Tracking | 2h - SS: Big Data Fusion and Analytics | 2i - Intent, Behaviour, Swarm Modelling
14:50-15:20 Refreshment Break
15:20-17:00 3a - Dempster-Shafer Theory | 3b - SS: Evaluation of Techniques for Uncertainty Representation | 3c - SS: Directional Estimation | 3d - Machine Learning | 3e - SS: Advances in Motion Estimation using Inertial Sensors | 3f - SS: AI Enabled Fusion for Federated Environments | 3g - Sensor/Resource Management | 3h - Fuzzy Sets/Set Membership | 3i - Imaging Methods/Image Processing
18:00-20:00 Welcome Reception

Thursday, July 12

06:30-07:30 5K Race
08:45-09:00 Stone Soup
09:00-10:00 Plenary: Fusion of Multi-band Images Using Bayesian Approaches: Beyond Pansharpening
10:00-10:30 Refreshment Break
10:30-12:10 4a - SS: Forty Years of Multiple Hypothesis Tracking 2 | 4b - SS: Physics-based and Human-derived Information Fusion | 4c - Belief Functions | 4d - Parameter Estimation/Covariance Estimation/Model Calibration | 4e - SS: Advanced Nonlinear Filters 2 | 4f - Network Tracking | 4g - Networks/Community Detection/Sentiment Analysis/Anomaly Detection | 4h - Multisensor Fusion | 4i - Probability and Point Process Based Methods
12:10-13:10 Lunch
13:10-14:50 5a - Data Association | 5b - SS: Novel Information Fusion Methodologies for Space Domain Awareness | 5c - SS: Indoor Positioning | 5d - SS: Multi-layered Fusion Processes: Exploiting Multiple Models and Levels of Abstraction for Understanding and Sense-Making 1 | 5e - Stone Soup | 5f - Point Process Methods 2 | 5g - Sensor Registration | 5h - Image Fusion | 5i - Distributed Fusion
14:50-15:20 Refreshments
15:20-17:00 6a - Algorithms for Tracking | 6b - SS: Remote Sensing Data Fusion | 6c - SS: Advances in Distributed Kalman Filtering and Fusion 2 | 6d - SS: Advanced Nonlinear Filters 3 | 6e - Localisation 2 | 6f - SS: Extended Object and Group Tracking | 6g - Sensor/Resource Management | 6h - Situational Awareness | 6i - SS: Multi-layered Fusion Processes: Exploiting Multiple Models and Levels of Abstraction for Understanding and Sense-Making 2
19:00-23:00 Gala Dinner

Friday, July 13

09:00-10:00 Plenary: 25 years of particles and other random points
10:00-10:30 Refreshments
10:30-12:10 7a - Sequential Monte Carlo | 7b - SS: Context-based Information Fusion | 7c - Distributed Fusion | 7d - SS: Uncertainty, Trust and Deception in Information Fusion | 7e - Localisation 3 | 7f - SS: Semi-supervised/unsupervised Learning-based State Estimation | 7g - SS: Information Fusion in Multi-Biometrics and Forensics | 7h - Bayesian Methods/Belief Propagation
12:10-13:10 Lunch
13:10-14:50 8a - Deep Learning | 8b - SS: Sensor, Resources, and Process Management for Information Fusion Systems | 8c - SS: Multi-sensor Data Fusion for Navigation and Localisation 2 | 8d - Pattern Analysis/AI | 8e - SS: Towards a Battlefield IoT: Information Challenges and Solutions | 8f - Applications of Information Fusion 2 | 8g - Detection Theory/Methods | 8h - SS: Intelligent Information Fusion and Data Mining for Tracking
14:50-15:20 Refreshments
15:20-17:00 9a - Data Association/Sensor Registration | 9b - Point Process Methods/PHD/Multi-Bernoulli Tracking | 9c - Belief Functions | 9d - Decision Making | 9e - Algorithms for Tracking: Gaussian Processes, Gaussian Mixture Methods | 9f - SS: Autonomous Driving

Saturday, July 14

08:30-16:30 ISIF Board Meeting

Tuesday, July 10 8:00 - 8:30

Registration

Tuesday, July 10 8:30 - 11:30

T12: Multitarget Multisensor Tracking: from Traditional to Modern Distributed Approach

Giorgio Battistelli, Luigi Chisci and Alfonso Farina
Room: LR5

T8: Information Quality in Information Fusion and Decision Making

Galina Rogova
Room: LR6

T5: Extended Object Tracking: Theory and Applications

Karl Granström, Marcus Baum and Jens Honer
Room: LR11

T14: Noise Covariance Matrices in State Space Models: Overview, Algorithms, and Comparison of Estimation Methods

Ondrej Straka, Jindrich Dunik and Jindrich Havlik
Room: LR12

T11: Multisensor-Multitarget Tracker/Fusion Engine Development and Performance Evaluation for Realistic Scenarios

Kiruba Kirubarajan
Room: JDB-Seminar Room

Tuesday, July 10 11:30 - 12:30

Lunch

Tuesday, July 10 12:30 - 15:30

T9: Machine and Deep learning for Data Fusion

Subrata Das
Room: LR5

T10: Multi Source and Multi Modal Sensor Fusion Strategies and Implementations in the world of Autonomous Driving

Bharanidhar Duraisamy, Ting Yuan, Tilo Schwarz, Martin Fritzsche and Michael Gabb
Room: LR6

T7: Implementations of Random-Finite-Set-Based Multi-Target Filters

Ba-Ngu Vo and Ba-Tuong Vo
Room: LR12

T13: Multitarget Tracking and Multisensor Information Fusion

Yaakov Bar-Shalom
Room: JDB-Seminar Room

Tuesday, July 10 15:30 - 16:00

Refreshment Break

Tuesday, July 10 16:00 - 19:00

T16: Overview of High-Level Information Fusion Theory, Models, and Representations

Erik Blasch
Room: LR5

T15: Object Tracking, Sensor Fusion and Situational Awareness for Assisted- and Self-Driving Vehicles: Problems, Solutions and Directions

Kiruba Kirubarajan
Room: LR6

T17: Statistical Methods for Information Fusion System Design and Performance Evaluation

Ali Raz and Daniel DeLaurentis
Room: LR11

T3: Analytic Combinatorics for Multi-Object Tracking and Higher Level Fusion

Roy Streit and Murat Efe
Room: LR12

T2: An Introduction to Track-to-Track Fusion and the Distributed Kalman Filter

Felix Govaers
Room: JDB-Seminar Room

Tuesday, July 10 19:00 - 20:30

Icebreaker

Wednesday, July 11 8:30 - 9:00

Opening Session

Welcome:

Professor Simon Godsill, Department of Engineering, University of Cambridge
Professor Lyudmila Mihaylova, President, ISIF

Wednesday, July 11 9:00 - 10:00

Plenary: Variational Inference and Gaussian Processes

Professor Carl Edward Rasmussen
Chair: Simon Maskell

Abstract: Gaussian processes are a principled, practical, probabilistic approach to learning in flexible non-parametric models and have found numerous applications in regression, classification, unsupervised learning and reinforcement learning. Inference, learning and prediction can be done exactly on small data sets with a Gaussian likelihood. In more realistic applications with large-scale data and more complicated likelihoods, approximations are necessary. The variational framework for approximate inference in Gaussian processes has emerged recently as a highly effective and practical tool. I will review and demonstrate the capabilities of this framework.
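The exact small-data case mentioned in the abstract can be written in a few lines. Below is a minimal sketch of exact GP regression with a squared-exponential kernel (illustrative only; all names and hyper-parameter values are the editor's, not the speaker's):

```python
import numpy as np

def rbf_kernel(xa, xb, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance between two 1-D input arrays."""
    d = xa[:, None] - xb[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise_var=0.1):
    """Exact GP posterior mean and variance under a Gaussian likelihood."""
    K = rbf_kernel(x_train, x_train) + noise_var * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_test)
    Kss = rbf_kernel(x_test, x_test)
    # Cholesky solves are the numerically stable route to K^{-1} y and K^{-1} Ks
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    v = np.linalg.solve(L, Ks)
    mean = Ks.T @ alpha
    cov = Kss - v.T @ v
    return mean, np.diag(cov)

x = np.linspace(0, 5, 20)
y = np.sin(x)
mu, var = gp_posterior(x, y, x)  # predict back at the training inputs
```

With a non-Gaussian likelihood or a large number of observations, this exact O(n^3) computation is what the variational approximations discussed in the talk replace.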

Bio: Carl Edward Rasmussen is Professor of Machine Learning in the Department of Engineering at the University of Cambridge. He received an MSc in Electrical Engineering from the Technical University of Denmark in 1993 and did his PhD with Geoff Hinton in Computer Science at the University of Toronto in 1996. Since then he has been a postdoc at the Technical University of Denmark, a Senior Research Fellow at the Gatsby Computational Neuroscience Unit at University College London, and a research group leader at the Max Planck Institute for Biological Cybernetics in Tübingen, Germany. In 2007 he moved to Cambridge, where he is now Professor of Machine Learning and head of the Computational and Biological Learning Lab. He is chairman of the Cambridge AI company PROWLER.io.

Wednesday, July 11 10:00 - 10:30

Refreshment Break

Wednesday, July 11 10:30 - 12:10

1a - Tracking Algorithms

Room: LR0
Chair: David F Crouse
10:30 Cubature and Quadrature Based Continuous-discrete Filters for Maneuvering Target Tracking
A continuous-discrete system represents a system with a continuous process model and a discrete measurement model. Recently, two filters, namely the cubature quadrature Kalman filter (CQKF) and the Gauss-Hermite filter (GHF), were introduced to solve filtering problems for discrete-time systems, where both the process and measurement models are discrete in nature. In this paper we extend the two estimators so that they can work with a continuous process model and a discrete measurement model. The proposed filters are applied to solve a continuous-discrete maneuvering air-traffic-control problem and the results are compared with the continuous-discrete cubature Kalman filter (CD-CKF) in terms of the root mean square error (RMSE). It has been found that the proposed methods provide better estimation accuracy than the CD-CKF.
10:50 Three-dimensional Tracking with Angle Measurements Without Observer Maneuver
Passive target estimation is a widely investigated problem of practical interest. We are concerned specifically with an autonomous flight system developed onboard the ONERA ReSSAC unmanned helicopter. This helicopter is equipped with a (visible or infrared) camera and is thus able to measure the azimuth and elevation angles of a target, which is assumed to follow a constant-velocity motion. It is well known that the observer must maneuver in order to ensure the observability of the target state. We are interested in partly tracking the target state when both the observer and the target follow a constant-velocity model in three-dimensional space. We describe the set of all trajectories compatible with the angle measurements and propose a quick method to estimate these trajectories.
11:10 Single-Point Bistatic Track Initialization Using Doppler in 3D
The first two moments of a converted bistatic range (delay)-direction-cosine-range-rate measurement are derived taking into account the Doppler information and maximum bounds on the target velocity in orthogonal unobservable directions. Cubature integration is used to very efficiently evaluate the necessary multivariate integrals allowing correlations between measurement components to be easily taken into account. Such single-point track initialization is useful in many tracking algorithms, such as some variants of the joint integrated probabilistic data association filter (JIPDAF).
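The cubature integration the abstract refers to is commonly the third-degree spherical-radial rule; a minimal sketch of that rule (editor's illustrative code, not the authors' implementation):

```python
import numpy as np

def cubature_points(mean, cov):
    """Third-degree spherical-radial cubature: 2n equally weighted points
    placed at +/- sqrt(n) along the principal axes of cov."""
    n = len(mean)
    S = np.linalg.cholesky(cov)
    offsets = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])  # (n, 2n)
    pts = mean[:, None] + S @ offsets                          # (n, 2n)
    weights = np.full(2 * n, 1.0 / (2 * n))
    return pts, weights

def cubature_expectation(f, mean, cov):
    """Approximate E[f(x)] for x ~ N(mean, cov) using the cubature points."""
    pts, w = cubature_points(mean, cov)
    vals = np.array([f(pts[:, i]) for i in range(pts.shape[1])])
    return np.sum(w * vals)

mean = np.array([1.0, 2.0])
cov = np.diag([0.5, 2.0])
# exact for linear functions: E[x0 + x1] = 3
approx = cubature_expectation(lambda x: x[0] + x[1], mean, cov)
```

The rule integrates polynomials up to degree three exactly, which is what makes it efficient for the moment computations described above.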
11:30 Multitarget Tracking Using Over-the-Horizon Radar
Most conventional multitarget tracking systems assume that in each scan there is at most one measurement per target. This assumption is, however, not valid for the over-the-horizon radar (OTHR), where a target can generate multiple measurements through different propagation modes. A typical multitarget tracking algorithm may fail in this system. For tracking multiple targets using OTHRs, we propose an approach named the decentralized multipath multiple hypothesis tracker (DM-MHT), where uncertainties in both measurement origin and measurement mode are handled jointly. In DM-MHT, when forming the global hypotheses, each mode first forms local hypotheses separately, because the measurements generated by the same target through different propagation modes are rather different. This largely simplifies the data association, and the best global hypotheses are obtained by solving a constrained integer programming problem. The measurements that are associated with the same track through the different propagation modes can then be used to obtain the overall estimates, which are fed back to the corresponding propagation modes to improve the tracking performance. Simulation results demonstrate that the proposed approach is effective and that its computational complexity is greatly reduced compared with the existing multiple-detection MHT.
11:50 A Novel Variable Structure Multi-model Tracking Algorithm Based on Error-ambiguity Decomposition
Model set adaptation (MSA) plays a key role in the variable structure multiple model (VSMM) estimation approach. In this paper, we adopt the error-ambiguity decomposition (EAD) principle into the VSMM framework and derive the optimal EAD-MSA criteria. By proposing some approximation methods, an EAD variable structure interactive multiple model algorithm (EAD-VSIMM) is constructed. We test the EAD-VSIMM algorithm in a maneuvering target tracking scenario, and the results demonstrate that, compared to two benchmark MM algorithms, the proposed EAD-VSIMM algorithm achieves more robust and accurate estimation results.

1b - Gaussian Processes

Room: LR1
Chair: Marina Riabiz
10:30 Ensemble Kalman Filtering for Online Gaussian Process Regression and Learning
Gaussian processes are used in Bayesian machine learning and signal processing for estimation of unknown functions. However, they suffer from high computational complexity, as in their basic form they scale cubically with the number of observations. Several approaches based on inducing points have been proposed to handle this problem in a static context, but these methods lack performance for data that is received sequentially over time. In this paper, a novel online algorithm for training sparse Gaussian process models from streaming data is presented. It treats the mean and hyperparameters of the Gaussian process as the state and parameters of an ensemble Kalman filter, respectively. The online evaluation of the parameters and the state is performed on newly arriving samples of data. This procedure iteratively improves the accuracy of the parameter estimates. The ensemble Kalman filter reduces the computational complexity required to obtain predictions with Gaussian processes while preserving the accuracy of these predictions. The performance of the proposed method is demonstrated on both synthetic and real large datasets of UK house prices.
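For context, the stochastic ensemble Kalman filter measurement update that the paper builds on can be sketched as follows (a generic textbook form with illustrative names and a toy example, not the authors' algorithm for GP states and hyperparameters):

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(ensemble, y, H, R):
    """Stochastic EnKF measurement update.
    ensemble: (n_state, n_members), y: (n_obs,), H: (n_obs, n_state)."""
    n = ensemble.shape[1]
    x_mean = ensemble.mean(axis=1, keepdims=True)
    A = ensemble - x_mean                      # ensemble anomalies
    P = A @ A.T / (n - 1)                      # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    # each member assimilates a perturbed copy of the observation
    y_pert = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=n).T
    return ensemble + K @ (y_pert - H @ ensemble)

# toy example: estimate a 2-D state from a noisy observation of its first entry
ens = rng.normal(0.0, 1.0, size=(2, 500))
H = np.array([[1.0, 0.0]])
R = np.array([[0.01]])
updated = enkf_update(ens, np.array([0.8]), H, R)
```

The attraction for GP training is that only ensemble statistics, never full gradients, are needed.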
10:55 Sparse Structure Enabled Grid Spectral Mixture Kernel for Temporal Gaussian Process Regression
We propose a modified spectral mixture (SM) kernel that serves as a universal stationary kernel for temporal Gaussian process regression (GPR). The kernel is named the grid spectral mixture (GSM) kernel, as we fix the frequency and variance parameters of the original SM kernel to a set of pre-selected grids. The hyper-parameters are the non-negative weights of all sub-kernel functions, and the resulting optimization problem falls in difference-of-convex programming. Due to the nice structure of the optimization problem, the hyper-parameters are solved for by an efficient majorization-minimization algorithm instead of the gradient descent algorithms used predominantly in the GP community. The solution is sparse, which provides us with a principled guideline to identify the important frequency components of the data. Experimental results with various classic time series data sets show that the proposed GPR with the GSM kernel far outperforms GPR with the SM kernel in terms of the mean-squared-error (MSE) and the stability of the optimization algorithm.
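The grid idea described above, with frequencies and bandwidths fixed so that only the non-negative weights are learned, can be sketched using the standard spectral mixture form (editor's illustrative code; grid values and names are not from the paper):

```python
import numpy as np

def grid_sm_kernel(tau, weights, mu_grid, sigma_grid):
    """Stationary spectral-mixture-style kernel evaluated at lags tau.
    With frequencies mu_grid and bandwidths sigma_grid fixed to a grid,
    the kernel is linear in the non-negative weights."""
    tau = np.asarray(tau, dtype=float)[:, None]            # (n_lags, 1)
    sub = (np.exp(-2.0 * np.pi**2 * tau**2 * sigma_grid**2)
           * np.cos(2.0 * np.pi * tau * mu_grid))          # (n_lags, Q)
    return sub @ weights

mu_grid = np.array([0.0, 0.5, 1.0])     # pre-selected frequency grid
sigma_grid = np.array([0.1, 0.1, 0.1])  # pre-selected bandwidths
w = np.array([0.2, 0.0, 0.8])           # a sparse weight vector
k0 = grid_sm_kernel([0.0], w, mu_grid, sigma_grid)[0]
```

Linearity in the weights is what turns hyper-parameter learning into the structured optimization problem the abstract exploits.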
11:20 A Gaussian Process Regression for Natural Gas Consumption Prediction Based on Time Series Data
For several economical, financial and operational reasons, forecasting energy demand has become a key instrument in energy system management. This paper develops a natural gas forecasting approach that consists of two major phases. Firstly, it classifies the natural gas consumption daily pattern sequences into different groups with similar attributes. Secondly, multiple autoregressive Gaussian process models are designed and trained using Algerian natural gas market data together with exogenous inputs consisting of weather (temperature) and calendar (day of the week, hour indicator) factors. The main novelty of this work is the investigation of multiple different clustering techniques for better analysis and clustering of natural gas consumption data. The impact of the clusters obtained by each technique is then summarized and evaluated with respect to the prediction accuracy.
11:45 A Gaussian Process Regression Approach for Fusion of Remote Sensing Images for Oil Spill Segmentation
Synthetic Aperture Radar (SAR) satellite systems are very efficient for oil spill monitoring due to their capability to operate under all weather conditions. Systems such as Envisat and RADARSAT have been used independently in many studies to detect oil spills. This paper presents a Gaussian process regression approach for oil spill segmentation in SAR images. The accuracy performance evaluation demonstrates that the proposed framework yields approximately a 45% improvement in the oil spill location estimates compared with the segmentation results from the individual images before the fusion process. The proposed framework can also be used in other environmental applications.

1c - SS: Advances in Distributed Kalman Filtering and Fusion 1

Room: LR2
Chair: Benjamin Noack
10:30 A Review of Forty Years of Distributed Estimation
This paper reviews forty years of distributed estimation research since the first papers on decentralized filtering appeared in 1978. Starting with a formulation of the problem, it reviews the assumptions and objectives of the main approaches, including information decorrelation, cross-covariance fusion, channel filters, covariance intersection, maximum a posteriori probability fusion, best linear unbiased estimate, and various forms of distributed Kalman filters based on pseudo estimates and augmented states. It also reviews algorithms motivated by sensor networks with flexible communication, including consensus and diffusion-based filters. Suggestions for future research are provided.
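Among the reviewed approaches, covariance intersection has a particularly compact form; a sketch (editor's illustration, with a brute-force grid search over the weight omega):

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, omega):
    """Covariance intersection fusion of two estimates with unknown
    cross-correlation; omega in [0, 1] weights the information matrices."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(omega * I1 + (1.0 - omega) * I2)
    x = P @ (omega * I1 @ x1 + (1.0 - omega) * I2 @ x2)
    return x, P

x1, P1 = np.array([1.0, 0.0]), np.diag([1.0, 4.0])
x2, P2 = np.array([1.2, 0.4]), np.diag([4.0, 1.0])
# a common choice: pick omega minimising the trace of the fused covariance
omegas = np.linspace(0.0, 1.0, 101)
traces = [np.trace(covariance_intersection(x1, P1, x2, P2, w)[1]) for w in omegas]
omega_star = omegas[int(np.argmin(traces))]
x_f, P_f = covariance_intersection(x1, P1, x2, P2, omega_star)
```

The fused covariance remains consistent for any cross-correlation, which is exactly the "no knowledge of correlation" setting the review discusses.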
10:50 Distributed Kalman Filter for A Class of Nonlinear Uncertain Systems: An Extended State Method
This paper studies the distributed state estimation problem for a class of discrete-time stochastic systems with nonlinear uncertain dynamics over time-varying topologies of sensor networks. An extended state vector consisting of the original state and the nonlinear dynamics is constructed. By analyzing the extended system, we provide a design method for the filtering gain and fusion matrices, leading to the extended state distributed Kalman filter. It is shown that the proposed filter can provide the upper bound of estimation covariance in real time, which means the estimation accuracy can be evaluated online. It is proven that the estimation covariance of the filter is bounded under rather mild assumptions, i.e., collective observability of the system and jointly strong connectedness of network topologies. Numerical simulation shows the effectiveness of the proposed filter.
11:10 Event-triggered Consensus Bernoulli Filtering
This paper focuses on reducing communication bandwidth and, consequently, energy consumption in the context of distributed target detection and tracking over a peer-to-peer sensor network. A consensus Bernoulli filter with event-triggered communication is developed by enforcing each node to transmit its local information to the neighbors only when a suitable measure of discrepancy between the current local posterior and the one predictable from the last transmission exceeds a preset threshold. Two information-theoretic criteria, i.e. Kullback-Leibler divergence and Hellinger distance, are adopted in order to measure the discrepancy between random finite set densities. The performance of the proposed event-triggered consensus Bernoulli filter is evaluated through simulation experiments.
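The Kullback-Leibler trigger described above has a closed form when the involved densities are Gaussian; a sketch of such a trigger (editor's illustrative names and threshold; the paper itself works with random-finite-set densities):

```python
import numpy as np

def kl_gauss(mu0, P0, mu1, P1):
    """KL divergence D(N0 || N1) between two multivariate Gaussians."""
    d = len(mu0)
    P1_inv = np.linalg.inv(P1)
    dm = mu1 - mu0
    return 0.5 * (np.trace(P1_inv @ P0) + dm @ P1_inv @ dm - d
                  + np.log(np.linalg.det(P1) / np.linalg.det(P0)))

def should_transmit(local_mu, local_P, predicted_mu, predicted_P, threshold):
    """Event trigger: transmit only if the local posterior has drifted far
    enough from what neighbours can predict from the last transmission."""
    return kl_gauss(local_mu, local_P, predicted_mu, predicted_P) > threshold

mu = np.array([0.0, 0.0])
P = np.eye(2)
```

Raising the threshold trades tracking accuracy for fewer transmissions, which is the bandwidth/energy trade-off studied in the paper.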
11:30 Distributed Observations in Meteorological Ensemble Data Assimilation and Forecasting
With the ever-increasing amount of meteorological data available, from satellites in particular, it becomes more and more important to use as large a fraction of this data as practically possible for operational weather forecasts. Two applications are shown where more data are used in ensemble data assimilation and forecasting by distributing satellite data among different ensemble members. For the Ensemble of Data Assimilations (EDA), a version of the perturbed-observation ensemble Kalman filter, the ensemble mean error can in theory be reduced by replacing observation perturbations with the distribution of different subsets of observations to different members, but in practice this is complicated by observation error correlations and the need to maintain the same level of spread in the ensemble as before. For the ensemble forecasts, it is shown that the initial conditions can be improved by re-centring EDA perturbations on multiple equally good analyses that use different subsets of observations instead of re-centring all EDA perturbations on a single one of those analyses. It is shown how the ensemble mean error decreases with decreasing error correlation between analyses. In practice only 5-6 analyses are needed to obtain 80% of the root mean square error reduction that would be achieved with an infinite number of analyses.
11:50 Analysis of Partial Knowledge of Correlations in an Estimation Fusion Problem
A recently proposed algorithm of fusion under partially known correlations of estimation errors has been proved to outperform the classic Covariance Intersection algorithm, which was proposed for the case of no knowledge of correlation. This paper shows that the assumptions of the recently proposed algorithm are rather strict with respect to the classic one. Namely, the mean square error (MSE) matrices of the two state estimates cannot be upper-bounded arbitrarily. A relaxation of the assumption that the matrices have to be known exactly is discussed, as well as the fusion reproducibility, and several examples dealing with dependent errors are provided.

1d - SS: Advanced Nonlinear Filters 1

Room: LR5
Chair: Fred E Daum
10:30 New Theory and Numerical Results for Gromov's Method for Stochastic Particle Flow Filters
We derive a new exact stochastic particle flow for Bayes' rule using a theorem of Gromov. We also show numerical experiments for high dimensional problems up to d = 100. The accuracy of our new filter is many orders of magnitude better than standard particle filters, and our filter beats the EKF by orders of magnitude for difficult nonlinear problems. The new theoretical result is equation (10), which is valid for arbitrary smooth nowhere vanishing densities, whereas our previous theory was derived for the special case of Gaussian densities with linear measurements. It is crucial to mitigate stiffness of the flow in order to achieve good numerical results.
10:50 A Homotopy Method for Grid Based Nonlinear Filtering
With the increasing computational power of modern systems, the focus of stochastic filtering turns to nonlinear effects. Sophisticated methods have to be investigated in application areas where linearized methods like the extended Kalman filter tend to sub-optimality or even divergence. Fully nonlinear solutions to the estimation problem are provided by approximating the full probability density function, either with particle filters or via the Fokker-Planck equation. Both methods suffer from degeneration of the approximated pdf when pure Bayes' rule is applied in the measurement update. For particle filters, the idea of particle flow successfully overcomes this problem. Unfortunately, the particle flow cannot be directly adapted to grid-based methods which solve the Fokker-Planck equation. In this contribution, a new approach to solve the degeneration problem for grid-based nonlinear filtering methods is presented by introducing a grid flow concept. It consists of a common flow of the whole grid, which preserves the underlying grid structure, supplemented by a compensation step that accounts for the measurement effects that could not be handled by the common flow. The advantages of this grid flow approach are shown for a seven-dimensional nonlinear tracking example, which is solved by the Fokker-Planck equation on sparse grids. It turns out that the grid flow approach increases the estimation accuracy in comparison to pure Bayes' rule.
11:10 Comparison of Gain Function Approximation Methods in the Feedback Particle Filter
This paper is concerned with a study of the different proposed gain-function approximation methods in the feedback particle filter. The feedback particle filter (FPF) has been introduced in a series of papers as a control-oriented, resampling-free, variant of the particle filter. The FPF applies a feedback gain to control each particle, where the gain function is found as a solution to a boundary value problem. Approximate solutions are usually necessary, because closed-form expressions can only be computed in certain special cases. By now there exist a number of different methods to approximate the optimal gain function, but it is unclear which method is preferred over another. This paper provides an analysis of some of the recently proposed gain-approximation methods. We discuss computational and algorithmic complexity, and compare performance using well-known benchmark examples.
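One of the simplest gain approximations compared in such studies is the constant-gain approximation, which replaces the state-dependent gain by the particle cross-covariance between the state and the observation function. A one-dimensional sketch, assuming unit observation-noise intensity (editor's illustration, not the authors' code):

```python
import numpy as np

def constant_gain(particles, h):
    """Constant-gain approximation to the FPF gain: the particle
    cross-covariance between the state and the observation function h
    (unit observation-noise intensity assumed)."""
    hx = np.array([h(x) for x in particles])
    return np.mean((particles - particles.mean()) * (hx - hx.mean()))

particles = np.random.default_rng(1).normal(2.0, 1.0, size=1000)
gain = constant_gain(particles, lambda x: x)  # linear h: gain -> state variance
```

For linear observation models this recovers a Kalman-like gain; the more elaborate methods in the paper aim to capture the state dependence this approximation discards.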
11:30 Reduced Order Nonlinear Filters for Multi-scale Systems with Correlated Sensor Noise
This paper provides theoretical results and numerical demonstrations for nonlinear filtering of systems with multiple timescales and correlated signal-sensor noise. The motivation of this work is to provide the necessary theoretical bedrock upon which computationally efficient algorithms may be further developed to handle the problem of data assimilation in increasingly high-dimensional complex systems, specifically with a focus on Dynamic Data-Driven Application Systems. As a main result, we provide details of the convergence of the filter equation to a homogenized (reduced-order) filter in the correlated case. We present a particle filtering method that makes use of the reduced-order filtering equation to efficiently solve high-dimensional multi-scale models. We numerically demonstrate an implementation of the particle method on a two-dimensional multi-scale problem with correlated noise, and on a scalable testbed atmospheric model that is chaotic and has multiple timescales.
11:50 MCMC Smoothing for Generalized Random Tour Particle Filters
Particle filters are powerful and general tools for performing nonlinear, non-Gaussian filtering. They provide an estimate of the distribution on target state at the time of the last measurement. However, it is often desirable to compute the posterior distribution on the target path over an interval of time given all the measurements received in that interval, i.e., a smoothed estimate of the target's path. The process of computing this distribution is called smoothing. This paper presents a Markov Chain Monte Carlo (MCMC) approach to smoothing when the target motion is given by a Generalized Random Tour (GRT) model, a non-Gaussian motion model. This model is particularly appropriate in maritime tracking situations which often involve non-linear measurements. Since the filter is non-linear and non-Gaussian, one cannot apply a Kalman smoother. It is easy and natural to simulate target paths using a GRT model, but the transition function does not have a closed analytic form. As a result, one cannot use standard methods for particle filter smoothing. In this paper, we describe a method for performing MCMC smoothing for GRT particle filters and demonstrate the results in examples.

1e - SS: Lower Bounds for Parameter Estimation and Beyond

Room: LR6
Chair: Carsten Fritsche
10:30 Posterior Cramer-Rao Bound for Target Tracking in the Presence of Multipath
This paper considers the general problem of tracking a non-cooperative target in the presence of multipath. The multipath effect occurs intermittently, according to a discrete-time Markov chain, and exerts an additional unknown measurement error (i.e. a bias), with biases auto-correlated if they occur across successive sampling times. We calculate the posterior Cramer-Rao bound (PCRB) for this problem by augmenting the target state with the multipath bias. An established, efficient Riccati recursion is then used to determine the PCRB, thereby providing mean squared error performance bounds for both the estimation of the target state and the multipath bias. The approach is demonstrated in a simulated scenario in which an airborne radar tracks a low altitude airborne target that is moving in a horizontal plane with nearly constant velocity, using measurements of azimuth, elevation and range. The measurements are intermittently corrupted by multipath bias resulting from specular reflection at the surface boundary. It is shown that the PCRB increases rapidly when the multipath effect occurs, indicating that optimal target tracking performance is significantly degraded at such times. Future work will compare the PCRB developed herein with alternative PCRB methodologies that do not implicitly condition on the multipath effects, and also compare the bound to the performance of a tracking algorithm that is designed to identify and adjust for the multipath effects.
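The Riccati-type PCRB recursion referred to above takes the following form in the linear-Gaussian special case, where the bound coincides with the Kalman filter posterior covariance (editor's sketch with made-up scalar parameters; the paper's multipath-augmented state replaces these terms with expectations over the trajectory):

```python
import numpy as np

def pcrb_recursion(J, F, Q, H, R):
    """One step of the Tichavsky et al. PCRB information recursion for a
    linear-Gaussian model: J' = D22 - D21 (J + D11)^{-1} D12."""
    Qi = np.linalg.inv(Q)
    D11 = F.T @ Qi @ F
    D12 = -F.T @ Qi
    D22 = Qi + H.T @ np.linalg.inv(R) @ H
    return D22 - D12.T @ np.linalg.inv(J + D11) @ D12

F = np.array([[0.9]]); Q = np.array([[0.1]])   # scalar state model
H = np.array([[1.0]]); R = np.array([[0.5]])   # scalar measurement model
J = np.array([[1.0]])                          # prior information
for _ in range(200):                           # iterate to the steady state
    J = pcrb_recursion(J, F, Q, H, R)
bound = np.linalg.inv(J)[0, 0]                 # steady-state MSE lower bound
```

For this scalar model the fixed point can be checked against the Kalman Riccati equation, which gives a steady-state posterior variance of about 0.156.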
10:50 A General Class of Bayesian Lower Bounds Tighter than the Weiss-Weinstein Family
In this paper, Bayesian lower bounds (BLBs) are obtained via a general form of the Pythagorean theorem, where the inner product derives from either the joint or the a-posteriori probability density function (pdf). When the joint pdf is considered, the BLBs obtained encompass the Weiss-Weinstein family (WWF). When the a-posteriori pdf is considered, by resorting to an embedding between two ad hoc subspaces, it is shown that any "standard" BLB of the WWF admits a "tighter" form which upper-bounds the "standard" form. Interestingly enough, this latter result may explain why the "standard" BLBs of the WWF are not always as tight as expected, as exemplified in the case of the Bayesian Cramér-Rao bound. As a consequence, an updated definition of efficiency is proposed, as well as an updated class of efficient estimators.
11:10 Bound on the Estimation of a 3-D Trajectory from a Stationary Passive Sensor and Its Attainability
It has been shown in previous works that the trajectory of a thrusting/ballistic object in three-dimensional space is observable with two-dimensional measurements from a stationary passive sensor. The measurements can either start from the launch point or start in flight, i.e., with delayed acquisition. The observability of the target trajectory was previously investigated by numerically testing the invertibility of the Fisher Information Matrix (FIM). This work discusses the observability of the trajectory via the uniqueness of the target state vector for a certain sequence of 2-D angle-only measurements (azimuth and elevation angles) from a single fixed passive sensor. The discussion starts with a polynomial motion, from which the results are extended to nonlinear thrusting/ballistic motion. Two cases are considered: (i) known thrust and drag coefficient, and (ii) unknown thrust and drag coefficient. The gravity acceleration is shown to be the crucial part that guarantees observability in all cases.
11:30 Bobrovsky-Zakai Bound for Filtering, Prediction and Smoothing of Nonlinear Dynamic Systems
In this paper, recursive Bobrovsky-Zakai bounds for filtering, prediction and smoothing of nonlinear dynamic systems are presented. The similarities and differences to an existing Bobrovsky-Zakai bound in the literature for the filtering case are highlighted. The tightness of the derived bounds is illustrated on a simple example in which a linear system with a non-Gaussian measurement likelihood is considered. The proposed bounds are also compared with the performance of some well-known filters/predictors/smoothers and other Bayesian bounds.
11:50 Multivariate Bayesian Cramér-Rao-Type Bound for Stochastic Filtering Involving Periodic States
In many stochastic filtering problems, some of the states have a periodic nature, i.e., the observation model is periodic with respect to these states. For estimation of these periodic states, we are interested in the modulo-T error and not in the plain error value. Thus, in this case, the commonly used Bayesian mean-squared-error (MSE) lower bounds are inappropriate for performance analysis, since the MSE risk is based on the plain error and is inappropriate for periodic state estimation. In contrast, the mean-cyclic-error (MCE) is an appropriate risk for estimation of periodic states. In a mixed periodic and nonperiodic setting, a mixed MCE and MSE lower bound can be useful for performance analysis and design of filters. In this paper, we present the mixed Bayesian Cramér-Rao bound (BCRB) for stochastic filtering. The mixed BCRB is composed of a cyclic part and a noncyclic part for estimation of the periodic and the nonperiodic states, respectively. Direct computation of the mixed BCRB is not practical, since it requires the inversion of a matrix whose dimensions increase with time. Therefore, we propose a recursive method with low computational complexity for computing the mixed BCRB at each time step. The mixed BCRB is examined for direction-of-arrival tracking scenarios and compared to the performance of a particle filter. It is shown that in the considered scenarios the mixed BCRB is informative and can be approached by the particle filter. In addition, the inappropriateness of MSE bounds for estimation of periodic states is demonstrated.
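To illustrate the distinction between the plain error (MSE risk) and the modulo-T error (MCE risk) described in this abstract, here is a minimal sketch assuming a 2π-periodic state such as an angle; the function names and numbers are illustrative, not taken from the paper:

```python
import numpy as np

def plain_error(estimate, truth):
    # ordinary error, as used by the MSE risk
    return estimate - truth

def cyclic_error(estimate, truth, T=2 * np.pi):
    # modulo-T error, as used by the MCE risk, wrapped into [-T/2, T/2)
    return (estimate - truth + T / 2) % T - T / 2

# An estimate of -3.1 rad for a true angle of 3.1 rad is nearly perfect
# on the circle, yet the plain error is -6.2 rad.
e_plain = plain_error(-3.1, 3.1)     # -6.2
e_cyclic = cyclic_error(-3.1, 3.1)   # ~0.083 (= 2*pi - 6.2)
```

Averaging the square of the wrapped error gives the MCE; unlike the MSE, it does not penalize estimates that differ from the truth by a near-multiple of the period.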

1f - Point Process Methods

Room: LR11
Chair: Ba-Tuong Vo
10:30 Computationally Efficient Distributed Multi-Sensor Multi-Bernoulli Filter
This paper proposes a computationally efficient distributed fusion algorithm for multi-Bernoulli (MB) random finite sets (RFSs) based on generalized Covariance Intersection (GCI). GCI fusion with the MB filter (GCI-MB) involves computing the generalized MB (GMB) fused density, which is determined by a set of hypotheses growing exponentially with the number of objects. Hence, its application to scenarios with many targets is quite restricted, which motivates a more efficient fusion algorithm. In this paper, we propose a novel approximation of GCI-MB fusion. By discarding the hypotheses with negligible weights, the GCI-GMB fusion amounts to parallelized fusions performed with several smaller groups of Bernoulli components. As such, the computation of the GMB fused density is significantly simplified, with the number of hypotheses reduced dramatically and a practically appealing parallelizable structure achieved. Based on the proposed approximation, a computationally efficient GCI-MB fusion algorithm that can handle a large number of objects is devised. Furthermore, we present an analysis of both the L1-error and the computational complexity of the proposed fusion algorithm relative to the standard GCI-GMB fusion. Our analysis shows that the proposed algorithm dramatically reduces computational expense and memory usage with only a slight approximation error. Numerical experiments using the Gaussian implementation for a challenging scenario with twenty objects demonstrate the performance of the proposed fusion algorithm.
10:50 Multi-Scan Generalized Labeled Multi-Bernoulli Filter
This paper extends the generalized labeled multi-Bernoulli (GLMB) tracking filter to a batch multi-target tracker. In a labeled random finite set formulation, a multi-target tracking filter propagates the labeled multi-target filtering density while a batch multi-target tracker propagates the labeled multi-target posterior density. The GLMB filter is an analytic solution to the labeled multi-target filtering recursion. In this work, we show that the GLMB filter can be extended to an analytic multi-object posterior recursion.
11:10 Handling of Multiple Measurement Hypotheses in an Efficient Labeled Multi-Bernoulli Filter
The detection and motion estimation of an unknown number of traffic participants in dense, cluttered environments is an essential task for autonomous driving systems. Recent research using Random Finite Sets shows promising results and represents the state of the art in object tracking. This paper proposes an extension to the Labeled Multi-Bernoulli (LMB) filter for handling multiple measurement hypotheses as they can occur in object detection using lidar, camera and radar. Real-time performance is achieved using efficient Gibbs sampling, which directly handles multiple measurement hypotheses. The algorithm and its modifications are analyzed in detail using a simple example. Finally, two simulations show that the proposed algorithm handles multiple measurement hypotheses better than the standard LMB filter. The performance advantage persists even if these hypotheses have a significant systematic, non-Gaussian error.
11:30 A New Cardinalized Probability Hypothesis Density Filter with Efficient Track Continuity and Extraction
The cardinalized probability hypothesis density (CPHD) filter was proposed as a practical approximation of the multi-target Bayes filter with tractable computational complexity. However, the CPHD filter has limitations in dealing with missed detections, extracting target states in its particle implementations, and maintaining track continuity. In this paper, a new improved CPHD filter is proposed to address these limitations, with track continuity and extraction achieved easily. The filter inherits the tractable computational complexity of the standard CPHD filter while addressing its drawbacks. The proposed filter is implemented using Gaussian mixtures, and simulation results demonstrate its good performance compared to the conventional multi-target filter in different challenging environments.
11:50 A Variational Bayesian Labeled multi-Bernoulli Filter for Tracking with Inverse Wishart Distribution
In multi-target tracking (MTT), an imprecise model of the sensor characteristics can result in poor performance. The Variational Bayesian labeled multi-Bernoulli (VB-LMB) filter based on the Gamma distribution can handle this problem. However, the predictive likelihood of the existing VB-LMB filter is simply treated as a Gaussian, which is inaccurate. In this paper, a VB-LMB filter with the inverse Wishart (IW) distribution is presented to perform MTT under unknown sensor characteristics. The measurement noise covariance is modeled as an IW distribution, which, unlike the Gamma distribution, can handle the full noise covariance matrix. Since the state and the measurement noise covariance are coupled, the update equation is solved by the variational Bayesian (VB) method. The predictive likelihood is calculated by minimizing the Kullback-Leibler divergence via the VB lower bound. An MTT scenario is used to evaluate the proposed method. Simulation results show that our approach outperforms the existing VB-LMB filter based on the Gamma distribution.

1g - Modelling, Simulation and Evaluation

Room: LR12
Chair: James Llinas
10:30 Identifying Interactions for Information Fusion System Design Using Machine Learning Techniques
An Information Fusion System (IFS) is a complex system consisting of various interdependent elements such as sensors, information processors, fusers, sense-makers, and resource managers. These elements are typically designed and evaluated independently, but isolated performance evaluation does not scale to system-level performance in complex systems. Since the IFS capability results from the collective behavior of these elements, identifying interactions becomes critical for engineering an IFS. In this paper, we investigate machine learning techniques (deep neural networks and general linear models) to provide holistic performance evaluation of the IFS, where the objective is to understand IFS design implications based on variations and interactions of its constituent elements. The challenge in employing machine learning techniques is the availability of a data set for building a predictive performance model of the IFS. We utilize Optimal Design of Experiments to provide the data collection strategy for building the machine learning models, and our results demonstrate that it is imperative to include interactions in the data collection strategy. This attests to the significance of interactions and advises against the independent design and evaluation of IFS constituent elements. Furthermore, we demonstrate how IFS designers can leverage insights from statistical analysis to exploit interactions between elements to improve IFS design and performance.
10:50 Application of A New Distribution to High Grazing Angle Sea-Clutter
Many studies show that sea clutter obeys the K distribution. With the continuous improvement of radar resolution and the increase of the grazing angle, the distribution of sea clutter gradually deviates from the K distribution. Although good fits can be obtained with more complicated sea-clutter models such as the KK, KA, and K+Rayleigh distributions, the computational efficiency of the detection algorithm is reduced markedly, because these models require a large number of parameter estimates and lack closed-form expressions. To solve this problem, a hybrid sea-clutter distribution (the KR distribution) with a closed-form expression is proposed for high-resolution, high-grazing-angle situations; it fits the measured data better than the K distribution. A parameter estimation method for the KR distribution is given, namely a geometric segmentation method based on path gain. The results show that the KR distribution fits better than both the K distribution and the Rayleigh distribution under conditions of high resolution and large grazing angle, and the fit improves as the grazing angle increases. The geometric segmentation method based on path gain finds the fitting parameters accurately.
11:10 Motion Artefacts Modelling in the Application of a Wireless Electrocardiogram
Wireless electrocardiograms can be useful for a wide range of applications, including early detection of seizure onset, West Syndrome, sleep apnoea, Temporal Lobe Epilepsy (TLE), supra-ventricular tachycardia, atrial flutter, and atrial fibrillation. One advantage of using wireless sensing platforms in telemedicine is that patients can move freely and carry out everyday activities unhindered while vital cardiac action potentials are being measured. However, the measurements are highly sensitive to motion, as motion changes the electrical interface between the electrodes and the skin and thereby distorts the useful signal. In this paper we propose a strategy for establishing the statistics of motion artefacts. Our approach employs 3D accelerometers to reason about the movement affecting the electrodes of a wireless electrocardiogram and takes advantage of the structure and sequence of cardiac action potentials.
11:30 On Effect of Information Loss on Fuser Quality and Utility
We abstract the accuracy performance of any fuser into a fusion quality measure that lies in the range [0,1] and monotonically decreases with increasing error. Although there are many possible ways to map the actual error to this quality measure, we adopt a mapping function consisting of both concave and convex regions whose parameters can be tuned based on system design requirements. The effect of communication loss over the links from the sensors, where estimates are generated, to the fuser is then considered, and based on the variation of the quality measure with loss, we define an overall fuser utility that characterizes the resilience of a fuser under increasingly adverse communication constraints. Tracking examples are shown to demonstrate the comparative quality and utility performance of several closed-form fusers.
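The error-to-quality mapping described above can be illustrated with a logistic curve, which is monotonically decreasing on [0, ∞) and has one concave and one convex region. The inflection point e0 and steepness k below are illustrative tuning parameters, not values from the paper:

```python
import numpy as np

def fusion_quality(error, e0=1.0, k=4.0):
    """Map a nonnegative fusion error to a quality score in (0, 1).

    Monotonically decreasing; concave for error < e0 and convex for
    error > e0, so both regions of the abstract's mapping are present.
    """
    return 1.0 / (1.0 + np.exp(k * (np.asarray(error, dtype=float) - e0)))

errors = np.linspace(0.0, 3.0, 13)
quality = fusion_quality(errors)   # strictly decreasing from ~0.98 toward 0
```

Tuning e0 shifts where quality degrades fastest, and k controls how sharply a design tolerates error, which is the kind of design-requirement knob the abstract alludes to.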
11:50 State Estimation with a Heading Constraint
The state estimation problem with a heading constraint, in which only the fixed heading of the target is known a priori without any other information about the constraint, is investigated. This situation arises in practical applications such as tracking a vehicle moving on a straight road: the heading of the target is limited to the direction of the road segment, which may be known a priori, while the specific geometry of the road may not be available. This partly known prior information about the road constraint can be exploited to enhance tracking performance. Since existing estimation methods cannot produce constrained estimates in this situation, a filtering algorithm is proposed to address the issue. A state augmentation approach, which augments the intercept of the constraint line into the state vector, is presented. The heading constraint can then be formulated using the state components and the known heading, and the corresponding pseudo-measurements are constructed and incorporated into the estimator, which is called the heading constraint Kalman filter (HCKF). The performance of the HCKF is evaluated by comparison against an unconstrained state estimation algorithm as well as two popular constrained state estimation methods. Simulation results demonstrate the effectiveness and the superior performance of the proposed method.
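A sketch of the pseudo-measurement idea: with known heading θ, the constraint line y = x·tan(θ) + c can be enforced by augmenting the intercept c into the state and applying a zero-valued pseudo-measurement in a Kalman update. The state ordering, noise variance, and function name below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def heading_pseudo_update(x, P, theta, r_pseudo=1e-4):
    """One pseudo-measurement update for the state [px, py, vx, vy, c].

    The constraint py - px*tan(theta) - c = 0 is treated as a linear
    measurement of value zero with small variance r_pseudo.
    """
    H = np.array([[-np.tan(theta), 1.0, 0.0, 0.0, -1.0]])
    S = H @ P @ H.T + r_pseudo           # innovation covariance (1x1)
    K = P @ H.T / S                      # Kalman gain (5x1)
    residual = 0.0 - float(H @ x)        # pseudo-measurement is zero
    x_new = x + (K * residual).ravel()
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# A state initially off the 45-degree constraint line gets pulled onto it.
x0 = np.array([0.0, 1.0, 1.0, 1.0, 0.0])
x1, P1 = heading_pseudo_update(x0, np.eye(5), np.pi / 4)
```

After the update the constraint residual y − x·tan(θ) − c is driven close to zero, while components the constraint does not touch (the velocities here) are left unchanged.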

1h - Extended Object/ Group Tracking

Room: JDB-Seminar Room
Chair: Emre Ozkan
10:30 3D Extended Object Tracking Using Recursive Gaussian Processes
In this study, we consider the challenging task of tracking dynamic 3D objects with unknown shapes by using sparse point cloud measurements gathered from the surfaces of the objects. We propose a Gaussian process based algorithm that is capable of tracking the dynamic behavior of the object and learning its shape in 3D simultaneously. Our solution does not require any parametric model assumption for the unknown shape. The shape of the objects is learned online via a Gaussian process. The proposed method can jointly estimate the position, orientation, and shape of the object. The inference is performed by an extended Kalman filter, which is suitable for online real-time applications. Lastly, we demonstrate the initial results of a promising approach that aims at reducing the computational complexity.
10:55 Extended Object Tracking and Shape Classification
Recent extended target tracking algorithms can provide reliable shape estimates while tracking objects. The estimated extent of the objects can be used for online classification. In this work, we propose to use a Bayesian classifier to identify different objects based on their contour estimates while tracking. The proposed method can use the uncertainty information provided by the estimation covariance of the tracker.
11:20 A Gaussian Process Convolution Particle Filter for Multiple Extended Objects Tracking with Non-Regular Shapes
Extended object tracking has become an integral part of various autonomous systems in diverse fields. Although it has been extensively studied over the past decade, many complex challenges remain in the context of extended object tracking. In this paper, a new method for tracking multiple irregularly shaped extended objects using surface measurements is proposed. The Gaussian Process Convolution Particle Filter proposed in earlier work (Aftab et al., 2017), designed to track a single extended/group object, is enhanced for tracking multiple extended objects. A convolution kernel is proposed to estimate the multi-object likelihood. A target birth/death model based on the proposed method is also introduced for automatic initiation and deletion of objects. The proposed approach is validated on real-world LiDAR data, which shows that the method is efficient in tracking multiple irregularly shaped extended objects in challenging scenarios involving occlusion, dense clutter and low detection probability.
11:45 Maximum Likelihood Extended Target Detection for Tsunami Warning Using Ocean Surface Radar
Ocean surface radars are used for tsunami early warning systems. Such a radar can observe the radial current velocity in every range-azimuth resolution cell within its coverage. However, the measured velocity contains measurement errors, which can make the approach of a tsunami unclear. In this paper, we propose a tsunami detection technique that reduces false alarms by estimating tsunami parameters statistically via maximum likelihood estimation. Multiple tsunami detection points extending beyond a single range-azimuth resolution cell are integrated and detected under an assumed tsunami arrival wavefront. The validity of the proposed algorithm was confirmed through simulations of the Sumatra earthquake tsunami.

1i - Track Before Detect

Room: JDB-Teaching Room
Chair: Thia Kirubarajan
10:30 Non-Bayesian Track-Before-Detect Using Cauchy-Schwarz Divergence-Based Information Fusion
In this paper we present a novel non-Bayesian filtering method for tracking multiple objects, with a particular application to time-lapse cell microscopy video sequences. In our method the heat-map of the frame sequence is extracted and represented as a pseudo-probability hypothesis density of the image. The pseudo-probability hypothesis density is used as the measurement and fused with a prior Poisson random finite set density, employing the Cauchy-Schwarz divergence for information fusion. The presented algorithm was tested on a publicly available cell microscopy video sequence.
10:50 Track-before-detect Strategies for Multiple-PRF Radar System with Range and Doppler Ambiguities
Medium pulse repetition frequency (MPRF) radar systems are widely applied in practice since they combine the desirable features of both low- and high-PRF radars. However, the corresponding signal processing is more complicated due to range and Doppler ambiguities. N staggered PRFs are designed into the system with the intention of resolving these ambiguities. Traditional target tracking methods resolve the ambiguities initially by performing ambiguity resolution over thresholded measurements, but they perform poorly when the target signal-to-noise ratio (SNR) is low. Multi-frame track-before-detect (MF-TBD) is an advantageous method for tracking dim targets. Unfortunately, it cannot be applied to a multiple-PRF radar system directly, as the target state space and the measurement space are not in one-to-one correspondence, which increases the complexity of the algorithm. In this paper, TBD strategies for multiple-PRF radar systems with range and Doppler ambiguities are proposed to solve these issues. A set of pseudo-measurements is synthesized from the different PRF data. A cross-boundary MF-TBD for ambiguous measurements is then applied over the N sets of pseudo-measurements, producing ambiguous plot-sequences. In addition, a joint fusion ambiguity resolution is proposed to obtain unambiguous plot-sequences. Simulations show that the proposed methods significantly outperform the classical ambiguity resolution Kalman filter (CAR-KF) algorithm.
11:10 Track-Before-Detect Technique in Mixed Coordinates
Conventional track-before-detect (TBD) usually considers a remote target with constant Cartesian velocity following an approximately straight-line motion in sensor coordinates. This model inaccuracy may lead to integrated energy loss, especially in near-field scenes. In this paper, a multi-frame TBD technique in mixed coordinates (MC-MF-TBD) is proposed for weak target detection and tracking with non-Cartesian sensors. The predicted position of a cell in sensor coordinates is obtained by converting the Cartesian-coordinate predicted position, computed according to an assumed velocity in a Cartesian frame, back to sensor coordinates. The measurement of each cell is then added to the cell closest to the predicted position to realize energy integration. The procedure of multi-frame accumulation in mixed coordinates is derived in detail. To match the unknown target velocity, a mixed-coordinate velocity filter bank is presented and the filter mismatch loss is investigated. Simulation results demonstrate the superiority of MC-MF-TBD over other MF-TBD strategies.
11:30 A Method for Resolving the Merit Function Expansion of Dynamic Programming TBD
Existing dynamic programming based track-before-detect (DP-TBD) strategies suffer from the merit function expansion phenomenon (MFEP), which aggravates the burden of designing the detection threshold. Traditional constant false alarm rate (CFAR) detection cannot be used to estimate the noise energy exactly within the area of merit function expansion, so the threshold settings of existing DP-TBD strategies usually resort to traditional Monte Carlo counting, extreme-value theory, or its generalized version. For nonhomogeneous clutter backgrounds and fluctuating targets, all of these constant threshold-setting strategies inevitably suffer from target loss or a higher false alarm rate. In addition, for multi-target scenes, in order to avoid solving high-dimensional optimization problems, the most effective existing DP-TBD methods all use additional heuristic procedures to extract target trajectories one by one from the merit function expansion area, assuming target tracks are always independent. To overcome these challenges, a novel one-step greedy optimization TBD algorithm (OSP-TBD) is proposed in this paper. By constraining the physically admissible trajectories, such that different targets do not occupy the same resolution cell during the same stage and the trajectory with the higher merit function (MF) is estimated ahead of the others, OSP-TBD eliminates the MFEP intrinsically, and a traditional CFAR procedure can be used to detect targets adaptively. Moreover, the proposed OSP-TBD algorithm can process multi-target situations directly and declare all target trajectories corresponding to states whose MF at the final frame exceeds the given detection threshold, without any additional heuristic procedure. Numerical simulations are used to assess the performance of the proposed strategies.
11:50 Detection of Multiple Targets in an Image
Detecting the locations of multiple targets in image frames remains a challenging problem in various applications, including target tracking, robotics and autonomous systems. Most existing approaches to this problem rely on threshold-based detection and are ad hoc in nature; there is no systematic approach to detecting multiple targets in a single image. In this paper, such an approach is developed by assuming that the point spread function (PSF) of the target signal intensities in the image is Gaussian and that the noise is independent and identically distributed (i.i.d.). It is shown that, after joint estimation of the target positions and the PSF, the multiple-target detection problem decomposes into parallel single-target detection problems. The performance of the proposed approach is demonstrated through a simulated example.
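The decomposition into parallel single-target tests can be sketched with a Gaussian matched filter followed by local-maximum detection on a noiseless toy image. The kernel size, σ, threshold, and function names are illustrative; this is not the paper's estimator:

```python
import numpy as np

def gaussian_psf(size=7, sigma=1.5):
    # unit-norm Gaussian point spread function
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / np.linalg.norm(k)

def matched_filter(img, kernel):
    # correlation of the image with the PSF (brute force, for clarity)
    h = kernel.shape[0] // 2
    padded = np.pad(img, h)
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = padded[i:i + kernel.shape[0], j:j + kernel.shape[1]]
            out[i, j] = np.sum(patch * kernel)
    return out

def detect_targets(img, kernel, thresh):
    # each local maximum above the threshold is one single-target detection
    score = matched_filter(img, kernel)
    peaks = []
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            window = score[i - 1:i + 2, j - 1:j + 2]
            if score[i, j] > thresh and score[i, j] == window.max():
                peaks.append((i, j))
    return peaks

# Toy image with two point targets blurred by the PSF.
psf = gaussian_psf()
img = np.zeros((32, 32))
for (r, c) in [(8, 8), (20, 24)]:
    img[r - 3:r + 4, c - 3:c + 4] += psf
detected = detect_targets(img, psf, thresh=0.5)
```

Because the filter is matched to the PSF, each target produces one sharp score peak, so the two targets can be declared independently — the parallel single-target structure the abstract describes.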

Wednesday, July 11 12:10 - 13:10

Lunch

Wednesday, July 11 13:10 - 14:50

2a - Localisation 1

Room: LR0
13:10 Direct Position Determination of Non-Cooperative Emitters with Multiple Uncalibrated Receivers
In this work, we consider direct position determination of non-cooperative emitters with multiple receivers, which are uncalibrated in the sense that each applies an unknown complex scaling to the received signals, and the possibly unequal receiver noise powers are unknown. Neglecting the uncertainty in these unknown parameters might result in a loss of localization accuracy. Taking these unknowns as redundant parameters, the maximum likelihood estimator is derived under a low signal-to-noise ratio (SNR) assumption. Normalization of the received signals is introduced in the derived estimator to whiten the receiver noise to unit power. The proposed algorithm is compared with its predecessors, and the similarity between these estimators is illustrated. Simulation results demonstrate that the proposed method performs well across all SNRs despite the low-SNR assumption in its derivation. In terms of localization accuracy, the proposed method performs comparably to an estimator that exploits exact prior knowledge of the receiver noise powers. When that prior knowledge is uncertain, the proposed method outperforms other estimators in the low-SNR region.
13:30 Convex Combination for Source Localization Using Received Signal Strength Measurements
Source localization is of great importance for wireless sensor network applications. Locating emission sources using received signal strength (RSS) measurements is investigated in this paper. As RSS localization is a non-convex optimization problem, it is difficult to reach the global optimum. Many methods have been proposed to relax it to a convex optimization problem. Unlike these methods, we propose a convex combination scheme. By introducing a highly accurate linear approximation of a logarithmic function, the source location is represented as a convex combination of a set of virtual anchors. The original problem is then relaxed to the convex problem of finding the optimal combination coefficients, which can be solved efficiently using a constrained least squares approach. To obtain the virtual nodes, we construct parallel lines and use their intersections to form a convex polygon that covers the source location with a certain probability; the vertices of the polygon are taken as the virtual nodes. Numerical examples verify the performance of the proposed method in terms of both localization accuracy and computational efficiency.
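The core step — representing the source as a combination of virtual anchors via constrained least squares — can be sketched as an equality-constrained problem solved through its KKT system. Nonnegativity of the coefficients is not enforced in this toy version, and the square "polygon" and function name are illustrative, not the paper's construction:

```python
import numpy as np

def combination_coefficients(vertices, point):
    """min ||V.T @ a - point||  subject to  sum(a) = 1, via the KKT system.

    vertices: (m, 2) array of virtual-anchor positions.
    Returns the m combination coefficients a.
    """
    V = np.asarray(vertices, dtype=float)
    m = V.shape[0]
    A = V.T                                          # (2, m)
    KKT = np.block([[2.0 * A.T @ A, np.ones((m, 1))],
                    [np.ones((1, m)), np.zeros((1, 1))]])
    rhs = np.concatenate([2.0 * A.T @ np.asarray(point, dtype=float), [1.0]])
    # lstsq handles the singular KKT matrix (rank 3 for m > 3 vertices)
    sol = np.linalg.lstsq(KKT, rhs, rcond=None)[0]
    return sol[:m]

square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
a = combination_coefficients(square, [0.3, 0.6])
recovered = square.T @ a   # reconstructs [0.3, 0.6]
```

For an interior point the combination reproduces the location exactly while the coefficients sum to one, which is the representation the RSS relaxation builds on.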
13:50 A Crowd-sensing Procrustes-Based Method for Indoor Positioning in a Common Frame of Reference
Indoor positioning techniques have been applied in recent years in a wide range of applications and have become attractive tools for emerging technologies such as the Internet of Things (IoT). Numerous indoor localization methods are highly constrained, since they rely on knowledge of building information, inter-node distances, some absolute position information and/or WiFi fingerprints. Here, we propose a less constrained crowd-sensing positioning method based only on dead-reckoning information and on the relative positions of detected access points (APs). Our method comprises the selection of APs, the choice of a common frame of reference, a partial Procrustes transformation, and rotational and translational transformations applied to the trajectories of the users. Monte Carlo simulations show that the trajectories and AP positions estimated by our crowd-sensing method are statistically accurate.
14:10 GNSS Ambiguity Resolution by Adaptive Mixture Kalman Filter
The precision of global navigation satellite systems (GNSSs) relies heavily on accurate carrier-phase ambiguity resolution. The ambiguities are known to take integer values, but the set of candidate values is unbounded. We propose a mixture Kalman filter solution to GNSS ambiguity resolution. By marginalizing out the set of ambiguities and exploiting a likelihood proposal for generating them, we can bound the possible values to a tight and dense set of integers, which allows the integer solution to be extracted as a maximum-likelihood estimate from the mixture Kalman filter. We verify the efficacy of the approach in simulation, including a comparison with a well-known integer least-squares based method. The results indicate that our proposed switched mixture Kalman filter repeatedly finds the correct integers in cases where the other method fails.
14:30 An Ensemble Kalman Filter for Feature-Based SLAM with Unknown Associations
In this paper, we present a new approach to solving the SLAM problem using the Ensemble Kalman Filter (EnKF). In contrast to other Kalman filter based approaches, the EnKF uses a small set of ensemble members to represent the state, thereby circumventing the computation of the large covariance matrix traditionally used with Kalman filters and making the approach viable in high-dimensional state spaces. Our approach adapts techniques from the geoscientific community, such as localization, to the SLAM problem domain, and uses the Optimal Subpattern Assignment (OSPA) metric for data association. We compare the results of our algorithm with an extended Kalman filter (EKF) and FastSLAM, showing that our approach yields a more robust, accurate, and computationally less demanding solution than the EKF and results similar to FastSLAM.
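A minimal analysis (update) step of a stochastic EnKF with perturbed observations illustrates how the gain is built from ensemble statistics rather than an explicit full covariance matrix. The linear observation model and dimensions are illustrative, and the paper's localization and OSPA association steps are omitted:

```python
import numpy as np

def enkf_update(ensemble, obs, H, R, rng):
    """Stochastic EnKF analysis step with perturbed observations.

    ensemble: (N, n) array of N state members.
    H: (p, n) linear observation matrix; R: (p, p) observation noise cov.
    """
    N = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)      # state anomalies
    Y = ensemble @ H.T                        # predicted observations
    Ya = Y - Y.mean(axis=0)                   # observation anomalies
    Pyy = Ya.T @ Ya / (N - 1) + R             # innovation covariance
    Pxy = X.T @ Ya / (N - 1)                  # state-observation cross cov
    K = Pxy @ np.linalg.inv(Pyy)              # ensemble Kalman gain
    perturbed = obs + rng.multivariate_normal(np.zeros(len(obs)), R, size=N)
    return ensemble + (perturbed - Y) @ K.T

rng = np.random.default_rng(0)
prior = rng.normal(size=(500, 2))             # 500 members, 2-D state
H = np.array([[1.0, 0.0]])                    # observe first component only
posterior = enkf_update(prior, np.array([3.0]), H, np.array([[0.01]]), rng)
```

Only p×p and n×p matrices are ever inverted or stored, which is why the ensemble representation scales to the high-dimensional joint pose-plus-landmark states of SLAM.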

2b - SS: Multi-sensor Data Fusion for Navigation and Localisation 1

Room: LR1
Chair: Lyudmila Mihaylova
13:10 Passive Multi-Target Tracking Using the Adaptive Birth Intensity PHD Filter
Passive multi-target tracking applications require the integration of multiple spatially distributed sensor measurements to distinguish true tracks from ghost tracks. A popular multi-target tracking approach for these applications is the particle filter implementation of Mahler's probability hypothesis density (PHD) filter, which jointly updates the union of all target state space estimates without requiring computationally complex measurement-to-track data association. Although this technique is attractive for implementation in computationally limited platforms, the performance benefits can be significantly overshadowed by inefficient sampling of the target birth particles over the region of interest. We propose a multi-sensor extension of the adaptive birth intensity PHD filter described in (Ristic, 2012) to achieve efficient birth particle sampling driven by online sensor measurements from multiple sensors. The proposed approach is demonstrated using distributed time-difference-of-arrival (TDOA) and frequency-difference-of-arrival (FDOA) measurements, in which we describe exact techniques for sampling from the target state space conditioned on the observations. Numerical results are presented that demonstrate the increased particle density efficiency of the proposed approach over a uniform birth particle sampler.
13:30 Robust Bayesian Filtering Using Bayesian Model Averaging and Restricted Variational Bayes
Bayesian filters can be made robust to outliers if the solutions are developed under the assumption of heavy-tailed distributed noise. However, in the absence of outliers, these robust solutions perform worse than standard filters based on the Gaussian assumption. In this work, we develop a novel robust filter that adopts both Gaussian and multivariate t-distributions to model the outlier-contaminated measurement noise. The effects of these distributions are combined within a Bayesian Model Averaging (BMA) framework. Moreover, to reduce the computational complexity of the proposed algorithm, a restricted variational Bayes (RVB) approach handles the multivariate t-distribution instead of its standard iterative VB (IVB) counterpart. The performance of the proposed filter is compared against a standard cubature Kalman filter (CKF) and a robust CKF (employing the IVB method) in a representative simulation example concerning target tracking using range and bearing measurements. In the presence of outliers, the proposed algorithm shows a 38% improvement over the CKF in terms of root-mean-square error (RMSE) and is computationally 2.5 times more efficient than the robust CKF.
13:50 A Novel Robust Rauch-Tung-Striebel Smoother Based on Slash and Generalized Hyperbolic Skew Student's T-Distributions
In this paper, a novel robust Rauch-Tung-Striebel smoother is proposed based on the Slash and generalized hyperbolic skew Student's t-distributions. A novel hierarchical Gaussian state-space model is constructed by formulating the Slash distribution as a Gaussian scale mixture form and formulating the generalized hyperbolic skew Student's t-distribution as a Gaussian variance-mean mixture form, based on which the state trajectory, mixing parameters and unknown noise parameters are jointly inferred using the variational Bayesian approach. The posterior probability density functions of mixing parameters of the Slash and generalized hyperbolic skew Student's t-distributions are, respectively, approximated as truncated Gamma and generalized inverse Gaussian. Simulation results illustrate that the proposed robust Rauch-Tung-Striebel smoother has better estimation accuracy than existing state-of-the-art smoothers.
14:10 A Fast Numerical Method for the Optimal Data Fusion in the Presence of Unknown Correlations
In the presence of unknown correlations, optimal data fusion, in the sense of minimum mean square error, can be formulated as the minimization of a non-differentiable but convex function. The popular projected subgradient methods are known to converge slowly. This paper presents an OSGA-V-based formulation and method for this minimization, achieving a much faster convergence rate than projected subgradient methods. We expect this method to significantly reduce the computational cost and time required to achieve optimal data fusion in the presence of unknown correlations.
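For context, the slow projected-subgradient baseline can be sketched on a toy non-differentiable convex objective over fusion weights (the objective, step sizes, and simplex constraint here are illustrative assumptions, not the paper's formulation):

```python
def project_simplex(v):
    """Euclidean projection of v onto the probability simplex {w >= 0, sum w = 1}."""
    u = sorted(v, reverse=True)
    cssv, theta = 0.0, 0.0
    for i, ui in enumerate(u, 1):
        cssv += ui
        if ui - (cssv - 1.0) / i > 0:
            theta = (cssv - 1.0) / i
    return [max(vi - theta, 0.0) for vi in v]

def minimize_worst_case(P, steps=3000):
    """Projected subgradient descent for the non-differentiable convex objective
    f(w) = max_i w_i * P_i over the simplex. The optimum equalizes the terms,
    i.e. w_i proportional to 1 / P_i; the best iterate seen is returned."""
    n = len(P)
    w = [1.0 / n] * n
    best_w = w
    best_f = max(wi * pi for wi, pi in zip(w, P))
    for k in range(1, steps + 1):
        j = max(range(n), key=lambda i: w[i] * P[i])        # active term of the max
        g = [P[j] if i == j else 0.0 for i in range(n)]     # a subgradient of f at w
        step = 0.5 / k ** 0.5                               # diminishing step size
        w = project_simplex([wi - step * gi for wi, gi in zip(w, g)])
        f = max(wi * pi for wi, pi in zip(w, P))
        if f < best_f:
            best_w, best_f = w, f
    return best_w
```

The many small steps needed near the optimum are exactly the slow-convergence behaviour the abstract contrasts with the OSGA-V approach.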
14:30 Fusion of GPS/OSM/DEM Data by Particle Filtering for Vehicle Attitude Estimation
The objective of this work is to estimate the location and attitude of a land vehicle by fusing GPS, OSM and DEM data through a nonlinear filter. We focus on the heading and pitch angles of the vehicle, since these parameters are essential in the optimization of route planning and energy management for an EV. This paper investigates the performance of particle filtering and probabilistic map-matching algorithms for tracking a vehicle with the help of digital roadmaps to improve the ground location. The filter also fuses DEM data through a TIN method in order to bound altitude errors caused by GPS. The proposed method is evaluated in an urban transport network scenario, and experimental results show that the proposed estimator can accurately estimate the vehicle location and attitude.

2c - SS: Situational Understanding Through Equivocal Sources

Room: LR2
Chair: Geeth Ranmal de Mel
13:10 Source Location with Quantized Sensor Data Corrupted by False Information
In this paper, we investigate the problem of source location estimation in wireless sensor networks (WSNs) based on quantized data in the presence of false information attacks. Using a Gaussian mixture to model the possible attacks, we develop a maximum likelihood estimator (MLE) to locate the source with sensor data corrupted by injected false information, and call the approach quantized received signal strength with a Gaussian mixture model (Q-RSS-GM). The Cramer-Rao lower bound (CRLB) for this estimation problem is also derived to evaluate the estimator's performance. Simulation results show that the proposed estimator is robust in various cases with different attack probabilities and parameter mismatch, and it significantly outperforms the approach that ignores the possible false information attacks.
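The mixture-model likelihood at the heart of this abstract can be sketched as follows (the path-loss model, bin edges, noise spread, and attack distribution below are toy assumptions, not the paper's settings; a full MLE would maximize this over candidate locations):

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bin_prob(mean, sigma, lo, hi):
    """Probability that a Gaussian reading falls in the quantization bin [lo, hi)."""
    return phi((hi - mean) / sigma) - phi((lo - mean) / sigma)

def quantized_loglik(theta, sensors, bins, readings, p_attack, attack_mean, sigma=3.0):
    """Log-likelihood of quantized RSS readings under a two-component Gaussian
    mixture: with probability (1 - p_attack) a reading follows the path-loss
    model at source location theta, with probability p_attack it follows the
    attacker's distribution."""
    ll = 0.0
    for (sx, sy), k in zip(sensors, readings):
        d = max(math.hypot(theta[0] - sx, theta[1] - sy), 1e-3)
        mean = 30.0 - 20.0 * math.log10(d)      # toy log-distance path-loss model
        p = ((1.0 - p_attack) * bin_prob(mean, sigma, bins[k], bins[k + 1])
             + p_attack * bin_prob(attack_mean, sigma, bins[k], bins[k + 1]))
        ll += math.log(max(p, 1e-300))
    return ll
```

The log-likelihood is higher at the true source location than at a mismatched one, which is what a grid or gradient search over theta exploits.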
13:30 Supporting Scientific Enquiry with Uncertain Sources
In this paper we propose a computational methodology for assessing the impact of trust associated with sources of information in scientific enquiry activities, building upon recent proposals of an ontology for situational understanding and results in computational argumentation. Trust in the source of information often serves as a proxy for evaluating the quality of the information itself, especially in cases of information overload. We show how our computational methodology, composed of an ontology for representing uncertain information and sources together with an argumentative process of conjecture and refutation, supports human analysts in scientific enquiry and highlights issues that demand further investigation.
13:50 MINI-DASS (Mission-Informed Needed Information - Discoverable, Available Information ………): A Unique Information-Based Approach to Maximizing the Utility of ISR Assets and the "Magic Rabbits" That Your Mother Was Afraid to Tell You About
Warfighters require the ability to develop and maintain the best possible situational understanding in order to make the best-informed decisions. Current DOTMLPF (Doctrine, Organization, Training, Material, Leadership, Personnel, and Facilities) solutions do not:
a. enable clear and concise specification of information requirements;
b. adequately identify and leverage all available information sources;
c. evaluate the capability of available information sources against information requirements;
d. assign information requirements to the best available information sources, and filter available information against information requirements to assess "goodness";
e. route filtered information efficiently to the owner of the information requirement.
MINI-DASS was conceived and is being developed to address these shortcomings by automating the functions described above. An established MMF (Mission and Means Framework) used for kinetic missions is being applied to non-kinetic information-gathering missions. A unique implementation constructs information requests from an information-centric perspective rather than the traditional source-centric perspective. This includes modelling equivocality in social, physical, and other sensing sources, with a focus on modelling credibility and relevancy in information and sources. ISR assets are also modeled from an information-centric perspective rather than the traditional platform-centric perspective; an information source may be a sensor, social media, a fusion engine, an algorithm or an analytic cell product. In order to detach the solution from the requirement, a key paradigm shift is the metaphorical concept that information comes from "magic rabbits": what information do you need the rabbits to provide?
14:10 Trust Estimation of Sources over Correlated Propositions
This work analyzes the impact of correlated propositions when estimating the reporting behavior of information sources. These behavior estimates are critical for fusion, and traditional methods assume the propositions are statistically independent. A new source behavior estimation method is presented that accounts for statistical dependencies between the training propositions. Simulations indicate that the potential performance gain from accounting for the correlations is small relative to the increased computational complexity. One may conclude that the traditional independence assumption in source behavior estimation is reasonable even in cases where it is actually violated.
14:30 Decision Making with Linguistic Information Based on D Numbers and OWAWA Operator
D numbers have been previously introduced into linguistic decision making due to their effectiveness and flexibility in dealing with uncertain information. The study applies the integration operator of D numbers to obtain the decision result by aggregating different opinions of experts, which may be imprecise and uncertain. However, it is sometimes more reasonable to consider the risk preference of the decision maker. In this paper, we propose an improved aggregation method for linguistic information based on D numbers and the OWAWA operator. The main advantage is that it can integrate both the degree of importance of each expert and the risk preference of the decision maker in the aggregation of linguistic information. An example is used to demonstrate the flexibility and reasonableness of the proposed method.
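The OWAWA operator referenced above is simple to state: it blends OWA weights, applied to the values sorted in descending order (capturing the decision maker's risk attitude), with importance weights tied to each expert. A minimal sketch (the example weights are illustrative):

```python
def owawa(values, owa_weights, importance, beta):
    """OWAWA operator: for the j-th largest value, the effective weight is
    beta * owa_weights[j] + (1 - beta) * importance[i], where i is the index
    of the expert supplying that value. beta = 1 gives pure OWA; beta = 0
    gives the pure importance-weighted average."""
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    return sum(
        (beta * owa_weights[j] + (1.0 - beta) * importance[order[j]]) * values[order[j]]
        for j in range(len(values))
    )
```

Sweeping beta between 0 and 1 interpolates between honoring expert importance and honoring the decision maker's optimism or pessimism.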

2d - Deep Learning 1

Room: LR5
Chair: Thomas Powers
13:10 Deep Learning Based Fusion of RGB and Infrared Images for the Detection of Abnormal Condition of Fused Magnesium Furnace
In the fused magnesium furnace (FMF) process, the semi-molten condition is one of the most harmful abnormal working conditions: the furnace wall is thinned by overheated fused magnesium because of uneven impurities in the raw material. If the condition is not detected at an early stage, the furnace can be burnt through. At present, the semi-molten condition is detected by experienced operators directly "observing the fire" at the FMF production site, a high-risk and labor-intensive practice that can cause safety issues and lead to missed or false detections. This work introduces a detection technique for the semi-molten working condition of an FMF based on the fusion of RGB images and infrared thermal images. The classifier is a deep Convolutional Neural Network (CNN) model trained on historical data. To tackle the problem of insufficient training data, Deep Convolutional Generative Adversarial Networks (DCGAN) are employed to generate extra samples. Finally, industrial experiments carried out in a magnesium oxide plant show the effectiveness of the technique.
13:30 Deep Generative Acoustic Models for Real-Time Sonar Systems
High-fidelity acoustic models are crucial to the performance of sonar systems since they identify where there is signal excess in the surrounding environment. Sonar operators use these models to optimize sonar parameters in applications like target detection and tracking. Unfortunately, high-fidelity generative models like Comprehensive Acoustic System Simulation (CASS) are very computationally intensive and cannot be run in real time. One way around this limitation is to pre-compute maps for a region given expected environmental parameters, but this approach is impractical in highly variable littoral regions. Instead, we propose to emulate a high-fidelity acoustic model with a deep neural network (DNN). DNNs are very efficient at test time, requiring only a few iterations of simple linear and non-linear operations, and can easily be run in real time on a sonar platform. Deep models continue to advance the state of the art in fields like speech and audio processing, computer vision, and natural language processing. These advances motivate the use of deep learning for a real-time generative acoustic model. We train a deep feedforward sigmoid network on a dataset generated by CASS and demonstrate promising results.
13:50 Open-Book Testing and Multi-Label Deep Generative Models
Deep Generative Models (DGMs) are very powerful semi-supervised classifiers. We aim to further improve their prediction accuracy by constructing novel generative models that incorporate multiple labels and by proposing open-book testing, a new testing paradigm that leverages the semi-supervised nature of DGMs. We perform all of our experiments on the NORB data set. Open-book testing allows unlabeled test data to be used during training in an effort to combat overfitting. We show experimentally that open-book testing significantly increases classification performance even though no label information is provided. Further, we develop five new multi-label DGMs: one generic multi-label model and four custom-tailored to the NORB data set. We find that, compared to a single-label classifier, the presence of additional labels degrades performance despite open-book testing, but accuracy is nearly perfect at 99.7% when a priori independence is enforced.
14:10 Bottle Detection in the Wild Using Low-Altitude Unmanned Aerial Vehicles
In this paper, we propose a new dataset and benchmark for low altitude UAV object detection, aiming to find and localize waste plastic bottles in the wild, as well as to inspire the development of object detection models to be capable of detecting small and transparent objects. To this end, we collect 25,407 UAV images of bottles with various kinds of backgrounds. Unlike traditional horizontal bounding box based annotation methods, we use the oriented bounding box to accurately and compactly annotate the bottles, which provides more detailed information for subsequent robotic grasping. The fully annotated images contain 34,791 bottles, each of which is annotated by an arbitrary (5 d.o.f.) quadrilateral. To build a baseline for bottle detection, we evaluate several state-of-the-art object detection algorithms on our UAV-Bottle Dataset (UAV-BD), such as Faster R-CNN, SSD, YOLOv2 and RRPN. We also present an analysis of the dataset along with baseline approaches. Both the dataset and benchmark are made publicly available to the vision community on our website to advance research in the area of object detection from UAVs.
14:30 A Priori Independence for Deep Generative Models
Deep Generative Models are powerful learning tools that utilize an autoencoder structure for generating data and inferring latent variables. The most basic version has a single latent variable that encodes the training data in an unsupervised manner, and the extension to semi-supervised learning combines a latent class label with a continuous latent variable whose purpose is to provide the variations in the data that are not caused by the class label. The structure of the two-variable generative model should imply that both the class label and the continuous variable are free parameters that can be chosen to generate data, that is, the two are a priori independent. However, we show that this is the case only for certain data sets. We propose two objective functions for guiding the variables to be a priori independent, we use a novel training procedure to optimize the objectives, and we show experimentally that the objectives successfully produce the desired independence. We perform all of our experiments on the multi-labeled NORB dataset.

2e - SS: Forty Years of Multiple Hypothesis Tracking 1

Room: LR6
Chair: Stefano Coraluppi
13:10 Forty Years of Multiple Hypothesis Tracking - A Review of Key Developments
Multiple hypothesis tracking addresses difficult multiple-target tracking problems by making association decisions using multiple scans or frames of data. This paper reviews forty years of its development, including the original measurement-oriented approach of Reid, the track-oriented approach first formulated by Morefield, distributed processing, and recent graph-based approaches. It also discusses the relationship with random set approaches to tracking.
13:30 Seeing the Forest Through the Trees: Multiple-Hypothesis Tracking and Graph-Based Tracking Extensions
This paper addresses some concerns that have been expressed in the literature regarding the multiple-hypothesis tracking (MHT) paradigm for multi-target tracking (MTT). We clarify that MHT is a mathematically valid maximum a posteriori (MAP) estimation approach to the MTT problem. We identify some extensions to the MHT approach that have emerged over the years, and discuss the graph-based tracking (GBT) simplification that achieves significant computational reduction by introducing path-independence approximations. We provide some suggestions for future MTT performance evaluation efforts and conclude by indicating some current research directions.
13:50 Radar Resource Management for Multiple Hypothesis Tracking
In traditional radar-based Multiple Hypothesis Tracking (MHT), objects are revisited on the basis of a scan of the Area of Interest, or object-by-object with focused beams directed to the estimated position of each object. Both schemes aim to obtain a track picture, which represents the tracks in the scene in terms of their positions and uncertainties. This paper introduces three radar resource management algorithms with different philosophies but based on the common premise that densely packed objects should receive a larger amount of energy than isolated objects. In Algorithm 1, we only direct energy to objects with gate intersection, i.e. objects for which a measurement could gate with more than one track. The purpose is to resolve the potential conflicts between measurement-to-track assignments that would occur with gate intersections; this results in a reduction of the update rate of the remaining tracks. In Algorithm 2, the objective is to consolidate the track picture as a collection of tracks, including those under temporary confusion. To do so, we assign different times-on-object (TOO) to different tracks to maximize an upper bound on the entropy of the track picture, viewed as a normalized intensity function. Algorithm 3 combines Algorithms 1 and 2. Performance of the three algorithms is evaluated using track purity, the RMS (root-mean-squared) total track error, and the Mean Optimized Subpattern Assignment metric for tracks (T-MOSPA). Computer simulations indicate the superiority of Algorithm 3 over the other two and the baseline algorithm. Quantitatively, Algorithm 3 yields a 25.3% decrease in RMS tracking error and an 8.2% increase in track purity with respect to the baseline scheme, which uses constant TOOs and update rates for all tracks.
14:10 Track Initiation for Maritime Radar Tracking with and Without Prior Information
Reliable track initiation is an important component of a tracking system, especially when it is used as part of a more general collision avoidance (COLAV) system. Some tracking methods (e.g., IPDA) come with built-in track initiation capability, while other methods (e.g., JPDA) lack this capability, which is then typically handled by heuristic rules such as the M/N logic. Although Reid's multiple hypothesis tracking (MHT) is capable of track initiation, many implementations do not include track initiation in the MHT framework due to the increased complexity. While MHT is fundamentally Bayesian, the non-Bayesian sequential probability ratio test (SPRT) of Van Keuk is often used for track initiation. In this paper we derive a Bayesian SPRT for track initiation based on Reid's MHT. The approach is compared with the classical SPRT, both from a theoretical perspective and using simulations. Furthermore, the paper compares the two SPRT versions, the IPDA and the M/N logic in terms of SOC curves and track initiation time. The initiation methods are also tested on real radar data recorded during full-scale maritime COLAV experiments. The simulations and real-world data sets include scenarios with highly non-stationary clutter.
14:30 An MHT Approach to Multi-Sensor Passive Sonar Tracking
This paper proposes a distributed MHT approach to passive sonar tracking for a field of fixed and moving sensors. It includes single-sensor narrowband and broadband measurement-space tracking at each sensor, followed by 2D Cartesian multi-sensor fusion. Tracking and fusion are performed in real time, with a small delay due to the MHT processing within each component of the architecture. Specific innovations include the use of statistically consistent measurement-space target statistics, unbiased solution cross-fixing for Cartesian initialization, robust distributed MHT track scoring and management, and temporal uncertainty estimation to support targeting decisions.

2f - Applications of Information Fusion

Room: LR11
Chair: Murat Efe
13:10 Probabilistic Fusion Framework for Collaborative Robots 3D Mapping
Fusing local 3D maps generated by individual robots into a globally consistent 3D map is one of the fundamental challenges in multi-robot mapping missions. In this paper, we propose a probabilistic mathematical formulation to address the integrated map fusion problem. More specifically, the fused map posterior can be factorized into a product of the relative transformation posterior and the global map posterior, which enables us to solve the map matching and map merging problems efficiently. In addition, a distributed communication strategy is employed to share map information among robots. The proposed approach is evaluated in indoor and mixed environments, which shows its utility in 3D map fusion for multi-robot mapping missions.
13:30 A Do It Yourself Mobile Communications Signal Based Passive Radar
Passive radar systems have received renewed interest in the last decade or so due to their numerous advantages and to technological advancements that have rendered receiver design a rather easy task. This paper aims to give the recipe for a repeatable, low-cost, UMTS-based passive radar system composed entirely of commercial off-the-shelf (COTS) equipment, and to demonstrate its use for detecting ground targets in the field. This is not the first attempt to develop a passive radar system based on mobile signals, nor the first to employ COTS equipment; however, unlike previous reports, this paper aspires to give a step-by-step description of the hardware and algorithm design of the radar system that would enable other researchers to build their own, which gives it great practical value.
13:50 Real-Time Creation of a Target Situation Picture with the HENSOLDT Passive Radar System
HENSOLDT has developed a multi-band passive radar system which uses analog and digital audio and/or video broadcasting stations to create an air situation picture in real time. This paper presents the underlying principles and building blocks of the tracking and data fusion software within the HENSOLDT Passive Radar System. Tracking results demonstrate both overall surveillance performance and the capability to track maneuvering targets.
14:10 On Defense Strategies for Recursive System of Systems Using Aggregated Correlations
We consider a class of Recursive System of Systems (RSoS), wherein systems are recursively defined and the basic systems at finest level are composed of discrete cyber and physical components. This formulation captures the models of systems that are adaptively refined to account for their varied structure, such as sites of a heterogeneous distributed computing infrastructure. The components can be disrupted by cyber or physical means, and can also be suitably reinforced to survive the attacks. We characterize the disruptions at each level of recursion using aggregate failure correlation functions that specify the conditional failure probability of RSoS given the failure of an individual system at that level. At finest levels, the survival probabilities of basic systems satisfy simple product-form, first-order differential conditions, which generalize conditions based on contest success functions and statistical independence of component survival probabilities. We formulate the problem of ensuring the performance of RSoS as a game between an attacker and a provider, each with a utility function composed of a survival probability term and a cost term, both expressed in terms of the number of basic system components attacked and reinforced. We derive sensitivity functions at Nash Equilibrium that highlight the dependence of survival probabilities of systems on cost terms, correlation functions, and their partial derivatives. We apply these results to a simplified model of distributed high-performance computing infrastructures.
14:30 Subjective Logic Based Score Level Fusion: Combining Faces and Fingerprints
Biometric systems are prone to random and systematic errors, typically attributed to inter-session data-capture variations and intra-session variability. These errors often cannot be defined and modeled mathematically, but we can associate them with uncertainty under certain conditions. In such cases, one possible approach to improving biometric system performance is to employ multi-biometric fusion that incorporates the uncertainties. In the literature, researchers have proposed many fusion techniques, but most do not take uncertainty into account while performing fusion. Since the decisions made by uni-modal biometric comparators do not account for the uncertainty involved, it is essential to model that uncertainty before efficiently combining the decisions from multiple uni-modal biometric systems. To this end, we propose a score-level multi-biometric fusion scheme using Subjective Logic, which incorporates the uncertainty of the system's information channels while fusing the scores. Extensive experiments are carried out on the multi-biometric NIST BSSR1 database, and the proposed scheme shows superior performance with a genuine match rate of 99.02% at a false match rate fixed at 0.01%.
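As a minimal illustration of the Subjective Logic machinery involved, here is the standard cumulative fusion operator for binomial opinions (belief, disbelief, uncertainty); the paper's actual mapping from comparator scores to opinions is not reproduced here:

```python
def cumulative_fuse(o1, o2):
    """Cumulative fusion of two binomial opinions (b, d, u) with b + d + u = 1.
    Evidence from independent channels is combined, so the fused uncertainty
    shrinks as each source contributes evidence."""
    b1, d1, u1 = o1
    b2, d2, u2 = o2
    k = u1 + u2 - u1 * u2
    if k == 0.0:                 # both sources dogmatic (u = 0): average them
        return ((b1 + b2) / 2.0, (d1 + d2) / 2.0, 0.0)
    return ((b1 * u2 + b2 * u1) / k,
            (d1 * u2 + d2 * u1) / k,
            (u1 * u2) / k)
```

Fusing a face-channel opinion with a fingerprint-channel opinion in this way yields a combined opinion whose uncertainty is lower than either channel's alone.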

2g - Sonar, Radar, Video Tracking

Room: LR12
Chair: Norikazu Ikoma
13:10 Prioritizing Goals in Cognitive Sonar: Tracking Multiple Targets
We consider the challenge of prioritizing among a set of simultaneous goals in an intelligent active sonar system. The cognitive (or intelligent) sonar system we propose is one that is capable of making decisions and tuning parameters to best achieve a set of possibly competing goals. Integrating intelligence in the sonar system reduces the burden on system operators and has the potential to greatly improve how quickly and how well systems can meet surveillance and tracking goals. We propose applying the goal-driven autonomy (GDA) architecture to intelligent active sonar, and in this paper we focus on the issue of assigning priority to system goals when multiple goals are simultaneously active. The goals are to track each of three targets simultaneously, and the decision to be made is which sector of the surveillance space to illuminate at each ping interval. We propose a metric for determining priority that incorporates both uncertainty about target state and the level of risk posed by the target. Simulation results indicate that the proposed goal management takes actions that decrease uncertainty about higher risk targets and has the potential to prioritize goals based on intelligence gathered from multiple system sources.
13:30 Multiple Target Tracking in Automotive FCM Radar by Multi-Bernoulli Filter with Elimination of Other Targets
To protect vulnerable road users, such as pedestrians, it is important to realize multi-target tracking in complex scenes. Due to the low signal-to-noise ratio (SNR) of pedestrian targets, the track-before-detect (TBD) approach seems effective. However, with an actual radar sensor, observation interference between targets, especially between pedestrians and higher-SNR objects such as roadside objects, may occur and lead to incorrect tracking results. In this paper, we describe a Sequential Monte Carlo (SMC) multi-Bernoulli filter algorithm for TBD that eliminates other targets from the original observation of an automotive fast chirp modulation (FCM) radar and is suited for complex scenes. The approach is validated through the simulation of an urban road scene.
13:50 Comparing Visual Tracker Fusion on Thermal Image Sequences
Visual object tracking is a challenging task in computer vision, especially when there are no constraints on the scenario and the objects are arbitrary. The number of tracking algorithms is very large, and each has its own advantages and disadvantages. They typically show different behaviour, and their failures occur at different moments in a sequence. So far, no tracker can solve all scenarios robustly and accurately. One possible approach to this problem is to use a whole collection of tracking algorithms and fuse them. Various strategies exist for fusing tracking algorithms; in some, only the resulting outputs are fused, which means new algorithms can be integrated with little effort. This fusion can be called "high-level" because the tracking algorithms interact only through the last step of their procedure. Three fusion methods are investigated: weighted mean fusion, MAD fusion and attraction field fusion. To evaluate the three approaches, a collection of thermal image sequences has been investigated. These sequences show maritime scenarios with various objects such as ships and other vessels.
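Of the three strategies, weighted mean fusion is the simplest to sketch. The box format and the use of confidence scores as weights below are assumptions for illustration, not the paper's exact definition:

```python
def weighted_mean_fusion(boxes, scores):
    """High-level fusion of tracker outputs: each tracker reports a bounding
    box (x, y, w, h) and a confidence score; the fused box is the
    confidence-weighted mean of the individual boxes, coordinate by
    coordinate."""
    total = sum(scores)
    return tuple(
        sum(s * box[i] for s, box in zip(scores, boxes)) / total
        for i in range(4)
    )
```

Because only final outputs are combined, a new tracker can be added to the ensemble without touching the fusion step, which is exactly the "high-level" property the abstract describes.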
14:10 Robust Multiple Human Targets Tracking for Through-wall Imaging Radar
This paper deals with the problem of tracking multiple human targets hidden behind a wall using through-wall imaging radar (TWIR). We propose a robust tracking algorithm in the image domain that combines the mean-shift algorithm with a Kalman filter. Compared with the traditional mean-shift algorithm, the proposed algorithm performs better in multiple-human-target tracking, especially in the case of temporary loss of a target. Real data validate the robustness of the proposed algorithm.
14:30 Situation Awareness in Telecommunication Networks Using Topic Modeling
For an operator of wireless telecommunication networks to make timely interventions before minor faults escalate into issues that lead to substandard system performance, good situation awareness is of high importance. Due to the increasing complexity of such networks, as well as the explosion of traffic load, it has become necessary to aid human operators in reaching a good level of situation awareness through exploratory data analysis and information fusion techniques. However, understanding the results of such techniques is often cognitively challenging and time consuming. In this paper, we present how telecommunication operators can be aided in their data analysis and sense-making process through the use and visualization of topic modeling results. We show how topic modeling can be used to extract knowledge from base station counter readings, and we make design suggestions for how to visualize the analysis results for a telecommunication operator.

2h - SS: Big Data Fusion and Analytics

Room: JDB-Seminar Room
Chair: Subrata Das
13:10 A Sensor Fault-Resilient Framework for Predictive Emission Monitoring Systems
The acronym PEMS stands for Predictive Emission Monitoring Systems and designates software analyzers able to provide a reliable, real-time estimate of emission concentrations by means of a data-driven model using real process measurements as input data. The model is built from measured process values along with true emission values from a portable Continuous Emission Monitoring System (CEMS) collected during the data-collection period. Once on-line, PEMS performance in terms of emission prediction accuracy is strongly affected by the quality of the sensor input data. In order to ensure that the performance requirements imposed by the regulatory environmental agencies are met, a so-called Sensor Evaluation System (SES) must be included in the design of a new Robust PEMS (R-PEMS). The main goal of this paper is to introduce a technical solution capable of: i) detecting whether the sensor input data to the PEMS are faulty; ii) identifying which sensor is faulty; iii) whenever possible, substituting the faulty sensor input with a reconciled value, with the objective of recovering the pre-fault PEMS performance. Finally, we empirically verify the performance of the proposed SES using a real data set collected at an oil refinery.
13:30 Learning Capsules for Vehicle Logo Recognition
Vehicle logo recognition is an important part of vehicle identification in intelligent transportation systems. State-of-the-art vehicle logo recognition approaches use features learned automatically by convolutional neural networks (CNNs). However, CNNs do not perform well when images are rotated or very noisy. This paper proposes an image recognition framework with a capsule network. A capsule is a group of neurons whose length can represent the existence probability of an entity or part of an entity. The orientation of a capsule contains information about the instantiation parameters, such as position and orientation. Capsules are learned by a routing process, which is more effective than the pooling process in CNNs. This paper, for the first time, develops a capsule learning framework in the field of intelligent transportation systems. Testing on the largest publicly available vehicle logo dataset, the proposed framework gives a quick solution and achieves an accuracy of 99.87%. The learning capsules framework has been tested with different image changes such as rotation and occlusion. Image degradations including blurring and noise effects are also considered, and the proposed framework has proven superior to CNNs.
13:50 American Sign Language Posture Understanding with Deep Neural Networks
Sign language is a visually oriented, natural, non-verbal communication medium. Sharing similar linguistic properties with its respective spoken language, it consists of a set of gestures, postures and facial expressions. However, sign language is primarily a mode of communication among deaf people, and most other people do not know how to interpret it. It would therefore be constructive if sign postures could be translated artificially. This paper presents a capsule-based deep neural network approach for sign posture translation and validates it over an American Sign Language (ASL) fingerspelling dataset. The performance validation shows that the approach can successfully identify sign language postures with an accuracy of about 99%. Unlike previous neural network approaches, which mainly used transfer learning from pre-trained models, the developed capsule network architecture does not require training in advance. The framework includes a capsule network with adaptive pooling, which is the key to its high accuracy. The framework is not limited to sign language understanding; it also has scope for non-verbal communication in Human-Robot Interaction (HRI).
14:10 The Neighbor Course Distribution Method with Gaussian Mixture Models for AIS-based Vessel Trajectory Prediction
When operating an autonomous surface vessel (ASV) in a marine environment, it is vital that the vessel is equipped with a collision avoidance (COLAV) system. This system must be able to predict the trajectories of other vessels in order to avoid them. The increasingly available automatic identification system (AIS) data can be used for this task. In this paper, we present a data-driven approach to predict vessel positions 5-15 minutes into the future using AIS data. The predictions are given as Gaussian Mixture Models (GMMs); thus the predictions carry a measure of uncertainty and can handle multimodality. A nearest neighbor algorithm is applied to two different data structures. Tests to determine the accuracy and covariance consistency of both structures are performed on real data.
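As background to why GMM-valued predictions are useful, a mixture can always be collapsed into a single point estimate plus an uncertainty measure via its overall moments. The sketch below is illustrative only (it is not the paper's neighbor course distribution method, and all names are hypothetical):

```python
import numpy as np

def gmm_moments(weights, means, covs):
    """Collapse a Gaussian mixture into its overall mean and covariance.

    weights: (K,) mixture weights summing to 1
    means:   (K, d) component means
    covs:    (K, d, d) component covariances
    """
    weights = np.asarray(weights, dtype=float)
    means = np.asarray(means, dtype=float)
    covs = np.asarray(covs, dtype=float)
    mu = np.einsum("k,kd->d", weights, means)             # overall mean
    # law of total covariance: expected covariance + covariance of means
    spread = means - mu
    cov = (np.einsum("k,kij->ij", weights, covs)
           + np.einsum("k,ki,kj->ij", weights, spread, spread))
    return mu, cov

# two equally weighted position hypotheses (e.g. "keep course" vs "turn")
mu, cov = gmm_moments([0.5, 0.5],
                      [[0.0, 0.0], [4.0, 0.0]],
                      [np.eye(2), np.eye(2)])
```

Note how the bimodality inflates the overall covariance along the axis separating the two hypotheses, which is exactly the uncertainty information a single-point prediction would discard.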
14:30 A Parallel Platform for Fusion of Heterogeneous Stream Data
This paper presents C-Storm (Copula-based Storm), a novel parallel platform for the computationally complex problem of fusing heterogeneous data streams for inference. C-Storm is designed by marrying copula-based dependence modeling, for highly accurate inference, with Storm, a widely used parallel computing platform for fast stream data processing. C-Storm has the following desirable features: 1) it offers fast inference responses; 2) it provides high inference accuracy; 3) it is a general-purpose inference platform that can support various data fusion applications; 4) it is easy to use, and its users do not need to know the details of Storm or copula theory. We implemented C-Storm on Apache Storm 1.0.2 and conducted extensive experiments using a typical data fusion application. Experimental results show that C-Storm offers a significant 4.7x speedup over a commonly used sequential baseline, and that a higher degree of parallelism leads to better performance.

2i - Intent, Behaviour, Swarm Modelling

Room: JDB-Teaching Room
Chair: Bashar I. Ahmad
13:10 Learning Correlation Graph and Anomalous Employee Behavior for Insider Threat Detection
Insider attacks can result in significant costs to an organization. There is an urgent need for an automatic insider threat detector with good accuracy and low false alarms. In this work, we propose a graph-based insider threat detector that identifies potential insider attackers based not only on an employee's own anomalous behaviors but also on anomalies relative to other employees with similar job roles. A machine learning approach is developed to first infer the correlation graph among the organization's employees. Then, a graph signal processing method is designed to identify the potential insiders, with detection and false positive rates better than performing detection independently on each employee. Our approach demonstrates that the correlated behaviors of an organization's employees should be exploited for a better detection of suspicious behaviors.
13:30 Model-based Heterogeneous Optimal Space Constellation Design
Few tools exist for designing constellations of heterogeneous satellites. A new modular tool for total mission design of heterogeneous constellations, including spacecraft design, orbit selection, and launch manifestation, is proposed. The component modules and algorithms are discussed, including a novel crossover method for genetic algorithms and a novel constraint formulation for launch manifestation of maneuverable vehicles. Finally, the expandability of the tool to multiple domains and various applications is highlighted.
13:50 Entropy-Based Intention Change Detection with a Multi-Hypotheses Filter
In the future, pedestrians and fully automated vehicles will have to operate in a shared environment. To minimize the risk for pedestrians, it is very important to predict their future movement precisely. One important information source is the intention of the pedestrian. To integrate the intention information, a Multi-Hypotheses filter is used, in which different hypotheses for the intention of the pedestrian are considered. An intention change detector based on the Multi-Hypotheses filter and utilizing an entropy-based confidence score is developed. With this contribution, critical real-world situations, like a pedestrian crossing the street instead of following the sidewalk, are tackled. The evaluation of the intention change detector is performed in simulation and on real-world data. Firstly, the proposed approach is evaluated using simulated trajectory data, where trajectories with intention changes are generated by a self-made trajectory generator (open source). Secondly, the course of the confidence score is evaluated for a real-world scenario, where the detection of the pedestrians is performed by the combination of a deep learning network (Tiny YOLO) and background subtraction. It is shown that the mean distance to the sidewalk at the detection of the intention change is below 1.5 m, even in the case of high sensor noise. For lower sensor noise levels, the intention change of the pedestrian is even detected before entering the street. Key contributions are the proposal of the Multi-Hypotheses filter, the derivation of the confidence score, the proposal of the intention detector based on the confidence score, and the detection of pedestrians and other obstacles by the fusion of background subtraction and a deep learning network.
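The general idea of an entropy-based confidence score over a set of intention hypotheses can be sketched as follows. This is a minimal illustration under assumed conventions (normalized entropy, a fixed threshold), not the paper's derivation:

```python
import math

def confidence_score(hyp_probs):
    """Entropy-based confidence over intention hypotheses.

    Returns 1 when one hypothesis dominates (zero entropy) and 0 when
    the posterior over hypotheses is uniform (maximum entropy).
    """
    h = -sum(p * math.log(p) for p in hyp_probs if p > 0.0)
    h_max = math.log(len(hyp_probs))
    return 1.0 - h / h_max

def intention_change(prev_probs, probs, threshold=0.3):
    """Flag a change when confidence collapses below the threshold."""
    return confidence_score(probs) < threshold <= confidence_score(prev_probs)
```

During an intention change the hypothesis posterior typically spreads out, so a sudden drop of this score below a threshold is a natural trigger.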
14:10 A Meta-tracking Approach for Predicting the Driver or Passenger Intent
This paper proposes a Bayesian framework for estimating the probability of a driver or passenger(s) returning to the vehicle from the available partial (noisy) track of his/her location. The latter can be provided by a smartphone navigational service and/or another dedicated user-to-vehicle positioning solution, for instance RF-based. The introduced approach treats the addressed intent prediction problem (which is not tracking the object's state, e.g. the driver/passenger position and velocity, or predicting its next few values) within an object tracking formulation, leading to a Kalman-filter-based implementation of the inference routine. Hence, it is dubbed a meta-tracker, in lieu of a conventional "sensor-level" tracking algorithm, and relies on utilising bridging distributions to encapsulate the long-term dependencies in the trajectory followed by the driver or passenger as dictated by the intended endpoint, if any. Two example trajectories are shown to demonstrate the effectiveness of this flexible framework.
14:30 Prediction of Rendezvous in Maritime Situational Awareness
In this work, we consider the problem of algorithmically predicting rendezvous among vessels based on their trajectory forecasts in a maritime environment. The problem is treated as hypothesis testing on the expected value of the distance between trajectories. We relate this quantity to the first and second degree Wasserstein distances between trajectory forecast distributions. These distributions are obtained using integrated Ornstein-Uhlenbeck process models with the trajectory measurements collected so far. Building upon these results, we propose an algorithm which traverses the trajectories observed so far for detecting rendezvous over a rolling time horizon. We demonstrate the efficacy of the proposed algorithm using simulations.
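For intuition on the distance the rendezvous test builds on: between two univariate Gaussian forecasts the 2-Wasserstein distance has a simple closed form. The sketch below illustrates the idea in 1-D only (the paper works with integrated Ornstein-Uhlenbeck forecast distributions, and the decision rule here is a hypothetical simplification):

```python
import math

def w2_gaussian_1d(mu1, sigma1, mu2, sigma2):
    """2-Wasserstein distance between N(mu1, sigma1^2) and N(mu2, sigma2^2).

    In 1-D: W2^2 = (mu1 - mu2)^2 + (sigma1 - sigma2)^2.
    """
    return math.sqrt((mu1 - mu2) ** 2 + (sigma1 - sigma2) ** 2)

def rendezvous_possible(mu1, s1, mu2, s2, threshold):
    """Crude check: a small forecast distance suggests a possible rendezvous."""
    return w2_gaussian_1d(mu1, s1, mu2, s2) < threshold
```

In the actual algorithm this kind of distance would be evaluated over a rolling horizon of forecast distributions rather than at a single time instant.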

Wednesday, July 11 14:50 - 15:20

Refreshment Break

Wednesday, July 11 15:20 - 17:00

3a - Dempster-Shafer Theory

Room: LR0
Chair: Jean Dezert
15:20 Extending Deng Entropy to the Open World in the Evidence Theory
Dempster-Shafer evidence theory (DST) is widely used in intelligent information processing, especially for information fusion. Recently, measuring the information volume in the framework of DST has drawn a lot of attention. Many theories and tools have been proposed to model the uncertain degree in DST, including Deng entropy. However, Deng entropy and the other uncertainty measures in DST pay no attention to the uncertainty in the frame of discernment (FOD) in the open world, which motivates this paper. To address this issue, Deng entropy is extended to the open world in the DST framework. With the extended Deng entropy (EDE) in the open world, the uncertain information represented by the FOD and the mass function of the empty set can now be properly modelled while measuring the uncertain degree in DST. EDE can be regarded as a generalization of Deng entropy to the open world, and it degenerates to Deng entropy in the closed world if the mass value of the empty set is zero. A few numerical examples are presented to verify the applicability and usefulness of the new measure.
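For reference, the closed-world Deng entropy that the paper generalizes can be computed directly from a mass function. The sketch below implements only that closed-world baseline (the open-world EDE itself is not reproduced here):

```python
import math

def deng_entropy(masses):
    """Closed-world Deng entropy of a mass function.

    masses: dict mapping focal elements (frozensets) to mass values.
    E_d(m) = -sum over focal A of m(A) * log2( m(A) / (2^|A| - 1) )
    """
    e = 0.0
    for focal, m in masses.items():
        if m > 0.0:
            e -= m * math.log2(m / (2 ** len(focal) - 1))
    return e

# on singleton focal elements Deng entropy reduces to Shannon entropy
m = {frozenset("a"): 0.5, frozenset("b"): 0.5}   # 1 bit
```

The `2^|A| - 1` term is what distinguishes Deng entropy from Shannon entropy: mass on larger focal elements contributes more uncertainty.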
15:40 Provenance Across Evidence Combination in Theory of Belief Functions
The theory of belief functions (Dempster-Shafer theory) is one of the most commonly used mathematical frameworks in the field of uncertain information representation. Two important areas of research in its context are evidence combination and decision making. Although they are often considered theoretically separate, the combination process itself drives the final decision. The information contained in each of the sources propagates through the belief aggregation process and impacts the final assessment to some degree. If the decision made has an impact on the real world, particularly in scenarios where a wrong decision may bring about significant risks, it is prudent to be able to identify the key drivers of this assessment. In this paper we present a novel method of identifying the relative contribution of each source of evidence to the final belief, thus making it possible to track provenance across the information fusion process. This is likely to be useful in intelligent decision-support systems utilising belief functions, where lack of transparency may be hindering adoption. Unlike traditional methods, which focus only on the content of the contributing source, the approach proposed here is based on analysis of dissimilarity between the contributing sources, the result of the fusion process, and the decision made. The behaviour of this metric is analysed through simulation. It is shown that the proposed measure performs well with regard to identifying the source having the most significant impact on the decision, often outperforming more traditional metrics.
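To make the setting concrete, the sketch below combines mass functions with Dempster's rule and measures a source's contribution by how much the fused belief moves when that source is left out. This leave-one-out proxy is purely illustrative; it is not the dissimilarity metric proposed in the paper:

```python
from functools import reduce
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions over the same frame with Dempster's rule.

    m1, m2: dicts mapping focal elements (frozensets) to masses summing to 1.
    """
    raw = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            raw[inter] = raw.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb            # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting sources")
    return {a: m / (1.0 - conflict) for a, m in raw.items()}

def impact_of_source(sources, i):
    """Leave-one-out proxy for source i's contribution to the fused belief."""
    full = reduce(dempster_combine, sources)
    rest = reduce(dempster_combine, sources[:i] + sources[i + 1:])
    keys = set(full) | set(rest)
    return sum(abs(full.get(k, 0.0) - rest.get(k, 0.0)) for k in keys)
```

A vacuous source (all mass on the whole frame) leaves the fused belief unchanged, so its leave-one-out impact is zero, while an informative source has a strictly positive impact.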
16:00 Efficient Computing of Dempster-Shafer Theoretic Conditionals for Big Hard/Soft Data Fusion
While hard sensor fusion is a highly developed discipline with a vast range of methods, the inclusion of soft evidence continues to gain significant interest because including soft sensors in the fusion process has certain merits, such as the ability to model attributes of interest (e.g., emotional level) that hard sensors cannot. However, combining hard/soft sensor data efficiently is a challenging problem, especially when the data set becomes large. In this study, a novel algorithm is proposed to apply the Conditional Core Theorem (CCT) in computing the Fagin-Halpern conditionals in the fusion of bodies of evidence with disparate frames of discernment. The computational complexity of the proposed algorithm is derived analytically, and simulations are carried out to demonstrate its efficiency.
16:20 Magnitude Design of FIR Evidence Filters with Prescribed Transition Rolloff Using Bisection and ICMEE
Dempster-Shafer (DS) belief-based evidence filtering is a promising method to infer "frequency" characteristics of various events of interest from temporally and spatially distributed sensor data. Evidence filters have found a wide range of applications in military surveillance, environment monitoring, homeland security, etc. Designing evidence filters is challenging because of the nonnegativity constraint on the impulse response. A new algorithm is proposed in this paper for the magnitude design of evidence filters with prescribed transition rolloff. It uses a bisection method to convert the design problem into a series of constrained minimax magnitude error subproblems, which are then solved using an iterative constrained minimax elliptic error (ICMEE) method. Simulation examples demonstrate the effectiveness of the proposed design method for evidence filters.
16:40 Uncertain Pattern Classification Based on Evidence Fusion in Different Domains
Pattern classification with few labeled instances is a challenging problem. Transfer learning provides an efficient solution for improving classification accuracy using training knowledge from a related domain (called the source domain). Nevertheless, a single transformation in one direction may be uncertain in some cases, and this is harmful for classification. We therefore propose a new classification method based on the fusion of data transformations in different directions between the source domain and the target domain. At first, the mapping of the target into the source domain is estimated by the K-nearest neighbor technique using some one-to-one instance pairs, and the estimated mapped instance (pattern) can be classified in the source domain according to the available training data. Then, the credibility of the classification result is evaluated. If the credibility achieves the expected threshold, the classification result is directly output. Otherwise, it indicates that the transformation may not be very reliable, and the labeled instances in the source domain are transferred to the target domain for the classification of the target. The two versions of classification results are fused with different weights based on evidential reasoning, and the weighting factors are optimized using the available training instances. By doing this, we can efficiently reduce the uncertainty of transformation and improve the classification accuracy. Some real data sets from UCI have been employed to validate the effectiveness of the proposed method by comparing it with other related methods.

3b - SS: Evaluation of Techniques for Uncertainty Representation

Room: LR1
Chair: Paulo C.G. Costa
15:20 Application of URREF Criteria to Assess Knowledge Representation in Cyber Threat Models
Systems for threat analysis enable users to understand the nature and behavior of threats and to undertake a deeper analysis for detailed exploration of threat profiles and risk estimation. Models for threat analysis require significant resources to be developed and are often relevant only to limited application tasks. This paper investigates the implicit and explicit uncertainty assessments to be taken into account for threat analysis systems to be effective in providing a relevant threat characterization. The intent of this paper is twofold. The first is to present and discuss an approach to define a model for cyber threats within a simplified expert model and to translate it into a Bayesian network as a tool for the development of practical scenarios for cyber threat analysis. The second is to address the question of assessing the built Bayesian network and its intrinsic knowledge representation model, and to show how modeling decisions impact the outcome of the system. The paper describes the construction of an expert model and the corresponding BN to analyze cyber threats, investigates various types of induced uncertainty with the URREF criteria of simplicity and expressiveness, and implements an assessment procedure to evaluate the overall approach.
15:40 Experimental Comparison of Ad Hoc Methods for Classification of Maritime Vessels Based on Real-life AIS Data
Classification of maritime vessels is a recurrent task in maritime surveillance systems. Classification can be conducted by different methods, e.g., Naïve Bayes, k Nearest Neighbor, Decision Tree, Fuzzy Rule, or Neural Networks. The Automatic Identification System (AIS) is a cooperative system for VHF-radio data exchange. By broadcasting navigational data and ships' information, it supports maritime safety, surveillance, and information. Based on measured AIS datasets from five maritime hotspots, easily implementable standard classification (i.e., ad hoc) methods from data science are compared to each other and evaluated in terms of accuracy. This experimental evaluation is motivated by the following question: Up to which degree can properties and behavior, e.g., a vessel's type, be detected by using large quantities of ships' positional, motion, and dimension data as provided by AIS? Future applications might include detection of fraudulent self-declarations of types, e.g., during illegal fishing activities.
16:00 Towards the Rational Development and Evaluation of Complex Fusion Systems: a URREF-Driven Approach
The choices of the uncertainty representations and reasoning methods have a critical impact on the development and deployment life cycles of modern fusion solutions. They influence the development effort, the quality of the resulting solutions as well as the deployment costs. This paper shows how the URREF concepts enable a systematic and rational development and evaluation of complex fusion systems. In particular, the paper proposes a URREF-driven approach to the development and deployment of composite fusion systems. In this approach the URREF criteria play a critical role throughout the development life cycle as they facilitate informed design choices and enable systematic and tractable evaluation of the resulting systems. The paper establishes relations between the URREF criteria and evaluation subjects in the context of a development life cycle. The concepts are illustrated with the help of a high-level fusion approach supporting estimation of the whereabouts of wildlife poachers.
16:20 Latent Variable Bayesian Networks Constructed Using Structural Equation Modelling
Bayesian networks in fusion systems often contain latent variables. They play an important role in fusion systems as they provide context, which leads to better choices of data sources to fuse. Latent variables in Bayesian networks are mostly constructed by means of expert knowledge modelling. We propose using theory-driven structural equation modelling (SEM) to identify and structure latent variables in a Bayesian network. The linking of SEM and Bayesian networks is motivated by the fact that both methods can be shown to be causal models. We compare this approach to a data-driven approach where latent factors are induced by means of unsupervised learning. We identify appropriate metrics for URREF ontology criteria for both approaches.
16:40 Information and Source Quality Ontology in Support to Maritime Situational Awareness
To support situation awareness, the benefit of a variety of sources is beyond doubt, although it brings additional challenges related to heterogeneity in data format, semantics, and uncertainty type, for instance, as well as challenges related to possibly conflicting information. Information quality and source quality are intertwined concepts whose assessments connect with the evaluation of uncertainty handling in information fusion solutions. While the Uncertainty Representation and Reasoning Evaluation Framework (URREF) ontology focuses on assessment criteria, peripheral concepts still play a critical role in the assessment. In this paper, we propose an Information and Source Quality (ISQ) ontology formalising the relationships between information-related concepts, and discuss information interpretation in support of Maritime Situation Awareness. Specifically, this paper links the concepts of Information Source, Dataset and Piece of Information, and connects them to the corresponding quality concepts. Such concepts link to the upper-level URREF ontology concepts of Source (of information) and data Quality. The ontology further expands to uncertainty modelling and algorithm design. We conclude on future work and identify avenues leveraging this work, especially the extension to the formalisation of the evaluation process.

3c - SS: Directional Estimation

Room: LR2
Chair: Kailai Li
15:20 Directional Statistics with the Spherical Normal Distribution
A well-known problem in directional statistics --- the study of data distributed on the unit sphere --- is that current models disregard the curvature of the underlying sample space. This ensures computational efficiency, but can influence results. To investigate this, we develop efficient inference techniques for data distributed by the curvature-aware spherical normal distribution. We derive closed-form expressions for the normalization constant when the distribution is isotropic, and a fast and accurate approximation for the anisotropic case on the two-sphere. We further develop approximate posterior inference techniques for the mean and concentration of the distribution, and propose a fast sampling algorithm for simulating the distribution. Combined, this provides the tools needed for practical inference on the unit sphere in a manner that respects the curvature of the underlying sample space.
15:40 Nonlinear Progressive Filtering for SE(2) Estimation
In this paper, we present a novel nonlinear progressive filtering approach for estimating SE(2) states represented by unit dual quaternions. Unlike previously published approaches, the measurement model no longer needs to be assumed as identity. Our solution utilizes deterministic sampling on a Bingham-like probability distribution, which has been adapted to simultaneously model orientation and translation. During the measurement update step, the estimate gets progressively updated. Our approach inherently incorporates the nonlinear structure of SE(2) and enables a flexible measurement update step. We also give an evaluation for planar rigid body motion estimation with a case study that is close to real-world scenarios.
16:00 Numerical Calculation of the Fisher-Bingham Integral by the Holonomic Gradient Method
The Fisher-Bingham integral is the normalizing constant of a basic distribution in directional statistics. We describe the numerical calculation of the Fisher-Bingham integral utilizing the holonomic gradient method, which is an approach to evaluate multiple integrals.
16:20 A New Estimation Methodology for Standard Directional Distributions
One of the major stumbling blocks in the use of standard directional distributions is the difficulty of implementing the well-established maximum likelihood estimation method. The root of the problem is that their normalizing constants are complicated. To circumvent this problem, sometimes a member from a non-exponential family is used and then a relevant member from the exponential family is matched. These alternative distributions have simple moment estimators, though the matching can be somewhat arbitrary. Firstly, we resolve the estimation problem by introducing score matching estimators for directional distributions. Secondly, we resolve the matching problem optimally by introducing the score matching approximation. We give some insight into these proposals through the basic directional distributions, though the results are optimal and universally applicable for distributions on compact manifolds.
16:40 From Wirtinger to Fisher Information Inequalities on Spheres and Rotation Groups
The concepts of Fisher Information matrix and covariance are generalized to the setting of probability densities on spheres and rotation groups, and inequalities relating these quantities are derived. Probability density functions on these spaces arise in various scenarios in the fields of structural biology, robotics, and computer vision. The approach taken is to first derive matrix generalizations of Wirtinger's inequality for tori and spheres and generalize these to rotation groups. Then new inequalities are derived that relate the covariances of probability density functions on spheres and rotation groups with their Fisher information. These inequalities are different than the Cramer-Rao bound, and can be used to estimate the rate of increase of the entropy of a diffusion process.

3d - Machine Learning

Room: LR5
Chair: David W Krout
15:20 Ship Classification Using Deep Learning Techniques
In the last five years, the state-of-the-art in computer vision has improved greatly thanks to an increased use of deep convolutional neural networks (CNNs), advances in graphical processing unit (GPU) acceleration and the availability of large labelled datasets such as ImageNet. Obtaining datasets as comprehensively labelled as ImageNet for ship classification remains a challenge. As a result, we experiment with pre-trained CNNs based on the Inception and ResNet architectures to perform ship classification. Instead of training a CNN using random parameter initialization, we use transfer learning. We fine-tune pre-trained CNNs to perform maritime vessel image classification on a limited ship image dataset. We achieve a significant improvement in classification accuracy compared to the previous state-of-the-art results for the Maritime Vessel (Marvel) dataset.
15:40 Identifying Agile Waveforms with Neural Networks
With the advent of widespread digital technology, modern radar and communication systems have grown more complex and agile, rendering them difficult to adequately document and identify. The traditional solution of comparing incoming signals to a library of known waveforms is therefore becoming unworkable. The authors present two solutions to the problem of modern radio frequency waveform identification: a deep neural network and a recurrent neural network using GRUs. Both networks are designed to fuse together an arbitrary number of agile RF pulses and identify the emitter that produced them. Compared to a naive DNN approach that simply averaged together pulses before classification, our solution adds a pre-projection step, which preserves information about sequential agility, even after averaging across pulses. After being trained against a set of 15 highly ambiguous emitters, the naive DNN identified 52.2% of test waveforms, our DNN with projection identified 72.3%, and our RNN solution identified 84.8%.
16:00 Machine Learning: Defining Worldwide Cyclone Labels for Training
A model that labels both tropical and extratropical cyclones was developed based on a set of strict heuristics, for the purpose of creating a worldwide labeled cyclone dataset. The heuristics are defined from time, pressure, vorticity, and gradient thresholds and do not have a terrain cut-off. The dataset provides a cyclone center and a Region of Interest (ROI) for the area of the cyclone at a given time stamp; this is then applied to GOES-15 water vapor imagery to serve as a training dataset for deep learning network applications.
16:20 A Multi-User Multi-Task Model for Stress Monitoring from Wearable Sensors
This study presents results on stress monitoring based on peripheral physiological signals acquired from wearable sensors. Using machine learning methods on data collected during a laboratory experiment with 6 subjects exposed to 3 kinds of well-known stress tasks (TSST, MST and SECPT), we evaluate the influence of inter-subject and inter-task variability on stress/no-stress classification. The relevant set of features and the type of classifier (chosen among SVM, Decision Tree, and Naïve Bayes) are first selected using Leave-One-Subject-Out (LOSO) cross validation. The model learnt on the 3 tasks reaches an 85% correct classification rate over the validation set, using an SVM classifier and, as features, the mean of the heart rate and the sum of all skin response width-prominence products. After that, results obtained using several partitionings of the learning/validation sets are compared (e.g., learning on one task and all subjects, and validating on the other tasks) in order to evaluate the subject and task influence. We observe that our model is robust to inter-subject and inter-task variability, with small differences for SECPT, which suggests a slightly different physiological reaction to physical stressors.
16:40 PDAFAI with a Neural Network Acoustic Emulator
Recent developments in deep learning have revitalized previous work on emulating acoustic propagation using neural networks. This paper presents results from incorporating a neural network (NN) acoustic emulator into a PDAFAI tracking algorithm. The NN is used to improve SNR estimates for contacts in a multiple target tracking algorithm; due to the computational efficiency of an NN, this can happen in real time. This paper presents comparative results on real data using a probabilistic data association (PDA) algorithm and PDA with amplitude information (PDAFAI).

3e - SS: Advances in Motion Estimation using Inertial Sensors

Room: LR6
Chair: Manon Kok
15:20 Robust Gyroscope-Aided Camera Self-Calibration
Camera calibration for estimating the intrinsic parameters and lens distortion is a prerequisite for various monocular vision applications including feature tracking and video stabilization. This application paper proposes a model for estimating the parameters on the fly by fusing gyroscope and camera data, both readily available in modern day smartphones. The model is based on joint estimation of visual feature positions, camera parameters, and the camera pose, the movement of which is assumed to follow the movement predicted by the gyroscope. Our model assumes the camera movement to be free, but continuous and differentiable, and individual features are assumed to stay stationary. The estimation is performed online using an extended Kalman filter, and it is shown to outperform existing methods in robustness and insensitivity to initialization. We demonstrate the method using simulated data and empirical data from an iPad.
15:40 Motion Artifact Reduction in Ambulatory Electrocardiography Using Inertial Measurement Units and Kalman Filtering
Electrocardiography (ECG) using lightweight and inexpensive ambulatory ECG devices makes it possible to monitor patients during their daily activities and can give important insight into arrhythmias and other cardiac diseases. However, everyday activities cause several kinds of motion artifacts which deteriorate the ECG quality and thus complicate both automated and manual ECG analysis. In this paper, we discuss some of the challenges associated with long-term ambulatory ECG and propose a baseline wander compensation algorithm based on inertial measurement units (IMUs) attached to each ECG electrode. The IMUs are used to estimate the local electrode motion, which in turn is used as the reference signal for baseline wander reduction. We evaluate the proposed algorithm on data gathered in clinical trials and show that the baseline wander is successfully removed without compromising the ECG's morphology.
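The reference-signal idea can be illustrated with a simple adaptive canceller. The paper's algorithm is Kalman-filter based; the LMS filter below is a deliberately simplified stand-in that subtracts the motion-correlated component of the ECG using an IMU-derived reference:

```python
import numpy as np

def lms_cancel(ecg, motion_ref, mu=0.01, order=4):
    """Remove a motion-correlated artifact from an ECG channel via LMS.

    ecg:        noisy ECG samples (1-D array)
    motion_ref: motion reference from the electrode's IMU, same length
    mu:         LMS step size
    order:      number of adaptive filter taps
    Returns the cleaned signal (ECG minus the estimated artifact).
    """
    ecg = np.asarray(ecg, dtype=float)
    ref = np.asarray(motion_ref, dtype=float)
    w = np.zeros(order)
    out = np.zeros_like(ecg)
    for n in range(len(ecg)):
        x = ref[max(0, n - order + 1):n + 1][::-1]   # recent reference taps
        x = np.pad(x, (0, order - len(x)))
        artifact = w @ x                             # predicted wander
        out[n] = ecg[n] - artifact                   # cleaned sample
        w += 2 * mu * out[n] * x                     # LMS weight update
    return out
```

Because the QRS complexes are largely uncorrelated with electrode motion, only the motion-correlated baseline component is attenuated; a Kalman formulation would additionally exploit a model of the wander dynamics.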
16:00 Inertial Sensor Array Processing with Motion Models
By arranging a large number of inertial sensors in an array and fusing their measurements, it is possible to create inertial sensor assemblies with a high performance-to-price ratio. Recently, a maximum likelihood estimator for fusing inertial array measurements collected at a given sampling instance was developed. In this paper, the maximum likelihood estimator is extended by introducing a motion model and deriving a maximum a posteriori estimator that jointly estimates the array dynamics at multiple sampling instances. Simulation examples are used to demonstrate that the proposed sensor fusion method has the potential to yield significant improvements in estimation accuracy. Further, by including the motion model, we resolve the sign ambiguity of gyro-free implementations, thereby opening the way for implementations based on accelerometer-only arrays.
16:20 Magnetic Odometry - A Model-Based Approach Using A Sensor Array
A model-based method to perform odometry using an array of magnetometers that sense variations in a local magnetic field is presented. The method requires no prior knowledge of the magnetic field, nor does it compile any map of it. Assuming that the local variations in the magnetic field can be described by a curl- and divergence-free polynomial model, a maximum likelihood estimator is derived. To gain insight into the array design criteria and the achievable estimation performance, the identifiability conditions of the estimation problem are analyzed and the Cramér-Rao bound for the one-dimensional case is derived. The analysis shows that with a second-order model it is sufficient to have six magnetometer triads in a plane to obtain local identifiability. Further, the Cramér-Rao bound shows that the estimation error is inversely proportional to the ratio between the rate of change of the magnetic field and the noise variance, as well as to the length scale of the array. The performance of the proposed estimator is evaluated using real-world data. The results show that, when there are sufficient variations in the magnetic field, the estimation error is of the order of a few percent of the displacement. The method also outperforms the current state-of-the-art method for magnetic odometry.
16:40 A Method for Lower Back Motion Assessment Using Wearable 6D Inertial Sensors
Inertial sensors have become widely used for motion tracking of the upper and lower limbs. We propose a method that facilitates clinical assessment of lower back motions by means of a wireless inertial sensor network. The sensor units are attached to the right and left side of the lumbar region, the pelvis and the thighs, respectively. Since magnetometers are known to be unreliable in indoor environments, we use only 3D accelerometer and 3D gyroscope readings. Compensation of integration drift in the horizontal plane is achieved by estimating the gyroscope biases from initial rest phases, for which we present an algorithm that carefully selects samples representing the rest phase. For the estimation of sensor orientations, both a smoothing algorithm and a filtering algorithm are presented. From these orientations, we determine three-dimensional joint angles between the thighs and the pelvis and between the pelvis and the lumbar region. We compare the orientations and joint angles to measurements of an optical motion tracking system that tracks each skin-mounted sensor by means of reflective markers. Eight subjects perform a neutral initial pose, then flexion/extension, lateral flexion, and rotation of the trunk. The root mean square deviation between inertial and optical angles is about one degree for angles in the frontal and sagittal plane and about two degrees for angles in the transverse plane (both values averaged over all trials). We choose five features that characterize the initial pose and the three motions. Interindividual differences of all features are found to be clearly larger than the observed measurement deviations. These results indicate that the proposed inertial sensor-based method is a promising tool for lower back motion assessment.
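The rest-phase bias estimation step can be illustrated with a simple variance-gated selector: windows whose gyroscope variance stays below a threshold are treated as rest and averaged. The paper's selection algorithm is more careful; the threshold, window length, and synthetic signals here are illustrative assumptions.

```python
import numpy as np

def estimate_bias(gyro, win=50, var_thresh=1e-4):
    """Average 3-axis gyroscope samples over low-variance windows,
    taken as an estimate of the constant gyroscope bias."""
    rest = []
    for i in range(0, len(gyro) - win, win):
        seg = gyro[i:i + win]
        if seg.var(axis=0).max() < var_thresh:   # all axes quiet
            rest.append(seg)
    if not rest:
        raise ValueError("no rest phase found")
    return np.concatenate(rest).mean(axis=0)

rng = np.random.default_rng(3)
bias = np.array([0.02, -0.01, 0.005])                 # rad/s, hypothetical
rest_part = bias + rng.normal(0, 0.002, (400, 3))     # stationary phase
moving = bias + rng.normal(0, 0.5, (400, 3))          # motion phase
b_hat = estimate_bias(np.vstack([rest_part, moving]))
# b_hat recovers `bias` from the stationary samples only
```

Subtracting such a bias before integration is what limits the heading drift in the horizontal plane when magnetometers are excluded.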

3f - SS: AI Enabled Fusion for Federated Environments

Room: LR11
Chair: Dinesh Verma
15:20 A Policy System for Control of Data Fusion Processes and Derived Data
The paper proposes an attribute-based policy system for a coalition setting in which multiple parties provide data to be used in data fusion processes while at the same time retaining control of how their own data are used in these processes. The framework consists of three main types of policies: (a) access control policies - these allow one to specify controls on the fusion process (e.g., which user can use which data fusion tool) and on the input data to the fusion process; (b) fusion policies - whether data needs to be pre-processed before being used (for example, whether data must be anonymized before being used, or encrypted and thus fusions must be performed on encrypted data); and, (c) derived data usage policies - these allow one to specify who is authorized to access the data resulting from the fusion. As all these policies are attribute-based policies, they support high-level, flexible and expressive policy specifications. The paper also briefly discusses technologies for supporting policy enforcement and novel approaches supporting the automatic generation of policies.
15:40 Distributed Machine Learning in Coalition Environments: Overview of Techniques
Many modern applications generate a significant amount of data in dispersed geographical areas. To analyze and make use of the data, data fusion and machine learning techniques are usually applied. These algorithms traditionally run in data center environments where all the data are available at a central location. It is challenging to run them in distributed coalition environments, where it is impractical to send all the raw data to a single place due to bandwidth and security constraints. This problem has gained notable attention recently. In this paper, we provide an overview of available techniques and recent results for performing data fusion and machine learning in a distributed coalition environment without sharing the raw data among local processing nodes. We discuss techniques for distributed model training and scoring, and outline some applications where these techniques are applicable and beneficial.
16:00 Learning and Reasoning in Complex Coalition Information Environments: a Critical Analysis
In this paper we provide a critical analysis with metrics that will inform guidelines for designing distributed systems for Collective Situational Understanding (CSU). CSU requires both collective insight, i.e., accurate and deep understanding of a situation derived from uncertain and often sparse data, and collective foresight, i.e., the ability to predict what will happen in the future. When it comes to complex scenarios, the need for distributed CSU naturally emerges, as a single monolithic approach is not only infeasible but also undesirable. We therefore propose a principled, critical analysis of AI techniques that can support specific tasks for CSU, in order to derive guidelines for designing distributed systems for CSU.
16:20 Security Issues for Distributed Fusion in Coalition Environments
When sensor fusion operations are conducted in coalition environments, security of the data and infrastructure used for model fusion is very important. AI-enabled sensor fusion infrastructure can be attacked on many fronts, including attacks on the data used for sensor information fusion and disruption of the communication between devices and the fusion nodes, in addition to the traditional security attacks. As the infrastructure for sensor fusion becomes more automated, with multiple intelligent assistants for data collection, different types of attacks become possible. AI-enabled approaches can be used to improve the security and resiliency of federated networks and of the data that is shared across coalitions. In this paper, we discuss the challenges associated with the security of coalition infrastructures, and approaches to improve that security using AI and machine learning techniques.
16:40 Why the Failure? How Adversarial Examples Can Provide Insights for Interpretable Machine Learning
Machine Learning has made great strides in recent years to the point where it is now widely available in many products and services we encounter in our everyday lives. However, as machine learning begins to be applied in mission critical applications, predictability, robustness and justifiability become paramount. In this paper we explore two of the most significant issues - interpretability and adversarial examples - and their relevance for military coalition operations. We argue that these two issues are potentially strongly related to one another, and insights in one can provide insights for the other. We illustrate these ideas with relevant examples from the literature and our own experiments.

3g - Sensor/Resource Management

Room: LR12
Chair: Johan P de Villiers
15:20 Remote State Estimation with Data-driven Communication and Guaranteed Stability
This paper deals with the problem of remote state estimation with limited communication resources. We propose an online data-driven communication scheme based on cumulative innovation and derive the corresponding minimum mean square error (MMSE) estimator. The communication scheme allows a trade-off to be achieved between communication costs and estimation performance. The remote estimator can improve the estimation performance based on the fact that the absence of a transmission indicates a small cumulative innovation. Further, it is proved that the estimator has guaranteed stability: the expected norm of the mean square error (MSE) matrix is bounded, and an upper bound is given. We also derive the conditional probability of a future transmission. A simulation example is provided to illustrate the effectiveness of the proposed method.
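The cumulative-innovation triggering idea can be sketched with a scalar Kalman filter at the sensor that transmits its estimate only when the accumulated innovation since the last transmission exceeds a threshold, while the remote side otherwise just predicts. The model, threshold, and triggering statistic below are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

# scalar AR(1) system: x_k = a x_{k-1} + w_k, y_k = x_k + v_k
a, q, r, thresh = 0.95, 0.1, 0.5, 1.0
rng = np.random.default_rng(1)

x = 0.0                        # true state
x_loc, p_loc = 0.0, 1.0        # sensor-side Kalman filter
x_rem = 0.0                    # remote estimate
cum_innov, sent = 0.0, 0
for _ in range(500):
    x = a * x + rng.normal(0, np.sqrt(q))
    y = x + rng.normal(0, np.sqrt(r))
    # local Kalman predict + update
    x_loc, p_loc = a * x_loc, a * a * p_loc + q
    k = p_loc / (p_loc + r)
    innov = y - x_loc
    x_loc, p_loc = x_loc + k * innov, (1 - k) * p_loc
    cum_innov += abs(innov)
    # remote side: open-loop prediction unless a transmission arrives
    x_rem = a * x_rem
    if cum_innov > thresh:     # data-driven transmission decision
        x_rem, cum_innov, sent = x_loc, 0.0, sent + 1
# `sent` < 500: communication is saved on low-innovation steps
```

The "no transmission implies small innovation" side information that the paper exploits in its MMSE estimator is what the remote predict-only branch would additionally condition on.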
15:45 Covariance Cost Functions for Scheduling Multistatic Sonobuoy Fields
Sonobuoy fields, comprising a network of sonar transmitters and receivers, are used to find and track underwater targets. For a given environment and sonobuoy field layout, the performance of such a field depends on the scheduling, that is, deciding which source should transmit, and which waveform should be transmitted, at any given time. In this paper, we explore the choice of cost function used in myopic scheduling and its effect on tracking performance. Specifically, we consider five different cost functions derived from the predicted error covariance matrix of the track. Importantly, our cost functions combine both positional and velocity covariance information to allow the scheduler to choose the optimum source-waveform action. Using realistic multistatic sonobuoy simulations, we demonstrate that each cost function results in a different choice of source-waveform actions, which in turn affects the performance of the scheduler. In particular, we show there is a trade-off between position and velocity error performance such that no one cost function is superior in both.
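A myopic scheduler of this kind reduces to picking the action whose predicted covariance minimizes a chosen cost. The sketch below uses a simple weighted trace over position and velocity blocks; the paper's five cost functions, the action names, and the state ordering [x, y, vx, vy] are assumptions for illustration.

```python
import numpy as np

def pick_action(predicted_covs, w_pos=1.0, w_vel=1.0):
    """Return the source-waveform action whose predicted track error
    covariance minimizes a weighted trace of position/velocity blocks."""
    def cost(P):
        return w_pos * np.trace(P[:2, :2]) + w_vel * np.trace(P[2:, 2:])
    return min(predicted_covs, key=lambda a: cost(predicted_covs[a]))

# hypothetical predicted covariances for two candidate actions
covs = {
    "source1_cw": np.diag([4.0, 4.0, 0.1, 0.1]),  # good velocity, poor position
    "source2_fm": np.diag([1.0, 1.0, 1.0, 1.0]),  # balanced
}
print(pick_action(covs))                        # -> source2_fm
print(pick_action(covs, w_pos=0.0, w_vel=1.0))  # -> source1_cw
```

The two calls show the paper's central trade-off in miniature: changing how position and velocity uncertainty are weighted changes which action the scheduler selects.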
16:10 A Suboptimal Multi-sensor Management Based on Cauchy-Schwarz Divergence for Multi-target Tracking
In this paper, we address the problem of multi-sensor management for multi-target tracking via labeled random finite sets (LRFS) in sensor network systems that require both precision and real-time performance. Because the optimal multi-sensor management strategy, the joint decision making (JDM) algorithm proposed in [1], suffers from high-dimensional computational complexity, an alternative multi-sensor management strategy is proposed as a compromise between tractability and fidelity. By sequentially calculating the Cauchy-Schwarz (CS) divergence between the global generalized Covariance Intersection (GCI) fusion result and the GCI fusion result of two sensors, the JDM algorithm is simplified to a hybrid decision making scheme with a two-dimensional optimization problem, referred to as the HDM algorithm. The proposed HDM algorithm is more precise than the independent decision making (IDM) algorithm [1], because the IDM algorithm completely ignores the correlation among sensors. The computational complexity of the proposed method is also compared with that of the JDM and IDM algorithms. The efficiency and performance of the proposed method are demonstrated in a challenging multi-sensor multi-target tracking scenario by numerical results.
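The basic ingredient, the Cauchy-Schwarz divergence, has a closed form for Gaussians; a minimal 1-D version is sketched below. The full method operates on labeled multi-target densities and GCI fusion results, which are omitted here.

```python
import math

def cs_div(m1, v1, m2, v2):
    """Closed-form Cauchy-Schwarz divergence between the 1-D Gaussians
    N(m1, v1) and N(m2, v2):
        D_CS(p, q) = -ln( int p q dx / sqrt(int p^2 dx * int q^2 dx) ).
    """
    log_pq = (-0.5 * math.log(2 * math.pi * (v1 + v2))
              - (m1 - m2) ** 2 / (2 * (v1 + v2)))     # ln int p*q dx
    log_pp = -math.log(2 * math.sqrt(math.pi * v1))   # ln int p^2 dx
    log_qq = -math.log(2 * math.sqrt(math.pi * v2))   # ln int q^2 dx
    return -(log_pq - 0.5 * (log_pp + log_qq))

print(cs_div(0.0, 1.0, 0.0, 1.0))  # 0.0: identical densities
print(cs_div(0.0, 1.0, 3.0, 1.0))  # positive: densities separated
```

The divergence is zero exactly when the two densities coincide and grows as they separate, which is why it can rank how much each sensor's fusion result deviates from the global one.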
16:35 Efficient Resource Management for Phased Array Radar Based on the Estimation of Target's Maneuvering Parameters
For phased array radar, effective resource management is essential to exploiting its tactical performance. An efficient adaptive sampling period algorithm is proposed, in which the sampling period is calculated rapidly based on the online estimated maneuvering parameters of the target. An offline library is built for the estimation. The algorithm is then extended to the joint adaptive sampling period and waveform case, where the parameter pair with the least resource consumption is chosen. Simulation results demonstrate that, compared with conventional algorithms, the proposed algorithms can realize resource management of the system with lower computational complexity.

3h - Fuzzy Sets/ Set Membership

Room: JDB-Seminar Room
Chair: Patrice Brot
15:20 Simplified Desirability Level Metrics for Estimation Performance Evaluation
Different estimators have different optimization criteria according to the concrete application considered. Most existing metrics on estimation performance are averages of estimation error terms, which yield "big" or "small" results to indicate the "bad" or "good" performance of the evaluated estimators. Such metrics are, in some sense, insufficient statistics of a discrete set of estimation errors and reflect only a narrow aspect of estimation performance. The error distribution function, however, carries important information that is usually overlooked. To handle this problem, a metric called desirability level was proposed in [1] to measure how similar the estimation error probability density function (pdf) is to a desired pdf. This study first proposes an extended desirability level metric that has a simpler form than the original one. A new metric based on principal component analysis is then introduced. Illustrative examples are given to demonstrate the effectiveness of the proposed measures.
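A distribution-similarity score of this flavour can be sketched by comparing the empirical error pdf with a desired pdf via the overlap coefficient. This is a hedged illustration of the general idea only; the paper's desirability level is defined differently, and the grid and sample sizes below are arbitrary.

```python
import numpy as np

def desirability(errors, desired_pdf, lo=-5.0, hi=5.0, bins=100):
    """Score in [0, 1]: overlap between the empirical error pdf
    (histogram estimate) and a desired pdf; 1 means a perfect match."""
    edges = np.linspace(lo, hi, bins + 1)
    centres = 0.5 * (edges[:-1] + edges[1:])
    width = edges[1] - edges[0]
    emp, _ = np.histogram(errors, bins=edges, density=True)
    des = desired_pdf(centres)
    return float(np.sum(np.minimum(emp, des)) * width)

rng = np.random.default_rng(2)
desired = lambda x: np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)  # want N(0, 1) errors
good = desirability(rng.normal(0, 1, 20000), desired)  # unbiased estimator
bad = desirability(rng.normal(2, 1, 20000), desired)   # biased estimator
# good is close to 1, bad is much smaller
```

Two estimators with the same mean squared error can receive very different scores here, which is precisely the extra information an averaged error metric throws away.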
15:40 Training Instance Random Sampling Based Evidential Classification Forest Algorithms
To model and handle epistemic uncertainty with belief function theory, different ways to learn classification forests from evidential training data are explored. In this paper, multiple base classifiers are learned on uncertain training subsets generated by a training-instance random sampling approach. For base classifier learning, using the evidential likelihood function, Gini impurity intervals of uncertain datasets are calculated for attribute splitting, and consonant mass functions of labels are generated for leaf node prediction. The construction of a Gini-impurity-based belief binary classification tree is proposed and then compared with the C4.5 belief classification tree. For the base classifier combination strategy, both an evidence combination method for consonant mass function outputs and a majority voting method for precise label outputs are discussed. The performances of the proposed algorithms are compared and analysed in experiments on the UCI Balance Scale dataset.
16:00 Geo-Referencing of a Multi-Sensor System Based on Set-membership Kalman Filter
In this paper, a novel set-membership Kalman filter is applied to a data set obtained from a real-world experiment. In this experiment, drawn from the geo-referencing of a terrestrial laser scanner, a multi-sensor system captured the trajectory of two GNSS antennas. The dynamical system contains both random uncertainty and set-membership uncertainty. Estimation results from the classic extended Kalman filter and the novel set-membership Kalman filter are shown and compared. A detailed analysis of the set-membership Kalman filter is given at the end.
16:20 Multisensor Data Fusion Using Equation Checking and Fuzzy Rules
A new multi-sensor data fusion method, generic and simple, is presented. It can easily handle a large variety of sensor failure scenarios arising from the constraints of the aeronautics world. It is based on equation checking and fuzzy set combinations. Two different industrial applications are proposed, related to aircraft parameter handling (sensing of the aircraft's physical state).
16:40 A Fuzzy Approach for Detecting and Defending Against Spoofing Attacks on Low Interaction Honeypots
Honeypots are one of the established entrapment mechanisms for baiting attackers in the field of network security. They gather real-time and valuable information from the attacker regarding their attack processes, which is not possible by other security means. Despite this invaluable contribution of the honeypot in moulding a cohesive security policy, the honeypot is normally designed with fewer resources, as security personnel do not consider it part of the operational network. Consequently, such limited-capability, or low-interaction, honeypots are vulnerable to common security attacks. A spoofing attack is one such attack that can be carried out on these low-interaction honeypots, making them ineffectual. Unfortunately, these low-interaction honeypots have very limited or no capability to detect and defend against this type of attack, due to their inadequate ability to respond, versus a more complex honeypot with greater deceptive capabilities. Therefore, this paper proposes a resource-optimised fuzzy approach for detecting and defending against a spoofing attack on a low-interaction honeypot. Primarily, it proposes a detection mechanism for the spoofing attack based on the analysis of experimental data gathered from the honeypot and its internal network. Subsequently, the paper proposes a fuzzy approach for predicting, and alerting in a timely manner to, the spoofing attack on low-interaction honeypots, so as to protect them from the attack. Finally, experimental simulation is utilised to demonstrate that any low-interaction honeypot can be made a spoofing-attack-aware honeypot by employing the proposed fuzzy approach.

3i - Imaging Methods / Image Processing

Room: JDB-Teaching Room
Chair: Michael Ulrich
15:20 Fast Multi-Coil Parallel MR Imaging Based on a Combination of the Optimum Interpolation Approximation and Compressed Sensing
This paper introduces a new parallel imaging method based on a combination of the optimum interpolation approximation (OIA) and the compressed sensing (CS) method. The proposed method is a kind of improved SENSE method, because all sensitivity maps of the receiver coils are assumed to be known. Firstly, we introduce a theory of the OIA that minimizes the supremum value of any measure of approximation error. Secondly, a practical method of computing the optimum interpolation functions of the OIA is presented. Because the proof of optimality is based on set theory, the OIA is optimized for a group of MR images, not for particular ones. Thirdly, we present a new combinational method of the OIA and the CS method. In this method, a sufficiently accurate reconstruction of a particular target MR image by the CS method is used as the weighting function of the OIA. Because the accuracy of the OIA tends to improve as the weighting function approaches the target MR image, the proposed combinational method becomes optimum for reconstructing the particular target MR image. Finally, we compare the reconstruction accuracy of the proposed combinational method with that of the CS-only method.
15:40 Person Recognition Based on Micro-Doppler and Thermal Infrared Camera Fusion for Firefighting
This paper examines the recognition of real persons, mirrored persons and other objects using thermal infrared (TIR) images and radar micro-Doppler (m-D). Mirrored persons confuse firefighters when only a TIR camera is used. However, mirrored persons exhibit the m-D of the mirroring objects, hence radar can resolve this ambiguity. In this paper, multiple sensor fusion architectures are investigated for this classification task. The first approach uses an attention stage, where bounding boxes of candidates for real/mirrored persons are determined in TIR images. These bounding boxes are associated with the radar targets and subsequently classified. A joint classification of the radar m-D and TIR image at measurement level is compared to a separate classification with subsequent combination (object level). Furthermore, a classification of the complete scene is proposed, omitting the TIR attention stage and data association. Experiments with real measurements are used for an evaluation of the presented approaches.
16:00 Color Image Segmentation Based on Evidence Theory and Two-dimensional Histogram
Image segmentation is one of the most important tasks in image processing and pattern recognition. Image segmentation based on a two-dimensional histogram considers not only the target pixel information but also its neighborhood information. It segments the image according to a calculated threshold, which is in fact a hard decision method. However, there is uncertainty when labeling the pixels around the threshold. In this paper, we propose a new color image segmentation method based on information fusion. We use two thresholds to model the uncertainty and use Cautious OWA with evidential reasoning (COWA-ER) to implement the fusion-based color image segmentation. Experimental results show that the proposed method achieves better performance compared with the traditional two-dimensional histogram method.
16:20 Probabilistic Multi Hypothesis Tracker for an Event Based Sensor
The Event-Based Sensor (EBS) is a new class of imaging sensor where each pixel independently reports "events" in response to changes in log intensity, rather than outputting image frames containing the absolute intensity at each pixel. Positive and negative events are emitted from the sensor when the change in log intensity exceeds certain controllable thresholds internal to the device. For objects moving through the field of view, a change in intensity can be related to motion. The sensor records events independently and asynchronously for each pixel with a very high temporal resolution, allowing the detection of objects moving very quickly through the field of view. Recently this type of sensor has been applied to the detection of orbiting space objects using a ground-based telescope. This paper describes a method to treat the data generated by the EBS as a classical detect-then-track problem by collating the events spatially and temporally to form target measurements. An efficient multi-target tracking algorithm, the probabilistic multi-hypothesis tracker (PMHT), is then applied to the EBS measurements to produce tracks. This method is demonstrated by automatically generating tracks on orbiting space objects from data collected by the EBS.
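The collation front end can be sketched as spatio-temporal binning: events (t, x, y) are grouped into time windows and spatial cells, and each sufficiently populated cell yields one centroid "measurement" for the tracker. The window length, cell size, and count threshold below are made-up parameters, not the paper's.

```python
import numpy as np

def events_to_measurements(events, t_win=0.01, cell=8, min_events=3):
    """Collate EBS events, rows (t, x, y), into centroid measurements.
    Lone events (likely noise) are discarded by the count threshold."""
    meas = []
    t0, t1 = events[:, 0].min(), events[:, 0].max()
    for w0 in np.arange(t0, t1, t_win):            # time windows
        win = events[(events[:, 0] >= w0) & (events[:, 0] < w0 + t_win)]
        if len(win) == 0:
            continue
        cells = (win[:, 1:] // cell).astype(int)   # spatial cell index
        for c in np.unique(cells, axis=0):
            hits = win[(cells == c).all(axis=1)]
            if len(hits) >= min_events:
                meas.append(hits[:, 1:].mean(axis=0))  # centroid measurement
    return np.array(meas)

# synthetic burst of events around pixel (20, 30), plus one noise event
ev = np.array([[0.001, 20, 30], [0.002, 21, 30], [0.003, 20, 31],
               [0.004, 90, 5]])
m = events_to_measurements(ev)   # single centroid near (20.3, 30.3)
```

The resulting point measurements are what a conventional tracker such as the PMHT can then consume, exactly the detect-then-track reduction the abstract describes.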
16:40 Object Tracking via Deep Multi-view Compressive Model for Visible and Infrared Sequences
In this paper, we present a novel visual tracker based on visible and infrared sequences. The extended region proposal network helps to automatically generate 'object-like' proposals and 'distance-based' proposals. In contrast to traditional tracking approaches that exploit the same or similar structural features for template matching, this approach dynamically manages the new compressive layers to refine the target-recognition performance. This paper presents an attractive multi-sensor fusion method which demonstrates the ability to enhance tracking precision, robustness, and reliability compared with a single sensor. The integration of multiple features from different sensors with distinct characteristics resolves incorrect merge events caused by inappropriate feature extraction and classification for a frame. Long-term trajectories for object tracking are calculated using an online support vector machine classifier. The algorithm shows favorable performance compared to state-of-the-art methods on challenging videos.

Wednesday, July 11 18:00 - 20:00

Welcome Reception

Taking place at St John's College

18:00 drinks and canapés, punting, jazz band; 20:00 close

Thursday, July 12 6:30 - 7:30

5K Race

Thursday, July 12 8:45 - 9:00

StoneSoup

Paul Thomas, DSTL

Thursday, July 12 9:00 - 10:00

Plenary: Fusion of Multi-band Images Using Bayesian Approaches: Beyond Pansharpening

Jean-Yves Tourneret, University of Toulouse
Chair: Lyudmila Mihaylova

Abstract: This talk will discuss several methods for fusing high spectral resolution images (such as hyperspectral images) and high spatial resolution images (such as panchromatic images) in order to provide images with improved spectral and spatial resolutions. The first part will be devoted to summarizing the main image fusion methods based on component substitution, multiresolution analysis, Bayesian inference and matrix factorization. The second part will present recent Bayesian fusion strategies exploiting prior information about the target image to be recovered, constructed by interpolation or by using dictionary learning techniques. The resulting Bayesian estimators can be computed by using samples generated by Markov chain Monte Carlo algorithms, by exploiting the efficiency of alternating optimization methods or by solving Sylvester matrix equations.

Bio: Jean-Yves Tourneret (SM'08) received the ingénieur degree in electrical engineering from the Ecole Nationale Supérieure d'Electronique, d'Electrotechnique, d'Informatique, d'Hydraulique et des Télécommunications (ENSEEIHT) de Toulouse in 1989 and the PhD degree from the National Polytechnic Institute of Toulouse in 1992. He is currently a professor at the University of Toulouse (ENSEEIHT) and a member of the IRIT laboratory (UMR 5505 of the CNRS). His research activities are centered around statistical signal and image processing, with a particular interest in Bayesian and Markov chain Monte Carlo (MCMC) methods. He has been involved in the organization of several conferences, including the European signal processing conference EUSIPCO'02 (program chair), the international conference ICASSP'06 (plenaries), the statistical signal processing workshop SSP'12 (international liaisons), the International Workshop on Computational Advances in Multi-Sensor Adaptive Processing CAMSAP 2013 (local arrangements), the statistical signal processing workshop SSP'2014 (special sessions), and the workshop on machine learning for signal processing MLSP'2014 (special sessions). He has been the general chair of the CIMI workshop on optimization and statistics in image processing held in Toulouse in 2013 (with F. Malgouyres and D. Kouamé), and of the International Workshop on Computational Advances in Multi-Sensor Adaptive Processing in 2015 (with P. Djuric) and 2019 (with D. Brie). He has been a member of different technical committees, including the Signal Processing Theory and Methods (SPTM) committee of the IEEE Signal Processing Society (2001-2007, 2010-2015) and the EURASIP SAT committee on Theoretical and Methodological Trends in Signal Processing (TMTSP). He has been serving as an associate editor for the IEEE Transactions on Signal Processing (2008-2011, 2015-present) and for the EURASIP Journal on Signal Processing (2013-present).

Thursday, July 12 10:00 - 10:30

Refreshment Break

Thursday, July 12 10:30 - 12:10

4a - SS: Forty Years of Multiple Hypothesis Tracking 2

Room: LR0
Chair: Stefano Coraluppi
10:30 Success Rates and Posterior Probabilities in Multiple Hypothesis Tracking
In multiple hypothesis tracking (MHT) the outcome space is partitioned into a discrete collection of events known as association hypotheses, whose posterior probabilities are calculated. The discrete nature of this problem means that it can be viewed as a classification problem. In this paper we argue that the hypothesis probabilities must obey some bounds from classification theory. These bounds can be used to investigate the correctness of MHT implementations.
10:50 On Anti-symmetry in Multiple Target Tracking
The notion of indistinguishable targets is well-established in advanced target tracking. Conceptually, it is rooted in quantum physics, where functions of joint particle states are considered that are either symmetric or anti-symmetric under permutation of the particle labels. This symmetry dichotomy explains why two disjoint classes of particles exist in nature: bosons and fermions. Besides symmetry, anti-symmetry also has a place in multiple target tracking, as we will show, leading to well-defined probability density functions describing the joint target states. Inbuilt anti-symmetry implies a target tracking version of Pauli's exclusion principle and seems to indicate that real-world targets are to be considered as fermions in this sense. We discuss the concept and illustrate its potential benefits with an example.
11:10 Analytic Combinatorics and Labeling in High Level Fusion and Multihypothesis Tracking
The method of analytic combinatorics and labeling is shown to be a unifying framework in which to pose both high and low level data fusion problems. The method uses labeled generating functions. Several examples from high level fusion and multitarget tracking illustrate the method. Examples from high level fusion include natural language processing and noisy graph association problems. Examples from multitarget tracking include multidimensional assignment problems, unlabeled and labeled JPDA, labeled multiBernoulli filters, and multihypothesis tracking.
11:30 Poisson Multi-Bernoulli Mixture Trackers: Continuity Through Random Finite Sets of Trajectories
The Poisson multi-Bernoulli mixture (PMBM) is an unlabelled multi-target distribution for which the prediction and update are closed. It has a Poisson birth process, and new Bernoulli components are generated on each new measurement as part of the Bayesian measurement update. The PMBM filter is similar to the multiple hypothesis tracker (MHT), but seemingly does not provide explicit continuity between time steps. This paper considers a recently developed formulation of the multi-target tracking problem as a random finite set (RFS) of trajectories, and derives two trajectory RFS filters, called PMBM trackers. The PMBM trackers efficiently estimate the set of trajectories, and share hypothesis structure with the PMBM filter. By showing that the prediction and update in the PMBM filter can be viewed as an efficient method for calculating the time marginals of the RFS of trajectories, continuity in the same sense as MHT is established for the PMBM filter.
11:50 Conditions for MHT to Be an Exact Bayesian Solution to the Multiple Target Tracking Problem
This paper finds conditions under which Multiple Hypothesis Tracking (MHT) is an exact Bayesian solution to the multiple-target tracking problem. The crucial condition is that measurements arrive in scans, but otherwise the conditions are minimally restrictive. In order to produce a computationally feasible implementation of MHT, some approximations must be made, but this is true for any (existing) method of producing an exact Bayesian solution. Limiting the number of hypotheses considered is an example of such an approximation.

4b - SS: Physics-based and Human-derived Information Fusion

Room: LR1
Chairs: Erik Blasch, Paul Thomas
10:30 Fusing Bearing-only Measurements with and Without Propagation Delays Using Particle Trajectories
This paper considers the problem of tracking a manoeuvring target when some of the measurements are delayed by the time taken to propagate through some medium. We are especially interested in bearing-only measurements, since it is possible to extract range information by fusing measurements which have negligible propagation delay (such as from electrooptical sensors) and measurements which have a propagation delay proportional to the range to the target (such as from acoustic sensors). This requires us to handle measurements which appear out of sequence, and with emission times unknown to the tracker. Unlike previous approaches, a particle filter is used, which handles out-of-sequence measurements by storing a history of hypothesised target states and measurement emission times for each particle. This allows new target states and times to be inserted into the trajectory of each target by interpolating between adjacent states in the history.
10:50 Physics-Based and Human-derived Information Fusion Video Activity Analysis
With ubiquitous data acquired from sensors, there is an ever-increasing ability to abstract content through the combination of physics-based and human-derived information fusion (PHIF). Advances in PHIF tools include graphical information fusion methods, target tracking techniques, and natural language understanding. Current discussions revolve around dynamic data-driven applications systems (DDDAS), which seek to leverage high-dimensional modeling with real-time physical systems. An example of a model is a learned dictionary that can be leveraged for information queries. In this paper, we discuss the DDDAS paradigm of sensor measurements, information processing, environmental modeling, and software implementation to deliver content for PHIF systems. Experimental results demonstrate that the DDDAS-based Live Video Computing DataBase Modeling approach allows data discovery and query-based flexibility, providing awareness through narratives of unknown situations.
11:10 Convolutional Neural Networks for Aerial Multi-Label Pedestrian Detection
Low resolution of objects of interest in aerial images makes pedestrian and action detection extremely challenging tasks. Furthermore, computational resource limits prevent deep convolutional neural networks from working with high-resolution images. In order to alleviate these challenges, we propose a two-step yes/no question-answering framework to find desired individuals performing one or multiple specific actions in aerial images. First, a deep object detector, the Single Shot Multibox Detector (SSD), is used to generate object proposals from small aerial images. Second, we use another deep network to build a latent common space for the fusion of high-resolution aerial object proposals with the possible pedestrian action labels provided by human-based sources in a multi-label scheme.
11:30 Learning the Parameters of Spatially-Referring Natural Language Likelihoods in Binary Models
Despite imprecision and possible ambiguity expressed by spatially referring natural language statements, they are potentially useful "measurements" (also known as soft data) for target localisation and tracking. The likelihood functions of such measurements typically include the parameters that model the inherent uncertainty in soft data. Adopting a binary model for spatially referring statements involving the word "near", the paper derives the theoretical posterior Cramer-Rao bound for the estimation (learning) of the parameter which features in the likelihood function. A numerical analysis of the bound is presented with an example demonstrating estimation/learning in practice.
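The abstract does not specify the binary likelihood, so the sketch below assumes a hypothetical Gaussian-shaped model for the probability of the word "near" being used at a known distance, and computes the per-report Fisher information about the model parameter (the building block of the Cramer-Rao bound derived in the paper):

```python
import math

def p_near(d, lam):
    """Probability that "near" is reported at known distance d.
    The Gaussian-shaped form and the single parameter lam are
    illustrative assumptions, not the paper's actual model."""
    return math.exp(-(d * d) / (2.0 * lam * lam))

def fisher_info(d, lam):
    """Fisher information about lam from one binary near/not-near report:
    I(lam) = (dp/dlam)^2 / (p (1 - p)). The posterior CRB adds a prior
    term and sums such contributions over reports."""
    p = p_near(d, lam)
    dp = p * (d * d) / lam**3          # derivative of p_near w.r.t. lam
    return dp * dp / (p * (1.0 - p))
```

The (frequentist) Cramer-Rao bound on the variance of an unbiased estimate of `lam` is then the reciprocal of the summed Fisher information.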
11:50 On Integrating Human Decisions with Physical Sensors for Binary Decision Making
Allowing humans to act as soft sensors is increasingly becoming an attractive solution to enhance decision making performance when the available physical (hard) sensors are limited. While the fusion problem with hard data has a rich history, fusion of hard and soft data requires further understanding due to human related factors associated with human sensor data. In this work, we investigate how the presence of human sensors can be modeled in the statistical signal processing framework and the factors that need to be taken into account when integrating soft human sensor data with hard data in a signal detection framework. We consider two cases. In the first case, both types of sensors are assumed to make threshold based individual decisions using identical observations. While physical sensors use a fixed threshold, the thresholds used by human sensors are assumed to be random variables. With a given distribution for the random thresholds used at the human sensors, by properly designing the thresholds at the physical sensors, an enhanced detection performance can be observed in the integrated system compared to performing fusion with only physical sensors. In the second case, we evaluate the fusion performance when human sensors possess some side information regarding the phenomenon in addition to the common observations available at the two types of sensors.
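The first case above (fixed physical thresholds, Gaussian-distributed human thresholds, shared Gaussian observations) can be sketched as follows; the unit observation variance and the specific function names are assumptions for illustration:

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_detect_physical(mu, tau):
    """Detection probability of a physical sensor with fixed threshold tau,
    for an observation ~ N(mu, 1)."""
    return 1.0 - phi(tau - mu)

def p_detect_human(mu, tau0, sigma_t):
    """Detection probability of a human sensor whose threshold is random,
    T ~ N(tau0, sigma_t^2), independent of the observation ~ N(mu, 1):
    P(obs > T) = 1 - Phi((tau0 - mu) / sqrt(1 + sigma_t^2))."""
    return 1.0 - phi((tau0 - mu) / math.sqrt(1.0 + sigma_t**2))
```

With `sigma_t = 0` the human sensor reduces to a fixed-threshold physical sensor, which is a useful sanity check on the model.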

4c - Belief Functions

Room: LR2
Chair: Jean Dezert
10:30 Context Awareness in Uncertain Pervasive Computing and Sensors Environment
Building context-aware pervasive computing systems - such as ambient intelligent spaces or ubiquitous robots - needs to take into account the quality of contextual information collected from sensors. Such information is often inaccurate, uncertain or noisy due to environment and user dynamics. Dempster-Shafer theory has been extensively adopted to handle uncertainty in situation and activity recognition; it is used to represent, manipulate and decide under uncertainty. However, combining information using Dempster's rule may produce counterintuitive decisions under highly conflicting evidence caused by source failures. Recently, a variety of rules have been proposed to overcome this drawback. Inspired by Murphy's rule, we propose in this paper a new rule called the "Weighted Average Combination Rule" (WACR) to deal with context recognition in highly dynamic environments such as ambient intelligence spaces. The proposed WACR rule is based on the arithmetic average and cardinality of the evidence. WACR was applied to several conflicting-evidence examples and shown to yield more appropriate decisions than alternative rules for decision-making in activity-aware systems. To demonstrate the applicability and performance of our approach, we studied a context-recognition scenario in an ambient intelligent environment: a simulated smart kitchen composed of status devices and RFID sensors that allow determining which artifact is in use by the inhabitant, and for which activity.
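The WACR rule itself is not given in the abstract; the sketch below shows only the evidence-averaging ingredient it shares with Murphy's rule, next to Dempster's rule, on a Zadeh-style high-conflict example where Dempster's rule is counterintuitive (mass functions are dicts mapping frozenset focal elements to masses):

```python
def average_bba(bbas):
    """Arithmetic average of basic belief assignments, as in Murphy's rule
    (WACR additionally weights by cardinality, which is not reproduced here)."""
    out, n = {}, len(bbas)
    for m in bbas:
        for focal, mass in m.items():
            out[focal] = out.get(focal, 0.0) + mass / n
    return out

def dempster(m1, m2):
    """Dempster's rule of combination over a common frame: conjunctive
    combination followed by normalization of the conflicting mass."""
    raw, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                raw[inter] = raw.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    k = 1.0 - conflict
    return {f: v / k for f, v in raw.items()}
```

On two highly conflicting sources that each give only marginal support to a shared hypothesis, Dempster's rule assigns that hypothesis all the mass, while averaging keeps its support small.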
10:50 2CoBel: An Efficient Belief Function Extension for Two-dimensional Continuous Spaces
This paper introduces an innovative approach for handling 2D compound hypotheses within the Belief Function Theory framework. We propose a polygon-based generic representation which relies on polygon clipping operators. This approach makes the computational cost depend on the precision of the representation, independently of the cardinality of the discernment frame. For BBA combination and decision making, we propose efficient algorithms which rely on hashes for fast lookup, and on a topological ordering of the focal elements within a directed acyclic graph encoding their interconnections. Additionally, an implementation of the functionalities proposed in this paper is provided as an open source library. Experimental results on a pedestrian localization problem are reported. The experiments show that the solution is accurate and that it fully benefits from the scalability of the 2D search space granularity provided by our representation.
11:10 Total Belief Theorem and Generalized Bayes' Theorem
This paper presents two new theoretical contributions for reasoning under uncertainty: 1) the Total Belief Theorem (TBT) which is a direct generalization of the Total Probability Theorem, and 2) the Generalized Bayes' Theorem drawn from TBT. A constructive justification of Fagin-Halpern belief conditioning formulas proposed in the nineties is also given. We also show how our new approach and formulas work through simple illustrative examples.
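The Fagin-Halpern conditioning formulas that the paper justifies constructively can be computed directly from a mass function; the sketch below is a minimal implementation (the dict-of-frozensets encoding is an assumption):

```python
def bel(m, A):
    """Belief of A from a mass function m (dict: frozenset -> mass)."""
    return sum(v for f, v in m.items() if f <= A)

def pl(m, A):
    """Plausibility of A: total mass of focal elements intersecting A."""
    return sum(v for f, v in m.items() if f & A)

def fh_conditional(m, A, B, frame):
    """Fagin-Halpern conditional belief interval [Bel(A|B), Pl(A|B)]:
    Bel(A|B) = Bel(A&B) / (Bel(A&B) + Pl(A^c & B)), and dually for Pl."""
    Ac = frame - A
    b_num, p_den = bel(m, A & B), pl(m, Ac & B)
    lo = b_num / (b_num + p_den) if b_num + p_den > 0 else 0.0
    p_num, b_den = pl(m, A & B), bel(m, Ac & B)
    hi = p_num / (p_num + b_den) if p_num + b_den > 0 else 0.0
    return lo, hi
```

For a Bayesian mass function (all focal elements singletons) the interval collapses to the ordinary conditional probability, which is the consistency property the paper relies on.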
11:30 Credibilistic Independence of Two Propositions
In this paper, the notion of (probabilistic) independence of two events, defined classically in probability theory, is extended in the theory of belief functions as the credibilistic independence of two propositions. This new notion of independence, which is compatible with probabilistic independence as soon as the belief function is Bayesian, is defined from the Fagin-Halpern belief conditioning formulas drawn from the Total Belief Theorem (TBT) when working in the framework of belief functions to model epistemic uncertainties. Our analysis shows that two uncertain propositions whose non-degenerate belief intervals are strictly included in [0,1] can never be credibilistically independent. We give some illustrative examples of this notion at the end of the paper.
11:50 Motion State Classification for Automotive LIDAR Based on Evidential Grid Maps and Transferable Belief Model
The vast variety of different obstacles in the automotive environment still poses a major challenge for the realization of autonomous vehicles. This contribution tries to distinguish between stationary and dynamic obstacles by applying a transferable belief model. In particular we build an occupancy grid representation of the environment and correct it with a tailored transferable belief model that accounts for inconsistencies as well as non-local features. Based on these results we define a classifier that distinguishes between stationary and dynamic cells. The presented algorithm is evaluated qualitatively on real-world Valeo Scala LIDAR data and quantitatively based on an IPG Carmaker simulation.

4d - Parameter Estimation/ Covariance Estimation/ Model Calibration

Room: LR5
Chair: Peter Willett
10:30 Improved Shrinkage-to-Tapering Estimation for High-Dimensional Covariance Matrices
In this paper, an improved shrinkage-to-tapering oracle approximating (ISTOA) approach for estimating high-dimensional covariance matrices is proposed. Since the oracle shrinkage coefficient of shrinkage-to-tapering oracle (STO) estimator is greater than one for some tapering parameter, the optimal shrinkage coefficient is obtained by thresholding the oracle coefficients. The corresponding normalized mean-squared error (MSE) is also obtained. Moreover, an improved shrinkage-to-tapering estimator is proposed by plugging the unbiased and consistent estimators of some functions of unknown covariance matrix into the optimal coefficient and corresponding normalized MSE. Compared with the STO approximating (STOA) approach using iteration to approximate the oracle coefficient, a closed-form formula of the estimated coefficient and the normalized MSE are derived for given tapering parameter. Numerical simulations and an application to adaptive beamforming show the comparable performance of the proposed estimator.
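The core shrinkage-to-tapering structure can be sketched as a convex combination of the sample covariance and its tapered version, with the shrinkage coefficient thresholded to [0, 1] as in the ISTOA idea; the linear banded taper and function names below are illustrative assumptions (the paper's oracle coefficient and plug-in estimators are not reproduced):

```python
import numpy as np

def tapering_weights(p, k):
    """Banded tapering matrix: weight decays linearly to zero beyond
    bandwidth k (one of several common taper choices, assumed here)."""
    idx = np.arange(p)
    dist = np.abs(idx[:, None] - idx[None, :])
    return np.clip(1.0 - dist / k, 0.0, 1.0)

def shrink_to_taper(S, k, rho):
    """Shrinkage-to-tapering estimate: (1 - rho) S + rho (T o S),
    where o is the Hadamard product and rho is thresholded to [0, 1]."""
    rho = min(max(rho, 0.0), 1.0)
    T = tapering_weights(S.shape[0], k)
    return (1.0 - rho) * S + rho * (T * S)
```

Thresholding matters precisely because, as the abstract notes, the oracle coefficient of the STO estimator can exceed one for some tapering parameters.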
10:50 Weak in the NEES?: Auto-tuning Kalman Filters with Bayesian Optimization
Kalman filters are routinely used for many data fusion applications including navigation, tracking, and simultaneous localization and mapping problems. However, significant time and effort are frequently required to tune various Kalman filter model parameters, e.g. process noise covariance, pre-whitening filter models for non-white noise, etc. Conventional optimization techniques for tuning can get stuck in poor local minima and can be expensive to implement with real sensor data. To address these issues, a new "black box" Bayesian optimization strategy is developed for automatically tuning Kalman filters. In this approach, performance is characterized by one of two stochastic objective functions: the normalized estimation error squared (NEES) when ground truth state models are available, or the normalized innovation error squared (NIS) when only sensor data is available. By intelligently sampling the parameter space to both learn and exploit a nonparametric Gaussian process surrogate function for the NEES/NIS costs, Bayesian optimization can efficiently identify multiple local minima and provide uncertainty quantification on its results.
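A minimal sketch of the NEES-based tuning cost such a black-box optimizer might minimize (the log-scale distance from the ideal value is an assumption; the paper's exact objective is not given in the abstract):

```python
import numpy as np

def avg_nees(errors, covs):
    """Average normalized estimation error squared over a run:
    mean of e_k' P_k^{-1} e_k, with e_k the truth-minus-estimate error
    and P_k the filter's reported covariance."""
    vals = [e @ np.linalg.solve(P, e) for e, P in zip(errors, covs)]
    return float(np.mean(vals))

def nees_objective(errors, covs, dim):
    """Black-box tuning cost: distance of the average NEES from its
    consistent value (the state dimension) on a log scale, so over- and
    under-confident filters are penalized symmetrically."""
    return abs(np.log(avg_nees(errors, covs) / dim))
```

A consistent filter gives an average NEES near the state dimension, so the objective is near zero at well-tuned parameter settings; the NIS variant replaces state errors and covariances with innovations and innovation covariances.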
11:10 On Constraints in Parameter Estimation and Model Misspecification
Under perfect model specification several deterministic (non-Bayesian) parameter bounds have been established, including the Cramer-Rao, Bhattacharyya, and the Barankin bound; where each is known to apply only to estimators sharing the same mean as a function of the true parameter. This requirement of common mean represents a constraint on the class of estimators to which the bound applies. While consideration of model misspecification adds an additional complexity to this approach, the need for constraints remains a necessary consequence of applying the covariance inequality. These inherent constraints will be examined more closely under misspecification and discussed in detail along with a review of Vuong's original contribution of the misspecified Cramer-Rao bound (MCRB). Recent work derives the same MCRB as Vuong via a different approach, but applicable only to a class of estimators that is more restrictive. An argument is presented herein, however, that broadens this class to include all unbiased estimators of the pseudo-true parameters and strengthens the tie to Vuong's work. Interestingly, an inherent constraint of the covariance inequality, when met by the choice in score function, yields a generalization of the necessary conditions identified by Blyth and Roberts to obtain an inequality of the Cramer-Rao type.
11:30 Robust MIMO Channel Estimation from Incomplete and Corrupted Measurements
Location-aware communication is one of the enabling techniques for future 5G networks. It requires accurate temporal and spatial channel estimation from multidimensional data. Most of the existing channel estimation techniques assume that the measurements are complete and the noise is Gaussian. However, these approaches are brittle to corrupted or outlying measurements, which are ubiquitous in real applications. To address these issues, we develop an ℓp-norm minimization based iteratively reweighted higher-order singular value decomposition algorithm. It is robust to Gaussian as well as impulsive noise, even when the measurement data is incomplete. Compared with state-of-the-art techniques, the proposed approach achieves accurate estimation results.

4e - SS: Advanced Nonlinear Filters 2

Room: LR6
Chair: Uwe D. Hanebeck
10:30 Particle Gaussian Mixture Filters-II
In our previous work, we proposed a particle Gaussian mixture (PGM-I) filter for nonlinear estimation. The PGM-I filter uses the transition kernel of the state Markov chain to sample from the propagated prior. It constructs a Gaussian mixture representation of the propagated prior density by clustering the samples. The measurement data is incorporated by updating individual mixture modes using the Kalman measurement update. However, the Kalman measurement update is inexact when the measurement function is nonlinear and leads to the restrictive assumption that the number of modes remains fixed during the measurement update. In this paper, we introduce an alternate PGM-II filter that employs parallelized Markov Chain Monte Carlo sampling to perform the measurement update. The PGM-II filter update is asymptotically exact and does not enforce any assumptions on the number of Gaussian modes. The PGM-II filter is employed in the estimation of two test case systems. The results indicate that the PGM-II filter is suitable for handling nonlinear/non-Gaussian measurement updates.
10:50 A Novel Gaussian Mixture Approximation for Nonlinear Estimation
A novel adaptive nonlinear estimator is presented to accurately incorporate nonlinear/non-Gaussian measurements in a Bayesian framework. The underlying algorithm relies on a Gaussian Mixture Model (GMM) to approximate the probability density function (pdf) of the state conditioned on all current and past measurements. Automatic refining of the mixture components is performed to ensure that the posterior GMM approximation of the pdf accurately represents the true distribution.
11:10 On Canonical Polyadic Decomposition of Non-Linear Gaussian Likelihood Functions
Non-linear filtering arises in many sensor applications, such as robotics, military reconnaissance, advanced driver assistance systems, and other safety and security data processing algorithms. Since a closed form of the Bayesian estimation approach is intractable in general, approximative methods have to be applied. Kalman- or particle-based approaches have the drawback of either a Gaussian approximation or the curse of dimensionality, both of which reduce performance in challenging scenarios. An approach to overcome this situation is state estimation using decomposed tensors. In this paper, a novel method to compute a non-linear likelihood function in Canonical Polyadic Decomposition form is presented, which avoids the full expansion of the discretized state space for each measurement. An exemplary application in a radar scenario is presented.
11:30 Non-Linear Continuous-Discrete Smoothing by Basis Function Expansions of Brownian Motion
This paper is concerned with inferring the state of an Itô stochastic differential equation (SDE) from noisy discrete-time measurements. The problem is approached by considering basis function expansions of Brownian motion, which consequently give approximations to the underlying stochastic differential equation in terms of an ordinary differential equation with random coefficients. This allows for representing the latent process at the measurement points as a discrete-time system with a non-linear transformation of the previous state and a noise term. The smoothing problem can then be solved by sigma-point or Taylor series approximations of this non-linear function, implementations of which are detailed. Furthermore, a method for interpolating the smoothing solution between measurement instances is developed. The developed methods are compared to the Type III smoother in simulation examples involving (i) hyperbolic tangent drift and (ii) the Lorenz 63 system, where the present method is found to be better at reconstructing the smoothing solution at the measurement points, while the interpolation scheme between measurement instances appears to suffer from edge effects, serving as an invitation to future research.
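One standard basis function expansion of Brownian motion is the truncated Karhunen-Loeve series on [0, 1]; the sketch below illustrates the idea (the paper does not commit to this particular basis, so treat it as an assumed example):

```python
import math

def brownian_expansion(xi, t):
    """Truncated Karhunen-Loeve expansion of Brownian motion on [0, 1]:
    W(t) ~ sum_k xi_k * sqrt(2) * sin((k - 1/2) pi t) / ((k - 1/2) pi),
    where the xi_k are i.i.d. standard normal coefficients. Conditioning
    on finitely many xi_k turns the SDE into an ODE with random
    coefficients, as described in the abstract."""
    w = 0.0
    for k, x in enumerate(xi, start=1):
        lam = (k - 0.5) * math.pi
        w += x * math.sqrt(2.0) * math.sin(lam * t) / lam
    return w
```

Every truncation starts at W(0) = 0, and the series' pointwise variance converges to t as more terms are kept.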
11:50 Recursive Sliding-Window Algorithm for Constrained Multiple-Model MAP Estimation
In this paper, we propose a new algorithm for a recursive implementation of constrained multiple-model (MM) maximum a posteriori (MAP) estimation. The recursive procedure is formulated in a sliding window fashion, where the measurements are processed sequentially. For each recursion, an iterative alternating coordinate-ascent (ACA) maximization process and our previously developed constrained sequential list Viterbi algorithm (CSLVA) are used to find the best constrained solution (mode and state sequence estimates) within the window. Performance results from simulation of two application examples are provided to demonstrate the capabilities of the proposed method.

4f - Network Tracking

Room: LR11
Chair: Ramkumar Natarajan
10:30 An Optimal Node Depth Adjustment Method with Computation-efficiency for Target Tracking in UWSNs
The effective node depth adjustment of distance-measuring sensors for target tracking in underwater wireless sensor networks (UWSNs) is investigated in this paper. Due to the limited energy and bandwidth in UWSNs, only a subset of the sensors participates in the tracking task. In this paper, the mobility of sensor nodes in depth is utilized to improve the tracking accuracy. Firstly, considering the complexity of the underwater environment, the measurement error is formulated as additive and multiplicative noise. Secondly, the relationship between the depth of sensor nodes and the Fisher information matrix (FIM) is derived and taken as the metric for tracking accuracy. Thirdly, the optimal depth adjustment is determined for active sensors with low complexity by simplifying the objective function. Finally, by combining the optimal depth adjustment and a traditional sensor selection algorithm, the best task sensors are selected for the purpose of energy efficiency. The simulation results illustrate the performance of the proposed method for improving the tracking accuracy and computational efficiency while employing the same number of sensors.
10:50 Mixed Iterative Adaptive Dynamic Programming Sensor Scheduling for Target Tracking in Energy Harvesting Wireless Sensor Networks
With the development of energy harvesting technologies, the building of wireless sensor networks (WSNs) based on energy harvesting has become possible, and helps to weaken the limitation of battery energy in WSNs. The main objective of target tracking is to improve the tracking accuracy and to optimize the resource utilization, hence sensor scheduling is essential. To minimize the global performance based on tracking error and energy consumption, this paper proposes a novel adaptive sensor scheduling approach using the mixed iterative adaptive dynamic programming (MIADP). MIADP consists of two iterations, P-iteration to update the infinite-step iterative value function and V-iteration to obtain the multi-step iterative control law sequence. Finally, simulations demonstrate that our scheme has advantages in the global trade-off between energy cost and tracking accuracy.
11:10 Efficient Factor Graph Fusion for Multi-robot Mapping and Beyond
This work presents a novel method to efficiently factorize the combination of multiple factor graphs having common variables of estimation. Variable ordering, a well-known variable elimination technique in linear algebra, is employed to efficiently solve a factor graph. Our primary contribution in this work is to reuse the variable orderings of the graphs being combined to find the ordering of the fused graph, called the fusion ordering. By reusing the variable orderings of the parent graphs, we were able to produce an order-of-magnitude difference in the time required for solving the fused graph. A formal verification is provided to show that the proposed strategy does not violate any of the relevant standards. The fusion ordering is tested on the standard dataset used in the sparse linear algebra community, SuiteSparse. Recent factor graph formulations for Simultaneous Localization and Mapping (SLAM), such as Incremental Smoothing and Mapping (ISAM) using the Bayes tree, have been very successful and garnered much attention. In the case of mapping, a multi-robot system has a great advantage over a single robot, providing faster map coverage and better estimation quality. We also demonstrate the improvement of our ordering scheme on a real-world multi-robot AP Hill dataset.
11:30 Distributed Evidential EM Algorithm for Gaussian Mixtures in Sensor Network with Uncertain Data
In this paper, the problem of clustering in distributed sensor networks with uncertain measurements is considered. It is assumed that each node in the sensor network can be described as a mixture of some elementary conditions. Therefore, the measurements of the sensors can be modeled using a Gaussian mixture model, in which the uncertainty on the attributes is represented by the belief functions. We present a novel algorithm, called distributed evidential expectation maximization (DEEM) algorithm, for the estimation of the Gaussian components in the mixture model. The effectiveness of the proposed algorithm is demonstrated through simulations of sensor networks with uncertain data.
11:50 User Positioning in mmW 5G Networks Using Beam-RSRP Measurements and Kalman Filtering
In this paper, we exploit the 3D-beamforming features of multiantenna equipment employed in fifth generation (5G) networks, operating in the millimeter wave (mmW) band, for accurate positioning and tracking of users. We consider sequential estimation of users' positions, and propose a two-stage extended Kalman filter (EKF) that is based on reference signal received power (RSRP) measurements. In particular, beamformed downlink (DL) reference signals (RSs) are transmitted by multiple base stations (BSs) and measured by user equipments (UEs) employing receive beamforming. The so-obtained beam-RSRP (BRSRP) measurements are fed back to the BSs where the corresponding directions of departure (DoDs) are sequentially estimated by a novel EKF. Such angle estimates from multiple BSs are subsequently fused on a central entity into 3D position estimates of UEs by means of an angle-based EKF. The proposed positioning scheme is scalable since the computational burden is shared among different network entities, namely transmission/reception points (TRPs) and 5G-NR Node B (gNB), and may be accomplished with the signalling currently specified for 5G. We assess the performance of the proposed algorithm on a realistic outdoor 5G deployment with a detailed ray tracing propagation model based on the METIS Madrid map. Numerical results with a system operating at 39GHz show that sub-meter 3D positioning accuracy is achievable in future mmW 5G networks.

4g - Networks/ Community Detection/ Sentiment analysis/Anomaly detection

Room: LR12
Chair: Muralidhar Rangaswamy
10:30 Controlled Sentiment Sampling for Information Fusion in Social Networks
This paper deals with the problem of information fusion for state/parameter estimation in social networks. The information consists of the sentiment of the opinions expressed by people. The average sentiment of the opinions expressed by people constitutes a noisy observation of an unknown state. A controller seeks to estimate the state by controlling the dynamics of sampling so as to minimize an objective function comprising the state estimation error and the cost of acquiring sentiments. The stochastic control problem is formulated as a partially observed Markov decision process (POMDP), and sufficient conditions under which a myopic policy forms an upper bound to the optimal policy of the POMDP are provided. The myopic policy minimizes the immediate costs while ignoring the expected costs incurred over time, and is computationally inexpensive for large state spaces. Finally, the performance of the proposed myopic policy is evaluated for a POMDP formulation whose parameters are computed from a real data set obtained from Twitter.
10:50 Semi-supervised Soft Label Propagation Based on Mass Function for Community Detection
With the growing complexity of networks in practical applications, the accuracy and robustness of community detection approaches need to be improved. Semi-supervised label propagation (SLP) is known for its near-linear computation and full use of prior information. However, the performance of existing SLP algorithms is seriously affected by outliers, and the label information propagated in the process is categorical, which leads to inaccurate detection results. A semi-supervised soft label propagation algorithm based on mass functions (SSLP) is proposed in this paper. We handle the imprecise community assignment of each node with a mass function, which can be regarded as a soft label, while the mass assigned to the whole set of communities measures the degree to which a node is an outlier. Under the manifold assumption, the label of each node depends on its neighbors, whose label information is combined by Dempster's rule of combination. After convergence of SSLP, detection results are given based on the mass matrix. Experiments on several real-world and artificial networks show that the proposed SSLP is competitive with, and often better than, existing methods.
11:10 Maritime Anomaly Detection Based on Mean-Reverting Stochastic Processes Applied to a Real-World Scenario
A novel anomaly detection procedure is presented, based on the Ornstein-Uhlenbeck (OU) mean-reverting stochastic process. The considered anomaly is a vessel that deviates from a planned route, changing its nominal velocity. In order to hide this behavior, the vessel switches off its Automatic Identification System (AIS) device for a certain time, and then tries to revert to the previous nominal velocity. The decision that has to be taken is whether or not a deviation happened, relying only upon two consecutive AIS contacts. A statistical hypothesis testing procedure that builds on changes in the long-term velocity parameter of the vessel's OU process is the core of the proposed approach and enables the solution of the anomaly detection problem.
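The OU transition density between two AIS contacts has a closed form, which is what makes such a two-contact test possible; the sketch below computes it and a simple standardized residual (the paper's actual test statistic on the long-term velocity parameter is not reproduced, and the scalar-velocity setup is an assumption):

```python
import math

def ou_mean_var(v0, dt, theta, mu, sigma):
    """Mean and variance of an Ornstein-Uhlenbeck velocity after time dt,
    starting from v0, with reversion rate theta, long-run mean mu, and
    diffusion coefficient sigma."""
    a = math.exp(-theta * dt)
    mean = a * v0 + (1.0 - a) * mu
    var = sigma**2 * (1.0 - a * a) / (2.0 * theta)
    return mean, var

def deviation_score(v_obs, v0, dt, theta, mu, sigma):
    """Standardized residual of the velocity observed at the second AIS
    contact under the nominal (no-deviation) hypothesis; a large |score|
    supports declaring a deviation."""
    mean, var = ou_mean_var(v0, dt, theta, mu, sigma)
    return (v_obs - mean) / math.sqrt(var)
```

For long AIS gaps the predicted mean reverts to the nominal velocity `mu` and the variance to the stationary value `sigma^2 / (2 theta)`, so a vessel that rejoins its nominal velocity after a deviation produces a small score, which is exactly why the test must target the long-term velocity parameter itself.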
11:30 Hybrid Bernoulli Filtering for Detection and Tracking of Anomalous Path Deviations
This paper presents a solution to the problem of sequential joint anomaly detection and tracking of a target subject to switching unknown path deviations. Based on a dynamic model described by Ornstein-Uhlenbeck (OU) stochastic processes, the anomaly is represented by a target (e.g., a marine vessel) that deviates from a preset route by changing its nominal mean velocity. The Random Finite Set (RFS) paradigm is adopted to model the switching nature of target's anomalous behavior in the presence of spurious measurements and detection uncertainty. Combining these two ingredients, the problem of jointly detecting target's path deviations and estimating its kinematic state can be formulated within the Bayesian framework, and analytically solved by means of a hybrid Bernoulli filter that sequentially updates the joint posterior density of the unknown OU velocity input (a Bernoulli RFS) and of the target's state random vector. We illustrate the effectiveness of the proposed filter, implemented in Gaussian-mixture form, in a simulated scenario of vessel tracking for maritime traffic monitoring.
11:50 Short Term Traffic Flow Prediction with Particle Methods in the Presence of Sparse Data
Traffic estimation and prediction approaches face challenges when dealing with missing data, for instance when sensors are not operational. Also, the communication infrastructure over which traffic measurements are transmitted for processing can cause more than 40% missing data in some cases, e.g. due to weather conditions. The cost of installing and managing traffic sensing devices is high, making it impracticable to cover all locations needed for effective observation of the full road network, resulting in sparse data. This work adds to the existing body of knowledge by proposing a particle-based framework for dealing with these challenges. An expression for the likelihood function is derived for the case when the missing value is computed by Kriging interpolation. With Kriging interpolation, the missing measurement values are predicted and subsequently used to evaluate the likelihood terms in the particle algorithm. The results show 23% to 36.34% improvement in RMSE values for the synthetic data used.
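The Kriging step that fills the missing measurements can be sketched as follows; the squared-exponential covariance, zero-mean (simple Kriging) assumption, and parameter names are illustrative, not the paper's choices:

```python
import numpy as np

def simple_kriging(x_obs, y_obs, x_new, length=1.0, sill=1.0, nugget=1e-6):
    """Simple Kriging with a squared-exponential covariance: predicts the
    missing measurement at location x_new from observed (x_obs, y_obs),
    assuming a zero-mean field (in practice, subtract and re-add the mean)."""
    x_obs = np.asarray(x_obs, float)

    def cov(a, b):
        return sill * np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

    K = cov(x_obs, x_obs) + nugget * np.eye(len(x_obs))
    k = cov(x_obs, np.asarray([x_new], float))
    w = np.linalg.solve(K, k).ravel()       # Kriging weights
    return float(w @ np.asarray(y_obs, float))
```

The interpolated value then stands in for the missing sensor reading when the particle filter's likelihood terms are evaluated.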

4h - Multisensor Fusion

Room: JDB-Seminar Room
Chair: Martin Ulmke
10:30 Correlation of Gaussian Mixture Tracks
In this paper, methods are developed and evaluated for the correlation of Gaussian mixture tracks from two sensors. The hypothesis likelihoods for the case of a single target are given using the minimum mean square error and the maximum likelihood estimates of common origin between two Gaussian mixtures. A correlation test is developed as a likelihood ratio of the single target hypothesis to the hypothesis of two separate targets. A negative log likelihood cost is formulated and used in an optimal assignment method to perform track-to-track correlation for multiple targets between two sensors. Simulations are performed to compare the minimum mean square error and maximum likelihood approaches with Gaussian mixture tracks to a baseline method using unbiased converted measurements for sensors with a given probability of detection and bias significance. Results are shown to compare the performance of the correlation methods with respect to probability of correct correlation and root mean squared error versus track density for several different aspect angles between sensors.
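For the special case of single-Gaussian tracks, the common-origin test underlying such correlation methods reduces to a chi-square gate on the track difference; the sketch below shows that baseline case only (the paper's Gaussian-mixture likelihoods sum over component pairings, which is not reproduced, and independent track errors are assumed):

```python
import numpy as np

def correlation_test(x1, P1, x2, P2, gate):
    """Single-Gaussian track-to-track correlation test: accept the
    common-origin hypothesis if the Mahalanobis distance of the track
    difference falls below a chi-square gate. Assumes independent
    track errors, so S = P1 + P2."""
    d = np.asarray(x1, float) - np.asarray(x2, float)
    S = np.asarray(P1, float) + np.asarray(P2, float)
    m2 = float(d @ np.linalg.solve(S, d))
    return m2, m2 < gate
```

The negative log of such pairwise likelihoods provides the costs fed to the optimal assignment step for the multi-target case.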
10:50 Multi-sensor Multi-object Tracking with Different Fields-of-view Using the LMB Filter
A key issue in multi-sensor surveillance is the capability to surveil a much larger region than the field-of-view (FoV) of any individual sensor by exploiting cooperation among sensor nodes. Whenever a centralized or distributed information fusion approach is undertaken, this goal cannot be achieved unless a suitable fusion approach is devised. This paper proposes a novel approach for dealing with different FoVs within the context of Generalized Covariance Intersection (GCI) fusion. The approach can be used to perform multi-object tracking on both a centralized and a distributed peer-to-peer sensor network. Simulation experiments on realistic tracking scenarios demonstrate the effectiveness of the proposed solution.
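In the single-Gaussian case, GCI fusion reduces to classical covariance intersection, sketched below (this is only the basic fusion rule; the paper's contribution, handling different FoVs and multi-object densities, is not reproduced):

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, w):
    """Covariance intersection fusion of two Gaussian estimates with
    weight w in (0, 1): fuse in information form without assuming
    knowledge of the cross-correlation between the estimates."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    Pf = np.linalg.inv(w * I1 + (1.0 - w) * I2)
    xf = Pf @ (w * I1 @ np.asarray(x1, float) + (1.0 - w) * I2 @ np.asarray(x2, float))
    return xf, Pf
```

The weight `w` is typically chosen to minimize some scalar functional of `Pf`, such as its trace or determinant.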
11:10 Iterated Extended Kalman Filter with Implicit Measurement Equation and Nonlinear Constraints for Information-Based Georeferencing
Accurate, reliable and complete georeferencing with kinematic multi-sensor systems (MSS) is very demanding if common types of observations (e.g., GNSS) are imprecise or completely absent. The main reasons for this are challenging environments such as indoor applications or inner-city areas with shadowing and multipath effects. However, such complex and tough environments are rather the rule than the exception. Consequently, we are developing an information-based georeferencing approach which can still estimate precise and accurate pose parameters when other current methods may fail. We modified an iterated extended Kalman filter (IEKF) approach to deal with implicit measurement equations and introduced nonlinear equality constraints on the state parameters to integrate additional information. Hence, we can make use of geometric circumstances in the direct environment of the MSS and provide more precise and reliable georeferencing.
11:30 Multi-Sensor Fusion for Obstacle Detection and Recognition: A Belief-based Approach
This paper presents an obstacle detection and classification method for intelligent vehicles. We use both a camera and a radar in a multi-sensor perception framework. Our main goal is to improve the reliability of the system's pedestrian and vehicle recognition, avoiding false alarms and reducing missed detections in an uncertain environment with imprecise models. To deal with this issue, an evidential sensor fusion scheme is developed and implemented. Simulation results and preliminary experimental tests are presented and confirm the reliability improvement.
11:50 On Multi-Sensor Radar Configurations for Vehicle Tracking in Autonomous Driving Environments
Despite their significantly high price tag, LiDAR sensors dominate tracking and situational awareness generation in autonomous driving applications due to LiDAR's capability to generate a precise 3-D view of the driving environment. However, a multi-sensor Radar system can potentially provide high-resolution tracking capabilities with better system performance, visibility and complementary information than a single-Radar tracking system, yet at a significantly lower cost than LiDAR. This paper presents a multi-sensor tracking system that utilizes three different Radar configurations with applications to autonomous driving. The three configurations were compared to evaluate the tracking performance of one vs. two Radar sensors, operating with either a Stepped Frequency Waveform or a hybrid Stepped Frequency and Continuous Waveform. Utilizing standard Blind-Spot Monitoring tests as defined by the National Highway Traffic Safety Administration, an illustrative experiment was then carried out to evaluate the real-life tracking performance of all three Radar configurations.

4i - Probability and Point Process Based Methods

Room: JDB-Teaching Room
Chair: Daniel Clark
10:30 Fast Kernel Density Estimation Using Gaussian Filter Approximation
It is common practice to use a sample-based representation to solve problems having a probabilistic interpretation. In many real-world scenarios one is then interested in finding a "best estimate" of the underlying problem, e.g., the position of a robot. This is often done by means of simple parametric point estimators, providing the sample statistics. However, in complex scenarios this frequently results in a poor representation, due to multimodal densities and limited sample sizes. Recovering the probability density function using a kernel density estimate yields a promising approach to solving the state estimation problem, i.e., finding the "real" most probable state, but comes with high computational costs. Especially in time-critical and time-sequential scenarios, this turns out to be impractical. Therefore, this work uses techniques from digital signal processing in the context of estimation theory to allow rapid computation of kernel density estimates. The gains in computational efficiency are realized by substituting the Gaussian filter with an approximate filter based on the box filter. Our approach outperforms other state-of-the-art solutions, due to a fully linear complexity and a negligible overhead, even for small sample sets. Finally, our findings are evaluated and tested within a real-world sensor fusion system.
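The box-filter idea can be sketched in a few lines: bin the samples into a histogram, then apply several passes of a moving-average filter, whose composite kernel approaches a Gaussian by the central limit theorem, at linear cost in the grid size. This is an illustration with assumed parameter choices, not the authors' implementation:

```python
import numpy as np

def box_filter(signal, width):
    """Moving-average (box) filter with 'same' output length."""
    kernel = np.ones(width) / width
    return np.convolve(signal, kernel, mode="same")

def fast_kde(samples, grid, bandwidth, n_passes=3):
    """Approximate Gaussian-kernel density estimate on a regular grid."""
    dx = grid[1] - grid[0]
    edges = np.append(grid - dx / 2, grid[-1] + dx / 2)
    hist, _ = np.histogram(samples, bins=edges)
    # Box width chosen so that n_passes boxes match the target variance:
    # the variance of one box of width w samples is (w**2 - 1) * dx**2 / 12.
    w = int(round(np.sqrt(12 * bandwidth**2 / (n_passes * dx**2) + 1)))
    density = hist.astype(float)
    for _ in range(n_passes):
        density = box_filter(density, max(w, 1))
    return density / (density.sum() * dx)  # normalise to integrate to 1

rng = np.random.default_rng(0)
samples = np.concatenate([rng.normal(-2, 0.5, 500), rng.normal(2, 0.5, 500)])
grid = np.linspace(-5, 5, 512)
pdf = fast_kde(samples, grid, bandwidth=0.3)
print(grid[np.argmax(pdf)])  # a mode near one of the two mixture components
```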
10:50 Non-linear Estimation with Generalised Compressed Kalman Filter
The optimal estimation of dynamic random fields is a relevant problem in diverse areas of robotics application. The associated estimation process in these problems implicitly requires dealing with high dimensional multi-variate Probability Density Functions (PDFs) with unaffordable processing cost. The Generalised Compressed Kalman Filter (GCKF) with subsystem switching and proper information exchange architecture is capable of solving such problems with comparable performance to the optimal full Gaussian estimators but at a remarkably lower cost. In this paper, an explicit algorithm is proposed for replacing the Kalman Filter core with a suitable Gaussian Filter core to solve non-linear estimation problems. The computational advantages of GCKF are highlighted, where the computational complexities of different Gaussian Filters are compared against their compressed counterpart. The performance of the algorithm has been verified through its application in solving linear Stochastic Partial Differential Equations (SPDEs) with unknown parameters.
11:10 A Linear-Complexity Second-Order Multi-Object Filter via Factorial Cumulants
Multi-target tracking solutions with low computational complexity are required in order to address large-scale tracking problems. Solutions based on statistics determined from point processes, such as the PHD filter, the CPHD filter, and the newer second-order PHD filter, are some examples of these algorithms. Few of these solutions are of linear complexity in the number of targets and the number of measurements, with the PHD filter being the exception. However, the trade-off is that it is unable to propagate beyond first-order moment statistics. In this paper, a new filter is proposed with the same complexity as the PHD filter that also propagates second-order information via the second-order factorial cumulant. The results show that the algorithm is more robust than the PHD filter in challenging clutter environments.
11:30 Alternative EM Algorithms for Nonlinear State-space Models
The expectation-maximization algorithm is a commonly employed tool for system identification. However, for a large set of state-space models, the maximization step cannot be solved analytically. In these situations, a natural remedy is to make use of the expectation-maximization gradient algorithm, i.e., to replace the maximization step by a single iteration of Newton's method. We propose alternative expectation-maximization algorithms that replace the maximization step with a single iteration of some other well-known optimization method. These algorithms parallel the expectation-maximization gradient algorithm while relaxing the assumption of a concave objective function. The benefit of the proposed expectation-maximization algorithms is demonstrated with examples based on standard observation models in tracking and localization.
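A toy illustration of the idea, with assumed names and a two-component Gaussian mixture (unit variances, equal weights): the closed-form M-step is replaced by a single gradient ascent step on the expected complete-data log-likelihood:

```python
import numpy as np

def em_gradient(data, mu, n_iter=200, lr=0.5):
    """EM for a two-component Gaussian mixture with the M-step replaced by
    a single gradient ascent step on the expected complete-data
    log-likelihood.

    Toy illustration of the 'EM gradient' idea; the paper studies
    replacing the M-step with other well-known optimisation methods.
    """
    mu = np.array(mu, dtype=float)
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each sample
        logp = -0.5 * (data[:, None] - mu[None, :]) ** 2
        r = np.exp(logp - logp.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # Gradient M-step: one ascent step instead of the closed-form update
        grad = (r * (data[:, None] - mu[None, :])).sum(axis=0)
        mu += lr * grad / len(data)
    return mu

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-3, 1, 400), rng.normal(3, 1, 400)])
mu = em_gradient(data, mu=[-1.0, 1.0])
print(np.sort(mu))  # component means recovered near [-3, 3]
```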
11:50 Fusion of Dependent Detection Systems Using Copula Theory
The US Air Force has multiple detection systems for specific applications that could be combined to work together to yield better accuracy than the individual systems. The design, building, simulation, testing, validation and verification of such a combination can be long and expensive. Also, there can be several ways to combine these multiple systems, adding further time and cost to determine an optimal (or approximately optimal) combination rule. This paper considers a simple version of this greater problem, posed as follows. Suppose we have two legacy detection system families that are designed to detect the same "target", and we conjecture that combining them would yield a new detection system with improved accuracy. Suppose we know the ROC functions of both detection system families, but do not know (or have access to) the data that produced them. Can we construct the ROC function of the combined systems from the individual ROC functions? Copula theory, in existence since 1959, provides the means to model the dependence between random variables. This paper applies copula theory to the fusion of detection systems. Examples are given that demonstrate how the formulas are used.
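As a hedged illustration of how a copula couples two detectors, the sketch below combines marginal detection and false-alarm probabilities under an AND fusion rule using a Clayton copula (the copula family, its parameter, and using the same copula under both hypotheses are simplifying assumptions, not the paper's construction):

```python
import numpy as np

def clayton_copula(u, v, theta):
    """Clayton copula C(u, v); theta > 0 gives positive dependence."""
    return np.maximum(u**-theta + v**-theta - 1.0, 0.0) ** (-1.0 / theta)

def and_rule_probability(p1, p2, theta):
    """P(both detectors fire) when each fires with marginal probability
    p1, p2 and their statistics are coupled by a Clayton copula.

    Uses the survival form: P(U > 1-p1, V > 1-p2)
      = p1 + p2 - 1 + C(1-p1, 1-p2),
    which reduces to p1 * p2 as theta -> 0 (independence).
    """
    return p1 + p2 - 1.0 + clayton_copula(1.0 - p1, 1.0 - p2, theta)

# Operating point of two hypothetical legacy systems
pfa1, pd1 = 0.05, 0.90
pfa2, pd2 = 0.10, 0.85

for theta in (0.01, 2.0, 10.0):  # near-independence -> strong dependence
    pfa = and_rule_probability(pfa1, pfa2, theta)
    pd = and_rule_probability(pd1, pd2, theta)
    print(f"theta={theta:5.2f}  combined Pfa={pfa:.4f}  Pd={pd:.4f}")
```

Sweeping the threshold pairs of both systems through the same rule would trace out the combined ROC from the individual ROC functions.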

Thursday, July 12 12:10 - 13:10

Lunch

Thursday, July 12 13:10 - 14:50

5a - Data Association

Room: LR0
Chair: Felix Govaers
13:10 Narrow-Band Through-Wall Imaging with Received Signal Strength Data
This paper solves the through-wall imaging (TWI) problem with a narrow-band system, and proposes an adaptive TWI method based on data fusion of multiple scan paths. First, we use a Wentzel-Kramers-Brillouin-based (WKB-based) approximation to model the interaction of the transmitted wave with the unknown area. Then we use the inverse Radon transform to reconstruct the image from the received signal strength data of different paths. Furthermore, we evaluate the impact of scan paths on imaging. Finally, finite-difference time-domain (FDTD) simulation results demonstrate the validity of the proposed method.
13:30 A Fast Track to Track Association Algorithm by Sequence Processing of Target States
Track association and fusion possess great importance in distributed sensor systems. In this study, we propose a novel statistical method based on temporal covariance estimation of local sensor tracks over a time interval, and derive an association cost based on the Mahalanobis distance to associate tracks from different sensors. Contrary to associating the sensor tracks at each scan, we use a sequence of statistical features obtained from a state-space-based tracking algorithm and process them in blocks to reduce the false association probability. The effectiveness of the proposed method is illustrated by various three-dimensional multi-target tracking simulation scenarios. The algorithm is fast: as the number of tracks to associate increases, the computation time increases only linearly while the degradation in association performance is small.
13:50 Evidential Object Association Using Heterogeneous Sensor Data
Multiple object association is a challenging task in highly cluttered environments. Its aim is to reliably relate known objects to newly detected ones, which is hard in such conditions. The recurrent occlusions and pose variations of objects can raise ambiguity in decision-making. The objective of this paper is to establish a multi-feature fusion approach based on two sources describing, distinctly, the position and the motion direction of the considered objects. The proposed approach is based on Dempster-Shafer theory to model the uncertainty and unreliability of sources and to ensure an evidential combination. Experimental results with real data from the KITTI database are presented to evaluate the proposed solution.
14:10 Tracking Multiple Maneuvering Targets Using Integer Programming and Spline Interpolation
In this paper, we propose an integer programming based model for tracking multiple maneuverable targets in a planar region. The objective function of this model uses both pairs and triplets of observations, which offer a more accurate representation for constant-velocity targets. Triplet scores in this model are calculated using a novel approach based on cubic spline interpolation, while the data association problem is solved using a specialized multi-dimensional assignment formulation. We show that the spline interpolation based scoring model provides more accurate reconstruction of trajectories than a naive model based on linear interpolation, on various randomly generated trajectories, at the expense of a modest increase in computation time. The proposed multi-dimensional assignment formulation has nice structural properties and a tight linear programming relaxation bound, which results in small computation times.
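One way such a spline-based triplet score might look (an illustrative stand-in, not the authors' formulation): fit a natural cubic spline through three time-stamped observations and use its bending energy, so that smooth near-constant-velocity triplets score near zero while sharp manoeuvres score high:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def triplet_score(times, points):
    """Smoothness score for a triplet of time-stamped 2-D observations.

    Fits a natural cubic spline through the three points (per coordinate)
    and integrates the squared second derivative (bending energy): small
    values indicate a smooth, physically plausible track segment.
    """
    t = np.asarray(times, dtype=float)
    p = np.asarray(points, dtype=float)
    energy = 0.0
    for dim in range(p.shape[1]):
        spline = CubicSpline(t, p[:, dim], bc_type="natural")
        accel = spline.derivative(2)  # piecewise-linear second derivative
        tt = np.linspace(t[0], t[-1], 200)
        vals = accel(tt) ** 2
        # trapezoidal integration of the squared acceleration
        energy += float(np.sum((vals[1:] + vals[:-1]) / 2 * np.diff(tt)))
    return energy

t = [0.0, 1.0, 2.0]
straight = [[0, 0], [1, 1], [2, 2]]  # constant-velocity triplet
kinked = [[0, 0], [1, 1], [2, 5]]    # sharp manoeuvre at t = 1
print(triplet_score(t, straight), "<", triplet_score(t, kinked))
```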
14:30 Non-elliptical Validation Gate for Maritime Target Tracking
The usual elliptical validation regions may not give a good representation of the possible vehicle positions if the time period of the prediction is long compared to the time scales of vehicle maneuvers. This paper proposes a fast, non-elliptical validation region giving a better coverage of the possible vehicle positions in such cases.
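For reference, the conventional elliptical gate that this paper improves upon is a chi-square test on the squared Mahalanobis distance of the innovation; a minimal sketch:

```python
import numpy as np
from scipy.stats import chi2

def in_elliptical_gate(z, z_pred, S, prob=0.99):
    """Standard elliptical validation gate.

    Accepts a measurement z if its squared Mahalanobis distance to the
    predicted measurement z_pred (innovation covariance S) is below the
    chi-square gate threshold for the chosen gating probability.
    """
    d = z - z_pred
    d2 = d @ np.linalg.solve(S, d)
    return bool(d2 <= chi2.ppf(prob, df=len(z)))

z_pred = np.array([0.0, 0.0])
S = np.diag([4.0, 1.0])
print(in_elliptical_gate(np.array([3.0, 0.5]), z_pred, S))   # True: inside
print(in_elliptical_gate(np.array([10.0, 0.0]), z_pred, S))  # False: outside
```

Over a long prediction interval with possible manoeuvres, the true reachable set is not elliptical, which is exactly the mismatch the paper's non-elliptical region addresses.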

5b - SS: Novel Information Fusion Methodologies for Space Domain Awareness

Room: LR1
Chair: Kyle DeMars
13:10 A New Representation of Uncertainty for Data Fusion in SSA Detection and Tracking Problems
A key challenge in the design of a Resident Space Object (RSO) detection and tracking algorithm is the scarcity of available information on the various sources of uncertainty in the underlying estimation problem, such as the orbital mechanics or the data sources producing the observations. Some of these uncertain components lack statistical description, such as the Two-Line Elements (TLEs), so that their description as a random variable and their exploitation in a standard Bayesian filtering algorithm remains largely unexplored. This paper exploits uncertain variables and outer probability measures (o.p.m.s), a generalization of the concepts of random variables and probability distributions, in order to propose a representation of all the uncertain components of a typical RSO tracking problem that matches the information available to a space analyst, and fuse them into a coherent Bayesian estimation filter. This algorithm is then illustrated on a scenario where the kinematic state of a Low-Earth Orbit (LEO) satellite is estimated, using realistically simulated radar observations and real TLEs queried from the U.S. Strategic Command (USSTRATCOM)'s catalog.
13:30 Filtering When Object Custody is Ambiguous
Filtering involves predicting the future state of a space object in orbit about the Earth given observations (e.g. angles-only or radar measurements) of its current and past states. The task is simplest when the identity of the object is known. A recently developed "adapted structural" (AST) coordinate system enables the task to be carried out in a computationally efficient manner. Propagation for a single state (or a small number of sigma points) can be carried out using Keplerian dynamics or using a numerically more expensive propagator to accommodate perturbation effects. In either case, the uncertainty can be represented in AST coordinates as Gaussian to a high level of accuracy. An unscented Kalman filter has been developed for this situation; in particular, there is no need to use particle filters. However, when object custody is uncertain, i.e. when the latest observation might correspond to two or more objects in a catalog, the filtering task is more complicated. In this case we propose a mixture of Gaussians in AST coordinates to represent the state. The paper demonstrates the feasibility of this approach.
13:50 Fusion Methodologies for Orbit Determination with Distributed Sensor Networks
Given that a single ground-based sensor, such as a radar or electro-optical telescope, is limited to observing only a small portion of an object's orbit, tracking accuracy can be greatly improved by collecting data with multiple geographically disparate sensors. Processing the data provided by such a distributed sensor network, however, poses complications in that full cooperation, i.e., direct sharing of raw measurement data, is usually infeasible. Alternatively, cooperation within the network can be more feasibly established by instead sharing the posterior state densities produced by each sensor's tracking scheme and fusing these densities directly. This paper investigates the use of geometric averaging approaches to probability density fusion to exploit the diversity of a cooperative, distributed sensor network. These methods not only require approximate methods to perform sensor fusion, but they also require numerical procedures to determine an ideal weighting for each density. Computationally efficient approximations to these fusion techniques are formulated and compared to more expensive methods to determine the efficacy of the approximations. A numerical simulation considering the tracking of a space object in low Earth orbit with three cooperating ground-based radar stations is presented to produce conclusions on the discussed approaches.
14:10 Autonomous Multi-Phenomenology Space Domain Sensor Tasking and Adaptive Estimation
Space domain awareness using current human-in-the-loop methods is becoming less and less viable. This work presents an approach to sensor network management, maneuver detection, and adaptive estimation for tracking many non-maneuvering and multiple maneuvering satellites with a space object surveillance and identification (SOSI) network. The proposed method integrates a suboptimal partially observable Markov decision process (POMDP) with an unscented Kalman filter (UKF) to task sensors and maintain viable orbit estimates for all targets. The method also implements autonomous maneuver detection based on the innovations-squared metric. Once a maneuver is detected, the network instantiates a multiple model adaptive estimation (MMAE) filter with various possible maneuvers. This study implemented both a static multiple model (SMM) filter and an interacting multiple model (IMM) filter in order to compare the two methods. When comparing the two multiple model filters' responsiveness and accuracy in this framework, it is shown that the IMM marginally outperforms the SMM for a substantial, impulsive maneuver in three different orbital regimes.
14:30 Space Situational Awareness Sensor Tasking: A Comparison Between Step-scan Tasking and Dynamic, Real-time Tasking
Modern societies rely on space assets for a number of critical tasks, including communications, imagery, and position, navigation and timing. The interference or collision of these space assets with another orbiting object has the potential to be catastrophic. Tracking orbiting objects, known as Space Situational Awareness (SSA), is of paramount importance to ensure such events do not occur. With the number of objects orbiting Earth growing disproportionately to the number of sensors used to detect them, allocating sensor resources efficiently is becoming increasingly more important. Typically, SSA sensors are step-scanned across a region of interest in the sky to produce detections. No information about the tracked object's uncertainty, future viewing opportunity or changes in environment is used to direct the sensor, resulting in a poor utilisation of sensor resources. Tracker of Things in Space (TOTIS) was built to task, in real time, its associated sensors using information from its space object catalogue. This allows for TOTIS to react to changes in operating environment and independently manage uncertainty of all tracked objects. This produces a more efficient allocation of sensor resources. This paper presents a comparison study between the traditional step-scan tasking methodology and the methodology employed in TOTIS.

5c - SS: Indoor Positioning

Room: LR2
Chair: Roland Hostettler
13:10 Continuous-Discrete Von Mises-Fisher Filtering on S2 for Reference Vector Tracking
This paper is concerned with the tracking of reference vectors in the continuous-discrete-time setting. To this end, an Itô stochastic differential equation, using the gyroscope as input, is formulated that explicitly accounts for the geometry of the problem. The filtering problem is solved by restricting the prediction and filtering distributions to the von Mises-Fisher class, resulting in ordinary differential equations for the parameters. A strategy for approximating Bayesian updates and marginal likelihoods is developed for the class of conditionally spherical measurement distributions, which is realistic for sensors such as accelerometers and magnetometers, and includes robust likelihoods. Furthermore, computationally efficient and numerically robust implementations are presented. The method is compared to other state-of-the-art filters in simulation experiments involving tracking of the local gravity vector. Additionally, the methodology is demonstrated in the calibration of a smartphone's accelerometer and magnetometer. Lastly, the method is compared to the state of the art in gravity vector tracking for smartphones in two use cases, where it is shown to be more robust to unmodeled accelerations.
13:30 Scalable Magnetic Field SLAM in 3D Using Gaussian Process Maps
We present a method for scalable and fully 3D magnetic field simultaneous localisation and mapping (SLAM) using local anomalies in the magnetic field as a source of position information. These anomalies are due to the presence of ferromagnetic material in the structure of buildings and in objects such as furniture. We represent the magnetic field map using a Gaussian process model and take well-known physical properties of the magnetic field into account. We build local magnetic field maps using three-dimensional hexagonal block tiling. To make our approach computationally tractable we use reduced-rank Gaussian process regression in combination with a Rao-Blackwellised particle filter. We show that it is possible to obtain accurate position and orientation estimates using measurements from a smartphone, and that our approach provides a scalable magnetic SLAM algorithm in terms of both computational complexity and map storage.
13:50 Inertial Odometry on Handheld Smartphones
Building a complete inertial navigation system using the limited-quality data provided by current smartphones has been regarded as challenging, if not impossible. This paper shows that by carefully crafting the model and accounting for the weak information in the sensor samples, smartphones are capable of pure inertial navigation. We present a probabilistic approach to orientation- and use-case-free inertial odometry, which is based on double-integrating rotated accelerations. The strength of the model is in learning additive and multiplicative IMU biases online. We are able to track the phone position, velocity, and pose in real time and in a computationally lightweight fashion by solving the inference with an extended Kalman filter. The information fusion is completed with zero-velocity updates (if the phone remains stationary), altitude correction from barometric pressure readings (if available), and pseudo-updates constraining the momentary speed. We demonstrate our approach using an iPad and iPhone in several indoor dead-reckoning applications and in a measurement tool setup.
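The double-integration backbone of such a system, together with a hard zero-velocity update, can be sketched as follows (the paper's actual approach embeds this in an extended Kalman filter with learned IMU biases and soft updates; names and inputs here are assumed):

```python
import numpy as np

def dead_reckon(accels, dt, stationary):
    """Double-integrate (already rotated, gravity-free) accelerations into
    velocity and position, applying a hard zero-velocity update (ZUPT)
    whenever the stationary flag is set."""
    vel = np.zeros(3)
    pos = np.zeros(3)
    positions = [pos.copy()]
    for a, still in zip(accels, stationary):
        vel = np.zeros(3) if still else vel + a * dt  # ZUPT clamps drift
        pos = pos + vel * dt
        positions.append(pos.copy())
    return np.array(positions)

dt = 0.01
accels = np.zeros((200, 3))
accels[:50, 0] = 1.0      # accelerate forward for 0.5 s
accels[50:100, 0] = -1.0  # decelerate back to rest
stationary = [False] * 100 + [True] * 100  # ZUPT holds position afterwards
track = dead_reckon(accels, dt, stationary)
print(track[-1])  # ~0.25 m along x; no drift during the stationary phase
```

Without the ZUPT, any residual velocity error would integrate into unbounded position drift, which is why stationary detection is so valuable on low-cost IMUs.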
14:10 Proof of Concept Tests on Cooperative Tactical Pedestrian Indoor Navigation
In this paper we discuss the effect of cooperation in foot-mounted pedestrian indoor navigation. We study methods to use Ultra-Wide Band (UWB) range measurements between two pedestrians, as well as sharing location information between them. Our aim is to handle the heading offset between two separate pedestrian inertial navigation solutions and to represent the collaborators in a common coordinate frame. Furthermore, we also study the effect of the proposed method on height estimation. Our approach fuses measurements from several sensors, such as Inertial Measurement Units, UWB radios and a barometer, using Bayesian filtering. First results from tests in a realistic scenario show that the method can work in tactical operations.
14:30 Gaussian Processes for RSS Fingerprints Construction in Indoor Localization
Location-based applications have attracted increasing attention in recent years. Examples of such applications include commercial advertisements, social networking software and patient monitoring. Received signal strength (RSS) based location fingerprinting is one of the most popular solutions for indoor localization. However, collecting and maintaining a relatively large RSS fingerprint database remains a major challenge. In this work, we propose and compare two algorithms, namely the Gaussian process (GP) and the Gaussian process with variogram, to estimate and construct the RSS fingerprints from incomplete data. The fingerprint of an unknown reference point is estimated based on measurements at a limited number of surrounding locations. To validate the effectiveness of both algorithms, experiments using Bluetooth-low-energy (BLE) infrastructure have been conducted. The constructed RSS fingerprints are compared to the true measurements, and the results are analyzed. Finally, using the constructed fingerprints, the localization performance of a probabilistic fingerprinting method is evaluated.
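A minimal GP regression sketch for interpolating RSS at unsurveyed reference points (squared-exponential kernel, assumed hyperparameters; the paper's variogram-based variant is not shown here):

```python
import numpy as np

def gp_predict(X_train, y_train, X_test, length_scale=5.0, sigma_f=4.0, noise=1.0):
    """Gaussian process regression with a squared-exponential kernel,
    used to estimate RSS fingerprints at unsurveyed reference points.

    Hyperparameters here are assumed, not learned from data.
    """
    def kernel(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return sigma_f**2 * np.exp(-0.5 * d2 / length_scale**2)

    K = kernel(X_train, X_train) + noise**2 * np.eye(len(X_train))
    K_s = kernel(X_test, X_train)
    mean = K_s @ np.linalg.solve(K, y_train)
    cov = kernel(X_test, X_test) - K_s @ np.linalg.solve(K, K_s.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0, None))

# Surveyed locations (metres) and mean RSS from one BLE beacon (dBm)
X_train = np.array([[0, 0], [10, 0], [0, 10], [10, 10]], dtype=float)
y_train = np.array([-50.0, -70.0, -65.0, -80.0])
X_test = np.array([[5.0, 5.0]])  # unsurveyed reference point
mean, std = gp_predict(X_train, y_train, X_test)
print(mean, std)  # interpolated RSS with predictive uncertainty
```

In practice one would subtract a prior mean (e.g. a path-loss model) before regression, since a zero-mean GP shrinks predictions toward 0 dBm far from the data.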

5d - SS: Multi-layered Fusion Processes: Exploiting Multiple Models and Levels of Abstraction for Understanding and Sense-Making 1

Room: LR5
Chair: Lauro Snidaro
13:10 Polymorphic Information Exchange Model for the Purpose of Multi-level Fusion of Hard and Soft Information
The Joint Directors of Laboratories (JDL) model has been successfully applied to miscellaneous fusion problems for over three decades. During that time, this model, very well received by the fusion community, became something of a classic, as emerging solutions have for years been delivered with respect to the JDL fusion-level structure. However, with all due respect for the classic, not all fusion problems can be easily covered by this model. Multi-level fusion is one of these problems, where the necessity of cooperation with diverse models often becomes critical if the operation levels do not map directly onto the exact levels of the JDL model. This paper presents a concept of the polymorphic information exchange model, which can be applied for multi-level fusion purposes. The polymorphism frees the designer from the assumption of any predefined structure of the information exchange model.
13:30 Context-Based Goal-Driven Reasoning for Improved Target Tracking
Tracking objects in complex dynamic environments can be less challenging once their intents are recognized. Inferring targets' future actions from their past can be addressed via probabilistic reasoning. Context information plays a crucial role in the reasoning process as it provides additional clues about targets' intents. However, architectures combining context reasoning with target tracking are largely non-existent. The framework discussed here views a target's actions as a Hidden Markov Model (HMM) with relevant context associated with each node. At each time step, context is selected based on immediate and goal-driven sets of actions. Inference in the HMM is conditioned on prior target measurements and the belief state conditioned on context. This posterior is then compared with the target's state estimate in order to adjust the switching probability in the Interacting Multiple Model (IMM) tracking process.
13:50 Conflict Management for Bayesian and DST Multi-Sensor Occupancy Grid Mapping
Grid-based environment mapping and obstacle detection becomes increasingly challenging when sensors' readings are highly contrasting. Without measurement prediction, commonly used approaches to grid fusion weight the sensor grids equally unless specified otherwise by the user. Empirically adjusted measurement weights are tailored only for certain scenarios and are not at all suited for general-purpose mapping. It therefore becomes apparent that sensor weights need to be adjusted recursively during the map-building process. We show that discrepancies between the grids can be exploited in such a manner that fusion of contradicting information is less susceptible to sensor weighting, and the accuracy of the mapped environment can be further improved. We present a realization of such conflict-resolving occupancy grid mapping, which combines grid-based mapping and situation assessment in a holistic approach.
14:10 Control Diffusion of Information Collection for Situation Understanding Using Boosting MLNs
Information fusion includes the integration of data for situational understanding. As a situation unfolds, maintaining awareness depends on diverse collections of data. In complex and dynamic scenarios, human operators face the difficult task of choosing which data to collect next. Hence, there is a need for multi-layered fusion processes that exploit multiple models and levels of abstraction for understanding and sense-making. Data collection has its roots in sensor management; however, there is an analogous need for data management - such as the incorporation of public-domain data. Mature sensor management includes methods to utilize platform, sensor, and scene modeling so as to guide the user in future data collection. Additionally, these physics-based models could be a method to guide human-derived information models. Using the Data Fusion Information Group (DFIG) model, we develop an equivalent method for diffusion control. This paper focuses on recent techniques in statistical relational learning (SRL), Markov logic networks (MLN), and ontologies to support the control diffusion of data sensing to answer user queries.
14:30 High-level Tracking Using Bayesian Context Fusion
This paper presents a Bayesian tracking approach that exploits various types of context information. The filtering accuracy and precision are improved by using uncertain information about (i) the constraints on the target mobility, (ii) environmental influences on the sensor performance and (iii) typical target behaviors. The approach combines particle filters with exact Bayesian networks. The overall process is equivalent to approximate inference on elaborate dynamic Bayesian networks that systematically capture non-trivial correlations between the estimated states of the dynamic processes, the associated observations and the various factors influencing the dynamic processes. The particle filter supports reasoning about continuous dynamic processes spanning large areas, while the Bayesian networks are used for the implementation of advanced sensor models and for the fusion of uncertain data on mobility constraints. The derivation of the solution is based on the decomposability principles of Bayesian networks. The approach is illustrated with the help of a challenging wildlife protection application. A set of qualitative experiments shows the improvement in tracking performance by considering the different types of context information.

5e - Stone Soup

Room: LR6
Chair: Paul Thomas

5f - Point Process Methods 2

Room: LR11
Chair: Ángel F. García-Fernández
13:10 Multipath Generalized Labeled Multi-Bernoulli Filter
Traditional multitarget tracking algorithms assume that each target can generate at most one detection per scan. However, in over-the-horizon radar (OTHR), a target may produce multiple detections because of multipath propagation. In this paper, we propose a new algorithm, called the multipath generalized labeled multi-Bernoulli (MP-GLMB) filter, to effectively track multiple targets in such multiple-detection systems. The proposed technique is based on the labeled random finite set (RFS), which estimates the number of targets and the trajectories of their states. The proposed MP-GLMB filter is compared with the multipath versions of the probability hypothesis density (PHD) filter and the multi-target multi-Bernoulli (MeMBer) filter, and simulation results show that our algorithm has improved tracking performance.
13:30 Trajectory Probability Hypothesis Density Filter
This paper presents the probability hypothesis density (PHD) filter for sets of trajectories: the trajectory probability hypothesis density (TPHD) filter. The TPHD filter is capable of estimating trajectories in a principled way without requiring the evaluation of all measurement-to-target association hypotheses. The TPHD filter is based on recursively obtaining the best Poisson approximation to the multitrajectory filtering density in the sense of minimising the Kullback-Leibler divergence. We also propose a Gaussian mixture implementation of the TPHD recursion.
13:50 Receding Horizon Estimation for Multi-Target Tracking via Random Finite Set Approach
This paper proposes a multi-target tracking algorithm that is robust to uncertainty in dynamic motion modeling. To address this issue, the multi-target tracking problem is formulated under the random finite set (RFS) framework with finite-length-memory filtering, called receding horizon estimation (RHE). The proposed algorithm is based on the generalized labeled multi-Bernoulli (GLMB) filter, which enables RHE for multi-target tracking. The proposed algorithm, the receding horizon GLMB (RH-GLMB) filter, is evaluated through a numerical example and on visual tracking datasets where dynamic modeling uncertainty exists.
14:10 Distinguishing Wanted and Unwanted Targets Using Point Processes
In many applications, objects of interest navigate in the same environment with unimportant objects that show similar motion behaviours. One prominent example is maritime surveillance in the presence of sea clutter since the sea often looks like a strongly fluctuating population of real targets due to the temporal correlation found in radar measurements of the sea surface. Conventional clutter models usually do not account for temporal correlation but model clutter as spontaneous instances of false measurements. In contrast, it would be desirable to describe such "undesired targets" with their own mathematical model in order to distinguish them properly from the population of true targets. This paper presents a variation of the Panjer Probability Hypothesis Density (PHD) filter which propagates two populations at the same time, assuming their independence. The performance of the proposed method is analysed on simulated data using a Gaussian-Mixture implementation.
14:30 GMPHD Based Multi-Scan Clutter Sparsity Estimation
To solve the problem of multi-target tracking in clutter of unknown density, a Gaussian mixture probability hypothesis density (GMPHD) based multi-scan clutter sparsity estimation (MCSE) algorithm is proposed. First, the GMPHD filter is used to estimate the cardinality and states of the targets using the clutter density from the previous step. Then all measurements originating from targets are eliminated online, which reduces the effect of target-originated measurements on the clutter density estimate. Finally, a multi-scan clutter sparsity estimation algorithm is proposed to update the current clutter density. Simulation results verify the effectiveness of the proposed algorithm.

5g - Sensor Registration

Room: LR12
Chair: Wen Yang
13:10 Joint Registration of Multiple Point Sets by Preserving Global and Local Structure
Previous works on joint multiple point set registration often use the Gaussian mixture model (GMM), and registration is then cast as a clustering problem that aims to exploit global relationships among the multiple point sets. Local relationships among the point sets are usually ignored. In this paper, the multiple point sets are assumed to be generated by a GMM, and local features, such as shape context, are proposed to infer the membership probabilities of the GMM. Joint multiple point set registration is then performed by maximum likelihood. The parameters of the GMM and the registration are estimated by an expectation-maximization algorithm. Comprehensive experiments demonstrate that our proposed method outperforms state-of-the-art methods.
13:30 Joint Tracking and Registration in Multi-Target Multi-Sensor Surveillance Using Factor Graphs
This paper describes a new approach for joint tracking and registration in multi-target multi-sensor surveillance in the presence of sensor biases. Due to the mutual dependency on target state and sensor bias, measurements cause cross-couplings between target and bias estimates. Therefore, an all-in-one filter (e.g. an extended Kalman filter) is only practical for a limited number of sensors and targets without impairing real-time capability. Since Bayesian filtering has historically been the method of choice in target tracking applications, attempts are made to decouple bias estimation from the tracking filters for the sake of computational efficiency. Current approaches either solve the registration problem first and ignore remaining bias errors during the tracking phase, or make simplifying assumptions about cross-correlations in order to decouple the tracking filters from bias estimation. In contrast, we propose a joint estimator for both target states and sensor biases that preserves real-time capability and exhibits a decoupling in which all cross-correlations are considered. The basic idea is driven by graphical inference methods on factor graphs and entails a departure from the classical filtering framework. We introduce a multiple-model state transition factor to cope with non-cooperative targets. The benefits are threefold. First, conditional independence between target states given the mutual biases is exploited to attain high parallelism, scalable in the number of targets. Second, operational delays are kept to a minimum as registration and tracking are performed simultaneously. Third, out-of-sequence measurements are easily integrated. The proposed algorithm is evaluated using a simulated scenario.
13:50 On Universal Sensor Registration
We present a simple approach for sensor registration in target tracking applications. The proposed method uses targets of opportunity and, without making assumptions on their dynamical models, allows simultaneous calibration of multiple three- and two-dimensional sensors. Whereas for two-sensor scenarios only relative registration is possible, in practical cases with three or more sensors unambiguous absolute calibration may be achieved. The derived algorithms are straightforward to implement and do not require tuning of parameters. The performance of the algorithms is tested in a numerical study.
14:10 Accurate Registration of Multitemporal UAV Images Based on Detection of Major Changes
Accurate registration of multitemporal images captured by UAVs usually involves affine transformation and complicated non-rigid transformation, which makes it very difficult to achieve satisfying results. Pixel-wise correspondence is effective for handling images with complicated non-rigidity. However, objects with changes deform severely, because no pixel-wise correspondence should exist there while the algorithm wrongly matches the pixels. In this paper, we propose a coarse-to-fine registration method for multitemporal UAV images. First, a projective model is used to eliminate large-scale changes as well as perspective distortion. Then the major changes between the different temporal UAV images are detected and used to mask the dense matches in changed areas. Finally, the optical flow field method is used to handle complicated non-rigid changes by matching dense SIFT features. Experimental results on a challenging set of multitemporal UAV images demonstrate the effectiveness of our approach.
14:30 Consensus and EM Based Sensor Registration in Distributed Sensor Networks
Conventional approaches to the sensor registration problem require a centralized processing structure, a demand that cannot be met in a distributed sensor network (DSN). In this paper, we address the sensor registration problem in the DSN. For this purpose, we embed a consensus algorithm into the EM iteration procedure such that the conditional expectation of the log-likelihood function can be evaluated in a fully distributed way. Thus each sensor can obtain its estimated bias. Simulations are performed to demonstrate the effectiveness of the proposed algorithms.
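The distributed evaluation inside the E-step rests on average consensus, which lets every node converge to a network-wide average through purely local exchanges. The following is an illustrative sketch of plain average consensus, not the paper's combined consensus-EM algorithm; the graph, step size `eps`, and iteration count are assumptions:

```python
import numpy as np

def average_consensus(values, A, eps, iters):
    """Plain average-consensus iterations on an undirected graph.
    values: initial scalar per node; A: 0/1 adjacency matrix;
    eps must be smaller than 1/max_degree for convergence."""
    x = np.array(values, dtype=float)
    for _ in range(iters):
        # each node moves toward the average of its neighbours
        x = x + eps * (A @ x - A.sum(axis=1) * x)
    return x
```

Because the update matrix is doubly stochastic, the network-wide sum is preserved at every iteration, so all nodes converge to the initial average.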

5h - Image Fusion

Room: JDB-Seminar Room
Chair: Simon Maskell
13:10 The Information Fusion of the IR and UV Image for the Insulation Fault Diagnosis
Infrared (IR) and ultraviolet (UV) images can reflect the abnormal temperature rise and the partial discharge of insulation equipment, respectively. Fusing them is helpful for accurate diagnosis of insulation faults. In this paper, the information fusion of IR and UV images for insulation fault diagnosis is studied in two respects, image fusion and feature fusion, to improve the visual effect and the accuracy of fault diagnosis. For image fusion, we extracted the edges of the insulation equipment in the IR and UV images, used Speeded Up Robust Features (SURF) for IR-UV image registration, and then proposed an IR-UV image fusion algorithm and a post-fusion feature extraction method. For feature fusion, the IR and UV image features were studied in experiments with different humidity and insulation fault levels. Based on the experimental results, a fuzzy inference model of insulation faults based on the IR-UV image features is constructed, realizing information fusion of the IR and UV images at the feature level. The accuracy test shows that the proposed IR-UV image fusion algorithm fully retains the external insulation fault features and enhances the visual effect. After fusing the IR and UV image features, the accuracy of insulation fault diagnosis increased to 94%.
13:30 MIMO Through-Wall-Radar Spotlight Imaging Based on Arithmetic Image Fusion
In this paper, we present a simple single-side, two-location spotlight imaging method for mapping the wall layout of buildings and for detecting stationary targets within buildings using multiple-input multiple-output (MIMO) through-wall radar. Rather than imaging the building walls directly, images of all building corners are generated to infer the wall layout indirectly, by successively deploying the MIMO through-wall radar at two appropriate locations on only one side of the building and then carrying out spotlight imaging with two different squint views. Phase coherence factor (PCF) weighting is introduced to suppress the interference of multipath ghosts and grating/side lobes in the two single-location images, and an arithmetic fusion strategy is designed to fuse the two single-location images into one panoramic image with all corner images and clear target images. Computer Simulation Technology (CST) electromagnetic simulation validates the proposed imaging method.
13:50 Comparing Interrelationships Between Features and Embedding Methods for Multiple-View Fusion
Manifold embedding techniques have properties that render them attractive candidates for learning a compact and general representation of a three-dimensional spatial object that can be used for object recognition through classification. This paper presents a comparative study of several supervised spectral embedding techniques and their relationship with the feature space used to describe the exemplars given as inputs to an embedding procedure. By concentrating on this aspect, we are able to highlight preferential combinations of feature description and embedding, and we formulate recommendations on the use of such methods for fusing multiple views of an object to recognize it under variable poses.
14:10 A New Image and Video Fusion Method Based on Cross Bilateral Filter
Image fusion is quite common in applications such as digital photography, medical imaging, surveillance, and remote sensing. Designing a common fusion algorithm for all image fusion applications is a challenging task. The cross bilateral filter based image fusion (CBFF) method is one such general-purpose method. It can be applied to both mono-modal and multi-modal image fusion applications. However, CBFF has some drawbacks: 1) it introduces artifacts or extraneous information into the fused image, and 2) its runtime is high. To solve these problems and further improve performance, we propose a new image fusion algorithm based on the cross bilateral filter by designing a simple and efficient image fusion rule. Experiments are conducted on both images and videos. Results are analyzed using recent fusion metrics in addition to qualitative and runtime analysis, and demonstrate that the proposed algorithm can be used as an alternative to CBFF.
14:30 Multi-camera Matching of Spatio-Temporal Binary Features
Local image features are generally robust to different geometric and photometric transformations on planar surfaces or under narrow baseline views. However, the matching performance decreases considerably across cameras with unknown poses separated by a wide baseline. To address this problem, we accumulate temporal information within each view by tracking local binary features, which encode intensity comparisons of pixel pairs in an image patch. We then encode the spatio-temporal features into fixed-length binary descriptors by selecting temporally dominant binary values. We complement the descriptor with a binary vector that identifies intensity comparisons that are temporally unstable. Finally, we use this additional vector to ignore the corresponding binary values in the fixed-length binary descriptor when matching the features across cameras. We analyse the performance of the proposed approach and compare it with baselines.
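The final matching step the abstract describes, ignoring temporally unstable bits when comparing descriptors across cameras, amounts to a masked Hamming distance. A minimal sketch with descriptors stored as Python integers; the bit width and the OR-combination of the two views' instability vectors are assumptions, not the paper's exact formulation:

```python
def masked_hamming(d1, d2, unstable1, unstable2, nbits=256):
    """Hamming distance between two fixed-length binary descriptors,
    counting only bit positions flagged as temporally stable in both views."""
    valid = ((1 << nbits) - 1) & ~(unstable1 | unstable2)
    # XOR marks differing bits; the mask discards unstable positions
    return bin((d1 ^ d2) & valid).count("1")
```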

5i - Distributed Fusion

Room: JDB-Teaching Room
Chair: Pramod Varshney
13:10 Information Decorrelation for an Interacting Multiple Model Filter
In a sensor network, compensating for the correlated information caused by previous communication is of utmost importance for distributed estimation. In this paper, we investigate different information decorrelation approaches that can be applied when an interacting multiple model filter is used in a local sensor node. The related decorrelation and the corresponding fusion operations are discussed. The different approaches are compared on a simple distributed tracking example with a single maneuvering target.
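When the cross-correlations introduced by previous communication cannot be tracked exactly, a standard conservative fallback is covariance intersection, which fuses two estimates consistently for any unknown correlation. This is an illustrative sketch only, not one of the decorrelation approaches compared in the paper; the grid search minimizing the fused trace is an assumed weight-selection rule:

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_grid=100):
    """Covariance intersection: consistent fusion of two estimates whose
    cross-correlation is unknown.  The weight omega is chosen on a grid
    to minimize the trace of the fused covariance."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    best = None
    for w in np.linspace(0.0, 1.0, n_grid + 1):
        P = np.linalg.inv(w * I1 + (1.0 - w) * I2)  # fused covariance for this weight
        if best is None or np.trace(P) < best[0]:
            x = P @ (w * I1 @ x1 + (1.0 - w) * I2 @ x2)
            best = (np.trace(P), x, P)
    return best[1], best[2]
```

The fused covariance never claims more information than the better of the two inputs, which is what makes the result consistent regardless of the true correlation.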
13:30 Unscented Information Consensus Filter for Maneuvering Target Tracking Based on Interacting Multiple Model
This paper deals with the problem of maneuvering target tracking with networked multiple sensors. To avoid linearization error and obtain more accurate estimates for maneuvering targets, a novel distributed tracking method is proposed that combines the interacting multiple model with an unscented information consensus protocol. The pseudo-measurement matrix is computed using the unscented transform, and each model then updates its information vector and information matrix. To make the information consistent and improve tracking accuracy throughout the whole network, the weighted information consensus protocol is applied in each model. Finally, the posterior estimate at each sensor is obtained as a weighted combination of the model-conditioned estimates. Experimental results demonstrate that the proposed algorithm outperforms the existing method in terms of tracking accuracy and agreement of estimates.
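The unscented transform underlying the pseudo-measurement computation propagates a deterministic set of sigma points through the nonlinearity instead of linearizing it. A generic sketch of the standard scaled unscented transform, not the paper's specific filter; the alpha/beta/kappa values are assumptions:

```python
import numpy as np

def unscented_transform(x, P, f, alpha=0.1, beta=2.0, kappa=0.0):
    """Propagate mean x and covariance P through nonlinearity f
    using 2n+1 scaled sigma points; returns transformed mean and covariance."""
    n = len(x)
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)          # matrix square root
    sigmas = np.vstack([x, x + S.T, x - S.T])      # 2n+1 sigma points
    Wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)
    Wc[0] = Wm[0] + (1.0 - alpha ** 2 + beta)
    Y = np.array([f(s) for s in sigmas])           # push points through f
    y = Wm @ Y
    Pyy = (Wc[:, None] * (Y - y)).T @ (Y - y)
    return y, Pyy
```

For a linear map the transform is exact, which makes a convenient sanity check.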
13:50 Energy-efficient Decision Fusion for Distributed Detection in Wireless Sensor Networks
This paper proposes an energy-efficient counting rule for distributed detection that orders sensor transmissions in wireless sensor networks. In counting-rule-based detection in an $N$-sensor network, the local sensors transmit binary decisions to the fusion center, where the number of local-sensor detections among all $N$ sensors is counted and compared to a threshold. In the ordering scheme, sensors transmit their unquantized statistics to the fusion center sequentially; highly informative sensors enjoy higher priority for transmission. When sufficient evidence has been collected at the fusion center for decision making, the sensor transmissions are stopped. The ordering scheme achieves the same error probability as the optimum unconstrained-energy approach (which requires observations from all $N$ sensors) with far fewer sensor transmissions. The scheme proposed in this paper improves the energy efficiency of the counting rule detector by ordering the sensor transmissions: each sensor transmits at a time inversely proportional to a function of its observation. The resulting scheme combines the advantages offered by the counting rule (efficient utilization of the network's communication bandwidth, since the local decisions are transmitted to the fusion center in binary form) and by ordering the sensor transmissions (bandwidth efficiency, since the fusion center need not wait for all $N$ sensors to transmit their local decisions), thereby leading to significant energy savings. As a concrete example, the problem of target detection in large-scale wireless sensor networks is considered. Under certain conditions the ordering-based counting rule achieves the same detection performance as the original counting rule detector with fewer than $N/2$ sensor transmissions; in some cases, the savings in transmissions approach $(N-1)$.
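The early-stopping logic the abstract describes can be made concrete: once sensors transmit in decreasing order of their statistics, the first statistic at or below the local threshold guarantees that no later sensor can contribute a detection. A toy sketch under that assumption; the parameter names and stopping rule are illustrative, not the paper's exact scheme:

```python
import numpy as np

def ordered_counting_rule(stats, tau, k):
    """Counting rule with ordered transmissions (illustrative sketch).
    stats: local test statistics, one per sensor; a local decision is 1
    iff the statistic exceeds tau.  Sensors transmit in decreasing order
    of their statistic (i.e. at a time inversely proportional to it), so
    the fusion center stops as soon as k detections arrive, or as soon as
    one received statistic is <= tau (no later sensor can still detect).
    Returns (global decision, number of transmissions used)."""
    order = np.argsort(stats)[::-1]          # most informative sensor first
    count = 0
    for i, idx in enumerate(order, start=1):
        if stats[idx] <= tau:
            return count >= k, i             # all remaining statistics are smaller
        count += 1
        if count >= k:
            return True, i                   # threshold reached: declare target
    return count >= k, len(stats)
```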
14:10 Multi-sensor Multi-frame Detection Based on Posterior Probability Density Fusion
Multi-frame detection (MFD) and multi-sensor fusion are two popular methods that improve target detection and estimation performance by increasing the number of measurement samples. In this paper, we combine these two methods, proposing a novel multi-sensor multi-frame detection (MS-MFD) method. On one hand, MS-MFD enhances the target signal-to-noise ratio by integrating multiple measurement samples over time, improving detection ability. On the other hand, it acquires a target space-diversity gain by jointly processing measurement samples from different observation orientations, providing more accurate estimates. The proposed method consists of two steps. First, MFD processing is conducted in each sensor node, computing the local multi-frame joint posterior probability density. Then, the local densities are transmitted to the fusion center for further processing, where the global target estimates are calculated. Furthermore, to improve the implementation efficiency of MS-MFD, a Gaussian mixture model based method is proposed to approximate the distribution of the local posterior probability density, so that its transmission cost can be significantly reduced. Simulations demonstrate that the proposed methods show superior performance.
14:30 Distributed RGBD Camera Network for 3D Human Pose Estimation and Action Recognition
Skeleton-based human action recognition has recently attracted a lot of attention in the research community. 3D skeleton data is becoming easier to access due to the evolution of new depth sensors such as the Kinect v2. However, the performance of depth sensors is subject to viewpoint variations and occlusion. In this paper, we propose a novel distributed sensor data fusion method to address this problem. The information weighted consensus filter (ICF) is introduced to fuse the skeleton data so as to obtain more precise joint positions. To demonstrate the proposed idea, we captured human action sequences from different views and compared the recognition accuracy between the fused and the raw data, showing that the fused data help improve recognition performance.

Thursday, July 12 14:50 - 15:20

Refreshments

Thursday, July 12 15:20 - 17:00

6a - Algorithms for Tracking

Room: LR0
Chair: Michael Beard
15:20 Distributed Cross-Entropy delta-GLMB Filter for Multi-Sensor Multi-Target Tracking
The multi-dimensional assignment problem, and by extension the problem of finding the $T$-best multi-sensor assignments, represents the main challenge of centralized and especially distributed multi-sensor tracking. In this paper, we propose a distributed multi-target tracking filter based on the $\delta$-Generalized Labeled Multi-Bernoulli ($\delta$-GLMB) family of Labeled Random Finite Set (LRFS) densities. Consensus is reached on high-scoring multi-sensor assignments jointly across the network by employing the cross-entropy method in conjunction with average consensus. This ensures that multi-sensor information is used jointly to select high-scoring multi-assignments without exchanging the measurements across the network and without exploring all possible single-target multi-assignments. In contrast, tracking algorithms that rely on posterior fusion, i.e., merging local posteriors of neighboring nodes until convergence, are sub-optimal because they use only local information to select the $T$-best local assignments in the construction of local posteriors. Numerical simulations showcase the performance improvement of the proposed method with respect to a posterior-fusion $\delta$-GLMB filter.
15:40 Target Tracking Using an Asynchronous Multistatic Sensor System with Unknown Transmitter Positions
This paper considers the problem of target tracking using an asynchronous multistatic system with unknown transmitter positions. In such a system, the receiver is treated as the own sensor and performs passive tracking. It listens to the signals from at least two non-cooperative transmitters via direct and indirect (target-bounced) paths. The transmitters and targets are then tracked based on the measured bearings and the bistatic ranges (derived from the time difference of arrival (TDOA) between the direct- and indirect-path signals). Since the transmitter positions are unknown, they have to be estimated, and their estimates will contain errors. To cope with these errors, we develop an iterated least squares estimator with covariance inflation (ILS-CI) for track initiation, and apply the covariance inflation filter (CIF) for track update. Four approaches, namely the optimal, simple, covariance inflation (CI) and combined approaches, with different strategies for track initiation and track update, are proposed to solve this tracking problem. Their performance is evaluated through simulation tests.
16:00 Performance Evaluation for Large-scale Multi-target Tracking Algorithms
The traditional way of applying the Optimal Sub-Pattern Assignment (OSPA) metric cannot fully evaluate multi-target tracking performance, as it does not account for phenomena such as track label switching and track fragmentation. The OSPA(2) has been proposed as a technique for applying the OSPA distance in a way that captures these effects, while retaining the properties of a true metric. In this paper, we demonstrate the behaviour of the OSPA(2) on numerical examples, discuss some of its advantages and limitations, and show that it can be applied to performance evaluation of large-scale scenarios on the order of a thousand targets.
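For reference, the base OSPA distance that OSPA(2) builds on combines a cutoff-limited optimal assignment cost with a cardinality penalty. A small sketch of plain OSPA (brute-force assignment, so only suitable for tiny sets; the OSPA(2) extension over track windows is not shown):

```python
import numpy as np
from itertools import permutations

def ospa(X, Y, c=10.0, p=1):
    """OSPA distance between two finite point sets X (m,d) and Y (n,d)
    with cutoff c and order p."""
    m, n = len(X), len(Y)
    if m == 0 and n == 0:
        return 0.0
    if m == 0 or n == 0:
        return float(c)                       # pure cardinality mismatch
    if m > n:                                 # ensure m <= n
        X, Y, m, n = Y, X, n, m
    # pairwise distances, truncated at the cutoff, raised to order p
    D = np.minimum(np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1), c) ** p
    best = min(sum(D[i, j] for i, j in enumerate(perm))
               for perm in permutations(range(n), m))
    return float(((best + c ** p * (n - m)) / n) ** (1.0 / p))
```

Unmatched points each contribute the full cutoff `c`, which is how cardinality errors enter the metric.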
16:20 Quanta Tracking Algorithm for Low SNR Targets: How Low Can It Go?
One of the main attributes of the Quanta Tracking (QT) algorithm is its ability to track dim targets. Whenever this algorithm is presented, the question usually arises: what is the lowest signal-to-noise ratio (SNR) that it can track? This question is not as straightforward to answer as it appears. Before determining how low an SNR can be tracked, a few definitions have to be established. First, the very definition of SNR needs to be decided; then, the definition of what it means to successfully track. Once these two definitions are fixed, an experiment can be performed to answer the main question. In this paper, we define SNR for this application and the threshold for a target being "tracked". Finally, we present results that measure how low the SNR can be for this algorithm to track.
16:40 Higher Degree Cubature Quadrature Kalman Filter for Randomly Delayed Measurements
Existing state estimators assume that measurements are available sequentially at each time step. In real-life networked control problems this may not be true, particularly where the estimators are located remotely and measurements are received through a common unreliable network. In such scenarios, due to limited communication capacity, measurements are generally delayed in a random manner. In this paper, we develop a higher-degree cubature quadrature Kalman filter (HDCQKF) for a nonlinear system with arbitrary-step randomly delayed measurements. With the help of two examples, it is shown that the randomly delayed HDCQKF provides more accurate estimation than the randomly delayed CKF.

6b - SS: Remote Sensing Data Fusion

Room: LR1
Chair: Jiaojiao Tian
15:20 Time-series 3D Building Change Detection Based on Belief Functions
One of the challenges of building change detection from remote sensing images is distinguishing building changes from other types of land cover alteration. Height information can be of great assistance for this task, but its performance is limited by the quality of the height data, and standard automatic methods are still lacking. We propose a building change detection approach based on very high resolution stereo time-series data that focuses on the use of time-series information. In the first step, belief functions are explored to fuse the change features from the 2D and height maps to obtain an initial change detection result. In the second step, the building probability maps (BPMs) from the series data are adopted to refine the change detection results based on Dempster-Shafer theory. The final step fuses the building change detection results of the series to obtain a final change map. The advantages of the proposed approach are demonstrated by testing it on a set of time-series data captured over North Korea.
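The Dempster-Shafer combination used in the fusion steps can be stated compactly: masses of intersecting focal elements multiply, and conflicting mass is renormalized away. A generic sketch of Dempster's rule of combination; the two-hypothesis frame in the usage below is an illustrative assumption, not the paper's feature set:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic belief assignments.
    m1, m2: dicts mapping frozenset focal elements to masses summing to 1."""
    combined, conflict = {}, 0.0
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + a * b
        else:
            conflict += a * b                 # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    # renormalize by the non-conflicting mass
    return {A: v / (1.0 - conflict) for A, v in combined.items()}
```

For example, combining one source that mostly supports "change" with a second, more hesitant source concentrates the fused mass on "change" after conflict renormalization.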
15:40 Fusing Spaceborne SAR Interferometry and Street View Images for 4D Urban Modeling
City models at large scale are usually obtained by means of remote sensing techniques, such as synthetic aperture radar (SAR) interferometry and optical stereogrammetry. Despite the controlled quality of these products, such observation is restricted by the characteristics of the sensor platform, such as revisit time and spatial resolution. Over the last decade, the rapid development of online geographic information systems, such as Google Maps, has accumulated a vast amount of online images. Despite their uncontrolled quality, these images constitute a set of redundant spatio-temporal observations of our dynamic 3D urban environment, and they contain useful information that can complement remote sensing data, especially SAR images. This paper presents a preliminary study of fusing online street view images and spaceborne SAR images for the reconstruction of spatio-temporal (hence 4D) city models. We describe a general approach for geometrically combining the information of these two types of images, which are nearly impossible even to coregister without a precise 3D city model due to their distinct imaging geometry. It is demonstrated that one can obtain a new kind of city model that includes high-resolution optical texture for better scene understanding, together with the dynamics of individual buildings retrieved from SAR interferometry at millimeter precision.
16:00 Object-related Alignment of Heterogeneous Image Data in Remote Sensing
The fusion of heterogeneous image data, in particular optical images and synthetic aperture radar (SAR) images, is highly worthwhile in remote sensing tasks, as it allows the complementary information of the two data sources - such as spectral and distance measurements or different observation perspectives - to be exploited while diminishing their individual weaknesses (e.g. cloud cover, difficulty of image interpretation, limited sensor revisit). However, relating the heterogeneous data on the signal level requires a data alignment step, which cannot be realized without auxiliary knowledge. This paper addresses and discusses this fundamental fusion problem in remote sensing in the context of a framework named SimGeoI, which solves the multi-sensor alignment task based on geometric knowledge from existing digital surface models. Sections of optical and SAR images are related to individual objects using interpretation layers generated with ray tracing techniques. Results of SimGeoI are presented for a test site in London in order to motivate an object-related fusion of remote sensing images.
16:20 Geometric Multi-Wavelet Total Variation for SAR Image Time Series Analysis
A time series produced by a modern synthetic aperture radar satellite imaging sensor is a huge dataset composed of many hundreds of millions of pixels when observing large-scale earth structures such as big forests or glaciers. Concise monitoring of these big structures for unexpected change detection thus requires loading and analyzing huge spatial/polarimetric multi-temporal image series. The contributions of the present paper towards parsimonious analysis of such huge datasets form a framework with two main processing stages. The first stage is the derivation of an index called the geometric multi-wavelet total variation for fast and robust change evaluation. This index is useful for identifying significant change patterns appearing as geo-spatial non-stationarities in the multi-wavelet total variation map. The second stage is the proposal of a concise asymmetric multi-date change information matrix on regions associated with significant multi-wavelet total variations. This stage is necessary for a fine characterization of change impacts on existing geo-spatial structures. Experimental tests based on Sentinel-1 data show relevant results on a wide Amazonian forest surrounding the Franco-Brazilian Oyapock Bridge.
16:40 Kohonen-based Credal Fusion of Optical and Radar Images for Land Cover Classification
This paper presents a credal algorithm for land cover classification from a pair of optical and radar remote sensing images. SAR (synthetic aperture radar)/optical multispectral information fusion is investigated in this study for joint classification. The approach consists of two main steps: 1) relevant feature extraction applied to each sensor in order to model the sources of information, and 2) a Kohonen-map-based estimation of basic belief assignments (BBAs) dedicated to heterogeneous data. This framework deals with co-registered images and is able to handle complete optical data as well as optical data with missing values due to the presence of clouds and shadows during observation. A pair of real SPOT-5 and RADARSAT-2 images is used in the evaluation, and the experiment in a farming area shows very promising results in terms of classification accuracy and missing optical data reconstruction when some data are hidden by clouds.

6c - SS: Advances in Distributed Kalman Filtering and Fusion 2

Room: LR2
Chair: Benjamin Noack
15:20 Encrypted Multisensor Information Filtering
With the advent of cheap sensor technology, multisensor data fusion algorithms have become a key enabler for efficient in-network processing of sensor data. The information filter, in particular, has proven useful due to the simple additive structure of its measurement update equations. In order to exploit this structure for efficient in-network processing, each node in the network locally processes and combines data from its neighboring nodes. The aspired in-network processing, at first glance, prohibits efficient privacy-preserving communication protocols, and encryption schemes that allow for arbitrary algebraic manipulations are often computationally too expensive. Partially homomorphic encryption schemes constitute far more practical solutions but are restricted to a single algebraic operation on the corresponding ciphertexts. In this paper, an additively homomorphic encryption scheme is used to derive a privacy-preserving implementation of the information filter, where additive operations are sufficient to distribute the workload among the sensor nodes. However, the encryption scheme requires the floating-point data to be quantized, which impairs the estimation quality. The proposed filter and the implications of the necessary quantization are analyzed in a simulated multisensor tracking scenario.
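The additive homomorphism the filter relies on means that multiplying ciphertexts corresponds to adding plaintexts, so encrypted information vectors can be summed in-network without decryption. A toy Paillier sketch with deliberately tiny primes, for illustration only; a real deployment needs 2048-bit primes and a vetted cryptographic library, and the quantization step discussed in the abstract is omitted here:

```python
import math
import secrets

# Tiny Paillier keypair -- completely insecure, illustration only.
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)                       # private exponent
mu = pow((pow(n + 1, lam, n2) - 1) // n, -1, n)    # L(g^lam mod n^2)^(-1) mod n

def encrypt(m):
    """Encrypt integer m in [0, n) with fresh randomness r coprime to n."""
    while True:
        r = secrets.randbelow(n - 1) + 1
        if math.gcd(r, n) == 1:
            break
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """Recover the plaintext via the L-function L(x) = (x - 1) / n."""
    return (pow(c, lam, n2) - 1) // n * mu % n

# Multiplying ciphertexts adds plaintexts modulo n -- the operation an
# encrypted information-filter measurement update needs.
```

With this property, each node can multiply incoming ciphertexts into a running product and forward it, and only the key holder ever sees the summed information contribution.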
15:40 Reconstruction of Cross-Correlations with Constant Number of Deterministic Samples
Optimal fusion of estimates that are computed in a distributed fashion is a challenging task. In general, the sensor nodes cannot keep track of the cross-correlations required to fuse estimates optimally. In this paper, a novel technique is presented that provides the means to reconstruct the required correlation structure. For this purpose, each node computes a set of deterministic samples that provides all the information required to reassemble the cross-covariance matrix for each pair of estimates. As the number of samples increases over time, a method to reduce the size of the sample set is presented and studied. In doing so, communication expenses can be reduced significantly, but approximation errors may be introduced by neglecting past correlation terms. In order to keep approximation errors at a minimum, an appropriate set size can be determined and a trade-off between communication expenses and estimation quality can be found.
16:00 A Handover Triggering Algorithm for Managing Mobility in WSNs
Wireless sensor networks are useful for a large number of healthcare applications that require mobile nodes. Most existing applications rely on body area networks (BAN), which require the presence of additional devices, such as mobile phones, to transfer data from the BAN to a remote base station or server. In this paper we propose to extend BANs with personal area networks (PAN) to enable enhanced mobility and seamless collection of data. In order to improve the reliability of this merge and support high goodput, we also propose a seamless handover mechanism that enables mobile transmitters to discover and transfer communication to reliable relay nodes when the quality of an existing link deteriorates. We report how we implemented our scheme for the TinyOS and TelosB platforms and compared it with four other competitive schemes.
16:20 Distributed Estimation Using Particles Intersection
A technique is presented for combining arbitrary empirical probability density estimates whose interdependencies are unspecified. The underlying estimates may be, for example, the particle approximations of a pair of particle filters. In this respect, our approach, named hereafter particles intersection, provides a way to obtain a new particle approximation, which is better in a precise information-theoretic sense than that of any of the particle filters alone. Particles intersection is applicable in networks with potentially many particle filters. We demonstrate both theoretically and through numerical simulations that depending on the communication topology this technique leads to consensus in the underlying network where all particle filters agree on their estimates. The viability of the proposed approach is demonstrated through examples in which it is applied for multiple object tracking and distributed estimation in networks.
16:40 Decentralized Tracking in Sensor Networks with Varying Coverage
The number of sensors used in tracking scenarios is constantly increasing, which puts high demands on tracking methods to handle the resulting data streams. Central processing (ideally optimal) places a heavy load on the central node, is sensitive to inaccurate sensor parameters, and suffers from the single-point-of-failure problem. Decentralizing the tracking can alleviate these issues, but may incur considerable performance loss. The recently presented inverse covariance intersection method, proven to be consistent even under unknown track cross-correlations, is benchmarked against alternatives. Different track-to-track methods, including smoothed association over a window, are compared. A scenario with objects tracked in multiple cameras, not necessarily optimized for tracking, is used to give realism to the evaluations.
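For context, the baseline that inverse covariance intersection refines is standard covariance intersection, which fuses two estimates whose cross-correlation is unknown. A minimal sketch follows; the grid search over the mixing weight is an implementation choice for illustration, not the paper's method.

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_grid=101):
    """Fuse (x1, P1) and (x2, P2) with unknown cross-correlation.
    The weight omega is chosen to minimize the trace of the fused covariance."""
    best = None
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    for w in np.linspace(1e-3, 1.0 - 1e-3, n_grid):
        P = np.linalg.inv(w * I1 + (1.0 - w) * I2)   # fused covariance for this omega
        if best is None or np.trace(P) < best[0]:
            x = P @ (w * I1 @ x1 + (1.0 - w) * I2 @ x2)
            best = (np.trace(P), x, P)
    return best[1], best[2]

# Two track estimates of the same object from different camera nodes.
x1, P1 = np.array([1.0, 0.0]), np.diag([1.0, 4.0])
x2, P2 = np.array([1.2, 0.3]), np.diag([4.0, 1.0])
xf, Pf = covariance_intersection(x1, P1, x2, P2)
```

The fused covariance is guaranteed consistent for any omega in (0, 1), which is why CI-type rules are attractive precisely when nodes cannot keep track of cross-correlations.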

6d - SS Advanced Nonlinear Filters 3

Room: LR5
Chair: Ondřej Straka
15:20 Simultaneous Localization and Mapping Using a Novel Dual Quaternion Particle Filter
In this paper, we present a novel approach to simultaneous localization and mapping (SLAM) for planar motions based on stochastic filtering with dual quaternion particles using low-cost range and gyro sensor data. Here, SE(2) states are represented by unit dual quaternions and are further modeled stochastically by a distribution from directional statistics, such that particles can be generated by random sampling. To build the full SLAM system, a novel dual quaternion particle filter based on Rao-Blackwellization is proposed for the tracking block, which is further integrated with an occupancy grid mapping block. Unlike previously proposed filtering approaches, our method can perform tracking in the presence of multi-modal noise in unknown environments while giving reasonable mapping results. The approach is evaluated using a walking robot with on-board ultrasonic sensors and an IMU navigating in an unknown environment in both simulated and real-world scenarios.
15:40 Entropy-based Consistency Monitoring for Stochastic Integration Filter
The paper deals with state estimation of nonlinear stochastic dynamic discrete-time systems with a special focus on the stochastic integration filter. The filter is an instance of Gaussian filters, which for strongly nonlinear systems often provide inconsistent estimates. A technique for estimate consistency monitoring based on entropy is proposed, which detects optimistic estimates. For the purpose of the entropy computation, a probabilistic analysis of the stochastic integration filter behavior is carried out. The proposed consistency monitoring is illustrated in a numerical example.
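For reference, the entropy that such a consistency monitor can evaluate has a closed form for the Gaussian estimates produced by the filter. The sketch below shows only this generic quantity, not the paper's exact monitoring statistic.

```python
import numpy as np

def gaussian_entropy(P):
    """Differential entropy (nats) of N(mu, P): 0.5 * ln((2*pi*e)^n * det(P))."""
    n = P.shape[0]
    sign, logdet = np.linalg.slogdet(P)      # numerically stable log-determinant
    assert sign > 0, "covariance must be positive definite"
    return 0.5 * (n * np.log(2.0 * np.pi * np.e) + logdet)

# An optimistic filter reports a covariance that is too small, i.e. an
# entropy lower than the estimator's actual behavior warrants.
h_honest = gaussian_entropy(np.diag([1.0, 1.0]))
h_optimistic = gaussian_entropy(np.diag([0.1, 0.1]))
```

Comparing the reported entropy against one derived from an analysis of the filter's behavior is the flavor of check the abstract describes.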
16:00 Smooth Bias Estimation for Multipath Mitigation Using Sparse Estimation
Multipath (MP) remains the main source of error when using global navigation satellite systems (GNSS) in constrained environments, leading to biased measurements and thus to inaccurate estimated positions. This paper formulates the GNSS navigation problem as the resolution of an overdetermined system that depends nonlinearly on the receiver position and linearly on the clock bias and drift, and on possible biases affecting the GNSS measurements. The extended Kalman filter is used to linearize the navigation problem, whereas sparse estimation is considered to estimate the multipath biases. We assume that only some of the satellites are affected by MP, i.e., that the unknown bias vector is sparse in the sense that several of its components are equal to zero. The natural way of enforcing sparsity is to introduce an l1 regularization associated with the bias vector. This leads to a least absolute shrinkage and selection operator (LASSO) problem that is solved using a reweighted-l1 algorithm. The weighting matrix of this algorithm is designed carefully as a function of the satellite carrier-to-noise density ratios and the satellite elevations. The smooth variation of the multipath biases over time is enforced using a regularization based on total variation. Experiments conducted on real data allow the performance of the proposed method to be appreciated.
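The LASSO subproblem at the heart of such a reweighted-l1 scheme can be sketched with plain iterative soft thresholding (ISTA); the carrier-to-noise/elevation weighting and the total-variation term from the paper are omitted in this illustration.

```python
import numpy as np

def ista_lasso(A, y, lam, n_iter=500):
    """Minimize 0.5*||A @ x - y||**2 + lam*||x||_1 by iterative soft thresholding."""
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L             # gradient step on the smooth part
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft threshold
    return x

# Toy check: with A = I the minimizer is elementwise soft thresholding of y.
x = ista_lasso(np.eye(5), np.array([3.0, 0.1, -2.0, 0.0, 0.5]), lam=0.5)
# x == [2.5, 0.0, -1.5, 0.0, 0.0]: small components are driven exactly to zero,
# mirroring the assumption that only a few satellites carry multipath biases.
```

The exact zeros produced by the l1 penalty are what encode "only a part of the satellites are affected by multipath."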
16:20 Optimized Gauss-Hermite Quadrature with Application to Nonlinear Filtering
In this paper we consider the Gauss-Hermite quadrature (GHQ) rule for numerical integration. Since GHQ is exact for polynomials up to a certain degree, we propose a method to improve GHQ when the integrand is not close to a polynomial by transforming it into one that is well approximated by a polynomial, and thus fits GHQ well. The problem of optimizing the GHQ rule through this transformation is formulated and solved as a nonlinear least squares problem with linear constraints. The proposed optimized GHQ method is compared with traditional methods on two numerical examples that show its higher accuracy. While our method is applicable in many different areas, in this paper we apply it to nonlinear filtering. A new quadrature Gaussian filter is developed and compared with several popular nonlinear filters in simulations of two nonlinear examples.
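As a baseline for the approach above, plain (untransformed) Gauss-Hermite quadrature for Gaussian expectations looks as follows; the paper's contribution is the optimized transformation applied before such a rule is invoked.

```python
import numpy as np

def gh_expectation(f, mean=0.0, std=1.0, order=5):
    """E[f(X)] for X ~ N(mean, std^2) via Gauss-Hermite quadrature.
    Exact whenever f is a polynomial of degree <= 2*order - 1."""
    x, w = np.polynomial.hermite.hermgauss(order)   # nodes/weights for weight exp(-x^2)
    # Change of variables x -> mean + std*sqrt(2)*x maps the rule to N(mean, std^2).
    return float(np.sum(w * f(mean + std * np.sqrt(2.0) * x)) / np.sqrt(np.pi))

fourth_moment = gh_expectation(lambda x: x ** 4)    # E[X^4] = 3 for a standard normal
```

When the integrand is far from a low-degree polynomial (e.g. a strongly nonlinear measurement function), this rule degrades, which is exactly the regime the paper's transformation targets.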
16:40 Stochastic Integration Filter: Theoretical and Implementation Aspects
The paper focuses on state estimation of discrete-time nonlinear stochastic dynamic systems, with a special focus on the stochastic integration filter. The filter is a representative of the Gaussian filter family and computes the state and measurement predictive moments by making use of a stochastic integration rule. As a result, the calculated values of the moments are random variables and exhibit favorable asymptotic properties. The paper analyzes theoretical consequences of using stochastic integration rules and proposes several modifications that improve the performance of the stochastic integration filter. As the filter requires multiple iterations of the stochastic rule, its computational costs are higher in comparison with other Gaussian filters. To reduce the costs, several modifications are proposed, which also address numerical stability issues. The proposed modifications are illustrated using both static and dynamic numerical examples drawn from target tracking.

6e - Localisation 2

Room: LR6
Chair: Roland Hostettler
15:20 Fast and Robust Vehicle Pose Estimation by Optimizing Multiple Pose Graphs
An essential task for an Intelligent Transportation System (ITS) is to obtain precise knowledge of local environments as well as the local (within a structured environment) and global pose. Market entry and large-scale production of autonomous driving functions impose two elementary constraints. First, the utilized sensor setup has to be both cost-efficient and space-saving. Second, the system has to be fail-safe according to Automotive Safety Integrity Level (ASIL) D. This paper presents an approach for robust, graph-based localization. Outliers are rejected, and fall-back solutions for when single sensors are permanently corrupted are provided by solving various graphs simultaneously. Furthermore, an estimation of the sensor uncertainty is presented. Experimentally, the applicability and performance of the presented approach are demonstrated using an Opel Insignia and its 2D dynamic sensors, with two additional gray-scale cameras, a low-cost GNSS receiver and a previously recorded digital map. The graph-based approach has a mean solver time of 14.48 ms and a maximal lateral error below 27.03 cm with a standard deviation (STD) of 10.05 cm, and outperforms the previously presented Extended Kalman Filter (EKF) and Particle Filter (PF) approaches.
15:40 Train Localization with Particle Filter and Magnetic Field Measurements
In this paper a particle filter for absolute train localization based on magnetic field measurements is proposed. The filter utilizes distortions of the Earth's magnetic field introduced by ferromagnetic infrastructure components along the railway track. The distortions are characteristic of a certain part of the track network and are therefore a source of position information. The particle filter introduced in this paper incorporates a previously created map of these distortions to estimate the train position. This requires only low-cost passive magnetometers and a simple movement model that accounts for the limited dynamics of a train. The feasibility of the approach is demonstrated in an evaluation with measurements collected on a train driving in a rural area. Overall, a position root mean square error below four meters could be achieved, showing that the magnetic field is a viable source of position information that is independent of other localization systems such as GNSS.
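A minimal sketch of the idea follows, with a hypothetical 1-D magnetic map and illustrative noise levels (none of the parameters below are from the paper): particles spread along the track are weighted by how well the mapped field at their position explains each magnetometer reading.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical magnetic map: field magnitude vs. position along a 1 km track
# (the paper builds such a map from prior measurement runs).
track = np.linspace(0.0, 1000.0, 2001)
field_map = 50.0 + 5.0 * np.sin(track / 30.0) + 3.0 * np.sin(track / 7.0)

def field_at(pos):
    return np.interp(pos, track, field_map)

N, meas_std = 2000, 0.5
particles = rng.uniform(0.0, 1000.0, N)        # global localization: position unknown
weights = np.full(N, 1.0 / N)
true_pos, speed, dt = 200.0, 10.0, 1.0         # m, m/s, s

for _ in range(60):
    true_pos += speed * dt
    z = field_at(true_pos) + rng.normal(0.0, meas_std)      # magnetometer reading
    particles += speed * dt + rng.normal(0.0, 1.0, N)       # simple train motion model
    weights *= np.exp(-0.5 * ((z - field_at(particles)) / meas_std) ** 2)
    weights /= weights.sum()
    if 1.0 / np.sum(weights ** 2) < N / 2:                  # resample on low ESS
        idx = np.searchsorted(np.cumsum(weights), (rng.random() + np.arange(N)) / N)
        idx = np.minimum(idx, N - 1)
        particles, weights = particles[idx], np.full(N, 1.0 / N)

estimate = float(np.sum(weights * particles))
```

Because the map is aperiodic along the track, the sequence of field readings acts as a position fingerprint and the initially uniform particle cloud collapses onto the true trajectory.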
16:00 Universal Kriging of RSS Databases in a Bayesian Filter
Received signal strength (RSS) based navigation is chosen for many indoor propagation scenarios because of the high availability and low cost of the needed infrastructure. Usually, RSS-based navigation relies on fingerprinting techniques. Recently, methods for the spatial interpolation of RSS databases, including ordinary and universal Kriging, have become a focus of research, as they can enhance these databases. This paper explores the theoretical possibility of using universal Kriging as a measurement model in a Bayesian filter, enabling the use of RSS databases in a deeply coupled information fusion filter. The method is applied to a simplified propagation model, and an exemplary implementation in an extended Kalman filter is presented in detail and evaluated.
16:20 Bearing-Only Multi-Target Localization for Wireless Array Networks: A Spatial Sparse Representation Approach
Bearing-only multi-target localization (BOMTL) using multiple sensors generally requires solving a sophisticated data association problem, i.e., determining which sensor measurement originated from which target. In this paper, a novel spatial sparse representation based BOMTL method is proposed that fully utilizes a wireless array network structure. With array spatial features, the BOMTL problem can be formulated as a binary sparse vector recovery problem using converted "pseudo-measurements" in the frequency domain. The proposed method transforms the source location estimation problem into a spatial sparse representation (SSR) framework, which avoids dealing with conventional data association. With orthogonal matching pursuit (OMP) exploiting the binary property of the sparse vector to be estimated, we develop a BOMTL-OMP algorithm to reconstruct the sparse vector. Numerical simulations demonstrate the performance of the proposed method.
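The OMP recovery step can be sketched generically; the array-specific construction of the sensing matrix and pseudo-measurements from the paper is omitted here.

```python
import numpy as np

def omp(A, y, k):
    """Greedy recovery of a k-sparse x from y ~= A @ x (orthogonal matching pursuit)."""
    residual, support = y.astype(float).copy(), []
    x_s = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # column most correlated with residual
        if j not in support:
            support.append(j)
        # Re-fit the coefficients on the current support by least squares.
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x
```

In the paper's setting, the support of the recovered binary vector directly indicates occupied grid cells (source locations), which is how explicit data association is avoided.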
16:40 Multiple Sound Source Localization Based on a Multi-dimensional Assignment Model
In this paper, we address the multiple sound source localization problem using time differences of arrival (TDOAs) of sound sources to a microphone array. Typically, TDOAs are estimated based on the peak extraction of the generalized cross-correlation function. In multi-source cases, for any given microphone pair, it is hard to tell the correspondence between the sound sources and the extracted peaks. In this work, we develop a novel localization approach based on data association which combines multiple TDOAs from the same source across different microphone pairs. Firstly, the generalized cross correlation-phase transform (GCC-PHAT) function is evaluated and multiple peaks of the GCC function indicating candidate TDOAs are extracted for each pair of microphones. Next, we employ the multi-dimensional assignment algorithm to associate multiple TDOAs from the same source. Finally, multiple sound source localization is carried out based on the obtained TDOA associations across different microphone pairs. Experimental results show the proposed method achieves superior performance for multiple sound source localization compared to the competing algorithm, especially in noisy environments.
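The first stage of the pipeline above, GCC-PHAT peak extraction for one microphone pair, can be sketched as follows (single dominant source, no multi-dimensional assignment step).

```python
import numpy as np

def gcc_phat(sig, ref, fs=1.0):
    """TDOA of `sig` relative to `ref` (in samples / fs) via GCC-PHAT."""
    n = len(sig) + len(ref)
    R = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    R /= np.abs(R) + 1e-12                    # PHAT weighting: keep only the phase
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))  # lags -max..+max
    return float(np.argmax(np.abs(cc)) - max_shift) / fs

# Toy check: a broadband signal delayed by 5 samples.
rng = np.random.default_rng(0)
ref = rng.standard_normal(200)
sig = np.zeros(200)
sig[5:] = ref[:-5]
tau = gcc_phat(sig, ref)
```

In a multi-source scene, one would extract several peaks of the GCC function per pair (the candidate TDOAs) and feed them to the assignment stage described in the abstract.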

6f - SS: Extended Object and Group Tracking

Room: LR11
Chair: Marcus Baum
15:20 Extended Object Tracking Using Automotive Radar
For automotive radar based extended object tracking (EOT), measurements are originated from the edges of the object, which usually has a regular shape. To handle this problem, this paper proposes an EOT approach, in which the object is assumed rectangular. Since the properties of a rectangular shape can be fully captured by its vertices, modeling and estimation of the extension can be reduced to those of the vertices, which are then included into the object state. Then the property that the object is rectangular can be described as a quadratic equality constraint on the state. A measurement model is proposed with the scattering centers being assumed uniformly distributed over the observable edges of the object. Actually, measurements at each time correspond to at most two adjacent boundary edges. By taking advantage of this, a data association method is proposed, in which the association events are largely eliminated. Given an association event, the target state can be estimated in the linear minimum mean-square-error framework with the shape constraint treated as a pseudo-observation. The estimated state is then projected into the constraint space to improve the estimation performance. Simulation results of a scenario of EOT using an automotive radar are given to illustrate the effectiveness of the proposed approach.
15:40 Radar and Lidar Target Signatures of Various Object Types and Evaluation of Extended Object Tracking Methods for Autonomous Driving Applications
This paper compares common extended object tracking methods for estimating target-extension ellipses, based on real-world traffic data and target-level signatures obtained for various object types, such as cars, pedestrians and bicyclists, using an automotive multi-mode radar network and a 4-layer automotive lidar. The most commonly used extended object tracking methods are briefly introduced. In addition, measurement profiles of road users (cars, cyclists and pedestrians) are investigated to compare the models and their assumptions with the real data. The obtained information can then be used to define appropriate object-specific and sensor-specific measurement models. The measurement distribution is discussed in detail and compared with the models' assumptions. Extended object tracking is performed on the obtained data, and a performance analysis is carried out using high-resolution ground truth. The evaluation is carried out separately on the radar and lidar data for exemplary traffic scenes. As a last step, a track-to-track intersection fusion approach is evaluated on the same data set to determine the respective information gains. The relation between measurements and track behavior, as well as other influences, is discussed.
16:00 A Random Matrix Measurement Update Using Taylor-Series Approximations
An approximate extended target tracking (ETT) measurement update is derived for random matrix extent representation with measurement noise. The derived update uses Taylor series approximations. The performance of the proposed update methodology is illustrated on a simple ETT scenario and compared to alternative updates in the literature.
16:20 Extended Target Tracking Using Gaussian Processes with High-Resolution Automotive Radar
In this paper, an implementation of an extended target tracking filter using measurements from a high-resolution automotive Radio Detection and Ranging (RADAR) sensor is proposed. Our algorithm uses the Cartesian point measurements from the target's contour as well as the Doppler velocity provided by the RADAR to track a target vehicle's position, orientation, and translational and rotational velocities. We also apply a Gaussian Process (GP) to model the vehicle's shape. To cope with the nonlinear measurement equation, we implement an Extended Kalman Filter (EKF) and provide the necessary derivatives for the Doppler measurement. We then evaluate the effectiveness of incorporating the Doppler rate in simulations and on two sets of real data.
16:40 A Cartesian B-Spline Vehicle Model for Extended Object Tracking
In this paper a novel continuous, Cartesian-defined target model is proposed for extended object tracking (EOT), with a focus on vehicles in particular and on application to measurements from an automotive Light Detection and Ranging (LIDAR) sensor. To set up the Cartesian boundary, we deploy quadratic uniform periodic B-splines. In contrast to previous works, we introduce a new walk parameter to model the contour function of an object such that the shape parameters are well defined and lie within the same space as the measurements.

Thursday, July 12 15:20 - 16:40

6g - Sensor/Resource Management

Room: LR12
Chair: Moses Chan
15:20 A Dwell Scheduling Method for Phased Array Radars Based on New Synthetic Priority
Dwell scheduling is an important module of phased array radar. In order to implement effective dwell scheduling, a scheduling method based on a new synthetic priority is proposed, in which the scheduling model is formulated from the viewpoint of scheduling gain. The scheduling gain combines the synthetic priority of a task with the validity of scheduling it, where the synthetic priority integrates the importance and urgency of the task and the threat level of the target. Simulation results demonstrate that, compared with conventional dwell scheduling methods, the proposed method improves the scheduling performance of phased array radar.
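A greedy caricature of priority-driven dwell scheduling is sketched below. The weights and the priority formula are hypothetical placeholders, not the paper's synthesis rule, and the paper's notion of scheduling validity/gain is not modeled.

```python
import heapq

def synthetic_priority(importance, urgency, threat, w=(0.4, 0.3, 0.3)):
    """Hypothetical weighted combination -- the paper's exact synthesis rule differs."""
    return w[0] * importance + w[1] * urgency + w[2] * threat

def schedule(tasks, time_budget):
    """Greedily admit dwell requests in order of descending synthetic priority."""
    heap = [(-synthetic_priority(i, u, t), name, dwell)
            for name, i, u, t, dwell in tasks]
    heapq.heapify(heap)
    plan, used = [], 0.0
    while heap:
        _, name, dwell = heapq.heappop(heap)
        if used + dwell <= time_budget:
            plan.append(name)
            used += dwell
    return plan

tasks = [  # (name, importance, urgency, threat, dwell time in ms)
    ("confirm-track-7", 0.9, 0.8, 0.9, 4.0),
    ("horizon-search",  0.5, 0.9, 0.1, 6.0),
    ("update-track-3",  0.7, 0.4, 0.6, 3.0),
]
plan = schedule(tasks, time_budget=8.0)
```

Even this toy version shows the core trade-off: a low-priority task is dropped when the dwell budget of the scheduling interval is exhausted.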
15:40 Sensor Operation Deployment with Multiple Routes per Asset
In many reconnaissance and surveillance tasks, the challenge is to deploy a considerable set of assets in the best manner to satisfy a set of given information requirements. This publication describes the mathematical and methodical foundations of an approach to support operators in this demanding task via an automatic planning component. The automatic planning component can be used as part of a two-step approach for resource-optimal sensor scheduling [1], [2]. More precisely, we present an extension of our previous approach that additionally accounts for the fact that, in practice, an asset can often be deployed multiple times within a given planning period. The extension builds on a prior survey of suitable state-of-the-art approaches. We present a detailed mathematical problem formulation, which in essence corresponds to an optimization problem under a considerable set of constraints, and describe the design and implementation of the algorithm performing the optimization. Various evaluation aspects and results demonstrate the validity and performance of our approach.
16:00 Evidence Gathering for Hypothesis Resolution Using Judicial Evidential Reasoning
Realistic decision-making often occurs with insufficient time to gather all possible evidence before a decision must be rendered, requiring efficient processes for prioritizing between candidate action sequences. The proposed Judicial Evidential Reasoning framework encodes decision-maker questions as rigorously testable hypotheses and proposes actions to resolve the hypotheses in the face of ambiguous, incomplete, and uncertain evidence. Dempster-Shafer theory is applied to model hypothesis knowledge and quantify ambiguity, and an equal-effort heuristic is proposed for time efficiency and impartiality, to combat confirmation bias. This work includes a derivation of the generalized formulation, computational tractability considerations for improved performance, several illustrative examples, and a sample application to a space situational awareness sensor network tasking scenario. The results show strong hypothesis resolution and robustness to fixation due to poor prior evidence.
16:20 A Complete Power Allocation Framework for Multiple Target Tracking with the Purpose of Minimizing the Transmit Power
In this paper, a new power allocation framework is proposed for the task of multiple target tracking (MTT), in which an adaptive cost function (ACF) with respect to the transmit power and tracking accuracy requirements is first designed. We then take the ACF as an objective function and formulate the proposed framework as a mathematical optimization problem. In this problem, the posterior Cramér-Rao lower bound (PCRLB) provides a lower bound on the estimation error of the target states. Numerical simulation demonstrates that, in a scenario where the common method is not applicable, an effective and robust power allocation scheme can be obtained by the proposed method.

Thursday, July 12 15:20 - 17:00

6h - Situational Awareness

Room: JDB-Seminar Room
Chair: Paolo Braca
15:20 Network Security Situation Awareness for Industrial Control System Under Integrity Attacks
Due to the wide implementation of communication networks, industrial control systems are vulnerable to malicious attacks, which could cause potentially devastating results. Adversaries launch integrity attacks by injecting false data into systems to create fake events or to cover up plans to damage the systems. In addition, the complexity and nonlinearity of control systems make it more difficult to detect and defend against attacks. Therefore, a novel security situation awareness framework based on particle filtering, which has a good ability to estimate the state of nonlinear systems, is proposed to provide an accurate understanding of the system situation. First, system state estimation based on particle filtering is presented to estimate the nodes' states. Then, a voting scheme is introduced into hazard situation detection to identify the malicious nodes, and a local estimator is constructed to estimate the actual system state after removing the identified malicious nodes. Finally, based on the estimated actual state, the actual measurements of the compromised nodes are predicted using the situation prediction algorithm. At the end of the paper, a simulation of a continuous stirred tank is conducted to verify the efficiency of the proposed framework and algorithms.
15:40 MHT Approach to Ubiquitous Monitoring of Spatio-Temporal Phenomena
This paper describes a multiple-hypothesis tracking (MHT) formulation of a particular set of 'situational awareness' problems that involve monitoring of spatio-temporal phenomena using ubiquitous sensing. In particular, the focus is on large-scale monitoring (or tracking) applications utilizing volunteer mobile ubiquitous sensors to track the evolution of spatio-temporal 'targets' of interest (e.g., tracking snow-fall or ride-quality at a certain stretch of a roadway using vehicles). An efficient framework is developed utilizing MHT as the basis to carry out detection and tracking of evolutionary behavior. The framework is described via an illustrative example on ride-quality monitoring as applied to autonomous driving environments, where vibration and GPS data recorded by voluntarily participating vehicles are utilized for threat detection and tracking.
16:00 Unsupervised Maritime Traffic Graph Learning with Mean-Reverting Stochastic Processes
Inspired by the fair regularity of the motion of ships, we present a method to derive a representation of the commercial maritime traffic in the form of a graph, whose nodes represent way-point areas, or regions of likely direction changes, and whose edges represent navigational legs with constant cruise velocity. The proposed method is based on the representation of a ship's velocity with an Ornstein-Uhlenbeck process and on the detection of changes of its long-run mean to identify navigational way-points. In order to assess the graph representativeness of the traffic, two performance metrics are introduced, leading to distinct graph construction criteria. Finally, the proposed method is validated against real-world Automatic Identification System data collected in a large area.
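The velocity model at the core of the method above, an Ornstein-Uhlenbeck process reverting to a navigational leg's long-run mean velocity, can be simulated with an Euler-Maruyama step; the parameters below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, mu, sigma, dt = 0.5, 10.0, 1.0, 0.01   # reversion rate, long-run mean, diffusion
v = np.empty(20000)                           # velocity samples along one navigational leg
v[0] = 0.0
for k in range(1, len(v)):
    # Euler-Maruyama discretization of dv = theta*(mu - v)*dt + sigma*dW
    v[k] = v[k - 1] + theta * (mu - v[k - 1]) * dt \
           + sigma * np.sqrt(dt) * rng.standard_normal()

long_run_mean = v[len(v) // 2 :].mean()       # estimate of mu after burn-in
```

Detecting a change in the long-run mean `mu` of such a process is what identifies a way-point (graph node) in the abstract's construction; the constant-`mu` stretches in between become the graph's edges.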
16:20 High Level Data Fusion Architecture for Threat Assessment in Scenarios with Manoeuvrable Aerial Targets
The purpose of an Air Defense System is to evaluate the air scenario and identify hostile aircraft that represent a real danger to a country's resources and people. Decision makers in such a system are faced today with the challenge of processing and evaluating an increasing amount of data originating from different sources. This work is related to the reconnaissance (including the automatic identification and classification) of targets with hostile behaviors relative to a point of interest located on the ground. This paper proposes a high-level data fusion architecture for an Air Defense System providing the detection, classification and evaluation of the threat level represented by each hostile aircraft. The proposed architecture also includes an automatic planning tool which provides plans to support the decision-making process.
16:40 Multi-Model Threat Assessment Involving Low Probability High Consequence Events
This paper describes a multi-model threat prediction scheme for autonomous agents making decisions in dynamic environments. The proposed multi-layered scheme utilizes argumentation, the Transferable Belief Model, anytime decision making, rule-based techniques, and Choquet integral based decision theory for situational awareness and action selection. We discuss the theoretical foundations and provide a high-level architecture of an agent. We illustrate the scheme with an example scenario.

6i - SS: Multi-layered Fusion Processes: Exploiting Multiple Models and Levels of Abstraction for Understanding and Sense-Making 2

Room: JDB-Teaching Room
Chair: Lauro Snidaro
15:20 Multi-level Information Fusion Approach with Dynamic Bayesian Networks for an Active Perception of the Environment
Most Situation Awareness applications in Information Fusion try to evaluate a dynamic environment using a passive approach that combines heterogeneous information. However, in a crisis situation, decisions have to be made efficiently as the world quickly evolves. One consequence is the difficulty of obtaining information in a fast and efficient way with an acceptable confidence. Another problem is the processing of a significant amount of heterogeneous information in near real time. To address this issue, we propose a multi-level Information Fusion framework based on Dynamic Bayesian Networks (DBN) with an active perception approach. The contribution of this model lies primarily in its capability of handling both hard and soft sensors and the resulting information, and secondly in the identification of the most valuable DBN variables, which maximize the information gain at the next step. These valuable variables allow states to be inferred on a sub-DBN, reducing the computational complexity. In this top-down approach, we seek the variables that can provide the most information, in order to automatically select the right sensors and choose the correct actions to optimally observe these variables. We finally propose an illustration with a basic maritime scenario.
15:45 Beyond Sentiments and Opinions: Exploring Social Media with Appraisal Categories
The digital era arrives with a whole set of disruptive technologies that create both risk and opportunity for open source analysis. Although the sheer quantity of online conversations makes social media a huge source of information, their analysis is still a challenging task, and many traditional methods and research methodologies for data mining are not fit for purpose. Social data mining revolves around subjective content analysis, which deals with the computational processing of texts conveying people's evaluations, beliefs, attitudes and emotions. Opinion mining and sentiment analysis are the main paradigms of social media exploration, and the two concepts are often used interchangeably. This paper investigates the use of appraisal categories to explore data gleaned from social media, going beyond the limitations of traditional sentiment and opinion-oriented approaches. Categories of appraisal are grounded in the cognitive foundations of appraisal theory, according to which people's emotional responses are based on their own evaluative judgments or appraisals of situations, events or objects. A formal model is developed to describe and explain the way language is used in cyberspace to evaluate, express mood and subjective states, construct personal standpoints, and manage interpersonal interactions and relationships. A general processing framework is implemented to illustrate how the model is used to analyze a collection of tweets related to extremist attitudes.
16:10 Beyond Situation Awareness: Considerations for Sense-making in Complex Intelligence Operations
There has been a great deal of work done in (sensor) data fusion for situation awareness scenarios, in which timelines are short (e.g., seconds, minutes, perhaps hours) and geography is limited (a military installation, a harbor, a coastline). However, there has been comparatively little examination of intelligence fusion models and their informational needs in scenarios which cover much longer timelines (weeks, months, years) and greater geographical areas, of particular interest in today's interconnected world of asymmetric threats such as terrorism, human trafficking, organized crime and other cross-border concerns. Making sense of such complex scenarios requires collecting information and fusion products from a broad range of sources both human and device-derived over time to look for patterns of behavior and to track changing context information. This paper examines some of the challenges of modelling for fusion in complex intelligence operations.
16:35 Learning Multi-Modal Self-Awareness Models for Autonomous Vehicles from Human Driving
This paper presents a novel approach for learning self-awareness models for autonomous vehicles. The proposed technique is based on the availability of synchronized multi-sensor dynamic data related to different maneuvering tasks performed by a human operator. It is shown that different machine learning approaches can be used to first learn single-modality models using coupled Dynamic Bayesian Networks; such models are then correlated at the event level to discover contextual multi-modal concepts. In the presented case, visual perception and localization are used as modalities. Cross-correlations among modalities over time are discovered from data and described as probabilistic links connecting shared and private multi-modal DBNs at the event (discrete) level. Results are presented on experiments performed on an autonomous vehicle, highlighting the potential of the proposed approach to enable anomaly detection and autonomous decision making based on learned self-awareness models.

Thursday, July 12 19:00 - 23:00

Gala Dinner

Taking place at Duxford - Airspace

Agenda
18.30 Coaches depart from Department of Engineering (Conference venue) for Duxford
19.00 Arrive at Airspace for pre-dinner drinks
19.20 Best Paper Awards
20.00 Dinner
21.30 Cash bar
22.00 - 23.00 Coaches return to Cambridge

Friday, July 13 9:00 - 10:00

Plenary: 25 years of particles and other random points

Neil Gordon, David Salmond and Adrian Smith
Chair: Simon Godsill

Abstract: A basic form of a Monte Carlo Bayesian recursive filter, which came to be known as a bootstrap filter or a particle filter, was presented 25 years ago.

The key advantage of this filter is that it does not rely on the highly restrictive linear-Gaussian assumptions that underlie the Kalman filter and its variants.

Since then, the particle scheme has been developed, enhanced and applied by researchers in many different fields ranging from the original motivation of target tracking to navigation, robotics, econometrics and weather forecasting.

In this presentation, we shall describe the state of the art in Monte Carlo methods for Bayesian estimation problems in the early 1990s and indicate how this was extended to dynamic estimation with an evolving state vector.

The circumstances of the development of the filter and some initial test examples will be reviewed with some discussion of the strengths and weaknesses of the approach.

Finally, we shall discuss recent developments, applications and possible future directions.

Reference: N.J. Gordon, D.J. Salmond and A.F.M. Smith, "Novel approach to nonlinear/non-Gaussian Bayesian state estimation", IEE Proceedings F - Radar and Signal Processing, Vol. 140, No. 2, April 1993, pp. 107-113.
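The bootstrap filter described in this plenary admits a remarkably short implementation. The sketch below is purely illustrative (a made-up scalar linear-Gaussian model, chosen for brevity, not any example from the talk), showing the predict / weight / resample cycle of the original 1993 scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_step(particles, y, f, h, q_std, r_std):
    # Predict: sample each particle from the transition density p(x_t | x_{t-1})
    particles = f(particles) + rng.normal(0.0, q_std, particles.shape)
    # Weight: evaluate the Gaussian measurement likelihood p(y_t | x_t)
    w = np.exp(-0.5 * ((y - h(particles)) / r_std) ** 2)
    w /= w.sum()
    # Resample with replacement (multinomial), which resets the weights
    return particles[rng.choice(particles.size, particles.size, p=w)]

# Toy scalar model with assumed parameters
f = lambda x: 0.9 * x          # linear dynamics (illustrative)
h = lambda x: x                # direct observation (illustrative)
particles = rng.normal(0.0, 1.0, 500)
x = 2.0
for _ in range(10):
    x = 0.9 * x + rng.normal(0.0, 0.05)      # simulated true state
    y = x + rng.normal(0.0, 0.1)             # noisy measurement
    particles = bootstrap_step(particles, y, f, h, 0.05, 0.1)
# The posterior mean, particles.mean(), tracks the true state x
```

No linearisation or Gaussian posterior assumption appears anywhere, which is the key advantage the speakers highlight over the Kalman family.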

Bios: Dr Neil Gordon, The Defence Science and Technology Group, Department of Defence, Australia, received a PhD in Statistics from Imperial College London in 1993. He was with the Defence Evaluation and Research Agency in the UK from 1988-2002, working on missile guidance and statistical data processing. In 2002 he moved to the Defence Science and Technology Organisation in Australia, where he currently leads the Data and Information Fusion research group. In 2014 he became an honorary Professor with the School of Information Technology and Electrical Engineering at the University of Queensland. He is the co-author/co-editor of two books on particle filtering and one on search zone calculations for the missing Malaysia Airlines flight MH370.

David J Salmond joined the Royal Aircraft Establishment in 1977 as a Scientific Officer. His initial research was on the application of modern control theory to weapon guidance. From this work, together with the missile guidance group, he developed general techniques for tracking and guidance, especially for uncertain systems. In particular, with co-workers, he developed a general Bayesian acquisition / selection scheme for dense and cluttered scenarios. He was also a co-developer of the particle filter method for nonlinear dynamic estimation. The basic scheme has been widely taken up and developed by both the academic community and applied engineers.

He has worked closely with the UK tracking community for many years. He has organised and chaired conferences, served on conference committees and reviews papers for the IEEE, AIAA, IET and other control and data fusion journals. He has published over 150 company reports and open papers.

David retired from QinetiQ as a Senior Fellow in May 2016. He is now a consultant for QinetiQ under the "Friend of QinetiQ" scheme. He has worked for QinetiQ since its foundation in 2001, apart from four years with DSTL (Defence Science and Technology Laboratory) from 2006 to 2010. Prior to 2001, he worked (as a Civil Servant) for QinetiQ's predecessor institutions.

Professor Sir Adrian Smith, Vice-Chancellor, University of London was previously Director General, Knowledge and Innovation in BIS, having, from 2008, been Director General, Science and Research originally in DIUS and subsequently in BIS.

Professor Smith has also worked with the UK Higher Education Funding and Research Councils and was appointed Deputy Chair of the UK Statistics Authority from 1 September 2012. From 1 August 2014, he was appointed Chair of the Board of the Diamond Synchrotron at Harwell. He is also the Chair of the Council for Mathematical Sciences.

Professor Smith is a past President of the Royal Statistical Society and was elected a Fellow of the Royal Society in 2001 in recognition of his contribution to statistics. In 2003-04 he undertook an Inquiry into Post-14 Mathematics Education for the UK Secretary of State for Education and Skills and has recently undertaken, on behalf of HMT and the DfE, a 16-18 Maths Review. In 2006 he completed a report for the UK Home Secretary on the issue of public trust in Crime Statistics. He received a knighthood in the 2011 New Year Honours list.

Friday, July 13 10:00 - 10:30

Refreshments

Friday, July 13 10:30 - 12:10

7a - Sequential Monte Carlo

Room: LR0
Chair: Gustaf Hendeby
10:30 A Fast MCMC Particle Filter
Relying on the idea of importance sampling for substantiating the Bayesian filtering recursion, particle filters may become prohibitively inefficient even for moderate state dimensions and likewise whenever the signal to noise ratio is relatively high, as is the case with nearly deterministic state dynamics or random parameters. Markov chain Monte Carlo particle filters completely avoid importance sampling and by that circumvent many of the deficiencies associated with conventional particle filters. These methods may nevertheless suffer from slow convergence rate once inadequate or computationally intractable proposal distributions are used for generating new candidate samples in the underlying Markov chain. In this work, we devise a new Markov chain Monte Carlo particle filter whose sampling mechanism employs jumping Gaussian distributions. This technique enhances the underlying sampling efficiency and leads to significant reduction in the computational cost. The newly derived filter is shown to outperform the conventional (regularised) particle filter both in terms of accuracy and computational overhead, particularly when applied to estimation in systems with low intensity noise or of relatively high state dimensions.
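For readers unfamiliar with the MCMC machinery this abstract builds on: the accept/reject move that replaces importance sampling can be illustrated with a generic random-walk Metropolis sampler. The sketch below targets a toy standard-normal density, not the paper's jumping-Gaussian scheme:

```python
import numpy as np

rng = np.random.default_rng(1)

def metropolis(log_target, x0, prop_std, n_samples):
    """Random-walk Metropolis: propose from a Gaussian centred on the
    current sample; accept with probability min(1, target ratio)."""
    x, lp = x0, log_target(x0)
    samples = []
    for _ in range(n_samples):
        x_new = x + rng.normal(0.0, prop_std)
        lp_new = log_target(x_new)
        # Accept/reject in log space to avoid underflow
        if np.log(rng.uniform()) < lp_new - lp:
            x, lp = x_new, lp_new
        samples.append(x)
    return np.array(samples)

# Toy target: standard normal log-density (illustrative only)
chain = metropolis(lambda x: -0.5 * x * x, x0=3.0, prop_std=1.0, n_samples=5000)
```

After discarding a burn-in prefix, the chain's empirical mean and spread approximate the target's; no importance weights are ever computed, which is the property the paper exploits.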
10:50 Multi-Ellipsoidal Extended Target Tracking Using Sequential Monte Carlo
In this paper, we consider the problem of extended target tracking, where the target extent cannot be represented accurately by a single ellipse. We model the target extent with multiple ellipses and solve the resulting inference problem, which involves data association between the measurements and sub-objects. We cast the inference problem into the sequential Monte Carlo (SMC) framework and propose a simplified approach for the solution. Furthermore, we make use of the Rao-Blackwellization, aka marginalization, idea and derive an efficient filter to approximate the joint posterior density of the target kinematic states and target extent. Conditional analytical expressions, which are essential for Rao-Blackwellization, are not available in our problem. We use the variational Bayes technique to approximate the conditional densities and enable Rao-Blackwellization. The performance of the method is demonstrated through simulations. A comparison with a recent method in the literature is performed.
11:10 An Efficient Particle Filter for the OOSM Problem in Nonlinear Dynamic Systems
In this paper, the out-of-sequence measurement (OOSM) problem with arbitrary lags in nonlinear dynamic systems is considered. We develop an efficient particle filtering (E-PF) algorithm based on the exact Bayesian solution. Generally, by introducing some reasonable Gaussian assumptions, a general Gaussian smoother is derived to compute the expected smoothing pdfs instead of using the particle smoother, which makes the E-PF computationally efficient and applicable to most nonlinear cases. Meanwhile, since the E-PF stores only the estimates and covariances for a predetermined maximum number of lags, storage is also effectively saved. In the simulation, a two-dimensional target tracking example is given; the numerical results show that the tracking performance of our algorithm is quite close to that of the A-PF algorithm proposed by Zhang et al., while the computation cost is significantly reduced.
11:30 A Particle Filter Localisation System for Indoor Track Cycling Using an Intrinsic Coordinate Model
In this paper we address the challenging task of tracking a fast-moving bicycle, in the indoor velodrome environment, using inertial sensors and infrequent position measurements. Since the inertial sensors are physically in the intrinsic frame of the bike, we adopt an intrinsic frame dynamic model for the motion, based on curvilinear dynamical models for manoeuvring objects. We show that the combination of inertial measurements with the intrinsic dynamic model leads to linear equations, which may be incorporated effectively into particle filtering schemes. Position measurements are provided through timing measurements on the track from a camera-based system and these are fused with the inertial measurements using a particle filter weighting scheme. The proposed methods are evaluated on synthesised cycling datasets based on real motion trajectories, showing their potential accuracy, and then real data experiments are reported.
11:50 Auxiliary-Particle-Filter-based Two-Filter Smoothing for Wiener State-Space Models
In this paper, we propose an auxiliary particle filter-based two-filter smoother for Wiener state-space models. The proposed smoother exploits the model structure in order to obtain an analytical solution for the backward dynamics, which is introduced artificially in other two-filter smoothers. Furthermore, Gaussian approximations to the optimal proposal density and the adjustment multipliers are derived for both the forward and backward filters. An important property of the resulting smoother is its linear complexity in the number of particles. The proposed algorithm is evaluated and compared to existing smoothing algorithms in a numerical example where it is shown that the proposed smoother performs similarly to the state of the art in terms of the root mean squared error at lower computational cost for large numbers of particles.

7b - SS: Context-based Information Fusion

Room: LR1
Chair: Jesus Garcia
10:30 Describing Capability Through Lexical Semantics Exploitation: Foundational Arguments
In everyday life as well as in the asymmetric warfare domain, to achieve their intended goals, agents often make use not of designed, purpose-built tools, but of other tools whose features simply fit the purpose. The present paper discusses the possibility of capturing and integrating relations and features from context that could drive the retrieval of possible candidate substitutes for properly designed artifacts through Lexical Semantics Exploitation. Generative Lexicon theory assumes a structure (the Qualia Structure) organizing the semantic content carried by lexical items through roles. Among them, the Telic role exposes the function or purpose of the predicated entity and the Constitutive role exposes its component parts. We argue that the typical function an entity has been designed for is related to its internal constituents. We also argue that a knowledge base and a proper metric can be conveniently built by extracting Qualia elements from suitable text corpora.
10:50 Gaussian Mixture Based Target Tracking Combining Bearing-Only Measurements and Contextual Information
Gaussian mixtures (GM) provide a flexible and numerically robust means for the treatment of nonlinearities as well as for the integration of context knowledge into target tracking algorithms. Contextual information leads to constraints on the target state, which can be incorporated in the time prediction step of a tracking filter (model of the target dynamics) as well as in the measurement update step in terms of a constraint likelihood function. In this paper, we present examples of each possibility: road-map-assisted target tracking and the integration of terrain map data for target localization. The algorithms are applied to the problem of airborne passive emitter localization and demonstrate enhanced tracking and localization precision for moving and for stationary ground-based emitters.
11:10 Considerations of Context and Quality in Information Fusion
Context has received significant attention in recent years within the Information Fusion community, as it can bring several advantages to information fusion processing by allowing for refining estimations, explaining observations and constraining processing, thereby improving the quality of inferences. At the same time, context utilization raises concerns about the quality of the contextual information and its relationship with the quality of information obtained from observations and other sources, which may be of low fidelity, contradictory, or redundant. Knowledge of the quality of this information and its effect on the quality of context characterization can improve contextual knowledge. At the same time, knowledge about the current context can improve the quality of observations and fusion results. This paper discusses the problem of understanding and estimating Information Quality as well as Quality of Context, their relationships, and their effect on fusion system performance.
11:30 Range-only Based Cooperative Localization for Mobile Robots
In this paper, we address the localization problem of heterogeneous mobile robots based on range-only measurements from low-cost Ultra Wide Band (UWB) sensors. We propose a solution where every static or mobile object is considered as a beacon with contextual information. A beacon-to-beacon measurement is performed using UWB sensors, and the position estimate is computed by the target beacon. This strategy allows the cooperative localization problem to be hidden behind these measurements. The fusion algorithm is based on a Split Covariance Intersection Filter, which correctly handles the correlation between the pose estimates of the beacons. We demonstrate the consistency of this solution in a simulation with 3 robots and 4 static beacons and in a real experiment with 1 robot and 3 static beacons.
11:50 Sequential Event Detection Using Multimodal Data in Nonstationary Environments
The problem of sequential detection of anomalies in multimodal data is considered. The objective is to observe physical sensor data from CCTV cameras, and social media data from Twitter and Instagram to detect anomalous behaviors or events. Data from each modality is transformed to discrete time count data by using an artificial neural network to obtain counts of objects in CCTV images and by counting the number of tweets or Instagram posts in a geographical area. The anomaly detection problem is then formulated as a problem of quickest detection of changes in count statistics. The quickest detection problem is then solved using the framework of partially observable Markov decision processes (POMDP), and structural results on the optimal policy are obtained. The resulting optimal policy is then applied to real multimodal data collected from New York City around a 5K race to detect the race. The count data both before and after the change is found to be nonstationary in nature. The proposed mathematical approach to this problem provides a framework for event detection in such nonstationary environments and across multiple data modalities.
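For context, the quickest-detection formulation in this abstract builds on classical change-point statistics for count data. A Page CUSUM for a jump in a Poisson rate — a far simpler device than the paper's POMDP policy, shown here with synthetic counts and made-up rates — can be sketched as:

```python
import math

def poisson_cusum(counts, lam0, lam1, threshold):
    """Page's CUSUM for a shift in Poisson rate lam0 -> lam1.
    Returns the first index at which the statistic crosses the
    threshold, or None if no change is declared."""
    llr_slope = math.log(lam1 / lam0)  # log-likelihood ratio per unit count
    s = 0.0
    for t, x in enumerate(counts):
        # Accumulate the per-sample Poisson log-likelihood ratio,
        # clipped at zero (the CUSUM recursion)
        s = max(0.0, s + x * llr_slope - (lam1 - lam0))
        if s >= threshold:
            return t
    return None

# Synthetic counts jumping from rate ~2 to rate ~8 at index 10
counts = [2, 1, 3, 2, 2, 1, 2, 3, 2, 2, 7, 9, 8, 10, 7]
alarm = poisson_cusum(counts, lam0=2.0, lam1=8.0, threshold=5.0)
```

The paper's contribution is precisely that real multimodal counts are nonstationary, so fixed rates like `lam0` and `lam1` above no longer suffice.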

7c - Distributed Fusion

Room: LR2
Chair: Jonathon Chambers
10:30 Heterogeneous Track-to-Track Fusion Using Equivalent Measurement and Unscented Transform
This document presents a novel track-to-track fusion (T2TF) approach for heterogeneous tracks. T2TF enables a distributed fusion structure, in which tracks from local trackers are transmitted to a global tracker that fuses them. Compared to centralized measurement-to-track fusion (CMF), T2TF offers a low communication load with almost no loss of information. Heterogeneous tracks live in different state spaces, which are often non-linearly related. Heterogeneous track-to-track fusion (HT2TF) raises two challenges: first, the fusion of tracks in different state spaces; second, the cross-correlation in the state estimation errors. The presented HT2TF approach is based on the equivalent measurement and the unscented transform (UT). Compared to state-of-the-art approaches, no Jacobian is required. For evaluation, our approach is compared with the corresponding CMF.
10:50 Distributed Detection and Estimation Fusion by Maximizing Expected Utility
In this paper, we address the problem of distributed joint detection and estimation, in which a number of sensor nodes are employed to detect signal presence or absence and to estimate the unknown parameter associated with the decided hypothesis. Due to the limited bandwidth, each local sensor quantizes its original measurement into one bit of information, and the final global decision is then made based on the quantized data set at the fusion center (FC). First, the multi-sensor joint likelihood function under either hypothesis is evaluated by assuming the data transmission channels between the FC and the local sensors are perfect or imperfect, respectively. The expected utility is then introduced to assess the joint performance of the distributed detection and estimation tasks. Finally, an optimal estimation receiver operating curve (EROC-opt) decision scheme is employed to accomplish the distributed joint detection and estimation. Performance comparisons with the centralized scheme without quantization and with the generalized likelihood ratio test (GLRT) show the superiority of the proposed approach.
11:10 The Equivalence Between Distributed and Centralized Best Linear Unbiased Estimation Fusion
This paper discusses the equivalence of performance between optimal distributed and centralized fusion based on best linear unbiased estimation (BLUE). A necessary and sufficient condition for the optimal distributed BLUE fusion to have identical performance to its centralized counterpart is obtained by setting the difference of the two optimal estimates based on the two fusion rules to zero. Furthermore, under some very mild conditions on the estimatee and observation errors, we provide two theorems for the case where the observations are linear in the estimatee. Specifically, the optimal distributed BLUE fusion is identical to the centralized BLUE fusion if the observation errors are uncorrelated across sensors or the observation matrix of each sensor has full row rank. Numerical examples corroborate our analysis.
11:30 Distributed Filtering over Networks Using Greedy Gossip
This paper studies the problem of distributed filtering for state estimation of a dynamic system by using observations from sensors in a network, and proposes a greedy gossip based distributed filtering (GG-DF) algorithm. The sensor-nodes have estimation ability and work collaboratively. The information transmission across the network abides by the asynchronous gossip strategy that only two neighboring nodes are selected to communicate and exchange information with each other in each communication round. First, we propose a cost function for the estimation error of the entire network. Then, we derive our algorithm by making a greedy selection to minimize the cost. Finally, we provide performance and convergence analysis of the proposed algorithm, together with simulation results comparing with existing methods.
11:50 Collaborative Detector Fusion of Data-Driven PHD Filter for Online Multiple Human Tracking
The use of multiple data sources (measurements) has recently been demonstrated to improve the accuracy and reliability of a tracking system, as it provides redundancy in different aspects and eliminates interference from individual sources. This paper addresses the multiple human tracking problem with a multi-detector approach. This approach integrates two detectors with different characteristics (full-body and body-parts) to perform robust collaborative fusion based on data-driven Gaussian Mixture Probability Hypothesis Density (GM-PHD) filters. To leverage the maximum strengths of the multiple detectors, we propose a robust fusion center at the track level, which performs Generalized Covariance Intersection (GCI) fusion for survival and birth tracks independently, and also eliminates false tracks caused by a cluttered environment. Moreover, an identity reassignment mechanism is developed to address the identity mismatching problem in the target birth process, so as to enhance fusion performance and track consistency. Experimental results on two challenging benchmark video sequences confirm the effectiveness of the proposed approach.

7d - SS: Uncertainty, Trust and Deception in Information Fusion

Room: LR5
Chair: Audun Josang
10:30 Are My Arguments Trustworthy? Abstract Argumentation with Subjective Logic
An Abstract Argumentation Framework (AAF) is an abstract structure consisting of a set of arguments, whose origin, nature, and possible internal organisation are not specified, and a binary attack relation on the set of arguments, whose meaning is not specified either. Subjective logic provides a standard set of logical operators intended for use in domains containing uncertainty. In this paper, we define an extension of AAFs in which each argument and attack is evaluated with an opinion, by revisiting the constellations approach developed for probabilistic AAFs. In this way, different agents can merge their opinions on how "trustworthy" arguments and attacks are, e.g., that they do not represent fallacies or enthymemes. Finally, subjective logic operators can be used to fuse the beliefs of different possible worlds (i.e., a constellation of sub-graphs of the original AAF) containing different arguments and attacks.
10:50 Multi-Source Fusion Operations in Subjective Logic
The purpose of multi-source fusion is to combine information from more than two evidence sources, or subjective opinions from multiple actors. For subjective logic, a number of different fusion operators have been proposed, each matching a fusion scenario with different assumptions. However, not all of these operators are associative, and therefore multi-source fusion is not well-defined for these settings. In this paper, we address this challenge, and define multi-source fusion for weighted belief fusion (WBF) and consensus & compromise fusion (CCF). For WBF, we show the definition to be equivalent to the intuitive formulation under the bijective mapping between subjective logic and Dirichlet evidence PDFs. For CCF, since there is no independent generalization, we show that the resulting multi-source fusion produces valid opinions, and explain why our generalization is sound. For completeness, we also provide corrections to previous results for averaging and cumulative belief fusion (ABF and CBF), as well as belief constraint fusion (BCF), which is an extension of Dempster's rule. With our generalizations of fusion operators, fusing information from multiple sources is now well-defined for all different fusion types defined in subjective logic. This enables wider applicability of subjective logic in applications where multiple actors interact.
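As background, the cumulative belief fusion (CBF) operator mentioned in this abstract has a simple closed form in the binomial case. The sketch below covers only non-dogmatic opinions (at least one nonzero uncertainty) and uses made-up example numbers; CBF's associativity, checked below, is what makes its multi-source extension unproblematic:

```python
def cumulative_fuse(op_a, op_b):
    """Cumulative belief fusion of two binomial opinions (b, d, u),
    each with b + d + u = 1. Covers only the case u_a + u_b > 0."""
    b1, d1, u1 = op_a
    b2, d2, u2 = op_b
    k = u1 + u2 - u1 * u2                 # normalising denominator
    b = (b1 * u2 + b2 * u1) / k
    d = (d1 * u2 + d2 * u1) / k
    u = (u1 * u2) / k
    return (b, d, u)

# Two weakly agreeing sources: the fused opinion is less uncertain
fused = cumulative_fuse((0.6, 0.1, 0.3), (0.5, 0.2, 0.3))
```

Accumulating evidence this way always shrinks the uncertainty mass, mirroring the pooling of Dirichlet evidence counts.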
11:10 Uncertainty Characteristics of Subjective Opinions
In this work, we study different types of uncertainty in subjective opinions based on the internal belief mass distribution and the base rate distribution. Subjective opinions which are used as arguments in subjective logic (SL) expand the traditional belief functions by including base rate distributions. Fundamental uncertainty characteristics depend on the `singularity', `vagueness', `vacuity', `dissonance', `consonance' and `monosonance' of an opinion. We define those concepts in the formalism of SL and show how these characteristics can be manifested in the three different opinion classes which are binomial, multinomial, and hyper opinions. We clarify the relationships between the uncertainty characteristics and discuss how they influence decision making in SL.
11:30 A Trust Logic for Pre-Trust Computations
Computational trust is the digital counterpart of the human notion of trust as applied in social systems. Its main purpose is to improve the reliability of interactions in online communities and of knowledge transfer in information management systems. Trust models are formal frameworks in which the notion of computational trust is described rigorously and where its dynamics are explained precisely. In this paper we will consider and extend a computational trust model, i.e. Audun Jøsang's subjective logic: we will show how this model is well-suited to describe the dynamics of computational trust, but lacks effective tools to compute initial trust values to feed in the model. To overcome some of the issues with subjective logic, we will introduce a logical language which can be employed to describe and reason about trust. The core ideas behind the logical language will turn out to be useful in computing initial trust values to feed into subjective logic. The aim of the paper is, therefore, that of providing an improvement on subjective logic.
11:50 MASA: Multi-agent Subjectivity Alignment for Trustworthy Internet of Things
The vastly diverse and increasingly autonomous Internet of Things (IoT) devices stress trust management as a critical requirement of IoT. This paper addresses subjectivity as an important issue in trust management for IoT. Subjectivity means that the information provided by each autonomous IoT device, represented by an agent, is likely to have been influenced by the device's individual preference, which can be misleading in trust evaluation. In this paper, we seek to align the potentially subjective information with the information seeker's own subjectivity so that the acquired second-hand information is more useful and personalized. Accordingly, we propose a multiagent subjectivity alignment (MASA) mechanism, which models the subjectivity using a regression technique and exchanges the models among agents as the input to an alignment process. This mechanism substantially counteracts biases incurred by different agents and improves the accuracy of second-hand information fusion as demonstrated by our simulations. In addition, we also conduct experiments using a real-world dataset (MovieLens) which further validates the efficacy of MASA.

7e - Localisation 3

Room: LR6
Chair: Henri Nurminen
10:30 Invariant Kalman Filtering for Visual Inertial SLAM
Combining visual information with inertial measurements represents a popular approach to achieve robust and autonomous navigation in robotics, specifically in GPS-denied environments. In this paper, building upon both the recent theory of Unscented Kalman Filtering on Lie Groups (UKF-LG) and the theory of invariant Kalman filter based Simultaneous Localization And Mapping (SLAM) we proposed recently, an innovative UKF for the monocular visual SLAM problem is derived, where the body pose, velocity, and the 3D landmarks' positions are viewed as a single element of a (high-dimensional) Lie group $SE_{2+p}(3)$, which constitutes the state, and where the accelerometer and gyrometer biases are appended to the state and estimated as well. The efficiency of the approach is validated both in simulations and on five real datasets.
10:50 Sound Source Localization Based on Robust Least Squares in Reverberant Environments
In this paper, we address the problem of sound source localization in reverberant environments. Time-delay estimation (TDE) methods are widely employed to locate sound sources based on the time differences of arrival (TDOAs) of signals received at different microphone pairs. Under strong reverberation, the highest peak of the localization function is not necessarily from the true source, owing to the multi-path effect. Our previously proposed method based on optimal peak association (OPA) extracts multiple peaks from the localization function for each microphone pair and finds the optimal association of TDOAs corresponding to the same sound source. However, due to limitations of the geometric configuration of the microphones and possible missed detections, some microphone pairs fail to provide high-quality TDOA measurements. An improved OPA method is developed in this work based on robust least squares, which determines the weights adaptively in terms of the respective observation accuracies. Experimental results demonstrate the superiority of the proposed method over the original OPA method in reverberant environments.
11:10 3D Angle-of-Arrival Positioning Using Von Mises-Fisher Distribution
We propose modeling an angle-of-arrival (AOA) positioning measurement as a 3-dimensional von Mises-Fisher (VMF) distributed unit vector instead of the conventional normally distributed azimuth and elevation measurements. Describing the 2-dimensional AOA measurement with three numbers removes discontinuities and reduces nonlinearity at the poles of the azimuth-elevation coordinate system. Our computer simulations show that approximate Bayesian filters based on the proposed VMF measurement noise model outperform the normal-distribution-based algorithms in accuracy in a scenario where close-to-pole measurements occur frequently.
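The reparameterisation underlying this abstract — an azimuth/elevation pair mapped to a point on the unit sphere — can be sketched as follows. One common axis convention is assumed here (azimuth measured in the reference plane, elevation from it); the paper's convention may differ:

```python
import math

def aoa_to_unit_vector(azimuth, elevation):
    """Map an azimuth/elevation AOA (radians) to a 3D unit vector:
    azimuth rotates in the x-y plane, elevation rises out of it."""
    c = math.cos(elevation)
    return (c * math.cos(azimuth), c * math.sin(azimuth), math.sin(elevation))

# Near the pole (elevation ~ 90 deg) azimuth becomes nearly irrelevant:
# opposite azimuths map to almost identical unit vectors, which is the
# pole discontinuity the VMF representation sidesteps.
v1 = aoa_to_unit_vector(0.0, math.radians(89.9))
v2 = aoa_to_unit_vector(math.pi, math.radians(89.9))
```

In the azimuth-elevation parameterisation, `v1` and `v2` differ by 180 degrees of azimuth, yet as unit vectors they are nearly the same point, so a filter working on the sphere sees a small, well-behaved residual.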
11:30 Millimeter Wave Radar Detection of Moving Targets Behind a Corner
This paper considers the localization problem for moving targets behind a corner. By exploiting multi-path propagation and an algorithm based on phase comparison among multiple channels, the position of a target behind a corner can be obtained. To localize the moving target, a scanning radar system with multiple channels is suggested. The false-target range is obtained with the fast Fourier transform (FFT) technique. In addition, the false-target azimuth is derived by exploiting the phase differences between the return signals of the multiple channels. Since false targets and real targets are geometrically symmetric, true targets can be localized by the radar system. Finally, experimental results validate this method and demonstrate its effectiveness.
11:50 Addressing Data Association in Maximum Likelihood SLAM with Random Finite Sets
Recently, various solutions which adopt Random Finite Sets (RFS) for the fundamental, feature-based Simultaneous Localization and Mapping (SLAM) problem in autonomous robotics have been proposed. In contrast to their vector-based counterparts, these techniques jointly estimate the vehicle state, the map state and the map cardinality. Most of the proposed RFS solutions are based on a Rao-Blackwellized particle filter representing the vehicle state, accompanied by an RFS filter representing the map. This article shows that an RFS maximum likelihood approach to SLAM is also possible. By maximizing the RFS-based measurement likelihood, this article demonstrates that Maximum Likelihood (ML) SLAM is possible without the need for external data association algorithms. It is demonstrated that RFS-based ML-SLAM converges to the same solution as its traditional vector-based counterpart. However, RFS-ML-SLAM does not require the correct data association decisions necessary for the correct convergence of traditional random-vector-based approaches.

7f - SS: Semi-supervised/unsupervised Learning-based State Estimation

Room: LR11
Chair: Xiaoxu Wang
10:30 Improved Adaptive Kalman Filter with Unknown Process Noise Covariance
This paper considers the joint recursive estimation of the dynamic state and the time-varying process noise covariance for a linear state space model. The inverse Wishart distribution, the conjugate prior on the process noise covariance, is introduced as a latent variable. A variational Bayesian inference framework is then adopted to iteratively estimate the posterior density functions of the dynamic state, the process noise covariance and the introduced latent variable. The performance of the algorithm is demonstrated with simulated data in a target tracking application.
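A minimal sketch of the inverse-Wishart building block used by such schemes (the parameterisation and function names here are illustrative assumptions, not the paper's actual update equations):

```python
import numpy as np

def iw_update(nu, Psi, expected_outer):
    """One variational refinement of an inverse-Wishart posterior over the
    process noise covariance Q: add one degree of freedom and the expected
    outer product of the state-prediction residual (hypothetical form)."""
    return nu + 1.0, Psi + expected_outer

def q_posterior_mean(nu, Psi):
    """Posterior mean E[Q] = Psi / (nu - d - 1) of an inverse-Wishart
    with nu degrees of freedom and scale matrix Psi (requires nu > d + 1)."""
    d = Psi.shape[0]
    return Psi / (nu - d - 1.0)
```

In a full variational Bayesian filter, `iw_update` would alternate with a Kalman-style state update using `q_posterior_mean` as the current estimate of Q.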
10:50 Unscented Particle Double Layer Filter
The particle filter (PF) provides a general numerical tool for non-Gaussian filtering problems, but it suffers from particle depletion, among other issues. The unscented particle filter (UPF) can alleviate particle depletion, but it is computationally intensive. To overcome these problems, the unscented particle double layer filter (UPDLF) is proposed. The proposed algorithm uses the PF to replace the state transition density function in the UKF and updates the weight of each deterministic sampling point based on the new measurements, from which the state estimate at each time is obtained. Numerical simulations with two examples show that the proposed filter outperforms both the PF and the UPF.
11:10 Variational Bayesian Inference for Jump Markov Linear Systems with Unknown Transition Probabilities
Jump Markov linear systems (JMLSs) switch among simpler models according to a finite Markov chain whose parameter, the transition probability matrix (TPM), is rarely known in practice; if it is misspecified, the estimator's performance can degrade significantly, so the TPM needs to be estimated. This paper considers the general situation where the TPM is unknown and random, and presents a variational Bayesian method for recursive joint estimation of the system state and the unknown TPM. Assuming that the transition probabilities follow Dirichlet distributions, a variational Bayesian approximation is made to the joint posterior distribution of the TPM, the system state and the modal state at each time step separately. The resulting recursive method is applicable to various Bayesian multiple-model state estimation algorithms for JMLSs; an application to the IMM algorithm is demonstrated as an example. The performance of the proposed method is illustrated by numerical simulations of maneuvering target tracking.
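With row-wise Dirichlet priors, the conjugate TPM update amounts to adding expected transition counts to the concentration parameters. A minimal sketch (shapes and names are illustrative assumptions, not the paper's notation):

```python
import numpy as np

def update_tpm_rows(alpha, expected_counts):
    """One conjugate update step: each row of the TPM carries a Dirichlet
    distribution with concentration parameters alpha[i, :], updated by
    adding the expected mode-transition counts for this time step.
    alpha, expected_counts: (M, M) arrays for M modes."""
    return alpha + expected_counts

def tpm_posterior_mean(alpha):
    """Posterior-mean TPM: normalise each Dirichlet row to sum to one."""
    return alpha / alpha.sum(axis=1, keepdims=True)
```

In a recursive multiple-model filter, `expected_counts` would come from the smoothed mode probabilities of the current step, and `tpm_posterior_mean` would feed the next IMM mixing stage.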
11:30 Linear Gaussian Regression Filter Based on Variational Bayes
In this paper, a novel nonlinear filter named the linear Gaussian regression filter (LGRF) is proposed. LGRF uses variational Bayes (VB) to indirectly approximate the posterior probability density function (PDF) for state estimation. The core of LGRF is to use a linear Gaussian distribution with a set of compensating parameters (CPs) to characterize the likelihood probability (LP) and maximize the lower bound. By iteratively alternating between state estimation and CP identification, the estimation accuracy is improved gradually. In addition, compared with point-based filters, LGRF involves no covariance matrix decomposition, so the inherent numerical instability of that step is avoided. The superior performance of LGRF is demonstrated in a maneuvering target tracking simulation.
11:50 OTHR Multipath Tracking with Correlated Virtual Ionospheric Heights
This paper proposes a new virtual ionospheric height model for over-the-horizon radar (OTHR) target tracking. Considering the spatial correlation among different ionosphere sites, the virtual ionospheric heights are modeled by a Gaussian Markov random field (GMRF). The priors of the GMRF model can be learned from historical ionosonde measurements. Given the acquired measurements of the ionosphere subregions, the virtual ionospheric heights of the unmeasured subregions are inferred based on the GMRF model. We then present multipath probabilistic data association for uncertain coordinate registration (MPCR) with the new virtual ionospheric height model. Numerical simulation shows that the accuracy of OTHR target tracking is improved.

7g - SS: Information Fusion in Multi-Biometrics and Forensics

Room: LR12
Chair: Naser Damer
10:30 Fingerprint and Iris Multi-biometric Data Indexing and Retrieval
Indexing of multi-biometric data is required to facilitate fast search in large-scale biometric systems. Previous works addressing this issue in multi-biometric databases focused on multi-instance indexing, commonly of iris data. Few works addressed indexing in multi-modal databases, with basic candidate-list fusion solutions limited to joining face and fingerprint data. Iris and fingerprint are widely used in large-scale biometric systems where fast retrieval is a significant issue. This work proposes a joint multi-biometric retrieval solution based on fingerprint and iris data. The solution is evaluated with eight different candidate-list fusion approaches of varying complexity on a database of 10k reference and probe records of irises and fingerprints. Our proposed multi-biometric retrieval of fingerprint and iris data reduced the miss rate (1 - hit rate) at a 0.1% penetration rate by 93% compared to fingerprint indexing and by 88% compared to iris indexing.
10:50 A Mobile App Authentication Approach by Fusing the Scores from Multi-modal Data
Remembering various PINs (personal identification numbers) and passwords is a major challenge for most people, yet this remains the most prevalent way of identifying oneself to log into mobile applications. To relieve people from memorizing these codes, this paper designs an unobtrusive mobile-phone app authentication approach that analyzes data collected from four resources: WiFi, Bluetooth, the accelerometer and the gyroscope. We first develop an authentication model for each single resource. A score-level fusion authentication approach is then proposed that combines the scores generated from the four models. The proposed approach was evaluated on a dataset collected from a real-life scenario with the authentication response time set at 3 seconds. The best EER (equal error rate) achieved in the experiments is 9.67%, which indicates the feasibility of deploying the proposed approach on mobile phones to enhance security while maintaining good user friendliness.
11:10 Towards Protected and Cancelable Multi-spectral Face Templates Using Feature Fusion and Kernalized Hashing
Multi-spectral imaging has been explored to handle a set of deficiencies found in traditional imaging, which captures images only in the visible or NIR spectrum. The promising performance obtained in experimental works indicates its use case in real-life biometric systems. As biometric systems should also protect biometric templates, template protection schemes are required for multi-spectral imaging biometric systems. Specifically, the biometric templates need to be protected after feature extraction to avoid leakage of biometric data and subsequent linkability issues. In this work, we investigate and propose a new template protection scheme for multi-spectral biometric systems. The proposed approach leverages information across different spectra to provide protected templates. Through the use of kernalized hashing, we provide a fully unlinkable template protection scheme that works across all spectra. Further, we propose template-level fusion across all spectral bands to improve the performance of biometric systems with template protection in place. Using a relatively large multi-spectral face biometric database of 168 subjects captured in 9 narrow spectral bands in the visible and near-infrared range (530nm to 1000nm), we illustrate the effectiveness of the proposed approach in achieving robust and secure template protection while addressing irreversibility, unlinkability and renewability. Through the experiments we establish the performance of the proposed template protection approach and demonstrate a high Genuine Match Rate (≈ 100% at a False Accept Rate of 0.01%) and a low Equal Error Rate (≈ 0%), while satisfying the other requirements of biometric template protection. With this set of experimental validations, we present a security analysis to demonstrate the unlinkability of the biometric templates.
11:30 Fusion of Multi-scale Local Phase Quantization Features for Face Presentation Attack Detection
Face recognition systems are widely known to be vulnerable to presentation (spoofing) attacks. Their exponentially growing deployment is further challenged, especially by low-cost face artefacts that can be generated using conventional printers. In this paper, we present a novel scheme to detect face presentation attacks, especially high-quality print attacks. The proposed scheme leverages the phase information extracted from the spatial-frequency representation of the given image. We also present a new face presentation attack database collected using an iPhone 6S. The new database comprises 100 subjects collected in two different sessions, resulting in a total of 31228 samples (or images). Extensive experiments are carried out on the newly constructed database, and the obtained results show the improved performance of the proposed scheme compared against six different state-of-the-art methods.
11:50 Deep and Multi-algorithmic Gender Classification of Single Fingerprint Minutiae
Accurate fingerprint gender estimation can positively affect several applications, since fingerprints are one of the most widely deployed biometrics. For example, gender classification in criminal investigations may significantly narrow the list of potential subjects. Previous work mainly offered solutions for gender classification based on complete fingerprints. However, partial fingerprint captures occur frequently in many applications, including forensics and the fast-growing field of consumer electronics. Moreover, partial fingerprints are not well-defined. Therefore, this work improves gender decision performance on a well-defined partition of the fingerprint: it performs gender estimation at the level of a single minutia. Working on this level, we propose three main contributions that were evaluated on a publicly available database. First, a convolutional neural network model is offered that outperformed baseline solutions based on hand-crafted features. Second, several multi-algorithmic fusion approaches were tested, combining the outputs of different gender estimators to further increase the classification accuracy. Third, we propose including minutia detection reliability in the fusion process, which enhances the overall gender decision performance. The achieved gender classification performance of a single minutia is comparable to the accuracy that previous work reported on a quarter of an aligned fingerprint containing more than 25 minutiae.

7h - Bayesian Methods/ Belief Propagation

Room: JDB-Seminar Room
Chair: Sumeetpal S. Singh
10:30 Joint Tracking for Capturing and Classification Based on Joint Decision and Estimation
This paper presents an approach to joint tracking for capturing and classification (JTCC). Target tracking for capturing requires that the estimates be within a close neighborhood of the estimatee, rather than merely have a small average error as in traditional tracking problems. Target classification determines the class of the targets. Tracking for capturing is an estimation problem while classification is a decision problem, and the two are highly coupled, so JTCC is a joint decision and estimation (JDE) problem. To solve this problem jointly, we first consider a generalized Bayes risk based on a previously proposed JDE idea. By minimizing this Bayes risk, we obtain the joint solution, whose estimation part turns out to be a generalized maximum a posteriori estimator. The JTCC approach adequately addresses the coupling between decision and estimation and the peculiarity of tracking for capturing. To evaluate the proposed algorithm jointly, we also give a joint performance measure: the joint capturing and correct classification rate. Simulation results show that JTCC outperforms the decision-then-estimation, separate decision and estimation, and conditional joint decision and MMSE-estimation methods on the joint performance measure.
10:50 Phase Transition in Bayesian Tracking in Clutter
A simple model of Bayesian object tracking, in the presence of clutter (i.e. spurious detections), is studied. In this model a single object is known to be present, and is detected in every time-step with additional detections of Poisson distributed clutter. Its position is estimated using Gaussian models of measurement errors and diffusion for the object's unknown motion. Neglecting fluctuations, in three or more dimensions a phase transition is found at a certain critical clutter density, above which long-term tracking is not possible, but below which the posterior probability remains localised around the true object position even after infinite time. In fewer dimensions there is no phase transition and long-term tracking is always possible in principle. Bounds on the critical clutter density are calculated for a continuous model. Numerical results for a discrete model confirm the approximate prediction of the critical clutter density. It is anticipated that similar behaviour will occur for more general models of multiple-object tracking.
11:10 On Bayesian Inference for Continuous-time Autoregressive Models Without Likelihood
The continuous-time autoregressive (CAR) model is a powerful model for many real-world continuous processes. When the model is driven by Brownian motion, parameter inference is usually based on likelihood calculation using the Kalman filter; when the model is driven by a non-Gaussian Lévy process, Monte Carlo methods are often applied to approximate the likelihood. In both cases likelihood evaluation is key, but it is not always easy. Here we propose an innovative Bayesian inference method without the requirement of likelihood evaluation. The algorithm is in the framework of approximate Bayesian computation (ABC). Distance correlation is employed as a very flexible summary statistic for ABC, and the p-value calculated from distance correlation provides a good measure of the dependence between generated samples. A simulation study shows that this approach is straightforward and effective in inferring CAR model parameters.
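As a rough sketch of the ingredients (not the paper's algorithm: the rejection scheme, path simulator and all parameter choices below are illustrative assumptions), the distance-correlation summary and a likelihood-free rejection loop could look like:

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation between two equal-length 1-D samples:
    doubly centre the pairwise-distance matrices, then normalise the
    distance covariance by the distance standard deviations."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])
    b = np.abs(y[:, None] - y[None, :])
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()
    return np.sqrt(dcov2 / np.sqrt((A * A).mean() * (B * B).mean()))

def abc_rejection(y_obs, simulate, prior_draw, n=200, keep=0.1, seed=0):
    """Likelihood-free rejection ABC: draw parameters from the prior,
    simulate a path for each, and keep the draws whose simulated paths
    show the highest distance correlation with the observed path."""
    rng = np.random.default_rng(seed)
    draws = [prior_draw(rng) for _ in range(n)]
    scores = [distance_correlation(y_obs, simulate(th, rng)) for th in draws]
    order = np.argsort(scores)[::-1]  # high dependence first
    return [draws[i] for i in order[: max(1, int(keep * n))]]
```

Distance correlation is zero only under independence and equals one for exact linear dependence, which is what makes it usable as a dependence-based ABC summary.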
11:30 Belief Propagation Based AIS/Radar Data Fusion for Multi-Target Tracking
A data fusion technique aiming at combining observations from two classes of sensors is proposed. The first class consists of sensors that produce periodic noisy observations of the targets; moreover, they may also miss the targets or generate false alarms. Sensors belonging to the second class, instead, do not generate false alarms, and provide aperiodic noisy observations of the targets that may have an identity. The problem is formalised with specific application to the maritime domain, in which radar sensors and the Automatic Identification System (AIS) are selected as representatives of the two classes, respectively. A Bayesian framework is developed and a detection-estimation problem is formulated, which is then efficiently solved with the use of a Belief Propagation (BP) message passing scheme. The performance and the effectiveness of the proposed algorithm are evaluated in a simulated scenario.
11:50 Online Estimation of Unknown Parameters in Multisensor-Multitarget Tracking: a Belief Propagation Approach
We propose a Bayesian multisensor-multitarget tracking framework, which adapts to randomly changing conditions by continually estimating unknown model parameters along with the target states. The time-evolution of the model parameters is described by a Markov chain and the parameters are incorporated in a factor graph that represents the statistical structure of the tracking problem. We then use the belief propagation (BP) message passing scheme to calculate the marginal posterior distributions of the targets and the model parameters in an efficient way that exploits conditional statistical independencies. As a concrete example, we develop an adaptive BP-based multisensor-multitarget tracking algorithm for maneuvering targets with multiple dynamic models and sensors with unknown and time-varying detection probabilities. The performance of the proposed algorithm is finally evaluated in a simulated scenario.

Friday, July 13 12:10 - 13:10

Lunch

Friday, July 13 13:10 - 14:50

8a - Deep Learning

Room: LR0
Chair: Subrata Das
13:10 Using Deep Learning for Classifying Ship Trajectories
In this paper we demonstrate how deep learning can be applied to the field of sea surveillance by classifying ship types from their trajectories. Commercial ships using AIS continually report information such as their ship type, e.g. fishing or cargo ship. A problem with AIS information, however, is that it can easily be modified and may therefore be deliberately or accidentally incorrect. To address this, we use an 1100-hour AIS data set to train 16 different neural networks to classify ships using only motion trajectories, without relying on the reported ship type. We also test three baseline methods using a more conventional 1-nearest-neighbor approach. The evaluation showed that the best-performing classifier was one based on deep learning.
13:35 Deep Learning for Military Image Captioning
US DoD big data is extensively multimodal and multi-INT: structured sensor data and unstructured audio, video and textual ISR data are generated by numerous air-, ground- and space-borne sensors along with human intelligence. Data fusion at all levels "remains a challenging task." While there are algorithmic stove-piped systems that work well on individual modalities, there is no system to date that is mission- and source-agnostic and can seamlessly integrate and correlate multi-INT data including textual, hyperspectral and video content. The considerable volume and velocity of big data only compound the aforementioned challenges encountered in fusion. We have developed the concept of "deep fusion" based on deep learning models adapted to process multiple modalities of big data. Rather than reducing each modality independently and fusing in a higher-level model (feature-level fusion), the deep fusion approach generates a set of multimodal features, thereby maintaining the core properties of the dissimilar signals and resulting in fused models of higher accuracy. We have initiated a deep fusion experiment to automatically generate image captions, helping analysts tag and caption large volumes of images gathered from collection platforms. In our proof-of-concept demonstration of caption generation, the generative model is based on a deep recurrent architecture combined with the pre-trained image-to-vector model Inception V3, a convolutional neural network (CNN), and the word-to-vector model word2vec, a skip-gram model. We use the Flickr8K dataset extended with some military-specific images to make the demonstration more relevant to the DoD domain. Detailed results from the image captioning experiment are presented here. The captions generated from test images are subjectively evaluated, and comparison of BLEU scores shows substantial improvements.
14:00 Doc2Img: A New Approach to Vectorization of Documents
Vector space representations of text have increased in popularity and are used in various text classification problems. We present Doc2Img, a new approach to create document vectors that improves upon existing approaches such as Word2Vec and Doc2Vec in capturing similarities between words within a document and the differences across documents. We apply this new vector space representation to the problem of deriving the sensor requirements of apps (for smartphones and IoT devices) by learning a classification model using document vectors. We show that this learned model outperforms existing vector space representations (Word2Vec and Doc2Vec) by more than 10%. Further, this model can predict with an average accuracy of 75%, and greater than 85% on the top-20 sensor requirements, for 300 different applications.
14:25 Semantic Segmentation on Radar Point Clouds
Semantic segmentation on radar point clouds is a new challenging task in radar data processing. We demonstrate how this task can be performed and provide results on a large data set of manually labeled radar reflections. In contrast to previous approaches where generated feature vectors from clustered reflections were used as an input for a classifier, now the whole radar point cloud is used as an input and class probabilities are obtained for every single reflection. We thereby eliminate the need for clustering algorithms and manually selected features.

8b - SS: Sensor, Resources, and Process Management for Information Fusion Systems

Room: LR1
Chair: Selim Ozgen
13:10 Sequential LMMSE Filtering with Out-of-Sequence Observations Under Nonlinear System
In this work, the problem of state estimation is addressed in the simultaneous presence of sensor faults, out-of-sequence measurements (OOSM) and a nonlinear dynamical system. We propose a mixture filter that follows the criterion of linear minimum mean squared error (LMMSE) under a nonlinear system, and derive formulae for LMMSE estimation with arbitrarily delayed OOSMs. The mixture criterion first performs nonlinear estimation using in-sequence observations, and then accounts for the correlated observation faults and the arbitrarily delayed OOSMs in the LMMSE sense. The feasibility of the proposed approach is demonstrated by numerical comparisons.
13:30 Response Surface Modelling for Networked Radar Resource Allocation
Sensor management is an important function of any data fusion center, as the output of a fusion system depends on the quality of the information collected. In this paper, the scheduling aspect of the sensor management function is implemented using Response Surface Modeling (RSM). Applying RSM requires formulating the sensor management function as an objective function; the benefit of RSM over prior global optimization approaches is that it simplifies the evaluation of this objective function when searching for global optima. This leads to reduced computational requirements and/or shorter lead times for creating sensor schedules. This work shows the utility of RSM for scheduling multiple sensors and seeks to introduce RSM to the sensor management community. It is shown that the RSM scheduler provides a significant improvement in reducing the number of missed targets in a surveillance radar network, compared to the uniform scanning regime (or sequential stepped scan) often employed. Very few iterations are required to provide this gain. The RSM technique also quickly determines where sensor resources are used most effectively, and consequently spends more radar dwell time on those beam locations.
13:50 Cross-Domain Pseudo-Sensor Information Measure
In a companion paper, we introduce the concept of a cross-domain pseudo-sensor comprising a combination of one or more hard and soft sensors as part of our method of Information Based Sensor Management (IBSM). Previously we defined two information measures associated with IBSM based on changes in Shannon entropy. The first measure is situation information defined on the Bayes net (BN) component of the situation information expected value network (SIEV-net). This situation information measure is defined as a global measure of the change in uncertainty in all the nodes of the BN. The second measure is sensor information defined on a norm of an error covariance matrix associated with the kinematic state estimator or other physical measurement. In this paper, we extend the concepts of situation information and sensor information to include cross-domain (hard/soft) pseudo-sensor measurements. We also discuss the issue of contemporaneity of the cross-domain measurements. The result extends the applicability of IBSM to soft sensors as well as cross-domain pseudo-sensors.
14:10 Cross-Domain Pseudo-Sensors in IBSM
The applicable (sensor) function table (AFT) is one of the 6 major components of the Information Based Sensor Management (IBSM) approach to sensor management. The AFT lists all available sensing actions from which the information instantiator (II) can downselect to an admissible set of sensor functions which are capable of satisfying an information request. The II further orders the admissible set by their expected (sensor) information value rate (EIVR) for passing observation requests to the sensors. While the AFT was initially developed for hard sensors, it is becoming increasingly important to not only develop methods of characterizing soft sensors for representation in the AFT, but also for characterizing approximate contemporaneous measurements by both hard and soft sensors as AFT entries. Hard sensor entries in the AFT are reviewed. Suggestions are presented for AFT entries of several representative soft sensors. Hard and soft sensing functions are then combined where possible into cross-domain (hard/soft) pseudo-sensor entries. Additionally, brief examples are presented which demonstrate how the AFT entries may be represented in the Sensor Model Language (SML) format for improved automation of the sensor management process.
14:30 Sensor Control for Selective Object Tracking Using Labeled Multi-Bernoulli Filter
With the recent advent of labeled random finite set filters, it is now possible not only to estimate the number of objects and their states, but also to track their trajectories, all within the stochastic filtering scheme. This paper investigates how the object label information returned by a labeled multi-Bernoulli filter can be effectively used for sensor control. The main focus is on selective multi-object tracking applications where objects with particular labels are of high priority, and the sensor needs to be controlled to achieve maximum confidence in the filter's tracking performance for those objects. We formulate and examine two novel solutions. The first is based on maximizing the confidence in what the filter returns regarding the existence of the objects of interest. The second is based on maximizing the confidence in what the filter returns regarding both the existence and the states of the objects of interest. We also present an intuitive solution for scenarios in which some targets of interest temporarily disappear and then reappear. Simulation results indicate how the proposed methods can significantly improve the tracking accuracy for objects of interest, compared to generic (non-selective) sensor control methods.

8c - SS: Multi-sensor Data Fusion for Navigation and Localisation 2

Room: LR2
Chair: Jonathon Chambers
13:10 Robust Vehicle Infrastructure Cooperative Localization in Presence of Clutter
One of the primary challenges for a successful highly assisted and/or autonomous vehicle is its localization. To improve the precision of the vehicle's location, not only the internal sensors are used; data from external sensors is also attracting increasing attention from the research community. One such proposed sensor is an infrastructure radar, which can be used to improve the localization of the ego-vehicle. Although a radar is indeed a supplementary source of information, it suffers from a unique type of clutter whose returns have trajectories like real objects and can therefore result in "ghost measurements", i.e., measurements which do not correspond to any real vehicle. This deteriorates the quality of the fused state estimates. This paper proposes a robust method to fuse the radar readings in the presence of such outliers. The methodology builds upon a previously proposed solution in which the problem was formulated as a factor graph and the radar measurements were added as a novel constraint on the sum of inter-vehicle distances, called the Topology Factor. Our previous work assumed a clutter-free environment; this paper proposes a novel robust Topology Factor that is also resilient against the above-mentioned outliers. Simulations (based on real data) show promising results in lowering the degradation of fused state estimates in the presence of such clutter.
13:30 Network Localization and Navigation Using Measurements with Uncertain Origin
Location aware networks will introduce new applications and services for modern convenience, the military, and public safety. In this paper, we introduce a Bayesian method for network localization and navigation (NLN) in the presence of measurement origin uncertainty (MOU). In the envisioned cooperative scenario, the agents of a dynamic network aim to better localize themselves by performing observations of other agents in their environment and sharing their location information. Since observations suffer from MOU, a data association problem has to be solved before an agent can update its location information with information provided by neighboring agents in the network. In our approach, joint inference is performed through a factor graph formulation of the entire, network-wide estimation problem. Performing the loopy sum-product algorithm (SPA) on the derived factor graph results in a distributed and scalable inference algorithm. Simulation results demonstrate that even in the presence of MOU, cooperation among agents can significantly improve the localization accuracy.
13:50 SINS Error Amendment for Deep-diving HOV Using Range-only Positioning
Motivated by the problem that the position error of a strap-down inertial navigation system (SINS) accumulates over time while a deep-diving human occupied vehicle (HOV) executes unpowered diving, this paper describes an online SINS error amendment method using range-only positioning. The proposed method avoids the use of an expensive ultra-short baseline (USBL) system, which needs to be precisely calibrated, and a long baseline (LBL) system, which is hard to calibrate and expensive in ship time to deploy. The proposed method uses only a set of acoustic ranges from the support vessel as measurement information and the SINS error model as the process model, based on which the error divergence is eliminated using an extended Kalman filter (EKF). Simulation results show that SINS/range-only positioning can effectively amend the longitude and latitude errors of the SINS using the proposed method.
14:10 Redundant RINS Information Fusion with Application to Shipborne Transfer Alignment
The single-axis rotational inertial navigation system (RINS) can average out the biases of the inertial sensors perpendicular to the rotation axis. However, these inertial sensor biases will introduce Schuler oscillation and saw-tooth error in the velocity output. Redundant RINS configurations are widely used in ships and underwater vehicles, yet the information fusion between the redundant systems is usually ignored. In this paper, a joint error model and a measurement model are constructed for the redundant RINSs, whereby a novel Kalman filter is designed to estimate the inertial sensor biases without requiring external reference information for aiding. Based on the estimates of the inertial sensor biases, a velocity error prediction model is designed to predict the velocity error they cause; by correcting the velocity output, the velocity fluctuation is decreased by 30%. As a typical application, the compensated velocity output from the master RINS is provided to the slave inertial navigation system (INS) to accomplish transfer alignment. Simulation tests and experiments are conducted to verify the effectiveness of the proposed method.
14:30 Multi-robot Autonomous Navigation System Using Informational Fault Tolerant Multi-Sensor Fusion with Robust Closed Loop Sliding Mode Control
This paper presents a strategy that combines fault tolerant multi-sensor data fusion with a closed-loop controller scheme robust against external disturbances, applied to a multi-robot mobile system tracking different trajectories. Multi-sensor fusion is ensured using an information filter, the canonical form of the Kalman filter. A fault detection and exclusion strategy is proposed to eliminate erroneous measurements: residuals are generated using the Kullback-Leibler divergence, which compares the prior and posterior distributions of the predicted and corrected estimates, respectively. The prediction model is based on odometry, with encoder data as input; the observation model is based on the observations of the extra sensors. To optimise detection, an adaptive thresholding method based on the Kullback-Leibler criterion is proposed. Trajectory tracking is achieved using a sliding mode controller (SMC), developed by inverting the dynamical model of the mobile robots. The SMC is robust against external matched disturbances, parameter variations and actuator deterioration; however, it cannot handle a total loss of effectiveness, which makes detection and isolation of faulty actuators compulsory. For this purpose, the controller inputs are converted into expected elementary velocities and compared to the real data obtained from the fault tolerant multi-sensor data fusion. The main contribution of this paper is to combine a fault detection and exclusion (FDE) scheme with an enhanced sliding mode controller in order to detect and isolate both sensor and actuator faults. The method is applied to a multi-robot system, and the obtained experimental results demonstrate the performance of the proposed approach.
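The KL-divergence residual described above can be sketched as follows for Gaussian prior/posterior estimates (a simplified, hypothetical example, not the paper's implementation):

```python
import numpy as np

def kl_gauss(mu0, S0, mu1, S1):
    """KL( N(mu0,S0) || N(mu1,S1) ) for multivariate Gaussians."""
    d = mu0.size
    S1_inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0)
                  + diff @ S1_inv @ diff
                  - d
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

# Fault-free case: prediction and correction nearly agree -> small residual.
mu_pred, S_pred = np.array([1.0, 2.0]), np.eye(2) * 0.5
mu_corr, S_corr = np.array([1.02, 1.98]), np.eye(2) * 0.45
r_ok = kl_gauss(mu_pred, S_pred, mu_corr, S_corr)

# Faulty measurement: the correction jumps away -> large residual, which
# an adaptive threshold would flag for exclusion.
mu_bad = np.array([4.0, -1.0])
r_bad = kl_gauss(mu_pred, S_pred, mu_bad, S_corr)
```

In the paper's setting the residual would be thresholded adaptively; here the gap between `r_ok` and `r_bad` simply shows why the divergence is a usable fault indicator.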

8d - Pattern Analysis/AI

Room: LR5
Chair: Kezhi Mao
13:10 Closed-loop Bayesian Semantic Data Fusion for Collaborative Human-Autonomy Target Search
In search applications, autonomous unmanned vehicles must be able to efficiently reacquire and localize mobile targets that can remain out of view for long periods of time in large spaces. As such, all available information sources must be actively leveraged -- including imprecise but readily available semantic observations provided by humans. To achieve this, this work develops and validates a novel collaborative human-machine sensing solution for dynamic target search. Our approach uses continuous partially observable Markov decision process (CPOMDP) planning to generate vehicle trajectories that optimally exploit imperfect detection data from onboard sensors, as well as semantic natural language observations that can be specifically requested from human sensors. The key innovation is a scalable hierarchical Gaussian mixture model formulation for efficiently solving CPOMDPs with semantic observations in continuous dynamic state spaces. The approach is demonstrated and validated with a real human-robot team engaged in dynamic indoor target search and capture scenarios on a custom testbed.
13:35 A Compact Belief Rule-Based Classifier with Interval-Constrained Clustering
In this paper, a rule learning method based on interval-constrained clustering is proposed to efficiently design a compact belief rule-based classifier. The main idea is to learn a compact belief rule base from a set of prototypes generated from the original training set. First, an interval-constrained clustering algorithm is used to divide the training data of each class into several clusters, such that the number of data points belonging to each cluster is constrained within a given interval. Then, a belief rule is defined based on the centroid of each cluster. Finally, a two-objective optimization procedure is designed to obtain a compact belief rule base with a better trade-off between accuracy and interpretability. Two experiments based on synthetic and benchmark data sets evaluate the performance of the proposed classifier.
14:00 Feature Regrouping for CCA-Based Feature Fusion and Extraction Through Normalized Cut
Feature fusion is important for enhancing data authenticity in both traditional and deep learning pattern analysis. Classical serial fusion concatenates multiple feature sets and then reduces dimensionality using principal component analysis (PCA), linear discriminant analysis (LDA), or canonical correlation analysis (CCA). CCA-based feature fusion is a main technique for exploring the mutual relationships of multiple feature sets, as it considers their correlation during dimensionality reduction. In traditional CCA-based feature fusion and extraction, the natural groupings of the features are used directly, yet it is unclear whether these natural groupings are optimal for CCA-based fusion. In this paper, we propose a feature regrouping algorithm for CCA-based feature fusion and extraction through normalized cut (FR-NC). Feature correlation analysis is incorporated into the normalized cut, so that the intra-group correlation is maximized while the extra-group correlation is simultaneously minimized. CCA-based feature fusion is then performed on the regrouped features. The proposed feature regrouping algorithm aims to provide enhanced fused features for pattern classification, and extensive experiments demonstrate its effectiveness.
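For readers unfamiliar with CCA-based serial fusion, the following self-contained sketch (illustrative data and names, not the FR-NC algorithm itself) projects two feature sets onto their maximally correlated directions and concatenates the canonical variates as the fused feature:

```python
import numpy as np

def cca(X, Y, k, reg=1e-6):
    """Return the top-k canonical variates of X, Y (n x d arrays)."""
    X = X - X.mean(0); Y = Y - Y.mean(0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    # Whiten both sets via Cholesky factors, then SVD the cross-covariance.
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx))
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy))
    U, s, Vt = np.linalg.svd(Wx @ Cxy @ Wy.T)
    A = Wx.T @ U[:, :k]      # projection for X
    B = Wy.T @ Vt[:k].T      # projection for Y
    return X @ A, Y @ B, s[:k]   # s holds the canonical correlations

rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 2))                   # shared latent signal
X = np.hstack([Z, rng.normal(size=(200, 3))])   # feature set 1
Y = np.hstack([Z + 0.1 * rng.normal(size=(200, 2)),
               rng.normal(size=(200, 4))])      # feature set 2
U1, U2, corrs = cca(X, Y, k=2)
fused = np.hstack([U1, U2])      # serial fusion of the canonical variates
```

FR-NC would additionally regroup the raw features (via normalized cut on their correlations) before this CCA step; the sketch only covers the fusion stage.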
14:25 Anomaly-Aware Traffic Prediction Based on Automated Conditional Information Fusion
Reliable and accurate short-term traffic prediction plays a key role in modern intelligent transportation systems (ITS) for achieving efficient traffic management and accident detection. Previous work has investigated this topic but lacks study of automated anomaly detection and conditional information fusion for ensemble methods. This work aims to improve prediction accuracy by fusing information under different traffic conditions in ensemble methods. In addition to conditional information fusion, a day-week decomposition (DWD) method is introduced for preprocessing before anomaly detection. A k-nearest neighbours (kNN) based ensemble method is used as an example. Real-world data are used to test the proposed method with stratified ten-fold cross-validation. The results show that the proposed method with incident labels improves predictions by up to 15.3% and the DWD-enhanced anomaly detection improves predictions by up to 8.96%. Conditional information fusion improves ensemble prediction methods, especially for incident traffic. The proposed method works well with enhanced detection, and the procedure is fully automated. The accurate predictions lead to more robust traffic control and routing systems.
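A day-week decomposition of the kind mentioned above might look like the following toy sketch, where the typical weekly profile is removed before a simple z-score anomaly detector is applied (all data here are synthetic, and the procedure is an assumption, not the paper's exact DWD):

```python
import numpy as np

rng = np.random.default_rng(1)
slots_per_day, n_weeks = 24, 8
# Synthetic traffic flow: a daily sinusoidal profile plus noise.
base = 100 + 40 * np.sin(np.linspace(0, 2 * np.pi, slots_per_day))
flow = np.tile(base, 7 * n_weeks) + rng.normal(0, 3, size=7 * n_weeks * slots_per_day)
flow[500] += 80          # injected incident (abnormal spike)

# Day-week decomposition: subtract the mean profile of each weekly slot.
X = flow.reshape(n_weeks, 7 * slots_per_day)    # weeks x weekly slots
profile = X.mean(axis=0)                        # typical weekly pattern
residual = (X - profile).ravel()

# Simple z-score anomaly detection on the residual.
z = (residual - residual.mean()) / residual.std()
anomalies = np.flatnonzero(np.abs(z) > 4)
```

Removing the periodic profile first is what lets a plain threshold find the incident; applied to the raw `flow`, the same threshold would mostly flag the daily peaks.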

8e - SS: Towards a Battlefield IoT: Information Challenges and Solutions

Room: LR6
Chair: Tarek Abdelzaher
13:10 Risks and Benefits of Side-Channels in Battlefields
As networked devices and applications make their way into our battlefields, their behaviors need to take into account these highly cyber-physical adversarial environments. On the dark side of the spectrum, undesired side-channels put our sensitive data at risk; hence, side-channel-protected devices and implementations should be promoted. On the bright side of the spectrum, side-channel analysis may be correlated with observed and hidden events and enable causality inference and watermarking. This paper describes some unique risks and benefits that can be obtained from side-channel analyses in battlefields.
13:35 A Command-by-Intent Architecture for Battlefield Information Acquisition Systems
The command-by-intent paradigm improves the agility of military operations by empowering subordinate units to exercise measured initiative to meet mission goals and accept prudent risk within the commander's intent. This paper discusses what the paradigm entails in terms of architectural decisions for data fusion systems tasked with real-time information collection to satisfy operational mission goals. A preliminary version of such an architecture is presented and evaluated using a target tracking task set in the context of a NATO-based mission scenario.
14:00 Squadron: Incentivizing Quality-Aware Mission-Driven Crowd Sensing
Recent years have witnessed the success of mobile crowd sensing systems, which outsource sensory data collection to the public crowd equipped with various mobile devices in a wide spectrum of civilian applications. We envision that crowd sensing could as well be very useful in a whole host of mission-driven scenarios, such as peacekeeping operations, noncombatant evacuations, and humanitarian missions. However, the power of crowd sensing could not be fully unleashed in mission-driven crowd sensing (MiCS) systems, unless workers are effectively incentivized to participate. Therefore, in this paper, taking into consideration workers' diverse quality of information (QoI), we propose Squadron, a quality-aware incentive mechanism for MiCS systems. Squadron adopts the reverse auction framework. It approximately minimizes the platform's total payment for worker recruiting in a computationally efficient manner, and recruits workers who potentially could provide high quality data. Furthermore, it also satisfies the desirable properties of truthfulness and individual rationality. Through rigorous theoretical analysis, as well as extensive simulations, we validate the various aforementioned desirable properties held by Squadron.
14:25 QuickSketch: Building 3D Representations in Unknown Environments Using Crowdsourcing
Disaster and emergency response operations require rapid situational assessment of the affected area for timely and efficient rescue operations. A 3D map, collected after a disaster, can provide such awareness, but constructing this map quickly is a significant challenge. In this paper, we explore the design of a capability called QuickSketch that rapidly builds 3D representations of an unknown environment using crowdsourcing. QuickSketch employs multiple vehicles equipped with 3D sensors (stereo cameras) to explore different areas of an unknown territory and then combines 3D data from all the vehicles to build a single 3D map. QuickSketch annotates the 3D map with important landmarks and enables rapid contextualization of visual intelligence (photos) received from first responders and disaster victims to guarantee timely backup and rescue operations. Our evaluation results show that QuickSketch can stitch a 3D map for a large campus with sub-meter mapping accuracy under certain conditions, position landmarks an order of magnitude more accurately than other image matching techniques, and contextualize visual intelligence accurately.

8f - Applications of Information Fusion 2

Room: LR11
Chair: Michael Roth
13:10 High-Level Information Fusion of Cyber-Security Expert Knowledge and Experimental Data
High-Level Information Fusion (HLIF) provides the ability to combine data from diverse sources, including documents involving analyst assessments and raw reports generated by sensors, in a coherent and consistent way. Command and Control (C2) in cyber infrastructure involves gathering information from experts, merging it with field knowledge and experimental results, and selecting the most appropriate cyber assets to deploy at any given time in the mission cycle. When framing cyber asset selection as an HLIF problem, one key aspect is the estimation of network-wide impacts generated by cyber assets. Cyberspace is a highly dynamic man-made domain with a high degree of uncertainty and incomplete data, which must be transformed into knowledge to support precise and predictable estimation of cyber effects. Current systems have to rely on human subject matter experts (SMEs) for most tasks, rendering the cyber asset planning process too time consuming and therefore operationally ineffective. This paper proposes an architecture that leverages probabilistic ontologies to expedite the cyber asset planning process, allowing for the automation of the most time-consuming, error-prone, SME-based knowledge elicitation under uncertainty. We illustrate the main aspects of the proposed architecture through examples taken from a cyber-security case study.
13:30 Estimation of Value-at-Risk Using Mixture Copula Model for Heavy-Tailed Operational Risk Losses in Financial, Insurance & Climatological Data
Data fusion techniques are regularly used for analysis in Operational Risk Management (ORM). One of the two popular risk metrics of interest, Value-at-Risk (VaR), has always been difficult to estimate robustly for different data types. The classical Monte Carlo simulation (MCS) approach (denoted henceforth as the classical approach) assumes the independence of loss severity and loss frequency. In practice, this assumption may not always hold. To overcome this limitation, handle cases with heavy-tailed data, and estimate the corresponding VaR more robustly, we adopt a new approach known as Mixture Copula-based Parametric Modeling of Frequency and Severity (MCPFS). The proposed approach is verified via large-scale MCS experiments and validated on four publicly available financial datasets. We compare MCPFS with the classical approach for robust VaR estimation and observe that the classical approach estimates VaR poorly, while MCPFS attains better VaR estimates for real-world data. These studies provide real-world evidence that the MCPFS methodology has merit for accurately estimating VaR.
13:50 Semantic Information Fusion Algebraic Framework Applied to Content Marketing
The objectives of content marketing are to create and distribute valuable, relevant, and consistent content to attract and retain a clearly defined target audience. To earn credibility, the brand creates messages that are useful and make a positive difference in the lives of the prospects. As content, at the most basic level, is information, this paper presents how the semantic information fusion operator introduced in prior work becomes a cornerstone of an overall content marketing tool chain. After introducing the Mathematical Morphology framework, this operator, which relies on conceptual graphs, is soundly defined. Some underlying morphological properties such as dilation and idempotence are demonstrated, and we explain how these properties can be used within semantic information fusion.
14:10 Map-supported Positioning Enables In-service Condition Monitoring of Railway Tracks
We demonstrate how positioning concepts enable in-service condition monitoring of railway tracks. Specifically, it is shown that accurate georeferencing of monitoring data can be achieved by sensor fusion of GNSS and IMU measurements with a map of the railway network. Because such georeferencing is an offline positioning problem, a two-stage approach that operates on batches of data is developed: First, path hypotheses are estimated from the GNSS data and the railway map. Second, a nonlinear Rauch-Tung-Striebel smoother provides on-track positions and speeds in path coordinates given each path hypothesis and the IMU and GNSS data. The developed methods are an essential part of a track condition monitoring system developed at DLR. The positioning results are used for the track-dependent analysis of axle-box-acceleration data. Accordingly, all results shown in this paper have been obtained on real data collected in the harbor railway network of Braunschweig, Germany.
14:30 Post-Processing of Multi-Target Trajectories for Traffic Safety Analysis
This work presents a method for qualitatively improving the output of an existing video- and radar-based multi-target tracking system by means of post-processing. The proposed method is a novel two-phase track stitching method, which involves a breaking phase for tracklet creation and a tracklet stitching phase. Besides track stitching mechanisms, minimum-cost network flow techniques are utilized to find optimal tracklet-to-track assignments. The method has been tested with data from a research intersection in Braunschweig, Germany, which is operated by the German Aerospace Center for the purpose of traffic safety analysis. Furthermore, an evaluation on the publicly available MOTChallenge benchmark is provided. Improvements between 14% and 33% concerning ID switches and of 40% in terms of track fragmentations are achieved; both criteria are especially relevant for traffic safety analysis.

8g - Detection Theory/ Methods

Room: LR12
Chair: Nageswara Rao
13:10 Signal Detection with Elliptically Distributed Observations in Sensor Arrays
The problem of detecting an unknown signal from observations received by a sensor array is addressed. It is formulated as a statistical hypothesis test on the covariance structure of the received signals. For the general case of received signals following elliptically symmetric distributions, a generalized likelihood ratio test (GLRT) statistic is introduced. Moreover, for uniform linear arrays (ULAs) with and without perfectly calibrated sensors, the GLRT detectors are derived both with and without prior structural information about the observation covariance matrix. Numerical results are provided to show the effectiveness of the proposed detectors.
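As a concrete, hedged example of a GLRT on covariance structure (the classical sphericity test; not necessarily the detector derived in the paper):

```python
import numpy as np

def sphericity_glrt(X):
    """X: N x p array snapshots. Tests H0: R = sigma^2 I vs H1: R arbitrary.

    Returns -N * log of the sphericity likelihood ratio; a large value
    rejects H0, i.e. declares a signal present.
    """
    N, p = X.shape
    S = X.T @ X / N                                  # sample covariance
    lam = np.linalg.det(S) / (np.trace(S) / p) ** p  # in (0, 1] by AM-GM
    return -N * np.log(lam)

rng = np.random.default_rng(2)
p, N = 4, 500
noise = rng.normal(size=(N, p))                      # H0: white noise only
a = np.ones(p) / np.sqrt(p)                          # toy steering vector
sig = noise + 3.0 * rng.normal(size=(N, 1)) * a      # H1: rank-1 signal + noise
t0 = sphericity_glrt(noise)
t1 = sphericity_glrt(sig)
```

A rank-one signal component makes the sample covariance strongly non-spherical, so the statistic under H1 dwarfs its value under H0.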
13:43 Online Design of Precoders for High Dimensional Signal Detection in Wireless Sensor Networks
In this paper, we present an efficient methodology to design precoders for distributed detection of unknown high dimensional signals. We consider a wireless sensor network, where several distributed sensors collaborate to perform binary hypothesis testing based on observations of an unknown high dimensional signal corrupted by noise. The sensors collect data over both temporal and spatial domains. Due to network resource constraints, each sensor performs a linear compression (through precoding) of the observed high dimensional signal at each time instant and forwards the compressed signal to the fusion center (FC). The FC then employs the generalized likelihood ratio test (GLRT) to make a decision on the presence or absence of the signal. We propose online linear precoding/compression strategies for such sensors that collect data over spatio-temporal domain, so that the detection performance at the FC is maximized under certain network resource constraints. Through the measure of non-centrality parameter and receiver operating characteristics (ROC), we show that our proposed precoder design achieves very good detection performance.
14:16 Two-level Clustering-based Target Detection Through Sensor Deployment and Data Fusion
Target detection is one fundamental problem in many sensor network-based applications, and is typically tackled in two separate stages for sensor deployment and data fusion. We propose an integrated solution, referred to as SSEM, which combines 2-level clustering-based sensor deployment and Source Strength Estimate Map-based data fusion for the detection of a single static or moving target. SSEM conducts the first level of clustering to determine a sensor deployment scheme and the second level of clustering to divide the deployed sensors into multiple subsets. For each sensor, the source strength is estimated at each grid point of the entire region based on a signal attenuation model, and for each subset of sensors, the target location is estimated using a strength distribution map-based statistical analysis method. A final detection decision is made by thresholding the clustering degree of the target location estimates computed by all subsets of sensors. Compared with traditional grid-based target detection methods, SSEM significantly reduces the computation complexity and improves the detection performance through an integrated optimization strategy. Extensive simulation results show the performance superiority of the proposed solution over several well-known methods for target detection.

8h - SS: Intelligent Information Fusion and Data Mining for Tracking

Room: JDB-Seminar Room
Chair: Tiancheng Li
13:10 Measurement-wise Occlusion in Multi-Object Tracking
Handling object interaction is a fundamental challenge in practical multi-object tracking, even for simple interactive effects such as one object temporarily rendering another undetectable. We formalize the problem of occlusion in tracking with two different abstractions. In object-wise occlusion, objects are occluded by other objects and then do not create measurements. In measurement-wise occlusion, a previously unstudied approach, all objects may generate measurements but some measurements may be occluded by others. The relative validity of each abstraction depends on the situation and sensor, but measurement-wise occlusion fits into probabilistic multi-object tracking algorithms with much looser assumptions on object interaction. Its value is demonstrated by showing that it naturally creates a popular approximation for lidar tracking, and by an example of visual tracking in image space.
13:30 Pattern Discovery and Anomaly Detection via Knowledge Graph
In this paper, we develop a pattern discovery and anomaly detection system using a knowledge graph constructed by integrating data from heterogeneous sources. Specifically, the knowledge graph is built from data extracted from structured and unstructured sources. Besides the extracted entities and relations, the knowledge graph finds hidden relations via link prediction algorithms. Based on the constructed knowledge graph, normalcy models for entities, actions, and triplets are established. The information in the incoming streaming data is extracted and compared to the normalcy models in order to detect abnormal behaviors. In addition, we apply the lambda framework to enable a computationally scalable algorithm for pattern discovery and anomaly detection in a big data environment. Real-time tweet data are used for evaluation, and preliminary results show promising performance in detecting abnormal patterns and activities.
13:50 2D Spatial Keystone Transform for Sub-Pixel Motion Extraction from Noisy Occupancy Grid Map
In this paper, we propose a novel sub-pixel motion extraction method, called the Two-Dimensional Spatial Keystone Transform (2DS-KST), for motion detection and estimation from successive noisy Occupancy Grid Maps (OGMs). It extends the KST used in radar imaging and motion compensation to the 2D real spatial case, based on multiple hypotheses about the possible directions of moving obstacles. Simulation results show that 2DS-KST performs well in extracting sub-pixel motions in very noisy environments, especially for slowly moving obstacles.
14:10 Model Learning and Spatial Data Fusion for Predicting Sales in Local Agricultural Markets
This research explores the ability to extract knowledge about associations among agricultural products in order to improve the prediction of future consumption in the local markets of the Andean region of Ecuador. This commercial activity is carried out using Alternative Marketing Circuits (CIALCO), seeking to establish a direct relationship between producer and consumer prices and to promote buying and selling among family groups. The fusion of information from spatially located heterogeneous data sources makes it possible to establish the best association rules between data sources (several products in several local markets) and thereby achieve a significant improvement in the spatial prediction accuracy of future sales of agricultural products.
14:30 Distributed Flooding-then-Clustering: A Lazy Networking Approach for Distributed Multiple Target Tracking
We propose a straightforward but efficient networking approach to distributed multi-target tracking that requires no elaborate target model design. We confront two challenges: one is the lack of statistical knowledge about target appearance/disappearance and movement, and about the sensors, e.g., the rates of clutter and misdetection; the other is the severely limited computing and communication capability of the low-powered sensors, which may prevent them from running a full-fledged tracker/filter. To overcome these challenges, a flooding-then-clustering (FTC) approach is proposed which comprises two components: a distributed flooding scheme for iteratively sharing measurements between sensors, and a clustering-for-filtering approach for target detection and position estimation from the locally aggregated measurements. We compare the model-free FTC approach with cutting-edge distributed probability hypothesis density (PHD) filters that are modeled with appropriate statistical knowledge about the target motion and the sensors. A series of simulation studies using either linear or nonlinear sensors verifies the effectiveness of the FTC approach.

Friday, July 13 14:50 - 15:20

Refreshments

Friday, July 13 15:20 - 17:00

9a - Data Association/Sensor Registration

Room: LR0
Chair: Benjamin Noack
15:20 Radar/ESM Anti-bias Track Association Algorithm Based on Hierarchical Clustering in Formation
To address the radar/ESM track association problem for targets in formation in the presence of systematic biases, an anti-bias track association algorithm based on hierarchical clustering analysis is proposed. The influence of formation flight and systematic biases on association is analyzed first. To eliminate the effect of the biases, the relative bearing bias between radar and ESM is estimated by hierarchical clustering of the distance vectors in MPC. Finally, anti-bias track association is achieved via a globally optimal assignment. Simulation results indicate that the proposed algorithm outperforms state-of-the-art approaches.
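The bias-estimation-by-clustering idea can be sketched as follows (a toy example with invented numbers, using single-linkage hierarchical clustering on simulated radar/ESM bearing differences rather than the paper's MPC distance vectors):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# With a constant relative bearing bias between radar and ESM, the bearing
# differences of correctly paired tracks cluster tightly around the bias,
# while mismatched pairs scatter. The dominant cluster's mean estimates
# the bias, which can then be removed before the final assignment.
rng = np.random.default_rng(3)
true_bias = np.deg2rad(2.0)
good = true_bias + rng.normal(0, 0.002, size=12)   # correct pairings
bad = rng.uniform(-0.3, 0.3, size=8)               # wrong pairings
diffs = np.concatenate([good, bad])

Z = linkage(diffs.reshape(-1, 1), method="single")
labels = fcluster(Z, t=0.01, criterion="distance")
best = np.argmax(np.bincount(labels)[1:]) + 1      # largest cluster label
bias_est = diffs[labels == best].mean()
```

The distance threshold `t` plays the role of the clustering granularity; too large a value would merge mismatched pairs into the bias cluster.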
15:40 Retrodiction of Data Association Probabilities via Convex Optimization
In a surveillance environment with high clutter, finding the correct measurement-to-track association becomes extremely important for efficient target tracking. This study offers a novel algorithm to retrodict the data association probabilities at any past time instant when the batch set of measurements is kept in memory. For the retrodiction procedure, the batch association cost is first written explicitly as a binary integer optimization problem with a quadratic cost function, and it is shown that the relaxed form of the problem is convex. From the relaxed problem, a lower bound for the optimal association cost is derived, and this lower bound is used to obtain the data association probabilities pertaining to the selected time instant in the past. Because it considers the batch set of data in a retrospective manner, we call this algorithm Retrodictive Probabilistic Data Association (RPDA). To simplify the mathematical analysis, a single point target with no missed measurements, i.e., $P_{D}=1$, is considered.
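The relaxation principle behind the lower bound can be demonstrated on a toy linear assignment (the paper's batch cost is quadratic; this sketch only illustrates relaxing binary assignment variables to [0, 1] and solving the resulting convex program):

```python
import numpy as np
from scipy.optimize import linprog

# Cost of assigning track i to measurement j (invented numbers).
C = np.array([[4.0, 1.0, 3.0],
              [2.0, 0.0, 5.0],
              [3.0, 2.0, 2.0]])
n = C.shape[0]

# Assignment constraints: each row and each column of x sums to 1,
# with the binary constraint x_ij in {0,1} relaxed to 0 <= x_ij <= 1.
A_eq = np.zeros((2 * n, n * n))
for i in range(n):
    A_eq[i, i * n:(i + 1) * n] = 1.0      # row i sums to 1
    A_eq[n + i, i::n] = 1.0               # column i sums to 1
b_eq = np.ones(2 * n)

res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0.0, 1.0))
lower_bound = res.fun     # lower-bounds the binary optimum
```

For a linear cost the relaxation is tight (the assignment polytope has integral vertices), so `lower_bound` equals the binary optimum here; with the quadratic batch cost of RPDA, the relaxed optimum is in general a strict lower bound.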
16:00 Isolating Random and Bias Covariances in Tracks
In addition to the typical random errors that vary between consecutive measurements, the measurements of almost all sensors used for target tracking include bias errors that remain relatively fixed during a tracking episode and are typically characterized by an a priori mean and covariance. Since the bias errors are approximately fixed during a tracking episode, they violate the usual assumption that the measurement errors are white noise. Inflating the measurement covariance of the random errors by adding the bias covariance gives track covariances that poorly represent the true errors. The Schmidt-Kalman filter can be used to prevent the track covariances from becoming artificially small; however, it produces a track covariance that lumps together the random and bias errors. In this paper, the authors formulate target tracking as a least-squares estimation (LSE) problem and show that the track covariance due to the bias errors can be isolated from the track covariance due to the random errors. Monte Carlo simulations verify and illustrate the accuracy of this isolation of the bias and random covariances.
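The covariance isolation admits a compact sketch in a linear LSE setting (an illustrative model, not the authors' formulation): with measurements z = Hx + H_b b + v, the estimate Gz has error Gv + G H_b b, so the random and bias contributions separate:

```python
import numpy as np

# Illustrative measurement model: z = H x + Hb b + v,
# with v ~ N(0, R) white and b ~ N(0, Cb) a fixed (non-white) bias.
rng = np.random.default_rng(4)
H = rng.normal(size=(6, 2))       # observation matrix (6 scalar measurements)
Hb = np.ones((6, 1))              # all measurements share one bias
R = 0.5 * np.eye(6)
Cb = np.array([[2.0]])

Ri = np.linalg.inv(R)
P_rand = np.linalg.inv(H.T @ Ri @ H)     # track covariance from white noise
G = P_rand @ H.T @ Ri                    # LSE estimator: x_hat = G z
P_bias = G @ Hb @ Cb @ Hb.T @ G.T        # track covariance from the bias

# Monte Carlo check that the two isolated parts add up to the total.
x_true = np.array([1.0, -2.0])
M = 20000
b = rng.normal(0, np.sqrt(Cb[0, 0]), size=M)
v = rng.multivariate_normal(np.zeros(6), R, size=M)
Zm = x_true @ H.T + np.outer(b, Hb.ravel()) + v   # M x 6 measurements
E = Zm @ G.T - x_true                             # M x 2 estimation errors
P_mc = np.cov(E.T)
```

Because the estimator is linear, the two error sources are uncorrelated and their covariances add exactly; the Monte Carlo estimate only confirms the algebra.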

9b - Point Process Methods/ PHD/ Multi-Bernoulli tracking

Room: LR1
Chair: Reza Hoseinnezhad
15:20 A Distributed Bernoulli Filter Based on Likelihood Consensus with Adaptive Pruning
The Bernoulli filter (BF) is a Bayes-optimal method for target tracking when the target can be present or absent in unknown time intervals and the measurements are affected by clutter and missed detections. We propose a distributed particle-based multisensor BF algorithm that approximates the centralized multisensor BF for arbitrary nonlinear and non-Gaussian system models. Our distributed algorithm uses a new extension of the likelihood consensus (LC) scheme that accounts for both target presence and absence and includes an adaptive pruning of the LC expansion coefficients. Simulation results for a heterogeneous sensor network with significant noise and clutter show that the performance of our distributed algorithm is close to that of the centralized multisensor BF.
15:40 An Implementation of the Poisson Multi-Bernoulli Mixture Trajectory Filter via Dual Decomposition
This paper proposes an efficient implementation of the Poisson multi-Bernoulli mixture (PMBM) trajectory filter. The proposed implementation performs track-oriented N-scan pruning to limit complexity, and uses dual decomposition to solve the involved multi-frame assignment problem. In contrast to the existing PMBM filter for sets of targets, the PMBM trajectory filter is based on sets of trajectories which ensures that track continuity is formally maintained. The resulting filter is an efficient and scalable approximation to a Bayes optimal multi-target tracking algorithm, and its performance is compared, in a simulation study, to the PMBM target filter, and the delta generalized labelled multi-Bernoulli filter, in terms of state/trajectory estimation error and computational time.
16:00 A Heavy-Tailed Noise Tolerant Labeled Multi-Bernoulli Filter
The well-known labeled multi-Bernoulli (LMB) filter for multi-target tracking in clutter works well only under Gaussian noise assumptions. Since this Gaussian assumption can hardly hold in practice, we address the problem of the LMB under heavy-tailed non-Gaussian measurement noise. By modeling the measurement noise as a Student's t distribution, a heavy-tailed measurement noise tolerant LMB (TLMB) filter is derived in the framework of variational Bayesian inference for the joint estimation of the target state together with the unknown scale matrix and degrees of freedom (dof) of the Student's t distribution. Simulations on multi-target tracking in clutter with an unreliable sensor demonstrate the effectiveness and superiority of the proposed TLMB.
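The robustness mechanism of a Student's t measurement model can be illustrated with a one-dimensional sketch (hypothetical values; a simple EM-style reweighting rather than the paper's variational TLMB):

```python
import numpy as np

def t_robust_mean(z, nu=3.0, sigma2=1.0, iters=50):
    """Location estimate under a Student's t noise model via EM reweighting."""
    mu = np.median(z)                    # robust initialization
    for _ in range(iters):
        r2 = (z - mu) ** 2 / sigma2
        w = (nu + 1.0) / (nu + r2)       # E-step: latent scale weights
        mu = np.sum(w * z) / np.sum(w)   # M-step: weighted mean
    return mu

rng = np.random.default_rng(5)
# 50 inliers around 10, plus two gross outliers (heavy-tailed noise).
z = np.concatenate([rng.normal(10.0, 1.0, 50), np.array([60.0, 70.0])])
mu_gauss = z.mean()                      # Gaussian ML estimate: dragged away
mu_t = t_robust_mean(z)                  # t-model estimate: outliers down-weighted
```

Each residual receives the weight (nu + 1)/(nu + r^2/sigma^2), so gross outliers contribute almost nothing to the update; this is the same effect that keeps the TLMB's state estimates stable under heavy-tailed measurement noise.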
16:20 Visual Mitosis Detection and Cell Tracking Using Labeled Multi-Bernoulli Filter
Cell life-cycle and motility analysis is one of the fundamental tasks in many biological research activities, and its automation is a challenging problem. To solve the cell tracking problem using a Bayesian stochastic filter, one needs to properly incorporate all the information available about the cell behavior within the filter. This includes its movements, changes during mitosis process (up to splitting) and death. This paper demonstrates an effective way to perform this task for a particular family of cells (Chinese Hamster Ovarian (CHO) cells) that are known to have a commonly elliptic shape, and elongate during their mitosis. We model this by incorporating an ellipse-based state variable, a particular spawning process and an intuitive adaptive (measurement-driven) birth process that are added to the prediction step of a multi-Bernoulli filter. Our numerical experimental results involving microscopic images of living cells demonstrate significant improvement in tracking performance as a result of the proposed additions.
16:40 A Hierarchical LMB/PHD Filter for Multiple Groups of Targets with Coordinated Motions
In some multi-object tracking scenarios such as convoys or road constrained motions, it can be advantageous to track groups of targets sharing common motion characteristics, even if they are not necessarily close to each other. The objective is twofold: reducing the computational cost while increasing the accuracies of the individual trajectory estimates. In a previous communication, we introduced a generic model based on hierarchical random finite sets (RFSs) to represent these types of multi-group multi-target scenarios. A first RFS is used to represent the multi-group state: the number of groups, their common motion characteristics and their target compositions are assumed to be random variables. Then, for each group, a second layer of RFSs represents the multi-target state assuming that the number of targets inside a group and their trajectories are also random variables. The main contribution of this paper is to derive a filter dedicated to the state estimation of hierarchical RFSs from sequential sensor measurements. The proposed solution is based on the labeled multi-Bernoulli filter to estimate the group characteristics, which interacts with a bank of probability hypothesis density filters to address the multi-target layer.

9c - Belief Functions

Room: LR2
15:20 Belief Function Definition for Ensemble Methods - Application to Pedestrian Detection in Dense Crowds
Large scale social events are characterized by very high densities (at least locally) and an increased risk of congestions and fatal accidents. Our work focuses on the specific problem of pedestrian detection in high-density crowd images, characterized by strong homogeneity and clutter. We propose and compare different evidential fusion algorithms which are able to exploit multiple detectors based on different gradient, texture and orientation descriptors. The evidential framework allows us to model spatial imprecision arising from each of the detectors, both in the calibration and in the spatial domains. Moreover, we propose a Belief Function allocation that takes into account both types of imprecision. Results on difficult high-density crowd images acquired at Makkah during the Muslim pilgrimage show that the proposed combined fusion algorithm leads to better results than taking into account only individual sources of imprecision.
15:40 Learning-based Modelized Combination of Evidence
Evidence combination is a typical kind of uncertainty reasoning or information fusion in the theory of belief functions, combining bodies of evidence stemming from different information sources. In traditional applications of evidence combination (e.g., pattern classification with evidential reasoning), given a sample, the basic belief assignments (BBAs) of the different information sources are generated first and then combined by some rule, e.g., Dempster's rule. In this paper, we propose a new modelized method for evidence combination: by simply inputting the sample into the learned combination model, the "combined" BBA is obtained directly, without generating multiple BBAs for each sample. In the proposed modelized combination, different combination models can be generated from different combination rules. Experimental results and related analyses validate the rationality and efficiency of the proposed method.
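For reference, the traditional combination step that this paper replaces with a learned model is Dempster's rule. The following is a minimal illustrative sketch (not the authors' method), representing each BBA as a dict mapping frozenset focal elements to masses, over a hypothetical two-element frame {x, y}:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic belief assignments with Dempster's rule:
    intersect focal elements, accumulate products of masses, and
    renormalize by the non-conflicting mass."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting sources")
    return {fs: w / (1.0 - conflict) for fs, w in combined.items()}

# Two illustrative sources over the frame {x, y}
m1 = {frozenset({"x"}): 0.6, frozenset({"x", "y"}): 0.4}
m2 = {frozenset({"y"}): 0.3, frozenset({"x", "y"}): 0.7}
m12 = dempster_combine(m1, m2)
```

Here the conflict mass is 0.6 * 0.3 = 0.18, and the remaining mass is renormalized so that the combined BBA sums to one.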
16:00 Rough Set Classifier Based on DSmT
Classifiers based on rough sets are widely used in pattern recognition. However, implementations of rough set-based classifiers always face problems of uncertainty. Generally, an information decision table in Rough Set Theory (RST) contains many attributes, each with a different classification performance, so it is necessary to determine which attributes to use for a specific problem. In RST this is treated as an attribute reduction problem, which aims to select proper candidates; classification uncertainty therefore arises from the choice of attributes. In addition, a voting strategy is usually adopted to determine the category of the target concept in the final decision making, but some targets cannot be classified when multiple categories are hard to distinguish (for example, when different classes receive the same number of votes); classification uncertainty thus also arises from the choice of classes. In this paper, we use the theory of belief functions to handle these two uncertainties, and a rough set classifier based on Dezert-Smarandache Theory (DSmT) is proposed. It is experimentally verified that the proposed approach deals efficiently with the uncertainty in rough set classifiers.
16:20 Study of Discounting Methods Applied to Canonical Decomposition of Belief Functions
In Dempster-Shafer theory, the discounting operation can be used to weaken the belief according to the reliability of the source of information. This is usually done by modifying the basic belief assignment, also called the mass function. In this framework, the degree of belief can be expressed by different representations; the canonical decomposition, well adapted to some combination rules, is one of them. Converting between representations can induce heavy computation, so this paper focuses on discounting methods that are applied directly to the canonical decomposition. We introduce two new functions and compare them to methods previously studied in the literature. An original equation that implements flawless discounting directly on the canonical decomposition is then presented. Finally, an application to a distributed data fusion algorithm for smart cars demonstrates convergence when the cautious operator is used.
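The classical discounting operation that this paper adapts to the canonical decomposition scales each mass by the source reliability and transfers the remainder to the whole frame. A minimal sketch of that classical (mass-function) discounting, not of the paper's canonical-decomposition variants:

```python
def discount(m, alpha, frame):
    """Shafer's classical discounting: multiply every mass by the
    reliability factor alpha in [0, 1] and move the discounted mass
    (1 - alpha) onto total ignorance, i.e. the whole frame."""
    frame = frozenset(frame)
    md = {fs: alpha * w for fs, w in m.items() if fs != frame}
    md[frame] = alpha * m.get(frame, 0.0) + (1.0 - alpha)
    return md

# Illustrative BBA over a hypothetical frame {a, b}, source 90% reliable
m = {frozenset({"a"}): 0.8, frozenset({"a", "b"}): 0.2}
md = discount(m, 0.9, {"a", "b"})
```

With alpha = 1 the source is fully trusted and the BBA is unchanged; with alpha = 0 all mass moves to the whole frame (total ignorance).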
16:40 Combination of Sources of Evidence with Distinct Frames of Discernment
Multi-source information fusion strategies have been widely applied in target recognition. Generally, each source is defined and modelled over a common frame composed of the hypotheses to discern. In practice, however, independent sources of evidence may refer to distinct frames of discernment in terms of the hypotheses they consider, and the classical combination process cannot then be applied directly. Working with distinct frames of discernment is a problem often encountered in the development of recognition systems and requires particular attention. In order to combine such sources, this paper presents a new combination method which splits the fusion process into two steps: construction of a granular structure and calculation of belief masses, followed by the combination itself. Our simulation results show that the proposed method can effectively solve the problem of fusing sources defined on distinct frames.

9d - Decision Making

Room: LR5
Chair: Krishna R Pattipati
15:20 Distributed Multi-Hypothesis Sequential Test for Tracking-Aided Target Classification
This paper studies target classification by using both feature data and kinematic measurements. The problem is tackled in a distributed architecture, where local deciders make decisions based on their local data and the fusion center fuses local decisions by a multi-hypothesis sequential test. We adopt the matrix sequential probability ratio test (SPRT) as the local tests and the fusion rule. The centralized and distributed fusion schemes are compared and discussed. A lower bound on the overall average sample number of the distributed fusion is proposed to help determine the thresholds of the local tests in order to improve the performance of the distributed fusion. Numerical results are provided to demonstrate the performance of our algorithm.
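The matrix SPRT used above generalizes Wald's classical binary sequential test to multiple hypotheses. As background only (the paper's multi-hypothesis matrix version is more general), a minimal sketch of the binary Wald SPRT, which accumulates log-likelihood ratios until one of two error-rate-derived thresholds is crossed:

```python
import math

def sprt(llr_increments, alpha=0.01, beta=0.01):
    """Wald's binary SPRT: sum per-sample log-likelihood ratios
    log p(z|H1)/p(z|H0); stop when the sum leaves the interval
    [log(beta/(1-alpha)), log((1-beta)/alpha)]."""
    upper = math.log((1 - beta) / alpha)   # cross above: accept H1
    lower = math.log(beta / (1 - alpha))   # cross below: accept H0
    s, n = 0.0, 0
    for llr in llr_increments:
        n += 1
        s += llr
        if s >= upper:
            return "H1", n
        if s <= lower:
            return "H0", n
    return "undecided", n
```

For alpha = beta = 0.01 the thresholds are roughly +/-4.6 nats, so a steady evidence stream of 1 nat per sample triggers a decision after 5 samples.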
15:45 A New Method for OWA Aggregation of Interval Value in Multi-Criteria Decision Making
The OWA operator is an effective aggregation method for multi-criteria decision making problems. However, in some multi-criteria decision making cases the criteria satisfactions carry uncertainty, for instance when each is a set of interval values at a series of different levels. Aggregating the criteria satisfactions is still necessary, but their linear ordering at a specific level is unknown in such cases, so the OWA operator cannot be applied to aggregate the satisfactions directly. In this paper, a new method, named the interval value exceedance method, is proposed. Using it, the domination relationship between criteria satisfactions at each level can be obtained; the OWA operator can then aggregate these satisfactions based on the domination relationship, even though their linear ordering is unknown.
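The standard OWA operator that the paper extends to interval values works on crisp satisfactions: sort them in descending order, then take a weighted sum, so the weights attach to ordered positions rather than to particular criteria. A minimal sketch of that crisp case (the paper's interval value exceedance method is not reproduced here):

```python
def owa(values, weights):
    """Ordered weighted averaging: the i-th weight multiplies the
    i-th largest value, not the i-th criterion."""
    if len(values) != len(weights):
        raise ValueError("values and weights must have equal length")
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

# Illustrative satisfactions for three criteria, optimism-leaning weights
score = owa([0.4, 0.9, 0.6], [0.5, 0.3, 0.2])  # 0.5*0.9 + 0.3*0.6 + 0.2*0.4
```

Weights [1, 0, 0] recover the maximum, [0, 0, 1] the minimum, and uniform weights the plain mean.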
16:10 A Sequential Game of Defense and Attack on an Interdependent System of Systems
This research studies defense strategies of an interdependent system in the face of rational attacks. We propose a sequential game between an attacker and a defender for an interdependent System of Systems (SoS) to explore the effect of interdependency on an optimal defense strategy. We develop an algorithm of backward induction to obtain the Nash equilibrium of the game. The attacker is the first mover as he applies an attack strategy on constituent systems that maximizes his utility. The defender observes and responds by a defense strategy that maximizes her utility. Both players' utilities are expressed as the difference between a player's reward due to SoS functionality (dysfunctionality) and the cost of the action. The sensitivity analysis compares the effects of different parameters on the attacker's and defender's strategies such as the effectiveness of defense (attack), the unit cost of defense (attack) and the interdependency level of constituent systems.
16:35 Path Planning in an Uncertain Environment Using Approximate Dynamic Programming Methods
Routing in uncertain environments is challenging as it involves a number of contextual elements, such as different environmental conditions (forecast realizations with varying spatial and temporal uncertainty), changes in mission goals while en route, and asset status. In this paper, we use an approximate dynamic programming method with Q-factors to determine a cost-to-go approximation by treating the weather forecast realization information as a stochastic state. These types of algorithms take a large amount of offline computation time to determine the cost-to-go approximation, but once obtained, the online route recommendation is nearly instantaneous and several orders of magnitude faster than previously proposed ship routing algorithms. The proposed algorithm is robust to the uncertainty present in the weather forecasts. We compare this algorithm to a well-known shortest path algorithm and apply the approach to a real-world shipping tragedy using weather forecast realizations available prior to the event.

9e - Algorithms for Tracking - Gaussian Processes, Gaussian Mixture Methods

Room: LR6
Chair: Claude Jauffret
15:20 A Normal-Gamma Filter for Linear Systems with Heavy-Tailed Measurement Noise
This paper considers state estimation for stochastic systems with outliers in the measurements. Traditional filters, which assume Gaussian-distributed measurement noise, may have degraded performance in this case. Recently, filters using heavy-tailed distributions (e.g., the Student's t distribution) to describe measurement noise have been gaining momentum. This paper proposes jointly modeling the state and an introduced auxiliary variable (related to the measurement noise) as a normal-gamma distribution. This modeling has three advantages: first, it can describe heavy-tailed measurement noise, since the measurement noise is t distributed; second, using a joint distribution naturally accounts for the interdependence between the state and the measurement noise; third, it leads to a simple recursive filter. We derive the normal-gamma filter for linear systems, in which a Kullback-Leibler minimizer is obtained to approximate the predicted filtering density. Analysis shows its superior robustness over traditional filters. Performance of the proposed filter is evaluated on estimation and tracking problems in two scenarios. Simulation results show the efficiency and effectiveness of the proposed normal-gamma filter compared with traditional filters and other robust filters.
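The auxiliary-variable construction underlying this kind of filter is the scale-mixture identity: if lam ~ Gamma(dof/2, rate = dof/2) and x | lam ~ N(0, scale^2/lam), then marginally x follows a Student's t with that dof. A minimal simulation sketch of the identity (illustrative only, not the paper's filter):

```python
import random
import statistics

def student_t_via_normal_gamma(dof, scale=1.0, rng=random):
    """Draw one heavy-tailed sample via the normal-gamma construction:
    sample a gamma-distributed precision, then a Gaussian conditioned
    on it; the marginal is Student's t with `dof` degrees of freedom."""
    # gammavariate takes (shape, scale); rate dof/2 means scale 2/dof
    lam = rng.gammavariate(dof / 2.0, 2.0 / dof)
    return rng.gauss(0.0, scale / lam ** 0.5)

random.seed(0)
samples = [student_t_via_normal_gamma(3.0) for _ in range(20000)]
```

Low draws of lam inflate the conditional Gaussian's variance, which is exactly what produces the heavy tails that absorb measurement outliers.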
15:40 Efficient Pseudolinear Estimation for Bearings and Frequencies Target Motion Analysis
In this paper, we present a pseudolinear estimator for bearings-and-frequencies target motion analysis. We propose two ways to reduce its inherent bias and evaluate its performance with Monte Carlo simulations against the Cramér-Rao lower bound.
16:00 A Gaussian Mixture Smoother for Markovian Jump Linear Systems with non-Gaussian Noises
This paper considers the state smoothing problem for Markovian jump linear systems with non-Gaussian noises modeled as Gaussian mixture distributions. By decomposing the total probability over the two adjacent Markov jump parameters at the current and next epochs, the posterior probability density of the state for smoothing is derived recursively. Then, by transforming the quotient of two Gaussian mixtures into the corresponding product under the possible pairs of adjacent Markov modes, a recursive Gaussian mixture smoother is designed, with the conditional posterior probability density of the state under each hypothesis approximated by a Gaussian mixture distribution. A maneuvering target tracking example with non-Gaussian noises validates the proposed method.
16:20 Enhanced GMM-based Filtering with Measurement Update Ordering and Innovation-based Pruning
The use of Gaussian mixture model (GMM) in nonlinear/non-Gaussian filtering problems has been extensively investigated. This paper advocates two enhancements for GMM-based nonlinear filtering techniques, namely, the adaptive ordering of the measurement update and normalized innovation square (NIS)-based mixture component management. The former technique selects the order of measurement update that maximizes the marginal measurement likelihood to improve performance. The latter takes the filtering history of a mixture component into account and prunes those components with NIS larger than a threshold to eliminate their impact on the filtering posterior. The advantage of the proposed enhancements is illustrated via simulations that consider source tracking using the time difference of arrival (TDOA) and frequency difference of arrival (FDOA) measurements received at two unmanned aerial vehicles (UAVs). A GMM-cubature quadrature Kalman filter (CQKF) is implemented and its performances with different measurement update and mixture component management strategies are compared. The superior performance obtained via the use of the two proposed techniques is demonstrated.
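The NIS-based pruning described above gates each mixture component on its normalized innovation squared, nu' S^{-1} nu, which is chi-square distributed for a correctly associated Gaussian component. A minimal sketch of that pruning step under assumed 2-D measurements (the component representation and the fixed gate are illustrative, not the paper's exact management scheme):

```python
import numpy as np

def nis_prune(components, z, gate=9.21):
    """Keep only components whose normalized innovation squared
    falls below a chi-square gate; 9.21 is roughly the 99th
    percentile of chi-square with 2 degrees of freedom."""
    kept = []
    for w, z_pred, S in components:  # weight, predicted measurement, innovation cov.
        nu = z - z_pred
        nis = float(nu @ np.linalg.solve(S, nu))
        if nis <= gate:
            kept.append((w, z_pred, S))
    return kept

# One component consistent with the measurement, one far off
comps = [(0.5, np.zeros(2), np.eye(2)),
         (0.5, np.array([10.0, 0.0]), np.eye(2))]
kept = nis_prune(comps, np.zeros(2))
```

Components surviving the gate would then be reweighted and renormalized before the next filtering cycle.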
16:40 A Multi-Sensor, Gibbs Sampled, Implementation of the Multi-Bernoulli Poisson Filter
This paper introduces and addresses the implementation of the Multi-Bernoulli Poisson (MBP) filter in multi-target tracking. A performance evaluation in a real scenario, in which a 3D lidar, an automotive radar and a video camera are used for tracking people, is provided. For implementation purposes, a Gaussian Mixture (GM) approximation of the MBP filter is used. Comparisons with state-of-the-art GM-δ-GLMB and GM-δ-GMBP filters show similar accuracy, despite the GM-MBP filter needing fewer parameters and therefore less computation. Further performance improvements of the GM-MBP filter are shown, based on birth intensity and survival distributions that take into account the common field of view of the sensors and the variation of time steps between asynchronous measurements.

9f - SS: Autonomous Driving

Room: LR11
Chair: Ting Yuan
15:20 Optimal Sensor Data Fusion Architecture for Object Detection in Adverse Weather Conditions
Good and robust sensor data fusion in diverse weather conditions is a challenging task. Several fusion architectures exist in the literature: e.g., the sensor data can be fused right at the beginning (Early Fusion), or first processed separately and then concatenated later (Late Fusion). In this work, different fusion architectures are compared and evaluated by means of object detection tasks, in which the goal is to recognize and localize predefined objects in a stream of data. State-of-the-art object detectors based on neural networks are usually highly optimized for good weather conditions, since the well-known benchmarks consist only of sensor data recorded in optimal weather; the performance of these approaches therefore decreases enormously, or they even fail, in adverse weather conditions. Different sensor fusion architectures are compared for good and adverse weather to find the optimal fusion architecture for diverse weather situations. A new training strategy is also introduced that greatly enhances the performance of the object detector in adverse weather scenarios or if a sensor fails. Furthermore, the paper addresses the question of whether detection accuracy can be increased further by providing the neural network with a-priori knowledge such as the spatial calibration of the sensors.
15:53 Multi-Sensor Fusion and Active Perception for Autonomous Deep Space Navigation
Keeping track of the current state is a crucial task for mobile autonomous systems, which is referred to as state estimation. To solve that task, information from all available sensors needs to be fused, which includes relative measurements as well as observations of the surroundings. In a dynamic 3D environment, the pose of an agent has to be chosen such that the most relevant information can be observed. We propose an approach for multi-sensor fusion and active perception within an autonomous deep space navigation scenario. The probabilistic modeling of observables and sensors for that particular domain is described. For state estimation, we present an Extended Kalman Filter, an Unscented Kalman Filter, and a Particle Filter, which all operate on a manifold state space. Additionally, an approach for active perception is proposed, which selects the desired attitude of the spacecraft based on the knowledge about the dynamics of celestial objects, the kind of information they provide as well as the current uncertainty of the filters. We evaluated the localization performance of the algorithms within a simulation environment. The filters are compared to each other and we show that our active perception strategy outperforms two other information intake approaches.
16:26 Learning Switching Models for Abnormality Detection for Autonomous Driving
We present an approach to learn a model that estimates the dynamical states at continuous and discrete inference levels when trajectory information is available. We learn a probabilistic model, approximated through Markov Jump Linear System (MJLS) filters, that represents plans which could have generated an observed trajectory. The resulting generative models are used to analyze new trajectories and to detect deviations from the plan based on internal innovation measurements. We show examples of applying the proposed approach to learn filters for evaluating deviations from a reference task execution in driving situations that include static and dynamic obstacle avoidance.

Saturday, July 14 8:30 - 16:30

ISIF Board Meeting

Room: JDB-Seminar Room