The use of Software-Defined Networking (SDN) offers several advantages, including flexibility and a dynamic way of programming functionality into a network. The design and capabilities of the underlying SDN infrastructure influence the performance of common network tasks; it is therefore important to know how to measure different performance parameters in an SDN network. In this study, we analyze the traffic traveling across an SDN network and, from that analysis, infer its latency using a POX controller. Furthermore, we validate the statistical relevance of the measurements by analyzing their means and standard deviations.
In Wireless Sensor Networks (WSNs), providing full area coverage while maintaining connectivity between the sensors is considered an important issue. Coverage-aware sleep scheduling is an efficient way to optimize the coverage of WSNs while minimizing energy consumption. On the other hand, clustering can provide an efficient way to achieve high connectivity in WSNs. Despite the close relationship between the coverage problem and the clustering problem, they have been formulated, discussed, and evaluated separately. Furthermore, most existing WSN strategies are designed to be applied to Two-Dimensional (2D) fields under an ideal energy consumption model that relies on calculating the Euclidean distance between any pair of sensors. In reality, sensors in many applications are deployed in a Three-Dimensional (3D) field, and they exhibit a discrete energy consumption model that depends on the sensors' status rather than the distance between them. In this paper, we propose a Pareto-based network configuration strategy for 3D WSNs. In the proposed protocol, deciding the status of each sensor in a 3D WSN is formulated as a single multi-objective minimization problem. The proposed formulation considers the following combined properties: energy efficiency, data delivery reliability, scalability, and full area coverage. The performance of the proposed protocol is tested in 3D WSNs under a realistic energy consumption model based on the characteristics of the Chipcon CC2420 radio transceiver data sheet.
This paper focuses on optimal control of multi-agent systems consisting of one leader and a number of followers in the presence of noise. The dynamics of every agent are assumed to be linear, and the performance index is a quadratic function of the states and actions of the leader and followers. The leader and followers are coupled in both dynamics and cost. It is assumed that the state of the leader and the average of the states of all followers (called the mean-field) are available to all agents, but the local state of each follower is not known to the others. It is shown that the optimal distributed control strategy is linear time-varying, and its computational complexity is independent of the number of followers. This strategy can be computed in a distributed manner, where the leader needs to solve one Riccati equation to determine its optimal strategy while each follower needs to solve two Riccati equations. This result is subsequently extended to the case of infinite-horizon discounted and undiscounted cost functions, where the optimal distributed strategy is shown to be stationary. A numerical example with 100 followers is provided to demonstrate the efficacy of the results.
We present a multi-objective optimization methodology for a pseudo-adaptive algorithm that assigns IoT devices to a set of networks. The age of the Industrial Internet of Things (IIoT) provides an opportunity to rethink traditional methods of device assignment among heterogeneous network options within an environment. To date, the only way to guarantee the performance of a life-safety-critical network was to invest in costly fixed, dedicated infrastructure that could not be shared with non-critical building systems equipment. This paper describes a methodology to tune shared-infrastructure communication pathways for reliability approaching that of fixed-infrastructure solutions. The genetic algorithm (GA) developed maximizes device and network performance while minimizing infrastructure costs. We assume an assessment of each network available to each device in terms of QoS, with our GA running on a central controller at a remote server.
Multiple-input Multiple-output (MIMO) systems are an attractive choice for enhancing communication performance by increasing the number of antennas at the transmitter and receiver. However, the physical implementation of MIMO systems has several issues, such as high economic cost and high computational complexity for large antenna arrays. Additionally, due to space limitations, large MIMO systems cannot be implemented in small devices. Hence, antenna selection techniques emerge as a solution to maintain the benefits of MIMO with an affordable trade-off between complexity and implementation cost. In addition, cooperative wireless sensor networks (WSNs) may be treated as virtual MIMO systems. Thus, in this paper, we propose an antenna selection algorithm called Adaptive Antenna Selection of Information (AASI) to be implemented in WSNs. AASI adapts the transmitted information according to the channel characteristics. Our algorithm achieves performance and capacity close to those of optimal antenna selection for a specific signal-to-noise ratio, with reduced computational complexity.
This paper proposes the use of electric water heaters (EWHs), already existing in residential distribution systems, to mitigate the impacts of the increasing integration of plug-in electric vehicles (PEVs). By controlling the thermostat setpoints of the EWHs within the distribution system, the peak demand due to PEV charging is flattened. The proposed control scheme has been verified using a simulation model developed in the MATLAB environment. Simulation results on a 33-bus distribution test system reveal the potential of the proposed control scheme to mitigate the impacts of increased PEV penetration while meeting the EWHs' hot water demand requirements.
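The setpoint-control idea above can be illustrated as a simple threshold policy. The following sketch is a hedged toy illustration, not the paper's MATLAB model; the threshold, setpoints, and load profile are all assumptions:

```python
# Toy illustration of peak flattening via EWH thermostat-setpoint control.
# Thresholds, setpoints, and the load profile are illustrative assumptions,
# not parameters from the paper.

def ewh_setpoint(feeder_load_kw, peak_threshold_kw=80.0,
                 normal_setpoint_c=60.0, reduced_setpoint_c=50.0):
    """Lower the EWH thermostat setpoint while the feeder is peaking."""
    if feeder_load_kw > peak_threshold_kw:
        return reduced_setpoint_c   # defer heating during the PEV charging peak
    return normal_setpoint_c

# Hourly feeder load (kW) with an evening PEV-charging peak.
load_profile = [40, 45, 60, 95, 110, 90, 70, 50]
setpoints = [ewh_setpoint(l) for l in load_profile]
print(setpoints)   # setpoint drops to 50 C during the three peak hours
```

In practice the rule would also have to respect hot-water comfort constraints, which the paper's scheme accounts for.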
The recent proliferation of the Internet of Things (IoT) has provided consumers with unprecedented connectivity and access to many of their devices via mobile applications and smart home energy management systems. Although many platforms are available for consumers to remotely fine-tune their energy consumption patterns, awareness of specific details about past consumption trends is still lacking. When contextual data are presented about which appliances were active in the past, along with the associated costs, consumers are incentivized to make power consumption decisions that increase cost savings and energy conservation. In this paper, we propose a hybrid classification system based on the Hidden Markov Model (HMM) and k-Nearest Neighbours (KNN) algorithms for disaggregating the power consumption data of individual households in a non-intrusive manner. We also apply Pareto's 80/20 principle to accurately identify the appliances that draw significant power and contribute the majority of energy costs.
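The Pareto 80/20 selection step can be sketched as follows: rank appliances by consumption and keep the smallest set covering 80% of the total. The appliance names and energy figures are illustrative assumptions, not data from the paper:

```python
# Hypothetical per-appliance monthly energy (kWh); purely illustrative.
appliances = {"HVAC": 450, "water_heater": 210, "dryer": 90,
              "fridge": 60, "lighting": 40, "tv": 15, "router": 5}

def pareto_top(usage, share=0.80):
    """Return the smallest set of appliances covering `share` of total energy."""
    items = sorted(usage.items(), key=lambda kv: kv[1], reverse=True)
    total = sum(usage.values())
    selected, cum = [], 0.0
    for name, kwh in items:
        selected.append(name)
        cum += kwh
        if cum / total >= share:
            break
    return selected

print(pareto_top(appliances))
# -> ['HVAC', 'water_heater', 'dryer']  (3 of 7 appliances cover >= 80%)
```

In a disaggregation pipeline, the per-appliance totals would come from the HMM/KNN classifier's output rather than being given directly.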
Traditionally, operators relied on the right mix of generators at their disposal for grid services, but now, with the two-way communication enabled by the smart grid, demand response (DR) becomes another option for the electric utility to deploy control strategies that shape the demand profile. DR strategies can tap into the demand flexibility of large populations of residential loads to shift electricity use across hours of the day while maintaining the same comfort level. This paper presents the results of applying a DR strategy to the electric baseboards of eleven homes over a two-month period during winter 2016/17. The DR strategy applies setpoint modulation to baseboard heaters via smart thermostats to store thermal energy prior to peak hours and then uses this stored energy to reduce demand. It is found that demand reductions of 36% and 24% can be achieved during the morning and afternoon peaks, respectively, with a small daily reduction in energy consumption. DR holds significant potential for peak shaving where residential heating accounts for an important share of the utility's demand, and can be achieved with little to no user discomfort.
Road gradient affects electric vehicle (EV) dynamics, complicating the energy management of the energy storage system (ESS). Incorporating ultracapacitors improves efficiency and prolongs battery life when energy is managed appropriately. In this paper, power split schemes for a dual ESS that consider road grade complexity are developed. The first is a fuzzy-based approach with a reduced fuzzy rule set, while the second utilizes road gradient energy as well as vehicle inertial energy to manage the ESS energy. The total ESS energy loss and the battery temperature rise are taken as the key parameters for comparison and evaluation. Results show that the proposed ESS management improves efficiency and prolongs battery life.
Public parking lots equipped with electric vehicle (EV) charging facilities place huge power demands on distribution networks. These huge demands, if not carefully considered at the planning stage, can create several operational problems. To address this issue, this paper proposes a novel distribution network expansion planning framework, which gives full consideration to the charging power demands of large EV parking lots. This framework provides several alternatives for construction/reinforcement of feeders and substations, while taking all the necessary constraints into account. Furthermore, the unscented transformation (UT) method is employed to model the uncertainties of load demands and EV parking lot demands. The ability of the UT method to accurately model correlated uncertain parameters makes it highly applicable in the context of distribution network expansion planning, where considerable correlated uncertainties exist. The proposed UT-based framework is formulated as a mixed-integer linear programming (MILP) problem, which can be solved using off-the-shelf mathematical programming solvers that guarantee convergence to the global optimal solution. A 24-node distribution system is used to verify the effectiveness of the proposed methodology.
Modelling plug-in electric vehicle (PEV) charging load for use in many power system applications requires reliable estimates of a number of random variables that characterize the PEV charging process. Among these are the variables related to driver behaviour (e.g., arrival and departure times and daily mileage). Determining reliable estimates of these variables is challenging, since sufficient real data that precisely describe them are not currently available. The alternative is to use sample data for each variable from the available transportation mobility data and to estimate a proper probability distribution function (PDF) that preserves the random characteristics of each variable and can generate the desired synthetic data. This paper presents a statistical evaluation of different collections of PDFs in order to find the model that best reflects the random characteristics of each driver behaviour variable. The most commonly used PDFs, along with some advanced PDFs, have been verified against the observed sample data using a well-known goodness-of-fit statistical test.
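The fit-then-verify procedure can be sketched as follows, using the Kolmogorov-Smirnov statistic as one well-known goodness-of-fit measure (the paper does not name its test, so this is an assumed example) and a synthetic "arrival time" sample in place of real mobility data:

```python
import math
import random
import statistics

def normal_cdf(x, mu, sigma):
    """CDF of a normal distribution, via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def ks_statistic(sample, cdf):
    """One-sample Kolmogorov-Smirnov statistic: max |F_n(x) - F(x)|."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        fx = cdf(x)
        d = max(d, abs((i + 1) / n - fx), abs(i / n - fx))
    return d

# Synthetic "arrival time" sample in hours; illustrative, not survey data.
random.seed(0)
sample = [random.gauss(17.5, 1.2) for _ in range(500)]

# Fit a candidate PDF (here: normal) and measure the fit.
mu, sigma = statistics.mean(sample), statistics.stdev(sample)
d = ks_statistic(sample, lambda x: normal_cdf(x, mu, sigma))
print(f"KS statistic for fitted normal: {d:.3f}")
```

The same statistic would be computed for each candidate PDF, and the distribution with the smallest value (or passing the critical threshold) retained for synthetic data generation.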
This work introduces a new non-uniform Analog-to-Digital Converter (ADC) architecture. The architecture partitions the amplitude axis into P non-overlapping partitions that sample the analog input at signal-driven instants, resulting in a greatly reduced sampling rate and ADC implementation complexity for a given input signal bandwidth. It is shown that, for an arbitrary random process, this new architecture automatically satisfies the Nyquist requirement on average (Beutler's condition) and results in a random additive sampling architecture that is alias-free (the Shapiro-Silverman condition). The architecture does not require a pseudo-random sequence for reconstruction or otherwise, as the sampling instants are largely signal driven. Additionally, it is shown that the architecture geometrically reduces the slew rate requirement within each partition, effectively compressing the amplitude of each digital sample. As this is a new architecture, a comprehensive design paradigm is presented.
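The amplitude-partition idea can be illustrated with a toy level-crossing sketch: samples are emitted only when the input enters a new amplitude partition, so the sampling instants are driven by the signal itself. This is a behavioural assumption for illustration, not the circuit-level architecture:

```python
import math

def partition_sample(signal, levels):
    """Emit (index, value) only when the signal enters a new amplitude partition.

    `levels` are the partition boundaries; between crossings no samples are
    taken, so the effective rate is signal driven rather than clock driven.
    """
    def bin_of(v):
        return sum(1 for b in levels if v >= b)
    samples, prev = [], None
    for i, v in enumerate(signal):
        k = bin_of(v)
        if k != prev:
            samples.append((i, v))
            prev = k
    return samples

# A slow sinusoid crosses few boundaries, so far fewer samples than points.
sig = [math.sin(2 * math.pi * t / 200.0) for t in range(200)]
levels = [-0.75, -0.25, 0.25, 0.75]   # P = 5 partitions (illustrative)
samples = partition_sample(sig, levels)
print(len(sig), "points ->", len(samples), "event-driven samples")
```

A slowly varying input generates few events, while fast excursions generate dense samples, which is the intuition behind the reduced average rate.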
The effects of single event transients (SETs) on different nodes of LC-tank VCOs are investigated in this paper, and the drain of the bias transistor is identified as the most sensitive node of the LC-tank VCO. In addition, the SET sensitivities of two common structures are analyzed. To mitigate the SET effects in the LC-VCO, a radiation-hardened-by-design (RHBD) technique is proposed, in which a coupling capacitor is added in parallel with the bias transistor to speed up the discharge of the deposited current pulse. Furthermore, two AC coupling capacitors are added between the varactors and the LC-tank core to block the voltage distortion at nodes X and Y caused by SETs. The simulation results indicate that the SET-induced amplitude perturbation and the recovery time of the LC-tank VCO are significantly reduced.
Ring oscillators (ROs) have been widely used because of their wide tuning range, small area, and low cost. However, the conventional current-starved-inverter-based RO is very sensitive to single event effects (SEEs) when operated in a radiation environment. To mitigate SEEs in the RO, a current mode logic (CML) based RO-VCO is proposed, in which the delay of each stage is mainly determined by the values of resistors and capacitors rather than the transconductance of transistors. The circuit was designed and simulated in a standard 65 nm CMOS technology, and simulation results suggest that the proposed structure effectively mitigates SEEs in the CML-based VCO without affecting the tuning range or the phase noise performance.
This paper presents a compact frequency-output temperature sensor based on the fact that the MOSFET threshold voltage varies with temperature. In modern electronic applications, on-chip temperature monitoring is crucial to optimizing performance. The sensor has been fabricated in a low-cost 0.18 µm CMOS technology with a single 1.8 V supply voltage. To sense the temperature, the chip extracts the threshold voltage of the MOSFETs, instead of the commonly used base-emitter voltage of a bipolar transistor, using a threshold voltage extraction circuit (VG). Then, to handle this highly sensitive voltage signal, a VCO is used to convert it into an oscillating signal whose frequency is linearly proportional to the temperature. After one-point trimming, the sensor exhibits high linearity over a temperature range of 0 to +150 °C, achieving a resolution of 0.15 °C with an inaccuracy of 12%. The power consumption is below 1.2 µW, and the chip area is approximately 0.06 mm2. These features make this design highly suitable for portable applications.
This paper presents a 40 Gb/s SerDes receiver for 4-level pulse-amplitude modulation (PAM4) data. After a voltage-shifting amplifier that works as a 3-level slicer, the input PAM4 signal is quantized into a thermometer code and then converted into two parallel binary signals by a decoder at the output. An equalizer is designed to deal with the channel attenuation, and a 20 Gb/s full-rate Clock and Data Recovery (CDR) circuit recovers the correct clock from the input data. The whole circuit is designed in a 65 nm CMOS technology with an area of 1 x 0.7 mm2. Simulation results show that the eye opening of the output data is about 0.9 UI and the jitter of the recovered clock is around 3.2 ps. Under a 1.2 V power supply, the power consumption is about 270 mW.
Quantum computing, owing to its inherent parallelism, has emerged as one of the novel solutions to complex computing problems. In this connection, several proposals have been made for the quantum-circuit-based design of reversible combinational and sequential circuits. However, implementing reversible sequential logic is challenging in comparison to reversible combinational logic. The spin-torque-based reconfigurable architecture has emerged as one of the novel technologies for realizing quantum circuits. However, this architecture needs an optimized decomposition of the quantum circuits used for reversible sequential logic, owing to the required number of single-qubit rotations and two-qubit entanglement operations. In this paper, the elementary decomposition of the quantum circuits representing the reversible D-Latch is optimized with the help of the elementary quantum library {Ry(θ), Rz(θ), sqrt(SWAP)}. The number of elementary operations required to realize the D-Latch over 5 clock cycles is reduced by 43.56%. The average fidelity of the D-Latch, evaluated at the end of each clock cycle, is well above 97%. The fidelity is further improved by approximating the present-state output used as the next-state input.
Smart gadgets are being embedded in almost every aspect of our lives. From smart cities to smart watches, modern industries are increasingly supporting the Internet of Things (IoT). SysMART aims at making supermarkets smart and productive, with a touch of modern lifestyle. While similar implementations to improve the shopping experience exist, they tend mainly to replace in-store shopping with online shopping. Although online shopping reduces time and effort, it deprives customers of enjoying the experience. SysMART relies on cutting-edge devices and technology to simplify grocery shopping and reduce the time it requires inside the supermarket. In addition, the system monitors and maintains perishable products in a condition suitable for human consumption. SysMART is built using state-of-the-art technologies that support rapid prototyping and precision data acquisition. The selected development environment is LabVIEW, with its world-class interfacing libraries. The paper comprises a detailed system description, development strategy, interface design, software engineering, and a thorough analysis and evaluation.
Remote Condition Monitoring (RCM) of machines monitors machine condition with reduced manning to enhance proactive maintenance. The vibration and acoustic parameters of a machine help in diagnosing its condition for early detection of faults in the system. This paper employs an RCM approach for two elevator parameters, vibration and acoustics, using an Internet of Things (IoT) device for Remote Data Acquisition (RDA) and Remote Fault Indication (RFI). A remote monitoring set-up comprising networked augmented sensors and an Arduino Yun microcontroller was developed and installed on the elevator system to remotely monitor deterioration in its working condition. The set-up was configured to monitor the conditions online through an email application service. The data from the emails were analyzed and notifications generated according to the severity level of each parameter. The results showed that the vibration and acoustic parameters are complementary in fault diagnosis, and that RCM enables faster repair and maintenance decisions and prevents catastrophic breakdown of the machine.
A reliable model for identifying spatial-temporal regularities of human dynamics is rewarding in many applications, such as computer networking and mobile communication. These hidden patterns are inherited from our repeating behaviours with respect to three primary contexts: time, space, and social environment. Thus, selecting a suitable source of sensor data that is scalable, multidimensional, and illustrative of the social network can enable us to develop a reliable human mobility model and, potentially, a prediction system. We first demonstrate that Wi-Fi network scans collected from mobile phones share a similar set of characteristics with real-world large-scale networks. One in particular is the long-tailed node degree distribution of the projection networks. This feature can be interpreted as robustness of the system against structural changes such as removing a set of nodes or connections. We then transform the Wi-Fi events into a tabular data format containing different time granularities and location-tagged information. However, the new data is sparse and difficult to analyze. Thus, we reduce the dimensionality of the data by extracting its structural patterns using principal component analysis of the new features. Our analysis shows that we can reconstruct the original data with more than 90% accuracy using only the set of top eigenvectors, a quarter the size of the original feature set, while outliers with noisy data are filtered out. Our proposed technique helps to visualize user similarities and behaviour dynamics, and reduces the computational complexity of further analysis.
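The eigenvector-based reconstruction step can be sketched as follows, using synthetic data in place of the Wi-Fi events; the matrix sizes, latent dimension, and noise level are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "Wi-Fi event" matrix: 200 users x 40 features whose variance is
# concentrated in a few latent directions (an assumption for illustration;
# the paper uses real Wi-Fi scan data).
latent = rng.normal(size=(200, 5))
mixing = rng.normal(size=(5, 40))
X = latent @ mixing + 0.1 * rng.normal(size=(200, 40))

# Principal components via SVD of the centered data.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 10                                # keep a quarter of the 40 features
Xk = (U[:, :k] * s[:k]) @ Vt[:k]      # rank-k reconstruction
explained = 1 - np.linalg.norm(Xc - Xk) ** 2 / np.linalg.norm(Xc) ** 2
print(f"reconstruction accuracy with {k} components: {explained:.1%}")
```

Because the variance sits in a handful of directions, a quarter of the components recovers well over 90% of the data, mirroring the result reported in the abstract.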
Social networks can both model human society and enable its analysis, and social interactions and activities are increasing dramatically. A common way to construct networks is to generalize from a single dataset, such as location or proximity data. Due to restrictions on location-based services and the telecommunication radio range, it is paramount to find a universal method for obtaining a highly accurate network that represents social society. We design a combined scheme using multiple datasets in order to settle this problem. Moreover, the structures of the networks vary with different definitions of nodes and edges. Prior work has mainly studied the relationship between human mobility and human relationships. In this study, the effect of friendship on human social interactions and activities is also analyzed. We show that the proposed combined network model provides a highly efficient way to construct social networks. We evaluate the performance of the model with centralities and coefficients. Finally, the relationships among the networks are shown as well.
This paper presents a data collection architecture for situational awareness (SA)-centric microgrids. A prototype has been developed which provides extensive data collection capabilities from smart meters, in order to realise an adequate SA level in microgrids. A communication framework based on the publish-subscribe model is also proposed and implemented for the communication layer of the SA, using the message queuing telemetry transport (MQTT) protocol over two different physical (PHY) layers (i.e., WiFi and GPRS). An Internet of Things (IoT) platform (i.e., Thingsboard) is used for the SA visualisation with a customised dashboard. It is shown that, by using the developed system, an adequate level of SA can be achieved with minimum installation and hardware cost. Moreover, the Modbus protocol over RS-485 is applied for the smart meter communication.
In this paper, deep neural networks (DNNs) are applied to features extracted from Parkinsonian speech recordings to predict their perceived quality. This procedure was also used to benchmark the electroacoustic characteristics of speech amplifiers used by people with Parkinson's disease (PD). Speech recordings were obtained from 11 PD subjects and 10 normal controls, with and without the assistance of 7 different speech amplifiers, and their quality was assessed subjectively by normal-hearing listeners. Mel-frequency and Gammatone-frequency cepstral coefficients (MFCCs and GFCCs, respectively) and their first-order derivatives were extracted as features and given as input to the DNN. Two optimizers were used to train the neural network, namely stochastic gradient descent (SGD) and Adam. The paper also shows the effect of feature reduction in enhancing the performance of the objective metrics. Experimental results showed that the reduced GFCC model outperforms the other objective metrics in terms of correlation with the subjective measures.
Phasor measurement units (PMUs) are viewed as among the most vital measurement devices in the future electric grid. PMUs can provide synchronized phasor measurements of voltages and currents from widely scattered locations in an electric power grid. A hybrid Multilayer Perceptron NN-Stochastic Fractal Search (MLP-SFS) algorithm is proposed, in rectangular coordinates, to solve the hybrid state estimation problem. Hybrid state estimation is defined by its measurement set, which consists of traditional as well as synchronized measurements. The approach divides the process into two steps. In the first step, a multilayer perceptron NN is used to compute the initial estimated states. In the second step, SFS is implemented to acquire the final estimated states. This hybrid technique improves the accuracy of state estimation. The number of PMUs is gradually increased by adding them to the conventional measurement set, and seven cases are tested to show the impact of PMUs on accuracy. The application of the hybrid technique is illustrated on the IEEE 14, 30, and 57-bus systems, and the performance of MLP-SFS is compared to MLP and SFS individually.
The paper presents a hybrid technique (SFS-SA) for distributed multi-area state estimation (SE). Two SE levels are considered: local and coordination SE. In local SE, each area executes its own SE based on its local measurements. In coordination SE, the areas exchange border information (boundary measurements), which determines the system-wide state. In this paper, the stochastic fractal search (SFS) technique is utilized to perform the local SE, and simulated annealing is used in the coordination SE. Furthermore, three observable measurement configurations are considered. The hybrid technique (SFS-SA) is validated using the IEEE 118-bus system, which is partitioned into four non-overlapping areas. The results showed a significant reduction in computational time.
This paper proposes a modified stochastic fractal search technique (M-SFS) for power system state estimation (PSSE). Both accuracy and computational time are enhanced in the modified SFS technique. Two modifications are considered. The first is replacing the logarithmic function in the diffusion process with several benchmark functions, which has a critical effect on the algorithm's execution. The second is replacing the uniform distribution parameter in both the diffusion and updating processes with a few chaotic maps, which improves accuracy at the least computational cost. M-SFS is validated using the IEEE 30, 57, and 118-bus systems. The results show that, with these modifications, the M-SFS technique performs better than the original SFS technique.
State estimation is an important tool for monitoring and controlling active distribution systems. An important companion of estimation is bad data identification, which can effectively enhance the estimation's precision when bad data exist in the measurement set. Once the measurement set is found to include bad data, their location must be identified so they can be excluded from the estimation problem. It is well known that the largest value of the residual vector most likely corresponds to the bad data. This paper proposes a new estimation algorithm that enhances the influence of the bad data on the residual vector, increasing the possibility of successful bad data identification based only on the residual vector.
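The largest-residual idea can be sketched on a toy linear measurement model. Ordinary least squares stands in here for the paper's estimator, and the measurement matrix, state vector, and gross-error location are all illustrative assumptions:

```python
import numpy as np

# Toy linear measurement model z = H x + e with one corrupted measurement.
rng = np.random.default_rng(1)
H = rng.normal(size=(20, 3))          # 20 measurements, 3 states (redundant)
x_true = np.array([1.0, -0.5, 0.25])
z = H @ x_true + 0.01 * rng.normal(size=20)
z[4] += 2.0                           # inject gross bad data at index 4

# Estimate the state, then flag the measurement with the largest residual.
x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)
residual = z - H @ x_hat
bad_index = int(np.argmax(np.abs(residual)))
print("suspected bad measurement:", bad_index)
```

With enough measurement redundancy the gross error dominates its own residual entry, so the argmax points at the corrupted measurement; the paper's contribution is to sharpen exactly this effect.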
Sagging of the conductor in a transmission line plays a vital role in the safety, reliability, and efficiency of power transmission. Transmission lines must be designed to guarantee the maximum static loading capacity, which is done by maintaining a minimum vertical clearance between the cables and the ground. However, increasing the cable length between two towers leads to higher material cost and electrical energy loss, as well as an increased possibility of intervention. On the other hand, reducing the line sag induces high tension in the conductor, which may damage it. To assure a safe sagging profile, inspection is essential during the establishment and maintenance of power transmission lines. In this paper, the dynamic formulation of long, heavy cables is first developed. Then an image processing method is applied for the inspection of cable sagging. To investigate the method, a reconfigurable experimental setup is designed to provide various sagging profiles; the sagging profile is extracted via image processing, and the result is compared to that of the analytical method.
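For equal-height supports, the static sag of a heavy cable follows the classical catenary relation, which the paper's analytical comparison builds on. A minimal sketch, where the span and catenary constant are illustrative assumptions, not the paper's test values:

```python
import math

def catenary_sag(span_m, a):
    """Mid-span sag of a cable hanging between two equal-height towers.

    `a = H / w` is the catenary constant (horizontal tension divided by
    weight per unit length of cable).
    """
    return a * (math.cosh(span_m / (2.0 * a)) - 1.0)

# Example: 300 m span with catenary constant 1500 m.
sag = catenary_sag(300.0, 1500.0)
print(f"mid-span sag: {sag:.2f} m")   # a few metres of sag
```

An image-processing inspection would compare the extracted conductor profile against such an analytical curve to flag clearance violations.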
The traditional fast Fourier transform (FFT) has gained enormous recognition for broken rotor bar (BRB) fault detection in induction motors using sideband features as fault indices; however, false alarms from inaccurate diagnosis remain a major setback of the technique. This paper presents two reliable spectral analysis approaches for BRB fault detection and analysis in induction motors: the Thomson multitaper method (MTM) power spectral density (PSD) estimate and the Welch PSD estimate. The two methods are applied to the simulated stator current signal of an induction motor obtained by the finite element method. The finite element analysis software ANSYS is used to design and simulate different motor conditions: a healthy motor and motors with one, two, and three BRBs. It is verified that the proposed methods provide robust and reliable BRB fault detection for induction motors.
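Welch's method averages windowed periodograms over overlapping segments, trading frequency resolution for variance reduction. A minimal numpy sketch on a synthetic stator current; the supply frequency, sideband offsets, and amplitudes are illustrative assumptions, not the paper's FEM data:

```python
import numpy as np

def welch_psd(x, fs, seg_len=512):
    """Welch PSD estimate: average windowed periodograms of overlapping segments."""
    step = seg_len // 2                      # 50% overlap
    win = np.hanning(seg_len)
    scale = fs * (win ** 2).sum()            # density normalization
    segs = []
    for start in range(0, len(x) - seg_len + 1, step):
        seg = x[start:start + seg_len] * win
        segs.append(np.abs(np.fft.rfft(seg)) ** 2 / scale)
    psd = np.mean(segs, axis=0)
    freqs = np.fft.rfftfreq(seg_len, 1.0 / fs)
    return freqs, psd

# Synthetic stator current: 50 Hz supply plus weak BRB-like sidebands
# at 46 Hz and 54 Hz (slip and amplitudes are assumptions).
fs = 1000.0
t = np.arange(0, 10, 1 / fs)
current = (np.sin(2 * np.pi * 50 * t)
           + 0.02 * np.sin(2 * np.pi * 46 * t)
           + 0.02 * np.sin(2 * np.pi * 54 * t))

freqs, psd = welch_psd(current, fs)
peak = freqs[np.argmax(psd)]
print(f"dominant component: {peak:.1f} Hz")
```

The averaging step is what suppresses the spurious spectral peaks that cause FFT false alarms; the MTM estimate achieves a similar variance reduction with orthogonal tapers instead of overlapping segments.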
The totem-pole PFC has the potential to achieve high efficiency and power density by using GaN HEMTs as the high-frequency switching devices. With the advent of 600-650 V rated commercial GaN HEMTs, the totem-pole PFC is expected to emerge as the dominant topology for the front end of telecom and data center power supplies. Like most bridgeless topologies, the totem-pole PFC generates a large amount of common-mode (CM) noise. Around the zero-crossings of the input line voltage in particular, the noise is very prominent, and a large current spike is also observed in the input current. A few existing works have theorized about the origin of the CM noise and the spike current, but they lack a proper explanation relating these two issues and estimating the influence of different circuit parameters. The objective of this work is to establish the relationship between these two issues, primarily through the investigation of practical waveforms and a modified CM noise model. To facilitate the design of totem-pole PFCs with reduced EMI, a set of mathematical equations is also developed from a proposed equivalent circuit. Finally, the contributions of different circuit parameters to CM noise and spike-current generation at the zero-crossing are identified and outlined from the analysis of the equivalent circuit.
This paper studies the use of an ensemble of one-class classifiers for broken rotor bar detection in induction motors. To achieve this goal, the current signal of the induction motor is taken into account for detection. The fault detector is a multiple classifier system (MCS), which combines various one-class classifiers to enhance the accuracy of the monitoring system. The one-class classifiers are combined in different manners to form the ensembles, including random subspace, bagging, and boosting strategies. These ensemble-based schemes are constructed in homogeneous and heterogeneous configurations and compared for the purpose of fault detection in induction motors.
This paper presents a new hybrid multilevel inverter topology which combines a four-level nested neutral point clamped (NNPC) converter with a half-bridge inverter. The converter has fewer components than other multilevel converters and features a modular structure for cascading multiple H-bridges to produce higher voltage levels. A sinusoidal pulse width modulation (SPWM) scheme is presented which utilizes redundant switching states to control the capacitor voltage levels. A three-phase, seven-level converter is simulated and analyzed.
This paper presents the design and analysis of an indirect field-oriented controlled (FOC) induction motor drive system based on the space vector pulse width modulation (SVPWM) technique. The induction motor is fed by a voltage source inverter (VSI). A PI controller is employed to regulate the fluctuation of the motor current and torque caused by the parametric variation of the induction motor. While the FOC algorithm maintains motor efficiency over a wide speed range, it also accounts for torque changes during transients by processing a dynamic model of the motor. A Simulink-based model has been thoroughly developed in order to verify the accuracy and stability of the control system.
Green energy sources such as wind and solar are main contributors to the modern distribution grid, known as the smart grid. Variation in these sources can cause unintended operation of the power transformer, which is a key component of the smart grid. Power quality, reliability, and availability of electrical energy at the consumer's end have long been top priorities for electrical power supply companies. The smart grid is expected to provide environmentally friendly and efficient distributed generation. Its priority is not only the combination of smart sensors and automated operation of different sections of the grid, but also better overall performance of the electrical power system. This can be achieved by continuous monitoring and control of transformers, which are a very important part of the power system. This paper proposes an Arduino-based system with a GSM modem for remote monitoring, protection, and control of the transformer. The software-based hardware provides an easy and effective way to implement protection of the distribution transformer against overcurrent, overvoltage, humidity, oil temperature, and winding temperature, each handled separately. If any of the specified parameters exceeds its preset value, the system takes the corresponding protective action and also sends the information to the configured mobile number. Furthermore, a message can be sent from the cell phone to the related control sections for further actions regarding the control and status of the different parameters at the transformer yard. The system is easy and friendly for the user and can be moved to any location. The automated action of the proposed circuit is particularly helpful for autotransformers at the power grid, and it is a progressive step towards the smart grid and distributed generation.
Accurate wind power prediction error (WPPE) modeling is of high importance in power systems with large-scale wind power generation, which contains a high level of uncertainty. Since WPPE cannot be entirely removed, an accurate probability distribution model of it can assist power system operators in mitigating its negative effects on decision making. In this paper, unlike previous related works, a nonparametric model is presented using kernel density estimation (KDE) with an efficient bandwidth (BW) selection technique called the "advanced plug-in" technique. This BW selection technique enables KDE to accurately estimate important features of the WPPE distribution, e.g., fat tails, high skewness, and kurtosis. The proposed WPPE modeling approach is evaluated using a one-year time series of real wind power and the corresponding predicted values for a 1-hour look-ahead time. The efficacy of the proposed WPPE model is demonstrated using the Centennial wind farm dataset from southern Saskatchewan, Canada. Results show that parametric distribution models such as Normal and Stable cannot properly model the uncertainty of WPPE.
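A minimal sketch of the kernel-density idea in Python may help fix the intuition. The bandwidth selector below is Silverman's rule of thumb, used purely as a simple stand-in for the "advanced plug-in" technique described in the abstract; function names and sample values are illustrative:

```python
import math

def gaussian_kde(samples, bandwidth):
    """Return a nonparametric density estimate f(x) built from `samples`.
    Each sample contributes one Gaussian kernel of width `bandwidth`."""
    n = len(samples)
    norm = 1.0 / (n * bandwidth * math.sqrt(2.0 * math.pi))

    def density(x):
        return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                          for s in samples)
    return density

def silverman_bandwidth(samples):
    """Silverman's rule of thumb -- a stand-in here for the paper's
    'advanced plug-in' bandwidth selector."""
    n = len(samples)
    mean = sum(samples) / n
    std = math.sqrt(sum((s - mean) ** 2 for s in samples) / (n - 1))
    return 1.06 * std * n ** (-0.2)
```

In practice the samples would be the historical prediction errors, and the resulting density can be inspected for the fat tails and skewness that parametric fits miss.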
There has been a strong drive to enable Distributed Energy Resources (DERs), such as photovoltaic (PV) generators, Combined Heat and Power (CHP), and Electric Vehicles (EVs), at large scale at the residential level. One notable aspect of these devices is their size: typical rooftop PV systems are in the range of 5-15 kW, residential CHPs in the range of 1-20 kW, and EVs in the range of 2-15 kW. Because their topologies contain power electronics components, there are concerns that their impact on the utility grid could be significant, especially with respect to power quality. To address this concern, this paper presents measurements and analyses of common PV, CHP, and EV units deployed in Canada, and compares their electromagnetic compatibility emissions against the applicable standards. Despite their power electronics topologies and sophisticated switching control, these DERs are expected to have a small impact on the distribution system voltage supply waveform.
In this paper, a robust and simple control strategy is presented for power quality improvement of a grid-connected renewable energy system with maximum power point tracking (MPPT) of solar photovoltaic (PV) and wind-based permanent magnet synchronous generator (PMSG) sources. A combination of MPPT control and active power control algorithms is applied to two-level inverters interfacing the PV-PMSG energy sources with the grid. For MPPT, dc-dc boost and ac-dc converters are used for the PV system and the wind turbine, respectively. Both renewable sources are connected to a voltage source converter (VSC) to feed the grid. The VSC is controlled for power quality improvement and AC grid voltage regulation. The developed control algorithm for the grid-connected VSC is based on a modified synchronous reference frame (SRF); it delivers the desired power quality, minimizes the total harmonic distortion (THD), and provides fast dc-link voltage regulation. The performance, effectiveness, and robustness of the hybrid system are validated using Matlab/Simulink. The proposed composite controller ensures rapid and desired control action without any adjustment under varying climate conditions and load variations.
This paper proposes a new methodology for managing distributed generation (DG) investment proposals submitted by DG investors to local distribution companies (LDCs). The work assumes that governmental incentives are no longer available, as is the case in Ontario, and that LDCs would therefore determine the appropriate incentives for DG investors. The proposed approach is implemented in two main stages. In stage one, an optimization model defines the accepted and rejected DG capacities so as to satisfy the operational system constraints. In stage two, economic analyses are carried out for the accepted DG capacities to determine the optimal incentive prices at which the profitability of the DG investments is guaranteed.
This paper introduces a MATLAB/Simulink package including two well-known power system benchmarks developed in Simscape Power Systems. The simulation models can be employed for baselining and for testing new control techniques and protection algorithms in renewable and microgrid integration studies. Different simulation scenarios, including time-domain and small-signal analysis, are presented to support the correctness of the implementation. Furthermore, we present a frequency measurement (phasor) block to measure the frequency in a Simscape Power Systems model running in phasor mode. The models are available on the MATLAB Central File Exchange for power system education and research worldwide.
New ideas supporting the transition of the energy system are needed. The growth of renewable energy sources for electricity production is afflicted with uncertainty in forecasting their energy supply. To keep the electric grid in stable operation at times of highly volatile supply from renewable energy sources, distributed battery energy storage systems are seen as one possibility. They provide flexibility as an ancillary service for transmission system operators as well as improved self-sufficiency for residential buildings with photovoltaic systems. This research-in-progress paper presents an approach towards coordinated multi-agent reinforcement learning-based swarm battery control. The goal is to use reinforcement learning to manage the power flows between the battery, a photovoltaic system, the household's electric load, and the electric grid. Our approach uses the battery both to offer frequency containment reserve and to improve energy self-sufficiency. Distributed battery energy storage systems act as single agents for a local operating optimum, and coordinated reinforcement learning across several battery storage agents ensures convergence towards a global optimum. As a last step, we compare the performance of our algorithm with a hybrid simulation model defining the same system configuration and objective. First test runs show that our algorithm is able to learn within its environment and converges towards an optimal control policy.
Modern electric power systems are heavily built on digital technology. From the point of production to the point of consumption, all sensing devices have two-way communication that allows quick and smooth actions to monitor, control, optimize, and protect the electric power grid, with a fast response to any change in system status. During the winter season, especially in cold countries, a large portion of non-electric energy is consumed for heating purposes. Examples of such alternative energy sources are logs, solar, and biogas water heaters. In smart grids, local trading between two entities is possible, where the one with surplus energy can transfer it to others. However, the preceding non-electric forms of energy are still not traded between entities. This paper presents a new concept for making this type of non-classical energy trading possible between entities. The concept can be considered a basis for solving the missing link in managing electric and non-electric forms of energy in next-generation smart grids.
The increasing use of distributed generation such as rooftop solar panels, together with the charging of large fleets of electric vehicles, will result in over- and under-voltage problems in low-voltage distribution networks. Distributed electric springs have been proposed as an effective means of controlling these voltage problems. However, when multiple distributed electric springs are active in a system, each electric spring tries to correct the local voltage problem. As a result, groups of electric springs located in different sections of the same radial network can end up competing against each other at any given time. In the past, droop control has been suggested as a means of avoiding this conflict. This paper highlights the problem with simple droop control of electric springs in radial distribution networks and presents coordination between electric springs as an alternative. The droop control and the coordinated droop option are compared in terms of their voltage control capability and required compensator capacity, by means of a case study on a typical European LV network with stochastic demand profiles for different types of residential customers. The cost of communication can be justified by comparing it with the savings in the required compensator ratings for similar voltage control.
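As a rough illustration of the droop idea underlying the abstract above, the local control law can be sketched as reactive-power injection proportional to the voltage deviation, saturated at the compensator rating. This is a sketch only; the linear droop law, the nominal voltage, and all numeric values are our illustrative assumptions, not the paper's actual controller:

```python
def droop_reactive_power(v_local, v_nominal=230.0, droop_gain=5000.0,
                         q_rating=2000.0):
    """Voltage-droop sketch for one distributed electric spring.
    Returns reactive power in var; positive means capacitive support.
    droop_gain (var/V) and q_rating (var) are illustrative values."""
    q = droop_gain * (v_nominal - v_local)
    # Saturate at the compensator's rated capacity.
    return max(-q_rating, min(q_rating, q))
```

The coordinated option discussed in the paper would, loosely speaking, replace the purely local deviation with a signal agreed among the springs, at the cost of communication.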
Power transmission lines in British Columbia, Canada, are often built along terrain where trees surround the right-of-way. These trees provide natural shielding against direct lightning strokes to the transmission lines. In this paper, a new method is proposed to quantify the shielding effect of trees based on LiDAR survey data. The method takes as input statistical parameters of the trees around the edge of the right-of-way, such as their heights, distance to the centerline, and density per span. LiDAR survey data were collected for transmission line corridors in BC Hydro's system, and calculation results are shown for a new transmission line project in British Columbia.
This paper proposes the use of deep convolutional neural networks (CNNs) to classify images of marine resources (especially fish) captured in non-structured scenarios. Tests conducted using two state-of-the-art deep CNN architectures show that deep learning can be used efficiently for this type of classification. AlexNet and GoogLeNet were both used to classify images captured onboard fishing boats. The best results were obtained using transfer learning with pretrained ImageNet models: these models initialize the convolutional layers, and the fully connected layers are then retrained on the available dataset. Using this strategy, AlexNet and GoogLeNet achieve success rates of 94.01% and 96.01%, respectively. These results are further improved by extracting fish areas and using them for training and classification: classification of cropped fish areas reaches 96.35% accuracy with AlexNet and 96.54% with GoogLeNet. Overall, GoogLeNet was the best-performing network. The top-2 accuracy obtained by GoogLeNet was 97.87% for full-image classification and 98.94% for the cropped images.
The multivariate generalized Gaussian distribution has been an attractive solution for many signal and image processing applications; therefore, efficient estimation of its parameters is of significant interest for a number of research problems. The main contribution of this paper is a fixed-point estimation algorithm for learning the parameters of the multivariate generalized Gaussian mixture model (MGGMM). A challenging application, human action recognition, is deployed to validate our statistical framework and to show its merits.
Digital images come in several sizes, and they can be easily displayed on a computer screen using algorithms that reduce their dimensions. Recent advances have contributed to the emergence of gigapixel images containing a large amount of information; such images can be understood as a mosaic of multiple single pictures. In this work, we propose and analyze a multiresolution method for finding people in gigapixel images. Experimental results demonstrate the effectiveness of the developed approach.
Metallography is a field of study focused on the analysis of metal microstructure and defects, and on material identification. ASTM International provides the E112 protocol to support material observation based on average grain size. This method requires counting the total number of grains intersected by a circular area of 645 mm² (1 in²) and following directions to identify the material. However, this process demands high accuracy and expertise and is highly manual, making it subject to human error. Moreover, having at least some previous knowledge about the material helps in choosing the most suitable protocol. In this work we present an approach to metallographic specimen identification based on image classification with classic machine learning algorithms. We prepared specimens following ASTM guidelines for six different materials and collected sample images with a microscope. We compared K-Nearest Neighbors, Decision Tree, and Linear Discriminant Analysis classifiers, using raw pixels, gray-level histograms, and GLCM features as input data. Our experiments were performed with 1,200 patch samples of different pixel sizes, reaching an average accuracy of 96.8%. The proposed approach thus presents a path toward automated metallographic studies.
Image aesthetics classification is the task of classifying images based on visual signatures in the data rather than the semantics associated with them. In this work, we develop learning techniques inspired by the way a human brain identifies images. We develop CNN models that provide the most useful information to the network by leveraging the joint information from wavelet-compressed image patches and class activation maps (CAMs). The performance of the network in recognizing images based on simple visual aesthetics signatures is shown to be better than existing techniques, with a few caveats.
In this work, freestanding silicon nanostructures with high aspect ratio are fabricated using a chemical-electrochemical etching technique. <100> silicon samples are first textured, becoming covered with pyramid-shaped hillocks, via anisotropic wet etching in TMAH/IPA solutions. The effects of various TMAH/IPA combinations on the pyramidal texture of silicon are studied in detail. The textured silicon samples are then subjected to electrochemical etching in an HF/ethanol solution to form the nanostructures. A third, fine etching step in diluted TMAH/IPA solutions was necessary to remove the residual, un-etched walls between the structures. The effects of various fabrication parameters controlling the length and tip-to-tip separation of the final structures are investigated, and potential applications of the fabricated structures are discussed.
In nanoparticles, material properties are often exploited to functionalize them for various applications. In metallic nanoparticles, the localized surface plasmons show an interesting property upon external excitation: there is a sizeable red shift of the optical resonance with an increase in the dielectric constant of the surrounding medium. In this paper, we present an alternative explanation of this phenomenon, showing that it follows directly from the theory of the parallel resonant circuit. We show that the magnitude of the peak red shift depends on the values of the components of the resonant circuit, which are derived directly from the dielectric function of the surrounding medium. The derived results are accurate, and their equivalence is verified against the Mie solution.
We demonstrate coherent transmission of a hybrid modulation format combining DMT and QAM. In an SNR-limited regime, we show experimentally that the use of hybrid modulation increases the performance of a silicon photonics modulator.
We propose a novel adaptive pre-compensation method to correct for the filtering effects caused by cascaded reconfigurable optical add-drop multiplexers (ROADMs). The improvement is achieved without additional hardware (HW) on the link or within a typical coherent signal processor in the transponders. Using an estimate of the common response across all branches of the adaptive equalizer at the receiver, our method achieves an improvement of up to 0.6 dB in required optical signal-to-noise ratio (R-OSNR).
We review the challenges of cladding-pumped multi-core optical fiber amplifiers for application to space division multiplexing (SDM). Through numerical simulations, we investigate two fiber designs: the first with a uniform cladding and the second with an annular cladding to guide the pump. We compare the multi-core amplifier gain, noise figure, and pumping efficiency in a WDM scenario. We present the fabricated fibers and summarize experimentally measured performance, which shows gain > 20 dB and NF < 6 dB over the whole C-band. Finally, we examine the scalability of the annular cladding design.
This paper presents a high-throughput capacitive biosensor using the charge-based capacitive measurement (CBCM) technique, suitable for lab-on-chip (LOC) applications. The proposed biosensor consists of a 10×10 array of core-CBCM capacitive sensors with digital outputs operating in current mode. Each capacitive sensor consists of a differential current mirror, a current-controlled oscillator, and an up/down counter based on a linear feedback shift register (LFSR). By converting the current response of the core-CBCM circuit to pulse frequencies and counting the number of output pulses during a specific time interval, the required integration is performed in the digital domain instead of the analog domain, avoiding operation in voltage mode. This approach yields a wide dynamic range of more than 100 fF with a controllable sensitivity of about 118 pulses/fF. Based on the simulation results, the proposed sensor offers great advantages for high-throughput drug screening applications.
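The digital-domain readout step described above can be sketched as a simple count-to-capacitance conversion. The ~118 pulses/fF sensitivity is the figure quoted in the abstract, but the reference-count subtraction, the count window, and the function interface are our illustrative assumptions:

```python
def capacitance_from_counts(count, count_ref, sensitivity=118.0):
    """Recover a capacitance change (in fF) from the difference in
    output-pulse counts between a sensing channel and a reference
    channel, given a sensitivity in pulses per fF (~118 per the paper).
    The count window and calibration scheme are assumptions."""
    return (count - count_ref) / sensitivity
```

Counting pulses over a fixed interval performs the integration digitally, which is exactly what lets the sensor avoid analog voltage-mode integration.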
This paper describes the implementation of an electronic system for detecting different types of obstacles with a white cane, aimed at improving the daily mobility of people with visual disabilities. The system consists of an ultrasonic sensor mounted on a stepper motor to detect possible obstacles, with a working range between 0.5 m and 5 m in distance and between 90° left and 90° right in detection. A sound module and a buzzer are also implemented in the handle of the cane to efficiently alert the user to possible obstacles around them. An Android application was also designed. The mobile application communicates with the white cane through a GPS and GSM module to help locate the visually disabled person: the longitude and latitude parameters are sent in a text message, so that a relative can access the application and visualize, through Google Maps, the exact location of the person carrying the smart cane.
Monitoring the physical properties of tissues provides valuable information for clinical diagnosis and evaluation. However, one of the challenges of continuously monitoring tissue motion ultrasonically with a conventional clinical ultrasonic imaging system is motion artifacts due to the weight and size of the handheld ultrasonic probe. The inherent properties of polyvinylidene fluoride (PVDF) polymer piezoelectric film allow the construction of a wearable ultrasonic sensor (WUS) that is flexible and lightweight. However, a PVDF ultrasonic sensor has relatively weak transmitted acoustic signal strength, which causes a poor signal-to-noise ratio in the ultrasonic signals acquired in pulse-echo measurements, particularly for signals reflected from deep tissues. This paper investigates improving the ultrasonic performance of the WUS using a double-layer PVDF film. The sensor was constructed using two 52-µm-thick PVDF films. The developed double-layer WUS showed 1.7 times greater ultrasonic signal amplitude compared to a WUS made of a single-layer PVDF of equivalent total thickness, thereby improving the depth of ultrasonic penetration into the tissues. The developed WUS was successfully demonstrated in monitoring the contraction of the biceps muscle in an upper arm. In addition, cardiac tissue motion was clearly observed in M-mode measurements, corresponding with the cardiac cycles obtained from ECG measurements.
The growth in wearable medical sensor-based technologies has made it possible to capture high volumes of physiological data from patients, both within and outside the hospital. The acquired physiological data are analyzed, usually in real time, by a patient monitoring application for early disease detection or to detect other changing conditions of a patient. In some cases, it is desirable to have a distributed, scalable patient monitoring system to which the physiological data of different patients can be submitted for online analysis. Such a system should be able to support the concurrent analysis of multiple data streams from different patients, allowing a clinician to remotely monitor more than one patient from a single location. This type of system also conserves resources, since there is no need to provision computational resources for every single patient being monitored. In this paper, we explore the usability of Apache Storm, an open-source real-time processing engine, in the development of such a scalable patient monitoring system. The contribution of this work is to demonstrate that a more resource-efficient alternative to isolated patient monitoring systems can be achieved by using a distributed real-time computation platform, Apache Storm, to develop a scalable health monitoring system that supports the concurrent monitoring of multiple patients. To show how the proposed system can be developed, we describe a prototype implementation of a multi-tenant health monitoring application that monitors the arrhythmia status of multiple patients, based on a simple ECG analysis, using Apache Storm.
Bio-signal computing involves signal acquisition, conditioning, and processing. Signal acquisition and conditioning are inherently done through discrete analog front-end circuitry, and signal processing in the analog domain involves higher component counts, inaccuracies, and limitations. Due to recent developments in VLSI technology, highly integrated Analog Front Ends (AFEs) are emerging, capable of processing bio-signals under program control. This ushers in a new design paradigm for embedded system designers. In this paper, we present a highly integrated AFE-based embedded system: a novel, low-cost remote platform capable of multimodal bio-signal computing, including non-invasive blood pressure estimation.
An Orthogonal Frequency Division Multiplexing (OFDM) system with a QPSK (Quadrature Phase Shift Keying) mapper is considered. A Selected Mapping (SLM) technique with a new method for generating pseudo-random sequences is proposed for PAPR reduction. Conventional random sequences lack systematic structure, which increases system complexity; moreover, SLM requires sending side information to the receiver in order to recover the original signal. The proposed sequences are generated by a simple systematic construction, so the side information is reduced to a single index value of the column used. Simulation results show that the proposed scheme has nearly the same PAPR performance as the conventional random sequences.
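The SLM selection loop itself can be sketched in a few lines. This is a generic SLM sketch, not the paper's method: the paper's contribution is a systematic construction of the candidate phase sequences (so that only a column index need be signalled), which is not reproduced here; the candidate sequences are simply passed in as an argument:

```python
import cmath
import math

def idft(X):
    """Inverse DFT; the scaling does not affect PAPR, which is a ratio."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N)
                for k in range(N)) / N
            for n in range(N)]

def papr(x):
    """Peak-to-average power ratio of a time-domain signal."""
    powers = [abs(v) ** 2 for v in x]
    return max(powers) / (sum(powers) / len(powers))

def slm_select(symbols, phase_sequences):
    """Classic SLM: rotate the frequency-domain symbols by each candidate
    phase sequence, take the IDFT, and keep the lowest-PAPR candidate.
    Returns (chosen index, its PAPR, the time-domain signal)."""
    best_idx, best_papr, best_signal = None, float('inf'), None
    for idx, phases in enumerate(phase_sequences):
        rotated = [s * p for s, p in zip(symbols, phases)]
        x = idft(rotated)
        p = papr(x)
        if p < best_papr:
            best_idx, best_papr, best_signal = idx, p, x
    return best_idx, best_papr, best_signal
```

Only the winning index (here `best_idx`) would be sent as side information, which is the point of giving the candidate sequences a systematic structure.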
Delay- and Disruption-Tolerant Networks (DTNs) have attracted significant attention in recent years. It has been shown that traffic in DTNs may be bursty and correlated and may therefore exhibit self-similar characteristics. The proposed work analyses the queueing behaviour and estimates a crucial network parameter, the buffer size, in a vehicular DTN in the presence of bursty traffic. We have modified the existing N-burst model for a delay-tolerant environment to show that, even with a fairly low buffer overflow probability (BOP), the buffer size in such networks can be substantially large. Simulations and statistical analysis of the data traffic confirm that our modified N-burst model exhibits self-similarity in a vehicular DTN environment and gives realistic buffer sizes for target BOPs.
We address the problem of uplink channel estimation in TDD multiuser massive multi-input multi-output (MU-MIMO) systems when the uplink training duration is limited. Based on the concept of compressive sensing (CS), the uplink channel can be estimated with limited training duration if the channel can be sparsely represented. In this paper, a low-rank matrix approximation (LRMA) technique based on CS is proposed for the massive MU-MIMO channel estimation problem. The channel estimation problem is formulated as a quadratic nuclear-norm optimization problem with a linear constraint. The regularization parameter, which balances the data-fidelity term against the convex penalty function, is selected using the cross-validation (CV) curve method. Simulation results demonstrate that the proposed method outperforms the LS method in terms of estimator performance, while also reducing the pilot length and the computational cost.
In this article, we analyze the performance of the LMS (Least Mean Square) adaptive filter in the context of dual-polarization coherent optical receivers. We consider the penalty in the presence of polarization-dependent loss (PDL), and we study the tracking of fast state-of-polarization (SOP) changes in the presence of both PDL and optical filtering. The trade-off between tracking capability and back-to-back required OSNR (optical signal-to-noise ratio) when choosing the step size is also illustrated.
An important step in generation adequacy evaluation for power system planning involving wind farms is developing an accurate wind speed model for a site. The Auto-Regressive Moving Average (ARMA) model is one of the most common approaches for predicting future wind speeds. This method, however, has drawbacks; for example, the probability distribution of the ARMA model might follow a Normal distribution and thus produce negative wind speeds. In this paper, a neural network based approach is proposed for wind speed time series prediction, and three training algorithms, Bayesian Regularization, Levenberg-Marquardt, and Scaled Conjugate Gradient, are considered. Wind speed data from St. John's, Newfoundland and Labrador, Canada, are used in a case study to validate the proposed approach. The results obtained from the neural network approach are compared with those from the ARMA model, and the neural network approach is found to provide more accurate wind speed time series prediction.
In this paper, we present a new method for Hourly Live GHG (HLGHG) calculation and for forecasting the HLGHG generated by electrical energy consumption. Most GHG standards rely on emission factors provided by the IPCC (Intergovernmental Panel on Climate Change) for different fuel types. In general, it is not obvious that one can make a direct comparison to conclude that any specific standard is better than the others. Moreover, to calculate and compare the GHG emissions of two properties located in two different countries or regions, one needs to apply country-specific emission factors for each fuel. Our method does not depend on any specific factors and provides an upper bound for the HLGHG generated by the electrical grid, allowing comparison of the GHG emitted by different buildings in different countries and regions. We show that our results are consistent with the Environment and Climate Change Canada (ECCC) method. Finally, we provide forecasted upper and lower bounds for HLGHG by using a combination of regression and ARIMA models together with our HLGHG calculation method.
In our earlier paper [9], we took the histories of wind speed forecasts and actual wind speed data available from Environment Canada and showed that the hourly wind speed forecast error distributions are nearly Gaussian in nature.
In this paper, we use the hourly error distribution to model a representative wind-speed realization as the sum of a deterministic term and a stochastic term. The deterministic term is the forecast provided by Environment Canada, while the stochastic component, the error in the forecast, is modeled as a first-order Gauss-Markov process.
Wind-speed realizations were then fed to a wind generator model developed in MATLAB/Simulink to obtain wind power realizations; the uncertainties in the wind-speed realizations were thereby transferred to the wind power realizations. Monte Carlo simulations were performed to assess the likely range of wind power production. It is shown how the statistics of the wind power prediction obtained by Monte Carlo simulation indicate the risk involved in wind power production.
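The wind-speed realization described above (deterministic forecast plus a first-order Gauss-Markov error) can be sketched as an AR(1) process in a few lines. The correlation coefficient `rho`, the innovation scale `sigma`, and the clipping of speeds at zero are illustrative assumptions, not values from the paper:

```python
import random

def wind_speed_realization(forecast, rho=0.9, sigma=1.0, seed=None):
    """Generate one representative wind-speed realization: the forecast
    (a list of hourly speeds) plus a first-order Gauss-Markov error
    e[t] = rho * e[t-1] + w[t], with w[t] ~ N(0, sigma^2)."""
    rng = random.Random(seed)
    error = 0.0
    realization = []
    for f in forecast:
        error = rho * error + rng.gauss(0.0, sigma)
        realization.append(max(0.0, f + error))  # speed cannot be negative
    return realization
```

Repeating this for many seeds and pushing each realization through a wind-turbine power curve is the essence of the Monte Carlo assessment mentioned in the abstract.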
Accurate load forecasting is a critical step in power system generation planning. Contingency parameters of the system and their dynamic characteristics should be taken into account for load forecasting. In this paper, a probabilistic load forecasting algorithm considering contingency parameters is developed for peak load forecasting. Using the Anderson-Darling test toolbox in MATLAB and historical data, the probability distribution of each contingency parameter can be determined. In a case study, a Monte Carlo simulation is run to forecast the load demand and generation scenarios of Bangladesh based on the developed adaptive algorithm and the fitted probability distributions. The influence of the contingency parameters is evaluated using a Bayesian network in a sensitivity study.
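The Monte Carlo step can be sketched as follows. Each fitted contingency-parameter distribution is represented as a sampler, and each draw perturbs the base peak; the additive model and all numbers are our illustrative assumptions, not the paper's exact formulation:

```python
import random

def monte_carlo_peak_load(base_peak, contingency_samplers,
                          n_trials=10000, seed=0):
    """Probabilistic peak-load sketch: each contingency parameter is a
    zero-argument-per-rng sampler drawn from its fitted distribution and
    contributes an additive deviation (in MW) to the base peak.
    Returns (mean, empirical 95th percentile) over the trials."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        load = base_peak + sum(s(rng) for s in contingency_samplers)
        trials.append(load)
    trials.sort()
    mean = sum(trials) / n_trials
    p95 = trials[int(0.95 * n_trials)]
    return mean, p95
```

The percentile output is what a planner would compare against available generation to judge shortfall risk.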
In this paper, seven critical parameters contributing to generation shortages and affecting peak load demand forecasting for power system generation planning have been identified. Because it is difficult to predict the occurrence of each parameter effectively, time series regression models of these parameters and of the peak load demand are developed using the curve fitting toolbox in MATLAB. The accuracy of the developed models is evaluated through the residuals, goodness of fit, and percentage errors calculated between actual and calculated data.
Differential Evolution (DE) is a popular global optimization algorithm, mostly due to its high performance, easy implementation, and use of only a few control parameters. The mutation scheme is one of the important steps of DE, which selects a number of individuals from the population as parents to generate the next population during the evolutionary process. The parents are traditionally selected randomly, and in some mutation schemes the best member of the population is selected as one of the parents. In this paper, we propose the centroid-based differential evolution (CenDE) algorithm, which uses the centroid of the top three individuals in the population, in terms of objective function value, as the base parent. The experiments are conducted for high- and low-dimensional problems with small and standard population sizes on the CEC Black-Box Optimization Benchmark problems 2015 (CEC-BBOB 2015). Our experiments show that the center of the best three individuals plays an important role in generating candidate individuals with better objective values for the next generation, resulting in faster convergence compared with the DE algorithm.
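A sketch of the centroid-based mutation idea, assuming a minimization problem; the function name and parameter choices below are ours, not taken from the paper:

```python
import random

def cende_mutation(population, fitness, F=0.5, rng=random):
    """One CenDE-style mutant: the base vector is the centroid of the
    three best individuals (lowest fitness, i.e. minimization),
    perturbed by a scaled difference of two random members."""
    ranked = sorted(range(len(population)), key=lambda i: fitness[i])
    top3 = [population[i] for i in ranked[:3]]
    centroid = [sum(x) / 3.0 for x in zip(*top3)]
    r1, r2 = rng.sample(range(len(population)), 2)
    return [c + F * (a - b)
            for c, a, b in zip(centroid, population[r1], population[r2])]
```

Compared with classical DE/rand/1, only the base vector changes: the random base parent is replaced by the centroid of the current top three.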
In this work, we propose a simple but effective technique to refine a disparity map into a more appropriate configuration. The proposal consists of three main steps: a segmentation process, a statistical analysis, and the use of adaptive weight windows. Furthermore, we investigate whether a disparity map produced by a robust stereo method can be improved by the proposed methodology, implementing several stereo vision methods for comparison. The experimental results show that the proposed method is efficient and can enhance disparity maps, for example by reducing the disparity error measure.
In this work, we propose a disparity refinement method to be applied in stereo matching algorithms. It consists of a segmentation process, a statistical analysis of grouped areas, and a support-weighted function to find unknown disparities. We investigate the behavior of this method by comparing it with other post-processing techniques, such as the left-to-right consistency check. Comparing some of the most common refinement techniques, the experimental results show that our method achieved the lowest errors among non-weighted functions. Furthermore, a qualitative evaluation shows that our method reaches significant results, close to the ground truth maps.
Machine learning has become an important tool for data scientists and engineers in recent years, allowing software and computer hardware to perform pattern recognition tasks in many areas of research and application. Online pattern recognition is an active area of research, as the Internet of Things (IoT) has dramatically increased the volume of data requiring pattern recognition. Online machine learning has two advantages: firstly, it can process data in a streaming fashion, allowing cheaper metadata to be transmitted and stored instead of raw data; secondly, it can process data in edge computing environments where resources are restricted in terms of processing capacity, data storage and power.
The focus of this paper is to investigate online training and pattern recognition by comparing the performance of three kernel algorithms. Online environments have limited memory, and kernel machines require a buffer of captured data in order to form their decision function and learn from observed data. In this paper, I propose a novel algorithm for the replacement of old data within a kernel machine buffer. It is shown experimentally that the replacement strategy within a kernel machine buffer can have a dramatic impact on pattern recognition performance.
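The abstract does not specify the proposed replacement strategy, so the sketch below only illustrates the general mechanism: a fixed-capacity buffer whose policy decides which stored example a new one displaces (FIFO versus dropping the least-weighted example). All names are ours:

```python
class KernelBuffer:
    """Fixed-size buffer of support examples for an online kernel machine.
    When full, a replacement strategy decides which stored example the
    new one displaces."""

    def __init__(self, capacity, strategy="oldest"):
        self.capacity = capacity
        self.strategy = strategy
        self.items = []          # list of (x, y, weight) tuples

    def add(self, x, y, weight=1.0):
        """Insert an example; return the displaced example, if any."""
        if len(self.items) < self.capacity:
            self.items.append((x, y, weight))
            return None
        if self.strategy == "oldest":
            victim = 0                                   # FIFO
        else:  # "lightest": drop the least influential stored example
            victim = min(range(len(self.items)),
                         key=lambda i: abs(self.items[i][2]))
        removed = self.items.pop(victim)
        self.items.append((x, y, weight))
        return removed
```

In a kernel machine, the stored weights are the expansion coefficients of the decision function, so the "lightest" policy discards the example that contributes least to predictions.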
In this work, we investigate the feasibility of evaluating human innovation perception from psychophysiological data, including electroencephalography (EEG), electrocardiography (ECG), and eye gaze, measured with a wearable eye tracker and an EEG headset. To this end, a dataset was collected while 36 participants watched video clips of the exterior and interior of four different car models, one of which was a futuristic concept car, under two different scenarios. The first involved a ``first impressions,'' unguided period, and the second a guided period in which participants were explicitly asked to attend to innovative areas of interest (AOIs) in the vehicles. In both cases, participants reported their perceived level of innovation of the different AOIs. Experimental results showed that three metrics used for cognitive state assessment stood out for innovation perception assessment on a per-car basis, namely average gaze fixation duration (measured from the eye tracker), arousal (measured from ECG), and motivation (measured from EEG). When averaging over cars and focusing on AOIs, in turn, cognitive load (EEG) showed importance. Lastly, while the guided protocol showed higher correlation when analysing responses per vehicle, the opposite behaviour was observed when focusing only on AOIs, irrespective of the vehicle. In this scenario, the unguided condition resulted in higher correlation for the majority of the tested metrics.
Intensive rehabilitation after stroke contributes significantly to recovery from hand stiffness and loss of strength. Assistive wearable devices could be used by patients on a daily basis to assist with the rehabilitation process. Unfortunately, robotic devices are currently too large and heavy. To reduce the size and weight of these devices, novel means of actuation other than DC motors need to be considered. Dielectric elastomer actuators (DEAs) may provide a solution to the actuation problem.
The goal of this paper was to evaluate DEAs as a possible solution to the actuation needs of a wearable wrist exoskeleton. DEAs were fabricated and tested to determine their capabilities in terms of force and range of motion. Although the size and weight of a DEA are ideal for wearable devices, the results show that a single DEA strip of reasonable dimensions is not capable of providing the required force or range of motion. However, multiple DEAs could potentially provide a viable solution and should be explored in the future.
Radio frequency (RF) heating of leads on medically implanted devices is a critical patient safety matter in Magnetic Resonance Imaging (MRI). Safety is generally assessed via large-scale computer simulations, which rely on a "transfer function" (TF) approach to make the simulations efficient enough to allow very large numbers of lead trajectories to be considered. In this work, a method to measure the transfer function of a simple insulated stainless-steel wire was developed, serving as a proof of principle for use of the method in more realistic devices. A finite-difference time-domain (FDTD) method was employed for comparison, determining the induced electric field near a test wire, which was then compared to the measured values. The TF method was applied to 127.6 MHz RF exposure (corresponding to a 3 T MRI system) using a custom-developed RF probe in order to improve the accuracy and sensitivity of the measurements. Hydroxyethylcellulose (HEC) gel was used to mimic the lossy tissue environment of the human body. Reasonable agreement between the simulations and measurements was obtained, and the method is under development for use at other frequencies of interest.
Tremor, one of the most severe symptoms of Parkinson's disease, has been treated not only as a medical problem but also as an engineering problem. Increasingly, wearable technologies are being considered as a viable treatment option. In order to study and control tremor from an engineering standpoint, the first step often includes modeling and simulation, as access to patients is limited. With the successful realization of a finger tremor simulator, a wearable tremor suppression device could be validated prior to testing on humans. In this study, a tremor simulator was designed and validated with recorded patient tremor data. Two experimental assessments were conducted to validate tremor motion reproduction and tremor torque reproduction. The results showed that the proposed simulator has 5%, 51% and 84% error in reproducing the power of the 1st, 2nd and 3rd harmonics of the tremor, and 11.29% mean error in motion reproduction. The tremor torque measured at the index finger metacarpophalangeal joint is 0.02 ± 0.02 Nm, and the output torque from the tremor simulator is 0.05 Nm. Further parameter adjustment of the control system is required to improve performance.
Bipolar Disorder (BD) is characterized by mood changes that manifest as depressive episodes alternating with episodes of euphoria, in varying degrees of intensity. Women with BD may experience worsening symptoms during events of their reproductive life, particularly those suffering from Premenstrual Dysphoric Disorder (PMDD). The presence of PMDD in the diagnosis of BD is considered a marker of severity for the disease. In this study, data from a cohort of 1099 women with BD were used for an exploratory analysis using association rules in order to find associations between PMDD and BD symptoms. Of the thousands of generated rules, those involving PMDD were selected and categorized, with confidence levels between 70% and 100%.
Diabetic retinopathy (DR) is a medical condition, due to diabetes mellitus, that can damage the patient's retina and cause blood leaks. It can cause symptoms ranging from mild vision problems to complete blindness if not treated in time. Hemorrhages, hard Exudates, and Micro-aneurysms (HEM) that appear in the retina are the early signs of DR, and their early diagnosis is crucial to prevent blindness. In this work, we present the use of texture feature extraction from retinal fundus images to detect DR. The Local Ternary Pattern (LTP) and the Local Energy-based Shape Histogram (LESH) are used to extract salient texture features characterizing the presence of HEM. The extracted features are classified using an SVM to detect DR. The obtained results show that the proposed features are very promising. LESH is the best-performing technique, with an accuracy of 0.904 using an SVM with a Radial Basis Function (RBF) kernel. Similarly, the analysis of the ROC curves shows that LESH with SVM-RBF gives the best AUC (Area Under Curve) performance, with 0.931.
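As a sketch of the Local Ternary Pattern idea on a single 3x3 patch (the threshold t is an illustrative choice; a full DR pipeline would aggregate these codes into histograms over the image before SVM classification):

```python
def ltp_code(patch, t=5):
    """Local Ternary Pattern of a 3x3 patch: each neighbor is coded
    +1 / 0 / -1 against the center value with tolerance t, and the
    ternary code is split into 'upper' and 'lower' binary patterns."""
    center = patch[1][1]
    # neighbors clockwise from the top-left corner
    neighbors = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                 patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    upper = lower = 0
    for bit, n in enumerate(neighbors):
        if n >= center + t:          # significantly brighter -> upper bit
            upper |= 1 << bit
        elif n <= center - t:        # significantly darker -> lower bit
            lower |= 1 << bit
    return upper, lower
```

Splitting into two binary patterns is what makes LTP more noise-tolerant than the plain Local Binary Pattern: small intensity fluctuations within ±t map to the neutral code.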
An up-conversion mixer has been designed for 0.7-2.6 GHz multi-mode and multi-standard (MMMS) RF subsystems in a TSMC 0.18 μm process. The circuit uses a pseudo-differential Gilbert cell structure and, in order to increase gain and linearity, adopts a complementary transconductance current-bleeding structure. In addition, a T-type LC network reduces the effect of the parasitic capacitance of the LO switching pairs, greatly improving the high-frequency linearity. Measurement results show that the voltage conversion gain is 7.3 to 8.2 dB, the input 1 dB compression point is -3.1 to -1.5 dBm, the IIP3 is 9.1 to 14.5 dBm, and the single-sideband noise figure is 19.3 to 20.4 dB.
With technology scaling and the increasing parallelism of new embedded applications, the number of cores in chip multiprocessors (CMPs) has shifted from 100 to 1000. To efficiently store and manipulate the large amounts of data in future applications, and to decrease the gap between cores and off-chip memory accesses, the size of cache systems in CMPs has been dramatically increased. Since on-chip storage systems, particularly last-level caches, occupy as much as 50% of the chip area, they are the dominant leakage power consumers in future multi/many-core systems. In this context, power consumption becomes a primary concern in future CMPs, many of which are limited by battery lifetime. For future CMP architectures, 3D stacking of last-level caches (LLCs) has recently been introduced as a methodology to combat the performance challenges of 2D integration and the memory wall. However, the 3D design of LLCs incurs more leakage energy consumption than conventional 2D cache architectures due to dense integration. In this paper, we use the non-uniform distribution of accesses across LLC banks to decrease leakage energy. We propose a runtime cache architecture, based on non-uniform cache architectures (NUCA), that disables cache banks with low access counts, leading to high energy efficiency. Experimental results show that the proposed method improves the energy-delay product by about 41% on average on the PARSEC benchmarks compared to a recent technique named EECache.
In this paper, a rectenna is introduced that combines an adaptive rectifier topology with a low-cost metasurface-based printed antenna to cover a wide range of input power levels. The low-cost metasurface printed antenna is based on flexible substrates, while the rectifier can handle a wide input power range by employing a FET as a switch between low-power and high-power modes, which overcomes issues related to the failure voltage of conventional rectification devices. The proposed rectifier attains an RF-DC efficiency of more than 40% for input power ranging from -8 dBm to 25 dBm. At 15 dBm, it demonstrates a peak power efficiency of 66% at 915 MHz. The low-cost metasurface-based printed antenna achieved a gain of 3 dBi, a directivity of 5.3 dBi and a radiation efficiency of 57%. The rectenna achieved 40% efficiency over a wide input power range from -5 dBm to 23 dBm, making it suitable for Wireless Power Transfer applications.
This study presents an investigation of thermal effects in a GaN HEMT power amplifier, integrated on an LTCC substrate, under high-power microwave pulses. Based on the substrate properties and the GaN device characteristics, a thermo-mechanical analysis has been performed to investigate heat dissipation and the temperature distribution leading to thermal breakdown. Self-heating and temperature accumulation properties are used for a deeper discussion of semiconductor protection, in order to optimize the width and power of the injected signal.
This paper highlights serious operating issues and challenges related to parallel interlinking converters (ICs) interfacing the AC and DC sub-grids of a hybrid AC/DC microgrid. These issues have not yet been investigated, especially in a hybrid AC/DC microgrid application. The first issue is the non-linear load behavior of an IC during power exchange from the AC to the DC subgrid, which makes the IC act like a harmonic voltage source that degrades the AC voltage and current. This behavior raises the second operating issue, namely the circulating current that might exist in a parallel-IC configuration. Seamless re-connection of an IC following abnormal operating conditions or scheduled maintenance, and the challenges associated with IC re-synchronization, are also examined. The paper also addresses in detail the stability analysis and the re-synchronization issue and its effect on system stability. The theoretical expectations are verified by digital simulation using the PSCAD/EMTDC simulation package.
The aim of this paper is to design and simulate an energy management system for a real-life hybrid energy generation microgrid. Both rule-based and fuzzy logic control systems are simulated using MATLAB/Simulink. The control strategies are compared with regard to diesel generator usage, storage system charge and discharge behavior, and the overall system energy balance.
In this paper, the control and simulation of a standalone microgrid (SMG) for a mine site are investigated. To achieve maximum power point tracking (MPPT) from wind turbines (WTs) and solar photovoltaic systems (SPVs), the power signal feedback and perturbation and observation (P&O) methods are used. A three-phase voltage source converter (VSC) with an LCL filter is controlled to regulate the AC voltage and improve its power quality at the point of common coupling (PCC), using a new active power control (APC) strategy based on a proportional resonant controller (PRC) with anti-windup and active damping. For efficient and safe parallel operation, with perfect synchronization of two diesel engine based generators (DGs) to the PCC, a new control approach is developed. The performance of the proposed configuration and its control approaches is validated using MATLAB/Simulink.
This paper deals with the control and parallel operation of two diesel generators in a new standalone microgrid configuration. A droop control approach is used to control both diesel generators and the voltage source converter, in order to achieve power sharing and maintain constant voltage and frequency at the point of common coupling (PCC). To achieve high performance from the solar PV array, the DC-DC buck-boost converter is controlled to maintain a constant DC voltage, as well as to compensate for fluctuations in the power generated from the PV and to balance the power in the system during transitions. The performance of the proposed approach is tested using MATLAB/Simulink under weather and load variation conditions.
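The conventional droop laws behind such power sharing can be sketched as follows; the droop coefficients below are illustrative, not values from the paper:

```python
def droop_setpoints(P, Q, f_nom=60.0, V_nom=1.0, kp=0.01, kq=0.05,
                    P_rated=1.0, Q_rated=1.0):
    """Conventional droop laws: frequency sags with active power and
    voltage sags with reactive power, so parallel generators share
    load without a communication link. kp and kq are per-unit droop
    slopes (fractional sag at rated power)."""
    f = f_nom - kp * (P / P_rated) * f_nom   # P-f droop
    V = V_nom - kq * (Q / Q_rated) * V_nom   # Q-V droop
    return f, V
```

Because each generator computes its own setpoint from its local power output, a unit taking more than its share naturally sags its frequency and sheds load to the others.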
A micro energy grid (MEG) is an efficient energy system that can be economically worthwhile if the inter-correlation of energy prices, i.e., electricity and natural gas rates, is exploited correctly. This study defines the optimal design of the thermal energy storage capacity. In addition, the study uses a branch-and-bound (MILP) optimization algorithm to operate the MEG efficiently by finding the optimal hourly energy prices for electricity and natural gas. A dynamic model of a MEG system is implemented in the Simulink environment to validate the static model that is utilized for optimizing the operating cost to meet the energy demands. This study utilizes actual energy prices in Ontario, Canada.
In this paper, an iterative soft-decision (SD) decoding algorithm for cyclic codes based on extended parity-check equations is developed. The algorithm does not necessarily utilize the algebraic properties of the code, but instead operates by transforming the systematic parity-check matrix using the soft reliability information matrix obtained from the received vector. Results show a significant performance gain when compared with the hard-decision Berlekamp-Massey (B-M) and belief propagation (BP) algorithms, and a similar symbol error rate (SER) performance when compared to the adaptive belief propagation (ABP) algorithm. An important feature of the decoder is that it functions within a practical decoding time complexity and can be implemented for the general class of linear block codes.
In this paper, we investigate Gray-mapped 16-QAM modulated Luby Transform (LT) codes over additive white Gaussian noise (AWGN) channels. Degree distributions of LT codes with BPSK modulation have been well studied in the literature, but those distributions provide unsatisfactory bit-error-rate (BER) performance under high-order constellations. To improve the BER performance of LT codes with Gray-mapped QAM, this paper presents a novel optimization design of degree distributions. First, we introduce the concept of the variance-to-mean ratio (VMR) of a Gaussian distribution, because the conventional Gaussian approximation (GA) assumption that the variance is twice the mean no longer applies for QAM. Second, we provide a generalized GA process using the obtained VMRs to analyze the asymptotic BER performance of LT codes and, furthermore, to derive a lower bound on the BER. Third, aiming to minimize the average degree of the degree distribution, we propose a novel optimization model constrained by the lower bound on the BER. Simulation results show that the degree distribution obtained from the proposed optimization model provides outstanding BER performance for LT codes with Gray-mapped 16-QAM over the AWGN channel.
Low-Density Parity-Check (LDPC) codes are a class of near-optimal error correction codes. To improve the performance of layered decoding algorithms, two algorithms are put forward in this paper. The first lets messages join the message-updating operation earlier, accelerating the convergence speed. The second concentrates on the correction of a certain type of error. Simulation results show greater decoding gain and faster convergence than standard layered decoding algorithms.
In this paper, we propose a new scheme for the joint estimation of carrier frequency offset (CFO) and a doubly selective channel in orthogonal frequency division multiplexing systems. In the proposed preamble-aided method, the time-varying channel is first represented using a basis expansion model (BEM), which considerably reduces the number of channel parameters to be estimated. Next, the CFO and BEM coefficients are estimated using the principles of particle and Kalman filtering. The performance of the new method in multipath time-varying channels is investigated in comparison with previous schemes. The simulation results indicate a remarkable performance improvement in terms of the mean square errors of the CFO and channel estimates.
Modern wireless systems often utilize rate adaptation in order to maximize transmission efficiency and throughput. In WiFi networks, a widely deployed rate adaptation algorithm is the Minstrel algorithm. It has been shown that introducing application layer forward error correction (AL-FEC) can improve the efficiency of wireless communications. This work proposes an extension to the Minstrel algorithm that considers the presence of AL-FEC in the system. An end-to-end algorithm is proposed that continuously tracks the MCS values selected by Minstrel and determines an alternative MCS that takes advantage of the AL-FEC properties of the client.
We present an approach to retrieving data from a database using Private Information Retrieval (PIR). PIR is a cryptographic database technique that solves the seemingly impossible problem of allowing a user to query a database while the content of the user's query is protected from the database server. Different types of PIR-based approaches have been proposed during the last two decades. The common criticism of PIR approaches is that their computational overhead is not suitable for smartphone hardware with limited resources, and that they are therefore not practical. The main focus of this paper is to reduce the computational cost of decoding the received response(s) from the database on the client side, making PIR more practical for smartphone applications.
The functional assessment of post-stroke patients is crucial in the rehabilitation process. In the study presented in this paper, an instrument that allows patients to perform rehabilitation exercises was developed for functional assessment without the need for assistance from nurses. The instrument includes a desktop device along with a classifier and can be operated by patients at home. The functionality of the limb in post-stroke patients is graded using the Brunnstrom stage assessment system, which is popular in the clinic. In this paper, the wrist coordination functionality of the upper limb (Grade 6 in particular) was taken as an example to demonstrate the effectiveness of the instrument and classifier. 16 patients and 10 healthy persons were involved in the development of the classifier. The instrument and classifier were tested, and the results indicated that the instrument can achieve high classification accuracy (98.68%), sensitivity (92.31%), specificity (100%), and area under the receiver operating characteristic (ROC) curve (0.99878). Therefore, the instrument along with the classifier can be used in the clinic with high confidence.
This paper presents accurate, real-time tracking of a mobile robot's 2D pose in a plane. This can be useful for setting up experiments in mobile robot control, robot formation or conflict resolution. Two localization strategies are discussed and then fused together using a Kalman filter. The first method uses odometry from wheel encoders; this method is fast but suffers from accumulating errors due to drift. The second method uses an overhead camera for tracking, which can be used to correct the encoder drift. It is shown how descriptor-based matching can be used to track a mobile robot in a global frame. Specifically, the detectors SIFT, AKAZE and ORB are tested for their speed and accuracy using the open source computer vision library (OpenCV). They are compared against an edge-based template matching algorithm with a known accuracy. Finally, it is shown how odometry and machine vision can be combined using an extended Kalman filter and an unscented Kalman filter. Root-mean-squared pose errors of less than 2 mm in translation and less than 1 degree in heading are achieved at a recognition time of less than 50 ms.
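The paper fuses full 2D pose estimates with extended and unscented Kalman filters; the scalar sketch below only illustrates the underlying predict/update blending of odometry and camera measurements (the noise variances are illustrative assumptions):

```python
def kalman_fuse(x, P, u, z, Q=0.01, R=0.001):
    """One predict/update cycle of a scalar Kalman filter:
    predict with an odometry increment u (drifts, variance Q),
    then correct with a camera position fix z (variance R)."""
    # Predict: dead-reckon from the wheel encoders
    x_pred = x + u
    P_pred = P + Q
    # Update: blend in the camera measurement via the Kalman gain
    K = P_pred / (P_pred + R)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1 - K) * P_pred
    return x_new, P_new
```

Because the camera variance R is smaller than the accumulated odometry variance, the gain K is close to 1 and the update pulls the estimate toward the camera fix, bounding the encoder drift.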
The accurate detection of people in indoor environments requires high-cost devices, while low-cost devices, in addition to low accuracy, offer little information about the monitored events. The perturbations that result from indoor movements affect the signals received by an 802.11 interface. Hence, an 802.11 device becomes a widely available, low-cost, and reasonably accurate solution for several applications. This paper presents WiDMove, a proposed prototype to detect the entry and exit of persons within an indoor environment using the channel state information (CSI) made available by IEEE 802.11n compliant devices. Based on the gathered CSI records, we applied basic frequency-time analysis to build a special feature vector using the Short-Time Fourier Transform (STFT) and Principal Component Analysis (PCA). We used the extracted features to train and develop a Support Vector Machine (SVM) classifier, which provided very promising initial results, with an accuracy near 80%.
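A toy version of the frequency-domain feature extraction step, using a naive DFT per window and keeping only the dominant non-DC magnitude; the window length and single-feature summary are simplifications of the paper's STFT+PCA pipeline:

```python
import math

def stft_energy_features(samples, win=8):
    """Toy STFT feature extraction: slide a non-overlapping window over
    CSI amplitude samples, take DFT magnitudes per window, and keep the
    dominant non-DC bin as a movement signature."""
    feats = []
    for start in range(0, len(samples) - win + 1, win):
        w = samples[start:start + win]
        mags = []
        for k in range(1, win // 2 + 1):      # skip the DC bin
            re = sum(x * math.cos(2 * math.pi * k * n / win)
                     for n, x in enumerate(w))
            im = -sum(x * math.sin(2 * math.pi * k * n / win)
                      for n, x in enumerate(w))
            mags.append(math.hypot(re, im))
        feats.append(max(mags))
    return feats
```

In the real pipeline, the per-window spectra would be stacked into a spectrogram and compressed with PCA before being fed to the SVM.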
In this paper, we look at methods of reducing CO2 emissions from data centers by employing new virtual machine management and migration schemes. We simulate a multi-cloud environment using the CloudSim tool and introduce a new virtual machine migration algorithm to minimize energy consumption and, subsequently, CO2 emissions of data centers. The resulting algorithm reduces the overall energy consumption of the system, at the cost of increased SLA violations.
This paper discusses the development of a model for the optimal operation of a hybrid diesel-photovoltaic pumping system using groundwater in a pumped hydro storage scheme, which can be used to minimize the daily electricity cost of a farm. The developed model can minimize the power produced from the diesel generator while optimally managing the generated power flow from the PV and the groundwater pumped hydro storage, given the variable load demand as well as the availability of the solar resource. As a case study, the model has been used to simulate a small farming activity in South Africa with the aim of evaluating the potential energy cost saving achievable using the proposed system when compared to exclusive power production using a diesel generator. The simulation results show that a potential 71.3% energy cost saving can be achieved using the proposed hybrid system with the optimal control model, rather than supplying the load demand by the diesel generator exclusively.
To resolve the power crisis and reduce the environmental effects of conventional generation, a concentrated solar power (CSP) plant is a viable solution. This paper covers the technical and financial details for setting up a parabolic trough CSP plant in Chittagong, Bangladesh. The simulation is carried out over 40 years with a 0.1% compound depreciation rate using the System Advisor Model (SAM). A thermal storage system is integrated with the power plant so that electricity can be generated in the absence of sun, and Hitec solar salt is used as the heat transfer fluid. 218 GWh of energy can be generated annually by the designed CSP plant, which will occupy 923 acres of land. The overall internal rate of return is 12.58%, and the levelized cost of electricity is 19.10 US cents/kWh.
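The levelized cost of electricity reported above follows the generic discounted-cost formula, sketched here with placeholder inputs (not the plant's actual figures):

```python
def lcoe(capex, annual_opex, annual_energy_kwh, rate, years):
    """Levelized cost of electricity: discounted lifetime cost divided
    by discounted lifetime energy, in currency units per kWh."""
    cost = capex + sum(annual_opex / (1 + rate) ** t
                       for t in range(1, years + 1))
    energy = sum(annual_energy_kwh / (1 + rate) ** t
                 for t in range(1, years + 1))
    return cost / energy
```

Tools such as SAM evaluate essentially this ratio with detailed cost, degradation and financing schedules over the project lifetime.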
With the increase in renewable generation in power systems, it is critical to accurately determine the capacity value of renewables during generation planning to maintain system reliability. This review paper is intended for readers seeking to understand the basics of reliability evaluation, and gives insight into the factors that affect the capacity value of solar resources. The capacity value estimation methods are discussed briefly, followed by a discussion of the factors that may affect the capacity value of solar resources. The methodology explained here is applicable to any renewable generation; however, the focus of the discussion is on the factors that affect the capacity value of solar resources. The impact of input data on the solar capacity value is also included, along with potential future work.
This paper proposes a new approach to using a Home Energy Storage System (HESS) to improve the lifetime of electrochemical batteries while utilizing the maximum available solar energy. The proposed approach employs a pulsed charging and discharging method in split battery banks to improve the longevity of the Li-ion batteries. In this approach, the energy harvested from the solar panels is stored alternately in the battery banks to avoid losing free energy, thus reducing the consumer's electricity cost. To show the effectiveness of the proposed approach, a HESS is developed and its different modes of operation are discussed in this paper. The performance of the proposed system is evaluated experimentally.
This paper presents an approach for photovoltaic-based DG allocation in distribution networks. The objective of the proposed approach is to minimize the developer's investment cost associated with energy supply requirements over a predefined planning period. Uncertainties associated with supply and demand are considered in this work, as well as the uncertainty associated with energy prices. In addition, this work introduces smart curtailment of renewable resources, which can maximize penetration and minimize the overall investment. Simulation results on a typical distribution network are provided, demonstrating the effectiveness and robustness of the proposed resource allocation approach.
In this paper, a Euclidean-distance multiobjective teaching-learning-based optimization (MOTLBO) algorithm is applied to the design of cascade-form IIR digital filters. Minimization of the least-pth minimax errors in the passband and stopband magnitude responses and in the passband group delay response is performed. The digital filter design results of the Euclidean-distance MOTLBO approach compare favorably with those of a state-of-the-art optimization method.
This paper addresses the estimation of the frequency of a sinusoid from compressively sensed measurements. Normally in parameter estimation, measurements are assumed to contain the signal and additive white Gaussian noise (AWGN). Under the paradigm of compressive sensing, the measurements no longer contain AWGN but correlated noise, so frequency estimation of a sinusoid from measurements obtained through compressive sensing (CS) under the AWGN assumption will be non-optimal. This paper provides near-optimal frequency estimates for a sinusoid obtained through CS. The estimation is cast as a linear least squares problem, and a near-optimal closed-form solution is presented by applying the generalized total least squares (GTLS) technique to avoid the bias caused by the correlated noise. The accuracy of the closed-form solution is close to the theoretical bound, as confirmed by simulations.
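To illustrate the linear least squares formulation (without the paper's GTLS correction for correlated noise), a sinusoid satisfies the linear-prediction identity x[n-1] + x[n+1] = 2 cos(w) x[n], so ordinary least squares on a clean signal recovers the frequency:

```python
import math

def estimate_frequency(x):
    """Least-squares frequency estimate (radians/sample) for a clean
    sinusoid, using x[n-1] + x[n+1] = 2*cos(w)*x[n]: regress the
    neighbor sum onto the center sample to obtain cos(w)."""
    num = sum((x[n - 1] + x[n + 1]) * x[n] for n in range(1, len(x) - 1))
    den = sum(2.0 * x[n] * x[n] for n in range(1, len(x) - 1))
    return math.acos(max(-1.0, min(1.0, num / den)))

w_true = 0.3
x = [math.sin(w_true * n) for n in range(64)]
w_hat = estimate_frequency(x)
```

With noisy CS measurements the regressors themselves are perturbed by correlated noise, which biases this ordinary least squares ratio; that is the bias the GTLS formulation is designed to remove.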
In this paper, IIR digital filters are designed using a Multiobjective Artificial Bee Colony algorithm (MABC). The Artificial Bee Colony (ABC) algorithm is a stochastic optimization algorithm based on the food-seeking behavior of honey bees. Although the ABC algorithm can converge to the global optimum for complex problems, the time taken for convergence is much higher than that of classical methods. A gbest-guided, search-based multiobjective ABC algorithm can reduce the time taken to converge to the global optimum and improve the quality of the solutions by tuning the search process towards the global best in each iteration. IIR filter design is a non-convex optimization problem and requires the optimization of both the magnitude and group delay performance. The results show that MABC can achieve lower passband and stopband magnitude errors, and a lower passband group delay error, than state-of-the-art techniques.
An unconstrained joint optimization method for asymmetric FIR digital filter design using the harmony search algorithm is presented. Lowpass and bandpass digital filter examples are used to demonstrate the design procedure. Design results indicate that the passband and stopband magnitude errors and the passband group delay error can be minimized effectively.
In this paper, IIR digital filter design using a constrained multiobjective Cuckoo Search Algorithm is presented. Minimization of the peak errors in the passband and stopband magnitude responses and the passband group delay response is performed. The digital filter design results of the constrained MOTCSA approach compare favorably with those of other state-of-the-art optimization methods.
Differential Evolution (DE) has shown superior performance in solving global continuous optimization problems. The crucial idea of DE is to modify the population of candidate solutions using the weighted differences of randomly selected candidate solutions. In this paper, we propose a length scale-based DE that utilizes the information obtained from a landscape analysis metric, the length scale metric, to enhance its performance. Landscape analysis methods attempt to characterize the properties of optimization problems. For two sample points, the length scale metric measures the change in objective function value relative to the distance between them. By computing the length scale values of all possible pairs of candidate solutions, DE can employ the pairs with the greatest length scale values to calculate the difference vector in its mutation operator. The length scale-based DE is evaluated on the CEC-2014 benchmark functions in two dimensions, 50 and 100. Simulation results confirm that the proposed algorithm achieves promising performance on the majority of the benchmark functions in both dimensions.
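A minimal sketch of the length scale metric and its use in selecting the mutation pair, assuming a toy population on the sphere function and a single mutation step (the population, function, and selection of the base vector are illustrative, not the paper's full algorithm):

```python
def length_scale(x1, x2, f):
    """Length scale of a pair of points: |f(x1) - f(x2)| / ||x1 - x2||."""
    dist = sum((u - v) ** 2 for u, v in zip(x1, x2)) ** 0.5
    return abs(f(x1) - f(x2)) / dist if dist > 0 else 0.0

def sphere(x):
    return sum(v * v for v in x)

# Toy population; the pair with the greatest length scale supplies the
# difference vector for DE's mutation: mutant = base + F * (x_a - x_b).
pop = [(0.0, 0.0), (3.0, 4.0), (1.0, 1.0), (-2.0, 0.5)]
pairs = [(i, j) for i in range(len(pop)) for j in range(i + 1, len(pop))]
a, b = max(pairs, key=lambda p: length_scale(pop[p[0]], pop[p[1]], sphere))
base, F = min(pop, key=sphere), 0.5
mutant = tuple(base[k] + F * (pop[a][k] - pop[b][k]) for k in range(len(base)))
```

Pairs with large length scale values indicate directions along which the objective changes rapidly, which is the information the proposed mutation operator exploits.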
This paper proposes an optimal control approach to formulate and solve the maximum endurance problem for turboprop aircraft in steady cruise. Furthermore, the problem formulation also considers the effect of head and tail-winds on total endurance time. An analytical state-feedback solution is provided for the maximum endurance airspeed of a turboprop, as well as an analytical expression for the maximum endurance time. The sensitivity of the total endurance time with respect to air density is also provided to illustrate the strong effect cruising altitude has on total endurance time. Finally, an example using the Beechcraft King Air turboprop aircraft is provided to validate the results.
Scenarios with mid-air collision events require complex studies in which the aircraft may fly at the edge of their flight envelope. The problem is not limited to the maneuver itself, since the recovery might drive the system into an unstable state if the performance is critical. Design and analysis of computer experiments (DACE) using the uniform design (UD) experimental method represents a tool to study the performance of the aircraft without a full analysis of the flight dynamics. In the specific simulation context described in this paper, encounters between two representative general aviation aircraft, a Cessna 172 and a Twin Otter, in a Phi (ϕ) maneuver are simulated. From the encounters, a diving avoidance maneuver is developed for a mid-air collision circumstance. The recovery is then observed and analyzed from two perspectives: an immediate recovery to the original path, and an idle state with a straight path while the aircraft awaits the removal of the threat. Assuming that the computer model is accurate and the simulation stable, the UD-based metamodel provides an optimal combination of commands for all scenarios with minimum discrepancy. The implications of this paper lie in the flexibility of the method, owing to its adaptability to fit any computer model and simulation scenario. This feature is currently being used to study unmanned aerial vehicles and their interactions with human-piloted aircraft in the same environment, with the purpose of developing critical avoidance maneuvers.
In this work, we address the identification and control of dynamical systems in which a generalized orthonormal basis functions (GOBF) model with a ladder structure is used to represent the system. In the identification process, a genetic algorithm is used to optimize the number of functions and the model poles. The identified model is then used as the basis for the implementation of a predictive controller, which exploits the advantages of this type of modeling when output feedback is unavailable. A magnetic levitation system was identified, and a model predictive controller was used to stabilize it. Results show the feasibility of this technique in the control of a real system.
This paper describes the development of a two-degree-of-freedom (2DoF) pointing and tracking simulation used for evaluating the behaviour of the Air-LUSI subsystem. The Air-LUSI project aims to obtain high-altitude Lunar Spectral Irradiance (LUSI) measurements of the Moon by integrating an automated telescope mount, capable of acquiring the Moon as a target and tracking it, into the science pod of an ER-2 aircraft flying at an altitude of 65,000 feet. With precise measurements of the Lunar Spectral Irradiance, a Lunar Calibration Model can be used for NASA's Earth Observing System (EOS). The simulations in this report describe the estimation, filtering, and control strategies applied to the 2DoF gimbal design and compare the tracking accuracy when using raw system measurements or state estimates produced by the linear Kalman filter (KF) or the nonlinear unscented Kalman filter (UKF) as the input to the PID controller. An additional aspect of this project studies the nonlinear or linear system behaviour described by the interacting multiple model (IMM) algorithm and analyzes the results of a hybrid adaptive control strategy that combines the KF and UKF PID gains using the IMM mode probabilities.
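To illustrate the filtering step that feeds the controller, here is a minimal scalar Kalman filter (random-walk state, direct measurement); the gimbal model, noise values, and measurement sequence below are illustrative assumptions, not those of the Air-LUSI report:

```python
def kf_step(x, P, z, q, r):
    """One scalar Kalman filter cycle for a random-walk state observed directly.
    x: state estimate, P: estimate variance, z: measurement,
    q: process noise variance, r: measurement noise variance."""
    P = P + q                 # predict: uncertainty grows by process noise
    K = P / (P + r)           # Kalman gain
    x = x + K * (z - x)       # update with measurement innovation
    P = (1 - K) * P           # reduced posterior uncertainty
    return x, P

# Noisy measurements of a roughly constant pointing angle (~1.0)
x, P = 0.0, 1.0
for z in [1.02, 0.98, 1.01, 0.99, 1.0]:
    x, P = kf_step(x, P, z, q=1e-4, r=0.05)
# x converges toward ~1.0 and P shrinks as measurements accumulate.
```

In the paper's setup, the filtered state (from the KF or UKF) rather than the raw measurement is supplied as the input to the PID controller.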
Millimeter-wave (mmWave) communications may be the key technology for the realization of 5G networks. MmWave communications have significantly different propagation characteristics than microwave frequencies. Recent studies have determined the performance of cellular mmWave networks using stochastic geometry techniques, assuming a stationary user. The stationary user model does not capture the correlation in the blocking of links as the user moves. In this work, we determine the performance seen by a mobile user traveling along a path at constant and varying speeds. We obtain the cumulative information received by the user as a function of its path length for different blocking intensities and cell sizes. The results show that while the received information rate does not vary significantly with mobility, the average path length over which the mobile user remains associated with a base station without interruption drops sharply with increasing blocking intensity. This causes a high handover rate, which results in high overhead. This work demonstrates the significance of user mobility for the performance of cellular mmWave networks.
This paper investigates the feasibility of ranging and positioning with millimeter (mm)-level accuracy by adopting millimeter-wave signals and a 3D massive antenna array. The agent is equipped with a massive phased uniform rectangular array (URA), and multiple anchors are considered in the localization network to pursue higher accuracy, where a far-field environment is assumed for the phased massive URA. Fundamental limits of both time-based ranging and positioning are derived via the Cramer-Rao bound (CRB), and the relationship between the fundamental bound of range estimation and that of position estimation is theoretically clarified. Numerical results show that the proposed scenario achieves mm-level accuracy for ranging and positioning.
The number of connected devices has been growing tremendously over the past decade. These devices range from traditional smart phones to electrical appliances, solar panels, converters, electric vehicles, and wearables. Satisfying their connectivity demand adds pressure to wireless networks that are barely serving their bandwidth-hungry mobile users. LTE Unlicensed (LTE-U) aims to exploit the unlicensed spectrum to offload traffic, increase capacity, and hence improve the user/device experience in an era of inflated demand. Meanwhile, WiFi is the dominant technology operating in the unlicensed spectrum. Therefore, LTE-U needs to ensure that the performance of WiFi users does not degrade as LTE users offload their traffic. In this paper, we propose a Q-learning based coordinated medium access approach to enhance the Listen Before Talk (LBT) mechanism of LTE-U. Q-learning based LBT mitigates the co-existence issue by enhancing the performance of WiFi users when LTE-U users try to access the unlicensed bands. Our results show that the proposed Q-learning based coordinated access reduces the end-to-end delay and increases the delivery success rate of WiFi traffic.
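The core of any Q-learning based access scheme is the tabular value update. The sketch below shows one update step with a hypothetical state/action encoding (channel idle/busy, transmit/back off); the actual states, actions, and rewards of the paper's LBT scheme are not specified here:

```python
def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    Q[s][a] += alpha * (r + gamma * max(Q[s_next].values()) - Q[s][a])

# Hypothetical encoding: channel states 'idle'/'busy', actions 'transmit'/'back_off'.
Q = {s: {"transmit": 0.0, "back_off": 0.0} for s in ("idle", "busy")}
q_update(Q, "idle", "transmit", 1.0, "idle")    # successful transmission: reward +1
q_update(Q, "busy", "transmit", -1.0, "idle")   # collision with WiFi: reward -1
```

Over many such updates, transmitting on a busy channel accumulates negative value, so the learned policy backs off when WiFi is active, which is the coordination effect the paper targets.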
This paper presents indirect methods for constructing radio environment maps (REMs), which utilize known model information to first estimate the primary transmitter parameters and then generate REMs. Two indirect methods under lognormal shadowing are presented and compared. The better of the two is further investigated in different scenarios, including different numbers of sensors, varied numbers of measurements, several shadowing spread values, different percentages of error in the path-loss exponent, and the effect of the number of moving sensors and their speeds on REM quality. The results show that performance is enhanced as the number of sensors and the number of measurements increase, whereas clear degradation in REM quality is observed when the shadowing spread increases or the model parameters are not well calibrated. Also, as the number of moving sensors or their speeds increase, REM performance becomes less effective.
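A stripped-down sketch of the "indirect" idea: fit the transmitter parameters (transmit power and path-loss exponent) to sensor measurements by least squares, then use the fitted model to predict received power anywhere on the map. The shadow-free synthetic data and parameter values are illustrative assumptions, not the paper's setup:

```python
import math

def fit_pathloss(dists, rss):
    """Least-squares fit of P(d) = Pt - 10*n*log10(d); returns (Pt, n)."""
    xs = [-10 * math.log10(d) for d in dists]
    mx, my = sum(xs) / len(xs), sum(rss) / len(rss)
    n = (sum((x - mx) * (y - my) for x, y in zip(xs, rss))
         / sum((x - mx) ** 2 for x in xs))
    Pt = my - n * mx
    return Pt, n

# Synthetic sensor readings (no shadowing): Pt = 30 dBm, exponent n = 3
dists = [10, 50, 120, 300, 800]
rss = [30 - 10 * 3 * math.log10(d) for d in dists]
Pt, n = fit_pathloss(dists, rss)
# The REM value at any distance d is then Pt - 10*n*log10(d).
```

With lognormal shadowing added to the measurements, the same regression yields noisy estimates, which is why the paper studies REM quality versus the number of sensors and measurements.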
In future wireless networks, a substantial number of users accessing wireless broadband will be vehicular, such as passengers in public transportation vehicles like buses, trains, or trams. The number of blocked vehicular users in the network will be high due to the scarcity of wireless network resources. This becomes even worse when the mobile relay node (MRN) is about to hand off between LTE base stations, known as Evolved Node Bs (eNBs). Thus, network operators have started investigating how to serve these vehicular users (VUEs) cost-effectively. One of the most promising solutions is to deploy an MRN with multi-backhaul capability, also known as distributed relaying with many-to-many connections between eNBs and relays, on a public transportation vehicle; the MRN forms a small cell inside the vehicle to serve VUEs, with a proper association approach for all users. In this paper, we employ matching game theory for the user association problem, in which every node in the network ranks its preferred match based on its utility function. Simulation results show that our proposed approach improves the admission rate of users and decreases both the blocking rate and the hand-off failure rate.
The literature on content-based copy detection and recall (CBCD) is scarce. Methods in this category attempt to exploit one or more features of images, such as shape, color, or texture, to recall a query image. The performance of these methods is satisfactory but not perfect. In this paper, we present three fast image copy recall algorithms that retrieve from a database with perfect results. These algorithms are specifically developed to overcome the problems of copies of the same image with different amounts of illumination intensity, similar images with trifling differences, and a limited set of image-flipping cases in databases. The algorithms are based on spatial signatures that uniquely represent the images in the database as well as the query image to be recalled. We tested our algorithms on a set of 23,443 images from the LIVE and PIE databases, the ORL Database of Faces (the AT&T Laboratories Cambridge database), the Caltech-UCSD Birds database, and our own miscellany of images. Simulation results show that in each case the query image was recalled with perfect accuracy. Finally, we compared our results with published methods and found that the proposed algorithms are faster and more accurate.
Multicast distribution employs a many-to-many model, making it a more efficient way of delivering data than traditional one-to-one unicast distribution, which benefits many applications such as media streaming. However, the lack of built-in security features makes multicast technology much less popular in an open environment such as the Internet. Internet Service Providers (ISPs) take advantage of IP multicast's highly efficient data delivery to provide Internet Protocol Television (IPTV) to their users. But without full control over their networks, ISPs cannot collect revenue for the services they provide. The Secure Internet Group Management Protocol (SIGMP), an extension of the Internet Group Management Protocol (IGMP), and the Group Security Association Management (GSAM) protocol have been proposed to enforce receiver access control at the network level of IP multicast. In this paper, we analyze the operational details and issues of both SIGMP and GSAM. An examination of the performance of both protocols is also conducted.
Side-channel attacks are among the most powerful threats to secure systems. In this paper, the sensitivity of FPGA designs to timing attacks is analyzed, and a dynamic technique is then proposed to mitigate the possibility of timing side-channel attacks. The proposed technique decreases the dependency of the circuit delay on internal data. In the presented method, chains of inverters are added to the critical path dynamically according to the delay of the active path at runtime. Various scenarios for adding delay chains, including random delay and controlled delay, are utilized to reach an optimum design. Finally, a timing analysis attack is applied to the FPGA hardware to evaluate the security improvement. The experimental results show that the dependency of the output delays on the internal values is reduced considerably, which indicates improved system security against timing analysis attacks.
Distributed Denial of Service (DDoS) is one of the major threats to Internet security. Various DDoS attacks have been reported against many organizations in recent years. There have been numerous studies investigating the use of classification algorithms to detect and prevent DDoS attacks. However, existing research faces many obstacles, including achieving practical detection performance rates, reducing detection delay, and handling large datasets. In this research, we propose a DDoS detection framework that mainly consists of the Gradient Boosting classification algorithm (GBT) and the Apache Spark processing engine. Experimental results conducted on a Spark and Hadoop cluster, evaluating the proposed framework in terms of performance and delay using a real DDoS dataset, show that the integration of the GBT algorithm with Apache Spark detects DDoS attacks effectively. The volume of the dataset and the feature space, as well as the depth of the decision trees and the number of iterations, have a direct impact on the GBT algorithm's performance rates and delays.
In this paper, we enunciate the theorem of secrecy in tagged protocols using the theory of witness-functions, and we run a formal analysis of a new tagged version of the Needham-Schroeder protocol using this theorem. We also discuss the significance of tagging in securing cryptographic protocols.
Various anonymizing services and networks are rapidly growing in both number and variety. This paper is concerned with the evaluation of delay strategies in Chaum's anonymizing mixes. Metrics of anonymity were obtained from packet delay entropies, and additional simulations established a clear dependency between these metrics and the ability of packet mixing to mitigate attacks on confidentiality based on pattern recognition of packet arrival times.
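A minimal sketch of an entropy-based anonymity metric, assuming delays are binned into a histogram and Shannon entropy is computed over the bins (the binning scheme and sample delays are illustrative, not the paper's exact metric):

```python
import math
from collections import Counter

def delay_entropy(delays, bin_width):
    """Shannon entropy (bits) of a packet-delay histogram; higher entropy
    means delays are harder to correlate with arrival patterns."""
    counts = Counter(int(d // bin_width) for d in delays)
    total = sum(counts.values())
    # sum of p * log2(1/p) over occupied bins
    return sum((c / total) * math.log2(total / c) for c in counts.values())

# A constant-delay mix leaks timing (entropy 0); a spread of delays does not.
h_const = delay_entropy([5.0, 5.1, 5.2, 5.3], bin_width=1.0)  # one bin -> 0.0
h_mixed = delay_entropy([1.0, 2.0, 3.0, 4.0], bin_width=1.0)  # four equiprobable bins -> 2.0
```

Higher delay entropy makes it harder for an observer to match packet arrival instants at the mix's input to departures at its output, which is the attack class the simulations evaluate.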
Key Management Protocols (KMPs) are intended to manage cryptographic keys in a cryptosystem. KMPs have been standardized for Internet Protocol Security (IPsec), and these KMPs have been formally validated for their security properties. In the Internet, routing protocols have different requirements on their KMPs, which are not met by the existing IPsec KMPs, such as IKE, IKEv2, and GDOI. Protocol modeling has been used to analyze the security of the IPsec KMPs. For routing protocols, there are new KMPs proposed by the Keying and Authentication for Routing Protocols (KARP) working group of the Internet Engineering Task Force: RKMP, MRKM, and MaRK. These KMPs are designed to have better applicability for general routing protocols. However, the security of these protocols has not been validated. In this paper, we have summarized the necessary conditions for security of routing protocols. We have analyzed the security aspects of RKMP, MRKM, and MaRK, by formally validating those protocols using the AVISPA modeling tool. This has shown that these KMPs meet the necessary security requirements.
Small cells, such as femtocells, are becoming a key component in improving the service of wireless networks. These small cells are low-powered, low-range cellular base stations that connect to the LTE core through the Internet or the network operator's wireline network, and provide new and diverse ways to increase wireless coverage and bandwidth in high-density areas. These new advances in capabilities introduce new opportunities for adversaries to exploit technology against the public interest. In this paper, we present a threat model of small cell LTE networks from a network perspective. As small cells are an evolving technology and ever increasing in use and importance, threat modelling is important to identify and address the threats introduced by new deployment architectures and designs. Our asset-focused model explores three main categories and the factors that contribute to the risks of: denial of service against the Evolved Packet Core network; unauthorized access to network operator resources; and compromise of end-user privacy, confidentiality, or service availability. While this model is not meant to be an exhaustive list of risks, we hope it raises awareness of the threats discussed, and encourages security discussions on this important aspect of LTE and future 5G networks.
The increasing use of network-connected devices places a higher risk on the security and privacy of data. The characteristics of the wireless channel can be employed to provide secrecy in wireless communication, in the form of Physical Layer Security (PLS). This review paper provides a tutorial on practical PLS based on multiple-antenna and relay network systems and identifies current challenges in this important research area. Emphasis is also placed on the crucial step of secure channel estimation, as well as discriminatory channel estimation (DCE), without which the practical application of PLS remains limited.
Denial of Service (DoS) attacks, and their larger-scale variant Distributed DoS (DDoS) attacks, seek out and exploit various network vulnerabilities in order to overwhelm a node to the point of severe impairment. Our heavy dependence on services such as social media, file storage, streaming, and online banking often makes such services the target for attackers. As the Internet grows and changes with paradigms like Software Defined Networking (SDN) and cloud computing, new opportunities for DoS attackers also emerge. This survey aims to cover the latest strides in attack methodologies and related defence mechanisms specific to the transport layer.
The Capacity Outage Probability Table (COPT) is a common analytical model for generation adequacy evaluation. A recursive algorithm can be used to form the COPT, which lists capacity outage levels of generation together with their probabilities of occurrence. Determining the number of states in a COPT is critical, because more states mean higher modelling accuracy but also longer computation time. Fuzzy C-means is an effective approach to reduce the number of states in the COPT while maintaining calculation accuracy. In this paper, the Fuzzy C-means approach is adopted, and a generation adequacy evaluation with wind farms is conducted using historical hourly wind speed data from St. John's, Newfoundland and Labrador, Canada. The study results are demonstrated using the Roy Billinton Test System (RBTS).
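For context, the recursive COPT construction convolves the units one at a time; a minimal sketch (the unit data are illustrative, and the Fuzzy C-means state reduction that is the paper's contribution is not shown):

```python
def build_copt(units):
    """Recursively convolve units (capacity_MW, forced_outage_rate) into a
    Capacity Outage Probability Table: {MW_on_outage: probability}."""
    copt = {0: 1.0}
    for cap, fo in units:
        new = {}
        for out, p in copt.items():
            new[out] = new.get(out, 0.0) + p * (1 - fo)        # unit available
            new[out + cap] = new.get(out + cap, 0.0) + p * fo  # unit on outage
        copt = new
    return copt

# Two identical 50 MW units with forced outage rate 0.02:
copt = build_copt([(50, 0.02), (50, 0.02)])
# probabilities ~ {0: 0.9604, 50: 0.0392, 100: 0.0004}
```

With many non-identical units the table grows to a large number of distinct states, which is what motivates clustering the states with Fuzzy C-means.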
In this paper, a new technique is applied to conduct mode identification using ambient measurement data. The proposed hybrid measurement- and model-based method can accurately estimate the system state matrix in ambient conditions, the eigenvalues and eigenvectors of which readily provide all the modal knowledge including frequencies, damping ratios, mode shapes, and more importantly, participation factors. Numerical simulations show that the proposed technique is able to provide accurate estimation of modal knowledge for all modes. In addition, the discrepancy between the participation factor and the mode shape is shown through a numerical example, demonstrating that using the mode shape may not effectively pinpoint the best location for damping control. Therefore, the proposed technique capable of estimating participation factors may greatly facilitate designing damping controls.
In power system operations, the operating cost of thermal generating units is calculated from polynomial functions of the units' power generation. Thus, these functions apply only if the units are connected to the grid to serve the load. Spinning reserve units, however, are kept running without being committed to the grid. Relying solely on the intercept of the preceding polynomial equations cannot determine the proper fuel consumption if the units are operated at different speeds. This paper addresses this phenomenon by considering different fuel-cost models specified specifically for these spinning reserve uncommitted/running thermal units. The results obtained from this study prove that the deficiency associated with the existing fuel-cost models can be resolved by replacing them with the proposed novel fuel-cost models.
Load demand forecasting is a broad branch of electric power systems engineering. In the last few decades, hundreds of methods have been suggested by researchers around the world to improve the available forecasting tools. These tools are categorized as deterministic, probabilistic, stochastic, and artificial intelligence (AI) algorithms. Among these approaches, artificial neural networks (ANNs) have proven to be very competitive and highly precise methods for accurately forecasting energy. ANNs are divided into two main types, feed-forward networks and recurrent/feedback networks, each with multiple sub-types. This study tries to improve the performance of any type or sub-type of ANN by optimizing its configuration using the biogeography-based optimization (BBO) algorithm. The numbers of input variables, layers, and neurons, and the types of training algorithm and activation function, are all optimized. The goal is to preserve simplicity, so only very simple multi-layer feed-forward ANNs are used instead of time-series-based feed-forward/feedback ANNs. To prove the effectiveness of hybridizing ANNs with evolutionary algorithms (EAs), numerical simulations are carried out on some of Nova Scotia's loads. The results obtained from these optimally configured ANNs are highly significant and thus confirm that claim.
This document presents an innovative tool for assessing the consequences of the final failure of a power transformer. The metric considers aspects such as power quality, overload of other equipment, safety and reliability of the system, and the environmental and social impact. For this evaluation, existing standards as well as experience and industry practices are integrated. To this end, a fuzzy inference system is proposed; the membership functions and the rules for integrating each input are based on a survey of power transformer fleet managers in Ecuador, which considers technical, economic, and social characteristics. The resulting membership functions and rules are presented.
This paper demonstrates average and detailed models of a type-III wind turbine that can be used by researchers and engineers as a benchmark for their research and studies. The models are developed in PSCAD/EMTDC and compared in terms of simulation time and accuracy. The comparison is conducted for two different dynamics: fast transients in the event of a fault, and slow wind speed variations. It is shown that simulation results obtained from the average model are comparable to those of the detailed model in terms of accuracy, while simulation is 15 times faster. The sub-synchronous impedances of the models are obtained, showing that the average model closely matches the detailed model. Moreover, an aggregated wind farm model is developed in such a way that intensive computations can be avoided and simulation time can be saved.
Neutral grounding resistors play a critical but often undervalued role in power systems by controlling the severe transients that appear in the neutral system. Continuity of service of these key assets should be ensured, since defective neutral grounding resistors leave the system unprotected against transient over-voltages and over-currents and create a false sense of security. An efficient solution is proposed to detect this issue for unit-connected generators that are equipped with subharmonic-injection-based generator stator ground protection. The proposed technique demonstrates reliable performance for various conditions of the resistor and power system, as observed through comprehensive software analysis. In contrast to existing methods, this technique relies on existing protection and monitoring installations, making it an efficient alternative that, first, adds more value to existing sub-harmonic injection infrastructure and, second, initiates an alarm and prevents a ground fault trip in the case of a failed-short neutral grounding resistor.
Until the early 1970s, the planning activity of the electricity sector followed a simple logic, meeting growing demand at the least economic cost, and environmental issues were considered secondary to this development. Over the years, however, it has been realized that environmental issues should also be considered, giving rise to Integrated Resource Planning. The complex nature of this type of planning, which seeks to satisfy multiple economic, environmental, technical, and social objectives, has become a challenge, especially with the advent of smart grids and Distributed Generation units. With the use of Internet of Things (IoT) and electronic devices, this planning process, as well as the operation of the system to meet these multiple objectives while including environmental externalities, has been facilitated and improved. Power electronics plays an important role in the efficiency and optimal control of power systems and, together with IoT devices, represents a fundamental tool in the control and development of smart grids, which in recent years have emerged as one of the new forms of energy distribution.
AC interference is a growing concern within the power industry due to the proximity of other utilities (pipelines, railways, etc.) sharing the same right-of-way (ROW) and the corresponding safety issues. This paper presents a methodology for determining the optimal phasing of power transmission lines to reduce the interference and improve the safe operation of utilities sharing the same ROW. The induced voltage levels on an adjacent conductor are calculated using various existing methods to compare their accuracy. Several important findings are reported here regarding AC interference modeling under steady-state conditions. The importance of the inherent unbalance in phase currents and the effect of soil resistivity are discussed in detail. The test case in the study is based on a BC Hydro project to build two 500 kV transmission lines located in the Peace Region area, British Columbia. The results of this study were used in determining the optimal phasing for these transmission lines.
Modern electrical power systems control the ground fault condition and its consequent challenges by proper neutral earthing. Neutral grounding resistors are well-known apparatuses widely used in this field of power system engineering. These resistors fail due to vibration, intermittent arcs, corrosion, etc., creating the danger of an ungrounded or solidly grounded system. The continuity of service of these resistors is critical to many industries, which has resulted in various neutral grounding resistor monitoring techniques. In this paper, all existing monitoring techniques are reviewed. Moreover, the observed trend in this field is used to anticipate the next generation of neutral grounding resistor monitoring techniques. Thereafter, the performance of an existing monitoring method is analyzed under various conditions, considering different configurations of the power system. The situations in which the monitoring method fails to monitor correctly are highlighted, followed by potential solutions.
For the quality inspection task of a remote radio unit (RRU), it is necessary to insert testing probes into its power and network ports. In this paper, the problem of aligning a robot's end effector with the power port of an RRU in 4 degrees of freedom (DoF) is solved. An image-based visual servo (IBVS) controller is designed to perform the alignment task using the visual features of the power port in the image plane. Decoupled features are selected, which eliminates the need for the image Jacobian during the control design. This not only reduces the computational cost but also removes the hassle of dealing with image Jacobian singularities during visual servoing. The findings are validated both by performing simulations and by designing an experiment using an industrial manipulator.
Understanding the brain is perhaps one of the greatest challenges facing twenty-first century science. While a traditional computer excels in precision and unbiased logic, its ability to interact socially lags behind that of biological neural systems. Recent technologies, such as neuromorphic engineering, cloud infrastructure, and big data analytics, have emerged that can narrow the gap between traditional robots and human intelligence. Neuromorphic robotics mimicking brain functions can contribute to developing intelligent machines capable of learning and making autonomous decisions. Cloud-based robotics takes advantage of remote resources for parallel computation and the sharing of large amounts of information, while benefiting from the analysis of massive sensor data from robots. In this paper, we survey recent advances in neuromorphic computing, cloud-based robotics, and big data analytics and list the most important challenges faced by robot architects. We then propose a novel dual-system architecture in which robots can act on their own as well as cooperate with a brain-centered cloud and use big data analytics.
The tracking control of a wheeled mobile robot is one of the most complex problems encountered in robotics. In real applications, many serious difficulties affect the control of the robot: the nonlinear model, parameter uncertainties, and external disturbances all complicate mobile robot tracking control. To reduce the mobile robot tracking error, we propose an adaptive law based on sliding mode control, applied to a nonlinear model and taking uncertainties into account. Using Lyapunov theory, the stability and the convergence of the tracking errors are proved. Simulations illustrate the efficiency of the proposed controller.
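To illustrate why sliding mode control is robust to bounded disturbances, here is a one-dimensional sketch (first-order error dynamics with a switching law); the mobile robot's full nonlinear model and the paper's adaptive law are not reproduced, and all gains below are illustrative:

```python
import math

# Sliding-mode sketch for first-order error dynamics  e' = u + d(t),
# with an unknown bounded disturbance |d| <= 0.3 and switching gain k > 0.3.
k, dt = 1.0, 0.001
e, t = 1.0, 0.0
for _ in range(3000):  # simulate 3 seconds with Euler integration
    d = 0.3 * math.sin(2 * t)          # unknown disturbance
    u = -k * (1 if e > 0 else -1)      # u = -k*sign(e): drives e to the surface e = 0
    e += (u + d) * dt
    t += dt
# Despite the disturbance, the error reaches and chatters in a small band around zero.
```

Because k exceeds the disturbance bound, the Lyapunov function V = e²/2 satisfies V' ≤ -(k - 0.3)|e| away from the surface, which is the essence of the convergence argument sketched in the paper.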
Along a path, a skid-steer rover's power consumption is highly dependent on its turning radius. For example, a point turn requires far more energy than straight-line motion. Accordingly, in path planning for this kind of rover, the turning radius is the main factor that should be considered explicitly. In addition, the published approaches for path planning offer no theoretical analysis to guarantee global optimality. In this paper, a new approach to path planning for skid-steer rovers is proposed. The idea is to use paths constructed from circular arcs, whose radii can be chosen to obtain an energy-efficient path. It is analytically proved under which conditions a simple path composed of two circular arcs is energy-efficient for skid-steer rovers. The proposed method is therefore a stepping stone toward finding globally optimal (energy-efficient) paths for skid-steer rovers using general arcs.
In this study, we propose a socially aware navigation framework for mobile service robots in dynamic human environments using a deep reinforcement learning algorithm. The primary idea is to incorporate obstacle information (position and motion), human states (position and motion), social interactions (human groups, human-object interaction), and social rules, e.g. minimum distances from the robot to regular obstacles, to individuals, and to human groups, into the deep reinforcement learning model of a mobile robot. We then deploy the mobile robot in a dynamic social environment and let it learn to adapt to that environment through the experience gained from trial-and-error social interactions with the surrounding humans and objects. When the learning phase is completed, the mobile robot is able to navigate autonomously in the social environment while guaranteeing human safety and comfort with its socially acceptable behaviours.
This paper considers a max-min fairness power control problem for the spectral efficiency of multiuser massive multiple-input multiple-output systems in an uplink transmission, where a base station receives data signals from various users. A physical channel model in which the angular domain is separated into a finite number of distinct directions is also considered. Based on the large-scale fading coefficients, power control is formulated as an optimization problem that maximizes the minimum spectral efficiency among the users under a peak power constraint. This optimization problem is solved by employing a geometric program. Numerical experiments under two practical scenarios evaluate the proposed power control method. In both scenarios, the proposed method is shown to be superior to other existing schemes in terms of minimum spectral efficiency.
Long Term Evolution (LTE) in unlicensed bands (LTE-U) has emerged as a promising solution to address the unprecedented growth of mobile data traffic. LTE-U extends the benefits of LTE with unused portions of the unlicensed 5 GHz spectrum, which is primarily used by Wi-Fi users. However, uncertainty about the availability of bandwidth makes the adoption of LTE-U a challenging new task for operators. In this work, we propose a stochastic programming approach for the allocation of LTE-U resources to expand bandwidth and coverage while controlling the risk of conflict with Wi-Fi demand. Three models from the literature for this demand are used in our computational experiments. The results show the importance of prior knowledge about the distribution of Wi-Fi demand.
A heterogeneous wireless access network (HWAN), composed of different radio access technologies (RATs) with overlapping zones, provides high data rates and supports bandwidth-hungry applications. In this paper, we explore a centralized approach for load balancing in an HWAN that utilizes a central controller node (CCN). The CCN balances the load by re-allocating the radio resources such that an equal load ratio is maintained across all the available RATs in the HWAN. The performance of this centralized mechanism is evaluated through call blocking probability and network utilization. After load balancing, the simulation results show a decrease in call blocking probability for overloaded RATs and an increase in bandwidth utilization for under-loaded RATs.
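For context, the call blocking probability of a single RAT carrying Poisson call arrivals is classically given by the Erlang-B formula; the abstract does not state the paper's exact traffic model, so the following recursion is shown purely as an illustrative sketch of how blocking probability can be evaluated per RAT before and after load balancing.

```cpp
// Erlang-B blocking probability for offered load E (in erlangs) and m channels,
// computed with the numerically stable recursion
//   B(E, 0) = 1,   B(E, m) = E * B(E, m-1) / (m + E * B(E, m-1)).
double erlang_b(double erlangs, int channels) {
    double b = 1.0;                       // B(E, 0)
    for (int m = 1; m <= channels; ++m)
        b = erlangs * b / (m + erlangs * b);
    return b;
}
```

Evaluated per RAT, the recursion makes the benefit of equalized load ratios visible: moving offered load from an overloaded RAT to an under-loaded one lowers the former's blocking probability at the cost of a smaller increase in the latter's.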
Inter-Cell Interference Coordination (ICIC) using frequency reuse schemes in Orthogonal Frequency Division Multiple Access (OFDMA) cellular networks is one of the most promising approaches to reduce the effect of interference and improve system performance. Fractional Frequency Reuse (FFR) schemes are efficient interference mitigation techniques that have been used to improve system performance in multi-relay multi-cell OFDMA cellular networks, especially for cell-edge users. The purpose of FFR design is to deploy frequency patterns (sets) in such a way that a Mobile Station (MS) user experiences reduced interference from adjacent cells. A Frequency Reuse Factor (FRF) of 7/3 with frequency reuse pattern (7,3,1) is used to improve on the performance of FRF=1 and FRF=3. This paper proposes a new formula to generate and deploy frequency sets with Amplify-and-Forward (AF) fixed relays to improve system performance. Simulation results show that the proposed pattern outperforms other cooperative and non-cooperative schemes due to the reduction in Inter-Cell Interference (ICI).
The ability to communicate is a critical aspect of safety in underground mine operations, where conventional radio communication technology is severely limited or rendered unreliable by a disaster. In such situations, the benefits of Through-the-Earth (TTE) systems can be exploited. This paper presents a new modeling technique for characterizing the electric field strength in connection with the propagation of waves through the strata of underground mines. The technique was validated practically in a real mine environment at the Irtishskiy mine, Kazakhstan. Our model considers a number of parameters that are critical to system performance, such as transmitting power, frequency, antenna geometry, and system grounding. To the best of our knowledge, this paper presents novel analyses and modeling results demonstrating the effects of different grounding system designs on wave propagation behavior in TTE systems. This research can be applied to enhancing the performance of TTE systems and the depth of signal propagation.
Concentrated solar power (CSP) generation is a promising type of renewable energy source. Since the islands of Bangladesh are yet to be connected to the national grid, the key requirements for installing a CSP-based power plant on Sandwip Island are investigated in this paper. After analyzing environmental conditions, economic prospects, and demand data, a 20-megawatt parabolic trough CSP plant for the island is proposed. The System Advisor Model (SAM) is used to prepare the simulation model, and the simulation is conducted based on the real environmental conditions of Sandwip Island. Simulation results indicate that the generated power is sufficient to supply the entire island. A cost comparison is carried out among different renewable energy sources to ensure the economic feasibility of the proposed system.
The need to accurately forecast available solar irradiance is a significant issue for the power industry. It poses special challenges for utilities that serve customers in isolated regions where weather forecast data may not be abundant. This paper proposes a method to forecast two-hour-ahead solar irradiance levels at a site in Northwestern Alberta, Canada using real-time solar irradiance measured at remote monitoring stations. The proposed method uses an artificial neural network to forecast the solar irradiance levels and a genetic algorithm to determine the optimal array size and positioning of solar monitoring stations for the most accurate forecast. There are two main findings. First, the results show it is possible to use as few as five remote monitoring stations to obtain near-peak forecasting accuracy from the algorithm. Second, adequate geospatial separation of the remote monitoring sites around the target site is more important than positioning the sites in strictly upwind directions.
Residential AC Nano-PV Systems (NPV) are small (less than 10 kW) solar PV systems that can generate, store, control, and import or export electrical power to/from the grid. At the core of these systems are Energy Routers, hardware/software components that control the system's generators (e.g., solar PV panels), energy storage devices, and loads. In this paper, we analyze AC NPV design considerations for residential systems, including short-term load prediction for constant power generation, and security issues.
This paper presents a low-cost automated solar water pumping system for irrigation in developing countries. A programmed sensor module detects the temperature, humidity, and soil moisture level and sends the information to an ESP32 microcontroller. A water level sensor also observes the water level and sends its data to the microcontroller unit. Based on this information and the boundary conditions, the microcontroller decides either to start or to stop the pump motor. This paper also describes how to determine the soil moisture limits for a particular soil. The ESP32 microcontroller also sends results to a web server so that the user can view them; the user can operate the irrigation system far from the field with a simple click on a cellphone. A manual ON/OFF system is also incorporated into the proposed design.
In this paper, the CO2 emission reduction of a Photovoltaic Thermal (PVT) model using variable flow rate values is examined and evaluated for seventy-three different Photovoltaic (PV) coverage area cases of the PVT (between 20% and 80%). A data set of weather conditions for a city in Canada over one year is used for this study. The CO2 emission reduction is investigated for each month for the different cases. The percentage PV coverage area that minimizes CO2 emissions is determined for each month. To maximize the annual CO2 emission reduction, specific PV coverage area values are carefully chosen for each month using a variable flow rate. Results show that the annual CO2 emission reduction can be maximized by using adapted (dynamic) PV coverage area values compared to the conventional (static) PV coverage area values of the PVT system over a year. A fitted-curve function is obtained that evaluates the CO2 emission reduction for each month at different PV coverage area ratios.
This paper presents a current sensing circuit specifically designed and prototyped for power studies of micro-level energy sources such as micro-photosynthetic cells (μ-PSC). The proposed circuit takes into consideration recently proposed current sensing topologies and provides an accurate, dynamic current reading from the nA range up to a few mA. These dynamic and wide-range measurement features aim to help researchers visualize the effect of each element (temperature, micro-organism, membrane, etc.) involved in the generation of energy from a μ-PSC. The practical results obtained show an acceptable accuracy, up to the second digit (in %), over the whole operating range (nA up to mA).
In this paper, a Battery Energy Storage System (BESS) is sized and controlled using MATLAB to reduce the constraints imposed on the Hydro-Québec grid by the Louis-Hippolyte-La Fontaine Tunnel by means of peak load reduction. The peak load reduction was attained through peak shaving using a BESS alone, and through energy and peak shaving using a BESS in conjunction with photovoltaic solar cells (PV). The presented analysis highlights the substantial peak load reduction alongside the expected financial benefits. The developed system is validated by simulating it in the National Renewable Energy Laboratory's (NREL) System Advisor Model (SAM) and in HOMER Pro by HOMER Energy, which shows the potential savings of the developed control.
Synchronizing a generator to a power system and synchronizing two islanded systems together are critical yet delicate problems that have existed ever since alternating current was adopted as a means of transmitting electrical power. Mis-synchronization can damage equipment (e.g., generators, transformers, and breakers). This article describes a centralized synchronization control based on synchrophasors. This control automatically determines running and incoming voltage sources for each breaker using a topological method whose inputs are the states of the breakers and disconnecting switches. A synchronism check (25) is performed on each breaker. In addition, an automatic synchronization (25A) may be performed according to the breaker selected by the operator. Dynamic compensation of the variable delays due to the use of synchrophasors allows the required level of accuracy to be achieved. The control has been implemented on an IEC 61131 platform and tested on a modified IEEE 3-machine 9-bus system simulated in real time using Hypersim. The overall architecture is fully redundant and IEC 61850 compliant.
Estimation of distribution system voltages and currents is of utmost importance for the network operator to make online decisions. Traditional state estimation techniques require redundant meter readings in addition to pseudo-measurements in order to correctly estimate the network states. To estimate the network states with few real-time measurements, this paper presents a novel real-time estimation technique that requires no additional pseudo or virtual measurements. Moreover, the introduced technique solves the lack-of-observability problem associated with few measurements. The proposed technique is based on the placement of smart meters at a few selected locations; these locations depend only on the network topology and do not change with the injection points of the distributed generators. The proposed algorithm is efficient in dealing with balanced as well as unbalanced distribution networks. The estimation algorithm is implemented and tested on the 69-bus balanced feeder and the IEEE 34-bus unbalanced feeder, and the results are compared to actual load flow results to show the accuracy of the developed technique.
This paper proposes a new two-stage scheduling scheme that aims to mitigate voltage unbalance and reduce operational costs through the use of battery energy storage systems (BESS). The first stage finds the best unbalance index achievable given the BESS limitations. The second stage maintains this index while reducing operating costs through energy arbitrage. The scheduling problem is formulated as a nonlinear programming problem and implemented on the IEEE 123-bus system. The obtained results prove the efficacy of the proposed scheme in improving voltage unbalance while reducing the cost of energy drawn from the grid.
This paper implements a novel wavelet-based multi-resolution analysis power system stabilizer (WMRA-PSS) for a power system consisting of a synchronous generator connected to an infinite bus through transmission lines. To overcome the drawbacks of conventional power system stabilizers, whose design is based on a linearized model of the power system, the WMRA-PSS is implemented in MATLAB Simulink to enhance the dynamic behavior of the power system under different operating conditions. The results verify the efficiency and stability of the suggested WMRA-PSS under different operating conditions.
Data centers have evolved to become large power consumers. Their supporting infrastructure includes large HVAC systems in addition to massive computer and lighting loads. Nowadays, some of these centers have been built specifically to mine cryptocurrency (bitcoins), and their sheer sizes vastly exceed those of typical large commercial customers. With such massive sizes, their impact on power quality has become an important concern and needs to be addressed by careful design. There is a need for research into the application of innovative design and technology for conditioning the power supply in a system serving such data centers. This paper presents measurements collected at the Point of Common Coupling (PCC) between the electric utility and two large (20 MW and 32 MW) data centers. Their impact on harmonic emissions, interharmonic injection, voltage flicker, and imbalance is analyzed. It is found that individual device standard compliance does not translate into aggregate load standard compliance, and utilities need to be cognizant of these challenges and work with the customer to strategize effective harmonic mitigation.
In this work, we present an unsupervised algorithm for image segmentation using an inverted Dirichlet mixture model. The proposed approach integrates spatial information into the inverted Dirichlet mixture model by using a Markov Random Field to incorporate dependencies between neighboring pixels. The segmentation model is learned using the Expectation-Maximization (EM) algorithm based on a Newton-Raphson step. The results obtained on a real image data set are more encouraging than those obtained with similar approaches.
In this paper, terahertz time-domain spectroscopy (THz-TDS) was used to inspect an environmentally friendly laminate based on agglomerated cork. The fast Fourier transform (FFT) was applied to the raw data to retrieve the phase and amplitude information. Then, a new amplitude polynomial fitting (APF) algorithm was proposed to improve the image performance. Finally, a comparative analysis was given to summarize the advantages of the different applied methods.
One of the most important steps in layout extraction for reverse engineering of integrated circuits (ICs) is the segmentation of wires and vias from scanning electron microscope (SEM) images. This segmentation is challenging due to the gigabytes of image data for even a single IC, image noise, and artefacts. Existing approaches rely on intensity threshold-based methods but require a significant amount of manual user interaction to correct segmentation errors. In this paper, we describe an image processing pipeline for segmenting IC layouts from SEM images. Our pipeline includes image normalization, preprocessing, and segmentation. The segmentation results were compared using a custom-built comparison tool and showed, with the correct selection of filters and methods, an increase in segmentation accuracy for all tested image sets.
Recent applications in the field of thermography and Infrared Non-Destructive Testing (IRNDT) span many different research fields, and in most of these applications well-known infrared approaches have been utilized for thermal image enhancement, thermal image segmentation, and particularly defect segmentation in IRNDT. Principal Component Analysis (PCA), or Principal Component Thermography (PCT), is one of the most widely used and frequently cited approaches in this field. Unfortunately, it is a linear transformation, and finding an appropriate basis through its eigen-image decomposition is a further shortcoming. Here, an application of non-linear eigen-decomposition using Sparse Principal Component Analysis/Thermography (Sparse-PCA or Sparse-PCT) is addressed for segmentation of defects in two hybrid composites (carbon and flax fiber epoxy prepregs). The results indicate considerable segmentation performance compared to similar approaches.
The induction thermography technique is assessed experimentally on aircraft engine parts with fatigue cracks using a three-loop coil. Results show that induction thermography can be effective at detecting cracks in engine parts, with typical inspection times of less than 1 s. While coating the parts to increase emissivity helps the signal-to-noise ratio, this step was not found to be necessary. Despite the local heat gradient resulting from the parts' edges, cracks were still detected. This edge effect made the short cracks more challenging to detect, while the longer crack, which appeared as two anomalies in the infrared images, was easier to detect. In all experiments the optimal observation time was between 0.1 s and 0.25 s. It is shown that parts with more complex geometry, such as an engine disc, can also be inspected by induction thermography, although in this case only some of the cracks were detected. Similar findings were obtained using a finite element model of the engine disc.
Large-scale 3D structure reconstruction is a complex task that has attracted increasing interest in recent years from both the research community and industry. Google, for example, used Lidar technology and a fleet of vehicles to build 3D scans of big cities such as San Francisco and New York. However, this approach remains limited to a small number of cities and companies due to cost and access to technology. To extend 3D reconstruction to cities and large-scale buildings worldwide, a more affordable technology is necessary. In this work, we present a framework for large-scale 3D structure reconstruction using crowd-captured images. We use images captured by smartphones for the reconstruction of large 3D structures. Geo-localization data is used to group images based on their relative positions, and a combined SLAM-SfM algorithm builds 3D blocks from the clusters formed by the closest images. Geo-localization is also used to iteratively register the closest 3D blocks and form a large-scale 3D structure. The proposed approach permits the reconstruction of large structures using images captured by one or more individuals. Additionally, the proposed model allows for incremental reconstruction by exploiting new images to build missing blocks and refine the large structure. The obtained results are promising and show the efficiency of the proposed CrowdSLAM framework for large-scale 3D structure reconstruction.
Mobile crowd-sensing (MCS) has appeared as a viable tool to acquire data without dedicated sensors to provide smart services in the Internet of Things (IoT) era. Ensuring data trustworthiness is a grand challenge in MCS in the presence of adversaries that aim to spread misinformation at the crowd-sensing platform. MCS frameworks generally assume that sensing tasks are assigned opportunistically to the sensing service providers, regardless of whether recruitment is participatory or opportunistic. In this paper, we present two scenarios where sensing service providers can join either a high-income or a low-income community; selecting a community basically amounts to turning off the corresponding sensors in the mobile devices while keeping the rest on. Our feasibility study through simulations under realistic scenarios shows that, in order to maintain high platform utility and a reasonable average user utility, user-centric reputation-based recruitment of sensing service providers has to be implemented at the platform, as opposed to income-based selective data acquisition in mobile crowd-sensing. When low-income-based selective data acquisition is adopted, average user utility can be improved by 14%-28% in the best case. On the other hand, we show that high-income-based selective data acquisition can result in negative platform utilities, whereas the platform utility of low-income-based selective data acquisition can be as low as 25%-50% of non-selective and reputation-based data acquisition even under the best-case scenario.
This paper presents a detailed analysis of various Artificial Neural Network (ANN) modelling techniques for chaotic systems. Specifically, Rössler's system and Chua's system are selected for this study for their practical applications, and the outputs of these two systems are used for the ANN training. Nonlinear Auto-Regressive (NAR) modelling is used for chaotic time series prediction, and Nonlinear Auto-Regressive with Exogenous Inputs (NARX) modelling is used for generating chaotic time series outputs with varying system parameters as exogenous inputs. The research results show that ANNs perform well in modelling chaotic systems. Rössler's attractor is modelled using a Radial Basis Function Network (RBFN), and a comparative study between a Feed-Forward Neural Network (FFNN) and the RBFN is conducted. The result shows that the RBFN uses more neurons than the FFNN to achieve similar training performance. A 3-layer ANN architecture with the number of hidden neurons varying from 1 to 16 is designed and trained using the MATLAB NN toolbox. From a fixed-point FPGA implementation perspective, the ANN modelling of chaotic systems is very efficient.
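The chaotic time series used for such ANN training can be generated by numerically integrating the Rössler system; the sketch below is not the paper's own code, and the step size and initial state are illustrative, but it uses the classic fourth-order Runge-Kutta method with the standard parameters a = b = 0.2, c = 5.7.

```cpp
#include <array>
#include <vector>

// Rossler system: x' = -y - z,  y' = x + a*y,  z' = b + z*(x - c).
struct Rossler {
    double a = 0.2, b = 0.2, c = 5.7;
    std::array<double, 3> f(const std::array<double, 3>& s) const {
        return { -s[1] - s[2], s[0] + a * s[1], b + s[2] * (s[0] - c) };
    }
};

// One classic fourth-order Runge-Kutta step of size h.
std::array<double, 3> rk4_step(const Rossler& sys, std::array<double, 3> s, double h) {
    auto shifted = [](std::array<double, 3> u, const std::array<double, 3>& v, double w) {
        for (int i = 0; i < 3; ++i) u[i] += w * v[i];
        return u;
    };
    auto k1 = sys.f(s);
    auto k2 = sys.f(shifted(s, k1, h / 2));
    auto k3 = sys.f(shifted(s, k2, h / 2));
    auto k4 = sys.f(shifted(s, k3, h));
    for (int i = 0; i < 3; ++i)
        s[i] += h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]);
    return s;
}

// Generate n samples of the x-coordinate, e.g. as NAR training data.
std::vector<double> rossler_series(int n, double h = 0.01) {
    Rossler sys;
    std::array<double, 3> s = {1.0, 1.0, 1.0};  // illustrative initial state
    std::vector<double> xs;
    for (int i = 0; i < n; ++i) {
        s = rk4_step(sys, s, h);
        xs.push_back(s[0]);
    }
    return xs;
}
```

The resulting sequence stays on the bounded Rössler attractor, which is what makes it a useful benchmark signal for NAR/NARX prediction.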
We propose a fully Bayesian learning approach using reversible jump Markov chain Monte Carlo (RJMCMC) for asymmetric Gaussian mixtures (AGM). Compared to the classic Gaussian mixture model, the AGM does not assume that the target data are symmetric, which brings flexibility and better fitting results. This paper also introduces an RJMCMC learning implementation based on Metropolis-Hastings (MH) within Gibbs sampling. As an improvement over traditional sampling-based MCMC learning, RJMCMC makes no assumption concerning the number of components, and therefore the AGM model itself can change between iterations. To better evaluate models with different numbers of mixture components, model selection is achieved by calculating the integrated likelihood using a Laplace approximation to find the best-fitting number of components. We use both synthetic data and a challenging spam filtering dataset to show the merits of the proposed model.
With embedded devices collecting, manipulating, and transmitting growing amounts of data in various Internet of Things applications, it is increasingly important to process data on device for performance and energy efficiency. A common data processing function is computing hash functions for use in hash-based data structures and algorithms. The limited computation and memory resources of embedded devices result in different performance characteristics compared to general purpose computers. This research implements and experimentally evaluates the performance of non-cryptographic hash functions. Seven hash function algorithms were chosen on the basis of implementation complexity, popularity, and compatibility with microcontroller architectures. These functions were implemented in C/C++ for the ATmega328P 8-bit microcontroller used in the Arduino Uno, and for the Microchip PIC24 16-bit microcontroller. Some optimizations were implemented to reduce memory usage. Experimental results demonstrate that there are platform-specific performance differences.
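The abstract does not list the seven algorithms evaluated, so FNV-1a is shown below purely as a representative non-cryptographic hash of the kind studied: it needs only 32-bit XOR and multiply, which maps comfortably onto 8- and 16-bit microcontrollers.

```cpp
#include <cstdint>
#include <cstddef>

// 32-bit FNV-1a: XOR each input byte into the state, then multiply by the
// FNV prime. Constant memory footprint, no tables, 32-bit arithmetic only.
uint32_t fnv1a32(const void* data, size_t len) {
    const uint8_t* p = static_cast<const uint8_t*>(data);
    uint32_t h = 2166136261u;            // FNV-1 32-bit offset basis
    for (size_t i = 0; i < len; ++i) {
        h ^= p[i];
        h *= 16777619u;                  // FNV-1 32-bit prime
    }
    return h;
}
```

On an 8-bit target such as the ATmega328P the 32-bit multiply is synthesized from 8-bit operations, which is exactly the kind of platform-specific cost difference the study measures.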
Where sizeable libraries with complex template metaprogramming were previously required to facilitate functional programming in C++, we demonstrate that simple monads can be implemented in a compact and relatively straightforward manner in C++17. Along with a general discussion of functional programming in C++, and a brief review of functional programming itself, the Maybe and Either monads are described and implemented using C++17 features. We also look briefly at the trend toward functional programming apparent in the standards proposals and the evolution of the ISO C++ standard.
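As a flavour of the approach (a sketch in the spirit of the paper, not its actual code): with C++17's std::optional playing the role of Maybe, monadic bind is a short generic function, and failure propagates through a call chain without explicit checks. The helper names below are illustrative.

```cpp
#include <optional>
#include <string>

// Monadic bind for std::optional: apply f only if a value is present.
// (std::optional only gains a member and_then in C++23.)
template <typename T, typename F>
auto mbind(const std::optional<T>& m, F f) -> decltype(f(*m)) {
    if (m) return f(*m);
    return std::nullopt;
}

// Two partial functions that may fail.
std::optional<int> parse_int(const std::string& s) {
    try { return std::stoi(s); } catch (...) { return std::nullopt; }
}
std::optional<int> safe_div(int num, int den) {
    if (den == 0) return std::nullopt;
    return num / den;
}

// Chain the two: parse the string, then divide 100 by the parsed value.
// Any failure short-circuits the whole chain to nullopt.
std::optional<int> hundred_over(const std::string& s) {
    return mbind(parse_int(s), [](int n) { return safe_div(100, n); });
}
```

Both failure modes (unparsable input, division by zero) yield an empty optional from the same chain, which is the essence of the Maybe monad without any template metaprogramming machinery.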
In this paper, we compare soft computing and statistical regression techniques on a software development effort estimation problem. Our study includes support vector regression (SVR) and artificial neural networks (ANN) as soft computing methodologies on one side, and stepwise multiple linear regression and log-linear regression as statistical regression methods on the other. The experiments are conducted using the NASA93 dataset from the well-known PROMISE software repository. Multiple dataset preprocessing steps, including an outlier study, are performed in order to guarantee confident results. We rely on the hold-out technique with 25 random repetitions and confidence interval calculation at the 95% statistical confidence level. The Pred(30) evaluation criterion from the literature is employed to compare the different models. Feature pruning, in the case of SVR, also shows a significant impact on model precision.
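For reference, the Pred(30) criterion is simple to state: the fraction of projects whose magnitude of relative error (MRE) is at most 30%. A minimal sketch (the data in the test are hypothetical, not the NASA93 results):

```cpp
#include <vector>
#include <cmath>

// Pred(l): fraction of estimates whose magnitude of relative error
//   MRE = |actual - predicted| / actual
// is at most l (e.g. l = 0.30 for Pred(30)).
double pred(const std::vector<double>& actual,
            const std::vector<double>& predicted, double l) {
    if (actual.empty()) return 0.0;
    int hits = 0;
    for (size_t i = 0; i < actual.size(); ++i) {
        double mre = std::fabs(actual[i] - predicted[i]) / actual[i];
        if (mre <= l) ++hits;
    }
    return static_cast<double>(hits) / actual.size();
}
```

Higher Pred(30) means more of the model's estimates fall within 30% of the actual effort, which is why it is a common yardstick for comparing effort estimation models.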
Recurrent neural networks (RNN) show remarkable results in sequence learning, particularly in architectures with gated unit structures such as the long short-term memory (LSTM). In recent years, several variants of the LSTM architecture have been proposed, mainly to reduce the computational complexity of the LSTM. In this paper, we present the first study that empirically investigates and evaluates LSTM architecture variants specifically on an intrusion detection dataset. The investigation is designed to identify the training time required by each LSTM variant and to measure the intrusion prediction accuracy. The results show that each variant exhibits improvement at specific parameter settings, yet, with a large dataset and short training time, none outperformed the standard LSTM.
Static-State Estimation has been a key function of electric power grids for almost 50 years. During that time, power system engineers have been depending on it to monitor and control power networks, optimize power flow, and perform contingency analysis. State Estimation has always been vulnerable to cyberattacks targeting the availability and integrity of the grid. Nowadays, with the rapid expansion of ICT integration into power systems towards a Smart Grid, this cybersecurity threat is even more pronounced. In order to prepare for the new cybersecurity challenges brought upon by Smart Grids, this paper serves as a concentrated summary of the research on stealth false data injection attacks against Static-State Estimation.
Existing studies on False Data Injection Attacks (FDIAs), a type of stealth attack against power grids aimed at compromising system cyber-physical security, have primarily been conducted on wired systems in which state estimation is represented by overdetermined DC power flow models. The emerging trend of the Smart Grid (SG), assisted by the widespread deployment of Wireless Sensor Networks (WSNs) for various new functionalities, calls for a review of how certain well-established premises need to be adjusted in the new context. In addition to related studies based on traditional bus-based systems, the broad changes brought by the use of WSNs in grid systems are introduced in this paper. Subsequently, differences in bad data detection (BDD), false data injection attack strategies, and the physical feasibility of attack methods caused by this shift of scenario are compared and briefly analyzed. A summary of new or previously overlooked requirements for FDIA studies is then appended. By presenting a comprehensive review of related studies, we shed light on potential future research directions.
One critical challenge in the design and operation of network intrusion detection systems (IDS) is the limited datasets used for IDS training and their impact on system performance. If the training dataset is not updated or lacks necessary attributes, the performance of the IDS suffers. To overcome this challenge, we propose a highly customizable software framework capable of generating labeled network intrusion datasets on demand. In addition to the capability to customize attributes, it accepts two modes of data input and output. One input method is to collect real-time data by running the software at a chosen network node; the other is to take raw PCAP files from another data provider. The output can be either raw PCAP with selected attributes per packet or a processed dataset with customized attributes related to both individual packet features and overall traffic behavior within a time window. The capabilities of this software are compared with those of a product with similar aims, and the notable novelties and capabilities of the proposed system are highlighted.
Network intrusions can be modeled as anomalies in network traffic in which the expected order of packets and their attributes deviate from regular traffic. Algorithms that predict the next sequence of events based on previous sequences are a promising avenue for detecting such anomalies. In this paper, we present a novel multi-attribute model for predicting a network packet sequence based on previous packets using a sequence-to-sequence (Seq2Seq) encoder-decoder model. This model is trained on an attack-free dataset to learn the normal sequence of packets in TCP connections and then it is used to detect anomalous packets in TCP traffic. We show that in DARPA 1999 dataset, the proposed multi-attribute Seq2Seq model detects anomalous raw TCP packets which are part of intrusions with 97% accuracy. Also, it can detect selected intrusions in real-time with 100% accuracy and outperforms existing algorithms based on recurrent neural network models such as LSTM.
Identity-based attacks are easy to launch in wireless networks, and with the growing number of devices connected via the wireless medium, these kinds of attacks are imminent. Standard cryptographic procedures are resource intensive and do not provide adequate protection in some cases. Studies of Received Signal Strength (RSS) have shown promise in identifying and detecting identity-based attacks, and researchers have proposed different RSS-based techniques. However, each solution has shortcomings that make it impractical for large-scale constrained-environment wireless networks. In this paper, identity-based attack detection techniques utilizing RSS are reviewed to see if any suitable solution exists that can be adopted or modified for a large-scale critical infrastructure ecosystem.