"Indigenous Ways of Knowing" is a useful term that recognizes the beautiful complexity and diversity of Indigenous ways of learning and teaching. Many people continue to generalize Indigenous experience and lived realities. The intent of the phrase "Indigenous Ways of Knowing" is to help educate people about the vast variety of knowledge that exists across diverse Indigenous communities. It also signals that, as Indigenous Peoples, we don't just learn from human interaction and relationships. All elements of creation can teach us, from the plant and animal nations, to the "objects" that many people consider to be inanimate. So, our Indigenous ways of knowing are incredibly sophisticated and complex. These ways relate to specific ecology in countless locations, so the practices, languages and protocols of one Indigenous community may look very different from another. Yet, Indigenous ways of knowing are commonly steeped in a deep respect for the land, and the necessity of a reciprocal relationship with the land.
Time series problems, such as forecasting and detection, are prevalent in many fields and directly affect the functioning of industrial systems. Handling these problems accurately is vital; for instance, precise electric load prediction keeps power grids operating efficiently. Approaches encompass both statistical and machine learning methods, and machine learning models such as support vector regression, LSTMs, and transformers perform particularly well. However, limited data and evolving data distributions challenge these models. Meta-learning, or learning to learn, accelerates model adaptation and has proven effective in few-shot scenarios. The tutorial's objectives are to introduce the basics of meta-learning, recent advancements, classic time series problems, industrial applications, real-world case studies, and potential research directions.
Engineering products and services wield significant environmental, social, and economic influence. Understanding their impacts and making design choices that optimize positive outcomes across these spheres is crucial. Engineers play a pivotal role in bolstering sustainability's three pillars: environment, society, and economy. However, a focus on environmental and economic factors often sidelines social sustainability due to limited comprehension and resources. Amidst the global shift in manufacturing and energy usage, numerous engineering projects are emerging. This presents an opportunity for Canada and other participants to integrate green engineering principles, ensuring balanced and sustainable advancement across all three pillars.
Businesses globally harness artificial intelligence (AI) to tackle challenges: healthcare achieves precise diagnoses, retail offers personalized shopping, and automakers enhance vehicle safety and efficiency. Deep learning, using layered neural networks, excels in object detection, speech recognition, and language translation. This tutorial provides participants with hands-on computer vision exercises using popular deep learning tools on cloud-based GPU workstations. Participants train models from scratch, learn techniques for improving accuracy, and use pre-trained models for efficiency. A collaboration between the NVIDIA Deep Learning Institute and the University of Regina, this tutorial equips attendees to independently create new deep learning applications.
ChatGPT and Generative Artificial Intelligence (GenAI) have taken the world by storm and revolutionized various fields, including computer vision, natural language processing, and the creative arts. This tutorial aims to provide participants with a brief understanding of generative AI techniques, their applications, and how they may affect post-secondary education. Whether you are a student, researcher, educator, or professional seeking to explore the fascinating realm of AI creativity, this tutorial will equip you with the knowledge and skills to embark on your generative AI journey.
This paper presents a novel method to boost the performance of CNN inference accelerators by utilizing subtractors. The proposed CNN preprocessing accelerator relies on sorting, grouping, and rounding the weights to create combinations that allow one multiplication operation and one addition operation to be replaced by a single subtraction operation when applying convolution during inference. Given the high cost of multiplication in terms of power and area, replacing it with subtraction boosts performance by reducing both. The proposed method allows the tradeoff between performance gains and accuracy loss to be controlled by increasing or decreasing the usage of subtractors. Using a rounding size of 0.05 with LeNet-5 on the MNIST dataset, the proposed design achieves 32.03% power savings and a 24.59% reduction in area at the cost of only 0.1% accuracy loss.
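For illustration, here is a minimal NumPy sketch of the weight-pairing idea: after rounding to a grid of size 0.05, weights of equal magnitude and opposite sign are paired so that w*x1 + (-w)*x2 collapses to w*(x1 - x2), trading one multiply and one add for one subtract and one multiply. This pairing rule is one plausible reading of the "combinations" described above, not the paper's exact algorithm, and the function names are ours.

```python
import numpy as np

def pair_weights_for_subtraction(weights, rounding_size=0.05):
    """Round weights to a grid, then pair weights of equal magnitude and
    opposite sign (illustrative sketch, not the paper's exact procedure)."""
    rounded = np.round(np.asarray(weights) / rounding_size) * rounding_size
    pairs, index_by_value = [], {}
    for i, w in enumerate(rounded):
        if w == 0.0:
            continue
        j = index_by_value.pop(-w, None)
        if j is not None:
            pairs.append((i, j))          # rounded[i] == -rounded[j]
        else:
            index_by_value.setdefault(w, i)
    return rounded, pairs

def dot_with_subtractors(x, rounded, pairs):
    """Dot product where each pair costs one subtraction plus one multiply."""
    covered = {k for pair in pairs for k in pair}
    total = 0.0
    for i, j in pairs:
        total += rounded[i] * (x[i] - x[j])   # subtraction replaces a multiply+add
    for k in range(len(rounded)):
        if k not in covered:
            total += rounded[k] * x[k]        # regular multiply-accumulate
    return total

w = np.array([0.12, -0.11, 0.27, 0.04, -0.26])
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
rounded, pairs = pair_weights_for_subtraction(w)   # 0.10 pairs -0.10, 0.25 pairs -0.25
print(pairs, dot_with_subtractors(x, rounded, pairs), rounded @ x)
```

Both print statements agree, confirming the pairing preserves the rounded dot product while exposing subtraction opportunities.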
State-space models (SSMs) are a class of fundamental models in control theory. SSMs are well known for their concise mathematical representations and their capability to capture the evolving dynamics of systems. They have also proven useful in time series modeling. Recent studies suggest that SSMs conceptually generalize classic machine learning models (CNNs, RNNs, and RNN variants) and provide theoretical justification for the design of novel sequence models. In this work, we investigate one type of SSM, the Legendre Memory Unit (LMU) and its parallelized variant (LMUFFT), and propose a stacking strategy that leverages the LMUFFT backbone and a Deep Adaptive Input Normalization (DAIN) scheme. Model performance is evaluated on short-term time series forecasting tasks formulated from real-world data. The proposed structure outperforms traditional machine learning models in efficiency and prediction capability. Our results also suggest that this family of state-space models has potential in machine learning research. By shedding light on the benefits of SSMs in short-term time series forecasting, we hope to pique the interest of machine learning researchers in the use of SSMs and inspire further investigation into novel model designs based on them.
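To make the front-end concrete, below is a simplified PyTorch sketch of a DAIN-style adaptive input normalizer (after Passalis et al.) that would sit before the LMUFFT backbone. The layer sizes and the omission of DAIN's gating stage are our simplifications, not the paper's specification.

```python
import torch
import torch.nn as nn

class SimpleDAIN(nn.Module):
    """Simplified Deep Adaptive Input Normalization: a learned,
    input-conditioned shift and scale applied per feature before
    the sequence backbone."""
    def __init__(self, n_features):
        super().__init__()
        self.shift = nn.Linear(n_features, n_features, bias=False)
        self.scale = nn.Linear(n_features, n_features, bias=False)

    def forward(self, x):                       # x: (batch, time, features)
        mean = x.mean(dim=1)                    # per-series summary statistic
        a = self.shift(mean)                    # adaptive shift
        x = x - a.unsqueeze(1)
        std = x.std(dim=1) + 1e-8
        b = torch.relu(self.scale(std)) + 1e-8  # adaptive scale, kept positive
        return x / b.unsqueeze(1)

dain = SimpleDAIN(n_features=3)
x = torch.randn(8, 96, 3) * 5 + 2               # a batch of raw multivariate series
print(dain(x).shape)                            # torch.Size([8, 96, 3])
```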
Protein physicochemical properties strongly influence protein structure quality; hence they have been used to differentiate native or native-like structures from pools of decoy modelled protein structures. In this work, we first evaluated the TOPSIS score and then explored 15 machine learning methods with four quality parameters, i.e., Global Distance Test Total Score (GDT_TS), root-mean-square deviation of the entire target structure (RMS_CA), Template Modelling Score (TM-score), and Z-Score[D], to predict the rank of different modelled protein structures in the absence of the native protein structure. In related work, the protein structure prediction center used only one parameter, the GDT_TS score, to determine the ranking of modelled protein structures, whereas this work uses four parameters. This research largely focuses on predicting the quality of modelled protein structures whose true native structure is missing. A total of 2,400 modelled protein structures were collected from CASP-13 and CASP-14. The TOPSIS method is used to compute the TOPSIS score, which is then used to predict the rank of modelled protein structures using various machine learning methods. Comprehensive experiments establish that the random forest method surpasses the other machine learning methods, making the prediction both economical and fast. On the testing dataset, the random forest method achieves a rank-prediction correlation of 1, an R² of 1, an RMSE of 19.56, and an accuracy of 95.84% (within ±0.1 error).
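TOPSIS itself is a standard multi-criteria ranking procedure, sketched below in NumPy. The equal weights and the example matrix are illustrative assumptions; the paper does not state its exact weighting here.

```python
import numpy as np

def topsis_score(X, weights, benefit):
    """TOPSIS closeness scores for a decision matrix X (alternatives x criteria).
    benefit[j] is True if criterion j is better when larger (e.g., GDT_TS,
    TM-score) and False if better when smaller (e.g., RMS_CA)."""
    R = X / np.linalg.norm(X, axis=0)          # vector-normalize each criterion
    V = R * weights                            # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_best  = np.linalg.norm(V - ideal, axis=1)
    d_worst = np.linalg.norm(V - anti, axis=1)
    return d_worst / (d_best + d_worst)        # in [0, 1]; higher is better

# Example: 3 modelled structures scored on [GDT_TS, RMS_CA, TM-score, Z-score]
X = np.array([[85.0, 2.1, 0.90, 5.2],
              [60.0, 4.8, 0.71, 3.1],
              [72.0, 3.0, 0.80, 4.0]])
scores = topsis_score(X, weights=np.array([0.25] * 4),
                      benefit=np.array([True, False, True, True]))
print(np.argsort(-scores))                     # structure indices, best to worst
```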
Real-time video streaming has become the largest portion of internet traffic in recent years. Therefore, improving the efficiency of video coding remains an important research issue. Modern video codecs perform inter-frame prediction by motion estimation. However, inter-frame prediction is one of the most computationally expensive and time-consuming operations in video coding. Convolutional neural networks (CNNs) have been used in recent research for inter-frame prediction tasks. The CNN architectures in previous work use floating-point arithmetic, whereas motion estimation in video codecs uses only integer arithmetic. Thus, inter-frame prediction using CNNs instead of motion estimation may not always yield better time complexity. Floating-point CNNs can be quantized into integer CNNs. Integer CNNs reduce network latency but can also lose prediction accuracy. In this paper, we investigate the latency-versus-accuracy trade-off of quantized CNNs in inter-frame bi-prediction. We present experimental results demonstrating that the integer CNN is at least 5% faster than the floating-point CNN, while its prediction quality degradation is no more than 0.6 dB in PSNR.
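The quantization step underlying integer CNNs is the standard affine mapping, sketched below. The 8-bit width and per-tensor granularity are illustrative choices, not necessarily the paper's exact scheme.

```python
import numpy as np

def affine_quantize(x, num_bits=8):
    """Affine quantization: q = clamp(round(x / scale) + zero_point)."""
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = float(x.max() - x.min()) / (qmax - qmin)
    scale = scale if scale > 0 else 1e-8          # guard constant tensors
    zero_point = int(round(qmin - float(x.min()) / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return scale * (q.astype(np.float32) - zero_point)

w = np.random.randn(3, 3).astype(np.float32)      # e.g., one conv kernel
q, s, zp = affine_quantize(w)
print("max quantization error:", np.abs(w - dequantize(q, s, zp)).max())
```

The quantization error printed here is the per-weight analogue of the PSNR degradation measured at the network level.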
With the increasing penetration of wind power, accurately predicting wind speeds is essential for the planning and operation of power grids. In this paper, a short-term deep learning-based wind speed forecasting approach is proposed using a one-dimensional convolutional neural network (1D CNN), which aggregates the weather information of the last hour to accurately predict the hour-ahead wind speed. Input feature selection, data preprocessing, and model evaluation are discussed. Wind speed at a specific time can be predicted in a few milliseconds using the proposed approach together with the meteorological data measured an hour earlier. Three years of historical wind speed data (2020-2022) measured at the Saskatoon International Airport, Saskatoon, Saskatchewan, Canada are used in this study. Experimental results verify that this 1D CNN-based technique provides accurate wind speed prediction and can contribute to sustainable energy development in Saskatchewan and beyond.
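A minimal Keras sketch of such an hour-ahead 1D CNN is shown below. The layer sizes, the 12-step window, and the four weather features are our assumptions, not the paper's tuned configuration.

```python
import tensorflow as tf

n_steps, n_features = 12, 4   # e.g., 12 five-minute samples of 4 weather variables

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu",
                           input_shape=(n_steps, n_features)),
    tf.keras.layers.Conv1D(64, kernel_size=3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),                 # wind speed one hour ahead
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```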
Distribution transformers are key components in distribution systems for maintaining reliable system operation and reducing power outages. In this paper, a literature review is conducted on data-driven methods for distribution transformer health monitoring, classifying the research streams and emphasizing advancements in machine learning, artificial intelligence, and hybrid approaches in this area. The significance of data-driven methods is highlighted, demonstrating their ability to overcome the limitations of traditional analytics by providing real-time monitoring, prediction, and adaptability. As distribution systems continue to expand due to the increasing penetration of distributed energy resources (DERs) and electric vehicles (EVs), data-driven techniques emerge as a dependable and adaptable option for effective transformer health monitoring.
In this paper, a new configuration of a non-isolated zeta-flyback converter is proposed. A zeta converter can reduce output current and voltage ripples, and by interleaving a flyback converter with it, the proposed converter becomes an excellent choice for interfacing solar photovoltaic (PV) panels, fuel cells, and ultra-capacitors due to its high gain, high efficiency, and low cost. The flyback section of the converter, including a transformer, increases the output voltage gain and forms a step-up converter. However, the switch used in this converter experiences high voltage and current stress during switching operations, lowering system efficiency. To increase the converter's efficiency and achieve zero switching losses, a soft-switching method based on a resonance structure with an auxiliary switch and a diode is applied to the zeta-flyback converter. The performance of the designed converter is validated through simulation in the Power Sim software.
The increasing integration of renewable energy sources, such as solar photovoltaic (PV) systems, has introduced significant challenges in the planning and operation of electric power grids. Frequency control is an essential function for renewable energy sources connected to the grid through interfacing inverters. More PV systems connected to a power system reduce the system's inertia because of their power electronic interfaces, and may cause frequency instability in the system. In this paper, advanced frequency control techniques with and without auxiliary devices are reviewed. Batteries, flywheels, superconducting storage devices, and static synchronous compensators (STATCOMs) are among the auxiliary devices reviewed. Inertia emulation, de-loading, and grid-forming are frequency control techniques that require no auxiliary devices. Their benefits and drawbacks are highlighted, and future research directions in frequency control are recommended.
As the adoption of plug-in electric vehicles (PEVs) and rooftop photovoltaic (PV) systems continues to rise, understanding their combined impact on the health of distribution transformers becomes increasingly vital to ensure grid stability and longevity. In this paper, such combined impacts are investigated through a case study in Saskatoon, Canada. Using OpenDSS and MATLAB simulations, various PEV penetration levels and PV generation capacities are explored. Transformer heating due to ambient temperature and loading is considered. The results show that increased PEV adoption exacerbates transformer stress, while PV integration mitigates these effects, especially during summer, which highlights the need to encourage rooftop PV adoption to balance the PEV charging demand on transformers. This paper offers valuable insights for electric utilities and policymakers to manage grid infrastructure and promote sustainable energy practices.
Safety is the cornerstone on which the commercial airline industry is built. However, maintaining an aircraft is expensive, and traditional inspection takes a long time and is prone to mistakes. The time required for general visual inspections of aircraft can be drastically reduced by using deep learning and remotely piloted aircraft systems (RPAS). Deep learning techniques can be applied to aircraft maintenance thanks to the availability of graphics processing units (GPUs). In our proof-of-concept study, we use YOLOv5 to build a model trained on high-quality data to detect five different aircraft flaws.
Detecting anomalies in videos is a crucial and intriguing task in surveillance systems. It is inherently a sequential modeling problem that requires careful selection of spatially and temporally dependent patterns from a sequence of frames. Several research works, from traditional approaches to modern deep learning-based techniques, have been introduced to address this problem. However, there is still great demand for research and development to improve the performance of existing solutions. In response, this study proposes an improved video anomaly detection model using deep features extracted from a dual-modality input representation. The proposed model demonstrates its effectiveness on the benchmark UCF-Crime dataset, achieving a best AUC of 87.52%, an improvement of ≈12.3% over a baseline. Applications of this work include strengthening security measures in public places, viz. airports, banks, public transit, schools, and shopping complexes, by detecting aberrant or suspicious activities in surveillance videos.
One of the most common insect pests attacking wheat crops in North America is the orange wheat blossom midge (WM), Sitodiplosis mosellana (Diptera: Cecidomyiidae). WM larvae cause significant feeding damage to wheat kernels, decreasing yield and productivity. To determine when WM adults emerge and to help estimate population size and threat level, manual counts of WM males attracted to pheromone-baited sticky traps can be used. This method is labour-intensive due to the often large numbers of WM males stuck to traps (1,500-3,000), which can take around one hour to count properly; if multiple traps per field are used, the counting time is multiplied. A machine vision system that monitors the traps with high frequency (48 times a day) is more convenient because it can continuously collect and analyze large amounts of data quickly and accurately. This research utilizes a state-of-the-art object detection network, You Only Look Once version 8 (YOLOv8), to detect and count WMs in images taken of white sticky cards under natural field settings. It achieves a mean average precision (mAP at 0.5 IoU) of 87.11% and an mAP at 0.5-0.95 IoU of 43.55% in detecting WMs, with 98.7% precision and 99.03% recall. These results improve on the previously top-performing object detection model, YOLOv5, which achieved an mAP at 0.5 IoU of 77.37%, an mAP at 0.5-0.95 IoU of 41.07%, a precision of 86.07%, and a recall of 88.46%.
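With the ultralytics YOLOv8 API, the counting step reduces to running detection and counting boxes, as in the sketch below. The weights file, image path, and confidence threshold are placeholders; the paper's trained model and thresholds are not reproduced here.

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                    # or a model fine-tuned on WM trap images
results = model.predict("sticky_trap.jpg", conf=0.25)
midge_count = len(results[0].boxes)           # one bounding box per detected midge
print(f"Detected {midge_count} wheat midges")
```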
Falls are a significant cause of injury and mortality, particularly among the elderly. Early detection of falls is crucial to mitigating their impact. Thermal imaging is a promising technology for detecting falls, as it is non-invasive and operates in low-light conditions. However, accurately detecting falls in thermal images remains challenging due to their low resolution and lack of color information. This paper proposes a novel approach for improving fall detection in thermal image data using a stacking ensemble of Autoencoder (AE) and 3D Convolutional Neural Network (3D-CNN) models fed into a meta-neural network trained to detect falls and non-falls. The effectiveness of the proposed system is demonstrated through ablation studies on the publicly available benchmark dataset "Thermal Simulated Fall", on which it achieves an accuracy of 83%. Comparative analysis shows that the proposed solution outperforms an AE-based baseline by 9.2% in fall detection performance. Combining autoencoders and 3D-CNNs allows us to harness the power of both supervised and unsupervised learning methodologies while mitigating the limitations and biases of each individual model, offering a promising solution for accurate and efficient fall detection in thermal images.
Neuropsychiatric Symptoms (NPS) often manifest in People Living with Dementia (PwD), with agitation being one of the most common. Agitated behavior in PwD causes distress and raises the risk of injury to patients and caregivers. Therefore, detecting agitation events is essential for the safety of PwD and the people around them. AI-powered tools can monitor agitation behavior, alert care providers to instances of agitation, and help them respond quickly and effectively to improve the quality of life of PwD. Furthermore, research shows that selecting the proper set of features significantly affects a machine learning model's outcomes and performance. This work investigates using a new set of features with various machine learning models to detect individual patterns of NPS. These features are extracted from sensor data collected by multi-modal wearable devices from 17 PwD admitted to a specialized dementia unit in Canada. Several machine learning models are trained using these features, and our findings show that Extra Trees achieves higher performance with the new feature set than with the state-of-the-art feature set known to date. Performance evaluation shows that the new personalized models successfully classify behavioral symptoms with a median AUC of 0.941.
Most deaths from acute myocardial infarction occur outside the hospital environment. A recent proposal, cardiac sonothrombolysis with microbubbles, has the potential to significantly improve patient care. However, it is essential that the main phenomenon, the cavitation of microbubbles, be controlled to avoid harm to the patient. The objective of our work was to investigate the detection of cavitation in sonothrombolysis in order to allow control of pressure intensities, aiming at safer equipment for the patient. Sources of stable and inertial cavitation were simulated, and their waves were propagated through a medium. The signals were received by an 8x8 matrix of ultrasound transducers designed for both emission and reception. The acoustic source was estimated by coherent summation using a delay-and-sum approach. The signal-to-noise ratio (SNR) of the source signal in the frequency domain was analyzed in characteristic bands. Mean effective SNRs in each band, after discounting effects from nearby bands, were used for cavitation detection. Using narrowband receivers, the area under the receiver operating characteristic (ROC) curve (AUC) for the threshold SNR was as high as 0.94 for stable cavitation detection and 0.91 for inertial cavitation; using broadband receivers instead, these figures increased to 0.95 and 0.99, respectively. The obtained sensitivity and specificity were in the range of 0.77 to 0.96. We conclude that it is possible to discriminate cavitation types with a single transducer set, or alternatively with broadband receivers.
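Delay-and-sum beamforming itself is a standard operation, sketched below in NumPy for an 8x8 receive grid. The array geometry, sampling rate, pulse shape, and source position are illustrative values, not the paper's simulation parameters.

```python
import numpy as np

def delay_and_sum(signals, sensor_pos, source_pos, fs, c=1540.0):
    """Coherently sum sensor signals after compensating the propagation delay
    from a hypothesized source. signals: (n_sensors, n_samples); positions in m."""
    dists = np.linalg.norm(sensor_pos - source_pos, axis=1)
    delays = (dists - dists.min()) / c                    # relative delays (s)
    shifts = np.round(delays * fs).astype(int)
    n = signals.shape[1] - shifts.max()
    aligned = np.stack([s[d:d + n] for s, d in zip(signals, shifts)])
    return aligned.mean(axis=0)                           # beamformed trace

# Demo: 8x8 grid (1 mm pitch) receiving a simulated 2 MHz pulse source.
rng = np.random.default_rng(0)
fs, c = 40e6, 1540.0
grid = np.stack(np.meshgrid(np.arange(8), np.arange(8)), -1).reshape(-1, 2) * 1e-3
pos = np.hstack([grid, np.zeros((64, 1))])
src = np.array([3.5e-3, 3.5e-3, 30e-3])                   # source 30 mm from the array
t = np.arange(2048) / fs
def pulse(t0):                                            # Gaussian-windowed tone
    return np.sin(2 * np.pi * 2e6 * (t - t0)) * np.exp(-((t - t0) * 4e6) ** 2)
dists = np.linalg.norm(pos - src, axis=1)
sigs = np.stack([pulse(d / c) for d in dists])
sigs += 0.1 * rng.standard_normal(sigs.shape)
print(delay_and_sum(sigs, pos, src, fs).shape)
```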
With the advancement of Artificial Intelligence (AI) in computer vision, AI-aided medical systems have become essential, for example in detecting cancerous breast masses. To fight this kind of deadly disease, detection at an early stage plays a vital role, and a deep learning model can be trained to detect cancerous masses without delay. There is widespread agreement that efficient deep network training requires thousands of annotated training examples. With this in mind, we experimented with different U-Net-based deep learning models, traditional U-Net, ResU-Net, and Connected U-Net, to compare how well each detects breast cancerous tissue. By analyzing these U-Net variants, a novel U-Net architecture, Connected ResU-Net, is developed by combining the ResU-Net and Connected U-Net algorithms. The model is trained for 50 epochs, and accuracy is measured with the Mean IoU (intersection over union) metric, which gives a Mean IoU of 72.50% on the training dataset and 63.10% on the testing dataset.
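The Mean IoU metric reported above is computed as sketched below; the tiny masks are illustrative.

```python
import numpy as np

def mean_iou(y_true, y_pred, n_classes=2):
    """Mean intersection-over-union for segmentation masks.
    y_true, y_pred: integer class maps of the same shape."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(y_true == c, y_pred == c).sum()
        union = np.logical_or(y_true == c, y_pred == c).sum()
        if union > 0:
            ious.append(inter / union)        # skip classes absent in both
    return float(np.mean(ious))

gt   = np.array([[0, 1], [1, 1]])
pred = np.array([[0, 1], [0, 1]])
print(mean_iou(gt, pred))                     # 0.5833... = (1/2 + 2/3) / 2
```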
Dielectric spectroscopy using open-ended coaxial probe measures the complex permittivity of a medium as a function of frequency by applying an electromagnetic field (EF) and observing the energy reflected back. In heterogeneous tissue, a critical parameter that defines the measurement accuracy is the penetration depth (PD) of the EF in the tissue. This paper evaluates the effect of tissue and probe parameters on the PD under different simulation conditions considering an open-ended coaxial probe inserted into a 2-layered tissue in 54,000 simulations. A model for extracting the permittivity from the reflection coefficient is also described. The results show for the first time that the PD increases as the difference in permittivity between the layers increases, and that the PD decreases with the excitation frequency but increases with the diameter of the probe. These findings can greatly aid in quantifying the sensitivity of biological tissue classification using dielectric spectroscopy.
This paper investigates the categorization of the speech Frequency Following Response (sFFR) evoked by the five English vowels using machine learning models. As part of the control system of a brain-controlled hearing aid, models based on Convolutional Neural Networks (CNNs) are used to extract spectral features from sFFR signals to classify vowels. The highest average accuracy reaches 80.00% for a CNN with loose recurrent connections. In addition, we tested the performance of the models when different additive noise is applied to the input sFFR signals and found that the loose recurrent connections give the CNN model stronger robustness. We also use motif topology to analyze the content features of the lateral connection network after training. Finally, we compare our work with previous research. The results show the potential of machine learning models for categorizing sFFR signals in brain-controlled hearing aids, especially in the presence of noise.
Regulating emotion is crucial for maintaining well-being and social relationships. However, as we age, the volume of the frontal lobes is reduced, which can cause difficulties in regulating emotions. Electroencephalography (EEG)-based emotion recognition has the potential to capture the complexity of human emotions and the frontal lobe atrophy that leads to cognitive impairment. In this study, we investigated a multimodal deep learning approach for subject-independent emotion recognition using EEG and eye movement data. To that end, we proposed an attention mechanism layer to fuse the features extracted from the EEG and eye movement data. We tested our approach on two benchmark emotion recognition datasets, SEED-IV and SEED-V, achieving average accuracies of 67.3% and 72.3%, respectively. Our results demonstrate the potential of multimodal deep learning models for subject-independent emotion recognition using EEG and eye movement data, which can have important implications for assessing emotional regulation in clinical and research settings.
Speech enhancement (SE) is crucial for reliable communication devices and robust speech recognition systems. Although conventional artificial neural networks (ANNs) have demonstrated remarkable performance in SE, they require significant computational power and incur high energy costs. In this paper, we propose a novel approach to SE using a spiking neural network (SNN) based on a U-Net architecture. SNNs are suitable for processing data with a temporal dimension, such as speech, and are known for their energy-efficient implementation on neuromorphic hardware. They are thus interesting candidates for real-time applications on devices with limited resources. The primary objective of this work is to develop an SNN-based model with performance comparable to a state-of-the-art ANN model for SE. We train a deep SNN using surrogate-gradient-based optimization and evaluate its performance using perceptual objective tests under different signal-to-noise ratios and real-world noise conditions. Our results demonstrate that the proposed energy-efficient SNN model outperforms the Intel Neuromorphic Deep Noise Suppression Challenge (Intel N-DNS Challenge) baseline solution. Furthermore, the SNN model achieves acceptable performance compared to an equivalent ANN model while consuming 53.97× less compute energy.
The area, timing, and power characteristics of an FPGA tile are important metrics that must be measured accurately in order to evaluate the performance of a chosen FPGA architecture. This work investigates the accuracy of classical statistical and machine learning models in predicting the post-synthesis cell count, cell area, net area, total area, worst-case delay, and leakage power of 7 nm FinFET standard-cell-based FPGA tiles. We find that these metrics can be predicted with a maximum percentage deviation of less than 11%.
Emotion recognition is a topic of interest in Affective Computing (AC). While deep learning architectures have gained popularity for classification tasks, their reliance on large datasets limits their applicability when data are scarce. An alternative approach is feature engineering, which involves extracting relevant features to train supervised machine learning models. Neuroscientific theories of emotion processing, such as the lateralization theory, have motivated the introduction of asymmetry features for emotion prediction. However, no prior study has statistically evaluated whether including asymmetry features reduces classification error or computational time. To address this gap, the current work compared two approaches for emotion recognition. The first used features extracted from individual EEG channels, while the second used asymmetry features calculated from matched pairs of EEG nodes. The two approaches were compared in terms of performance and model fitting time. The comparison indicated that the difference in performance between the two approaches was not statistically significant. Notably, the asymmetry approach required less computational time for the training stage. This finding implies that incorporating asymmetry features in emotion recognition models is viable when computational resources are limited, without significantly compromising performance.
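One common formulation of such an asymmetry feature, band-power difference between a matched left/right electrode pair, is sketched below; the F3-F4 pairing and the alpha band are illustrative choices, and the paper's exact feature definition may differ.

```python
import numpy as np
from scipy.signal import welch

def asymmetry_feature(left, right, fs=128, band=(8.0, 13.0)):
    """Differential asymmetry for one electrode pair: band power on the
    left channel minus band power on the right."""
    def band_power(x):
        f, pxx = welch(x, fs=fs, nperseg=min(len(x), 256))
        mask = (f >= band[0]) & (f <= band[1])
        return np.trapz(pxx[mask], f[mask])
    return band_power(left) - band_power(right)

fs = 128
f3 = np.random.randn(fs * 4)     # 4 s of synthetic "F3" data
f4 = np.random.randn(fs * 4)     # 4 s of synthetic "F4" data
print(asymmetry_feature(f3, f4, fs=fs))
```

One asymmetry value per electrode pair roughly halves the feature count relative to per-channel features, which is consistent with the reduced training time reported above.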
Cancer is a leading cause of morbidity and mortality worldwide, with an estimated 10 million deaths attributed to cancer each year. Among the various types of cancer, breast cancer is the most commonly diagnosed cancer among women, accounting for approximately 30% of all new cancer cases in women. Periodic clinical checks and self-tests assist in the early identification of breast cancer, and detection at an early stage helps patients receive suitable treatment, increasing their chances of survival. Automated clinical solutions, such as computer-aided detection and AI-based algorithms, show increasing promise in improving the accuracy and efficiency of breast cancer screening and diagnosis. These solutions have the potential to revolutionize breast cancer detection and diagnosis, but several serious challenges must be addressed, such as the lack of standardization and regulation of these technologies, the need for large amounts of data for training and validation, and patients' privacy and data security. To address these challenges, we adopt a novel Federated Learning architecture that, rather than sharing data, enables knowledge integration by sharing each client's model parameters during the federated training process. In this paper, we propose a federated Support Vector Machine (SVM) for the early detection of breast cancer, combining the potential benefits of Federated Learning, the Firefly algorithm, and the SVM. The proposed model leverages a distributed computing approach, allowing the SVM model to be trained across multiple datasets while preserving data privacy. In addition, we use the Firefly algorithm as a feature selection technique to identify the most relevant features for breast cancer detection. We evaluated the proposed model on a publicly available breast cancer dataset, namely the Wisconsin dataset. The experimental results show that our model achieves an accuracy of 95.68%, a significant improvement over the results of existing works. Finally, we highlight the significance of the proposed work and the potential benefits of employing Federated Learning in breast cancer detection.
Accurate wind speed forecasting is essential for power dispatch scheduling and the energy commitment of wind farms. As a conventional approach to wind speed prediction, Auto-Regressive Moving Average (ARMA) models are only accurate for very short-term/short-term horizons within 0-6 hours. To overcome this issue, in this paper, a machine learning-based approach known as the Boosted Regression Tree (BRT) algorithm is developed for wind speed forecasting and compared with ARMA models at different time horizons. It is found that, as the forecasting time horizon increases, the BRT model outperforms the ARMA model significantly. Historical wind speed data measured at the Meter Station at the Saskatoon International Airport, Saskatoon, Canada in 2022 are used for the forecasts.
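A minimal scikit-learn sketch of a boosted-regression-tree forecaster is shown below: lagged wind speeds as inputs, the speed h steps ahead as the target. The lag count, horizon, tree settings, and synthetic series are illustrative, not the paper's tuned configuration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def make_supervised(series, n_lags=24, horizon=6):
    """Turn a wind speed series into (lagged inputs, h-step-ahead target) pairs."""
    X, y = [], []
    for i in range(len(series) - n_lags - horizon + 1):
        X.append(series[i:i + n_lags])
        y.append(series[i + n_lags + horizon - 1])
    return np.array(X), np.array(y)

rng = np.random.default_rng(1)
speeds = np.sin(np.linspace(0, 50, 2000)) + 0.1 * rng.standard_normal(2000)
X, y = make_supervised(speeds)
split = int(0.8 * len(X))
brt = GradientBoostingRegressor(n_estimators=300, max_depth=3, learning_rate=0.05)
brt.fit(X[:split], y[:split])
rmse = np.sqrt(np.mean((brt.predict(X[split:]) - y[split:]) ** 2))
print(f"6-step-ahead test RMSE: {rmse:.3f}")
```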
Solar energy harvesting systems operate efficiently only above a relatively high threshold luminance, exhibiting 0% efficiency below that threshold. This raises the topic of sensitivity: the minimum input power at which efficiency remains acceptable and the harvester/charger can still operate. The efficacy of solar power harvesters has been an ongoing subject of study; however, the concept of sensitivity remains unexplored. The objective of this paper is to discuss sensitivity in low-power solar harvesters and to compare two approaches in terms of their sensitivity and efficiency. These topologies are dissected, and experimental results are demonstrated.
With the popularization of the smart grid, the potential cyber-security vulnerabilities of the power system are increasing. False data injection attacks (FDIA), which can stealthily bypass the system operator's bad data detection (BDD), threaten the power system the most. FDIA are typically designed to attack the vulnerable nodes of the system, so the system operator needs to locate those vulnerable nodes in order to defend against them. In this paper, we consider using quantum computing to locate those vulnerable nodes. A quantum vulnerable node location framework (QVNLF) is designed to locate vulnerable nodes against FDIA by applying the quantum approximate optimization algorithm (QAOA). We also prove that attacking the vulnerable nodes located by the minimum cut-set (MCS) method is the most harmful to the system. The performance of the proposed method is evaluated in case studies based on the IEEE 5-bus and IEEE 9-bus transmission systems. The results obtained from our method match those obtained from the traditional Stoer-Wagner algorithm, which verifies the effectiveness of the proposed quantum computing-based framework.
Greenhouse gas emission reduction goals set by the United Nations are urging countries all over the world to switch from traditional power generation to renewable energy, such as photovoltaic (PV) and wind power. Weather conditions highly affect these renewable energy (RE) plants' power production, and uncontrollable weather changes may cause voltage and frequency imbalances in electric power grids. The lack of dispatchability of renewable energy sources (RES) forces grid operators to curtail the power generation at these RE plants if the demand is not high enough, wasting all the power that could have been generated at that time of the day. The integration of Battery Energy Storage Systems (BESS) with these RE plants can mitigate the power quality issues and provide the power grid with a smooth and controlled output. In addition, the BESS can provide ancillary services to power plants and grid operators, such as frequency control and peak-shaving. The objective of this paper is to perform an in-depth literature review of BESS technologies and their current applications in power systems.
Controlling the temperature of office buildings plays an important role in optimal energy management. Proper heating, ventilation, and air-conditioning (HVAC) systems, together with careful selection of the facility's construction materials, can help reduce energy costs and CO2 emissions. In this paper, we present an analysis of the heating and cooling system of a real university laboratory using actual data. These data were gathered from the laboratory using digital relative humidity and temperature (DHT) sensors. We then design and simulate all materials of the laboratory building using Google SketchUp for 3D design, and simulate the thermal elements, including heating, cooling, infiltration, ventilation, gains, losses, and the control system, using the TRNSYS software. The study uses one month of continuous data at one-minute intervals, based on the weather data of Regina, SK, Canada. The simulation results are compared with the real DHT sensor data, and the comparative analysis confirms the effectiveness of the simulation.
This study presents a convolutional neural network (CNN) architecture developed using the TensorFlow framework to accurately recognize individual letters of American Sign Language (ASL). The CNN architecture consists of various layers including two-dimensional convolutional layers, max-pooling layers, batch normalization layers, dropout layers, and fully connected layers. The model achieved a mean validation accuracy of 95.48% and a test accuracy of 99.77% in identifying ASL characters. However, the live visual depictions revealed certain difficulties encountered by the model in identifying some ASL letters, highlighting the need for further improvement of the model's framework and dataset curation. This research contributes to the scholarly discussion on the use of machine learning approaches in identifying sign language alphabets and provides insights into the feasibility and effectiveness of utilizing these techniques in ASL recognition tasks.
In multi-image super-resolution, most advanced models adopt a strategy of calculating an increment and adding it to a baseline image. However, most existing work focuses on obtaining the increment by modeling the correlation between input images with deep learning techniques, while little attention is paid to the computation of the baseline, which is typically obtained by simply averaging the input images. This paper proposes an improved model that replaces averaging with a self-attention mechanism in the existing PIUNet model, making the baseline computation phase more powerful. Experimental results show that, compared to the original model, our improved model not only improves over state-of-the-art models on a subset of the PROBA-V dataset but also reduces the required training time.
Detecting students' educational emotions is important, as these emotions play a vital role in the learning process. In a regular classroom, instructors can generally gauge students' emotions by observing their facial expressions, but in an online learning environment this is quite difficult. Many state-of-the-art deep learning architectures can detect emotion from facial expressions, but they are very deep and unsuitable for deployment on edge devices. In this study, we propose and develop a lightweight deep learning model that can be deployed on edge devices. We evaluate our model using a dataset collected from an online learning environment, and the evaluation shows that our model achieves accuracy highly competitive with state-of-the-art models.
When it comes to computer vision, scene parsing is a crucial part of semantic segmentation. It has a wide range of applications, including autonomous driving, robotics, gaming, natural language processing, object detection, and image and video editing. Semantic segmentation works by classifying each pixel of an image according to the object it belongs to, and scene parsing provides contextual information to improve the accuracy and robustness of deep learning models used for this purpose. In this study, we used the Fully Convolutional Network (FCN-8) architecture, a popular deep learning-based technique that achieves higher accuracy than traditional and state-of-the-art methods. This is achieved by creating hierarchies of distinctive features in an image. The FCN-8 is used to perform semantic segmentation efficiently, taking an image of any size as input and producing correspondingly sized output with effective inference and learning. To fine-tune the FCN-8 for the MIT Scene Parsing Challenge Dataset, we employed a transfer learning approach. Our results showed that our proposed approach achieved an accuracy of 72% on the dataset. This is significant given the relatively small number of samples and the 150 classes of objects. Our work demonstrates a successful pilot study for deploying transfer learning and the FCN-8 architecture for scene parsing and semantic segmentation.
Prostate cancer (PCa) is the most common type of cancer in men worldwide. It starts in the small walnut-shaped male gland called the prostate, from which it can metastasize to other organs. If detected and diagnosed early, the survival rate may increase to 95%. Therefore, early detection and diagnosis are important tasks performed by a pathologist. The pathologist identifies the severity level using a scale called the Gleason grading group (GGG): the pathologist examines a biopsy sample and assigns it a grade of low, intermediate, or high, then assesses a second sample in the same manner; adding these two scores gives the total Gleason score, which determines the GGG. In this paper, we explore tissue microarray (TMA) and clinical data collected by pathologists of Alberta Precision Laboratory to predict the severity level of prostate cancer using various machine learning methods. Traditional classifiers, such as Naïve Bayes, Decision Tree, Support Vector Machine with a radial basis function (RBF) kernel, and Logistic Regression, and ensemble classifiers, such as Random Forest and Bagging with k-nearest neighbors, are applied through a machine learning pipeline containing imputation and sampling techniques. An integrated SMOTE-Tomek Links method is adopted for handling the class imbalance problem. The highest accuracy obtained is 99.64%, from the random forest method.
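The pipeline shape described above (imputation, then SMOTE-Tomek resampling, then a classifier) can be sketched with imbalanced-learn as below. The hyperparameters and imputation strategy are illustrative, and synthetic data stands in for the private TMA/clinical features.

```python
from imblearn.combine import SMOTETomek
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score

pipeline = Pipeline([
    ("impute", SimpleImputer(strategy="median")),      # fill missing clinical values
    ("resample", SMOTETomek(random_state=42)),         # balance the severity classes
    ("clf", RandomForestClassifier(n_estimators=200, random_state=42)),
])

# Synthetic imbalanced stand-in for the private TMA/clinical features.
X, y = make_classification(n_samples=500, n_classes=3, n_informative=6,
                           weights=[0.7, 0.2, 0.1], random_state=0)
print(cross_val_score(pipeline, X, y, cv=5, scoring="accuracy").mean())
```

Using the imblearn Pipeline (rather than scikit-learn's) ensures resampling is applied only to training folds, avoiding leakage during cross-validation.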
Robotic Process Automation (RPA) can minimize human errors, improve efficiency, and create a seamless operational environment in the healthcare industry. This paper examines an existing radiology imaging requisition system, which requires human labour for medical request processing and classification. To improve this slow, error-prone, and hard-to-scale process, we design an RPA approach that significantly improves efficiency. The proposed RPA-based system consists of automatic fax forwarding, optical character recognition (OCR), form classification, and automatic file storage. Compared with the existing methods, the proposed RPA approach offers much faster processing (a 94% reduction in processing time), much lower cost (a 98.4% reduction), easier scaling, and more efficient error handling.
Transportation hub cities play a significant role in the spread of infectious diseases due to their role as centers of connectivity for local, national, and international transportation networks. In this paper, we study the effect of transportation hub cities on the spread of COVID-19 in smaller cities such as Regina and Saskatoon. A time-lagged correlation analysis shows that Canada's international transportation hub cities lead the spread of COVID-19 in smaller non-hub cities, and the correlation is considerable. In addition, we use a univariate and a multivariate LSTM model to demonstrate that considering the disease status in the major transportation hub cities decreases the forecasting error for the two smaller non-hub cities. Overall, the results show that data from the major transportation hub cities are a beneficial indicator for analyzing and predicting disease spread in smaller non-hub cities.
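The time-lagged correlation analysis can be sketched as below: correlate a hub city's case series against a smaller city's series shifted by 0 to max_lag days, and report the lag with the highest correlation. The series names, lag range, and synthetic data are assumptions for illustration.

```python
import numpy as np

def lagged_correlation(hub_cases, city_cases, max_lag=21):
    """Return (lag, r) maximizing the Pearson correlation between the hub
    series and the smaller city's series shifted by `lag` days."""
    results = []
    for lag in range(max_lag + 1):
        a = hub_cases[:len(hub_cases) - lag] if lag else hub_cases
        b = city_cases[lag:]
        results.append((lag, np.corrcoef(a, b)[0, 1]))
    return max(results, key=lambda t: t[1])

rng = np.random.default_rng(0)
hub  = np.cumsum(rng.poisson(5, 200)).astype(float)
city = np.roll(hub, 7) + rng.standard_normal(200) * 3   # city trails hub by ~7 days
city[:7] = city[7]                                       # patch the wrapped-around head
print(lagged_correlation(hub, city))                     # expect lag near 7
```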
Bioelectric signal sensors are susceptible to various contaminants, including motion artifact. Signal quality analysis helps avoid misinterpretations, including misdiagnoses, that can arise from signal contaminants. To further research in the signal quality analysis and cleaning domain, large amounts of contaminated data are required for training, testing, and validating signal quality analysis systems. Motion artifact is a common contaminant, yet databases of motion artifact are currently limited, resulting in the repeated use of the same signals. Here, a database of motion artifact is established by developing a system that removes the electrocardiogram (ECG) component from ambulatory recordings. The Long Term AF Database, which contains 84 ECG recordings, each approximately 24 hours long, was used to create the motion artifact database. The ECG-removal system was validated using artificially contaminated ECG signals, i.e., clean ECG signals combined with simulated motion artifact.
Course discussion forums play a vital role in connecting students with their peers to exchange ideas, opinions, and information in online learning. These forums are not only a key point of contact for the course but also facilitate students' learning. In this paper, we present a short review of applications of machine learning and natural language processing techniques for analyzing course discussion posts to provide insights and improve students' learning outcomes. We categorize these methods into four main groups based on the area of application: automated question answering systems, thread recommender systems, conversational agents, and topic modeling. Automated question answering systems focus on identifying common questions, concerns, and confusion among learners and generating responses without human intervention. Thread recommender systems focus on identifying and recommending useful threads to students. Conversational agents are virtual agents that provide personalized support to students through natural conversation. Topic modeling focuses on identifying the topics of interest most discussed by students. The research findings indicate that course forum analysis can be logically integrated into smart learning environments, transforming the effectiveness and accessibility of online courses. Such integration could improve online learning experiences by providing learners with more personalized and engaging education and instructional support.
Ion channel-modulating peptides play a crucial role in various physiological processes, making their identification a significant area of research. In this study, we present STACKION, a novel stacking-based ensemble machine-learning approach for the identification of ion channel-modulating peptides. Five feature extraction methods, including amino acid composition (AAC), pseudo-amino acid composition (PAAC), dipeptide composition (DPC), tripeptide composition (TPC), and composition-transition-distribution (CTDC), were employed to extract discriminative features from peptide sequences. Additionally, eight machine-learning algorithms were applied to build predictive models. Through extensive experiments and evaluation, we demonstrate that our proposed method, STACKION, consistently outperformed the other approaches in predicting sodium (Na+) and calcium (Ca+) ion channel-modulating peptides when using the DPC feature extraction method. Moreover, when combined with the PAAC feature extraction method, STACKION demonstrated excellent predictive accuracy in identifying potassium (K+) ion channel-modulating peptides. These findings highlight the effectiveness of STACKION in peptide identification, with potential implications for understanding peptide functionality and the development of therapeutic agents targeting ion channels.
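The AAC and DPC encodings are standard sequence-composition features, sketched below; the example peptide is illustrative and assumes only canonical residues.

```python
from itertools import product

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aac(seq):
    """Amino acid composition: frequency of each of the 20 residues."""
    n = len(seq)
    return [seq.count(a) / n for a in AMINO_ACIDS]

def dpc(seq):
    """Dipeptide composition: frequency of each of the 400 residue pairs
    (the encoding found best above for Na+/Ca+ channel peptides)."""
    pairs = ["".join(p) for p in product(AMINO_ACIDS, repeat=2)]
    counts = {p: 0 for p in pairs}
    for i in range(len(seq) - 1):
        counts[seq[i:i + 2]] += 1
    total = max(len(seq) - 1, 1)
    return [counts[p] / total for p in pairs]

peptide = "GVPINVSCTGSPQCIKPCKDAGMRFGKCMNRKCHCTPK"  # an example sequence
print(len(aac(peptide)), len(dpc(peptide)))           # 20, 400
```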
Reducing an individual's essential facial expressive sentiment could be compared to an artist establishing the range of colors needed to capture a scene: they reserve space on their palette for only the colors they need. Can a deep learning model, then, train on a palette of a reduced number of facial expressive states to generate reenacted images portraying an individual's emotion? A person's mood, audience, feelings, and environment can restrain expressions in breadth and intensity, thus simplifying the expressions required in a 'palette' to convey human, nonverbal communication. The findings of this research show that grouping similar facial expression images using unsupervised methods and assigning a condition can train a deep-learning generative model capable of reenacting a diverse, high-quality palette of differing human expressions.
The traditional deep learning framework faces two critical challenges: limited data available for successful model training and concerns regarding user data privacy. Federated learning, which operates in a decentralised paradigm, offers a promising solution to these challenges. Federated averaging (FedAvg) is a common aggregation procedure in federated contexts. FedAvg, however, experiences convergence issues, especially when there is significant diversity in the data distributions among clients. To address this problem, we explore two effective aggregation techniques, namely random-sampling federated maximum (FedRSMax) and random-sampling federated median (FedRSMed) with adaptive moment estimation (Adam), and compare their performance characteristics with FedAvg. In this study, we use a well-established convolutional neural network (LeNet) as a global model for federated learning, and the HAM10000 dermatoscopic image dataset is used as the primary data source. We balance the dataset and generate random subsets to induce data heterogeneity for different simulated clients and evaluate the performance of the proposed techniques. Our findings demonstrate that FedRSMax outperforms FedRSMed and FedAvg in terms of accuracy, recall, and precision and can therefore serve as an effective alternative for aggregation in federated learning.
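Below is a NumPy sketch of our reading of the aggregation rules: FedAvg takes the coordinate-wise mean of all client parameter vectors, while the random-sampling variants draw a random subset of clients and reduce coordinate-wise with max (FedRSMax) or median (FedRSMed). The sampling fraction and stand-in parameter vectors are assumptions, not the paper's exact settings.

```python
import numpy as np

def fed_avg(client_weights):
    """Baseline FedAvg: coordinate-wise mean of client parameter vectors."""
    return np.mean(client_weights, axis=0)

def fed_rs(client_weights, reducer, sample_frac=0.5, rng=None):
    """Random-sampling aggregation: reduce a random client subset
    coordinate-wise with max (FedRSMax) or median (FedRSMed)."""
    rng = rng or np.random.default_rng(0)
    k = max(1, int(sample_frac * len(client_weights)))
    idx = rng.choice(len(client_weights), size=k, replace=False)
    return reducer(np.asarray(client_weights)[idx], axis=0)

clients = [np.random.randn(10) for _ in range(8)]   # stand-in parameter vectors
print(fed_avg(clients))
print(fed_rs(clients, np.max))      # FedRSMax
print(fed_rs(clients, np.median))   # FedRSMed
```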
Due to their enhanced performance and absence of carbon emissions, electric vehicles (EVs) are attracting much interest worldwide. Properly interfacing energy storage with power converters is crucial for overall EV drivetrain performance and efficiency. Power converters convert electrical energy between different voltage levels and waveforms from the battery to the motor and vice versa. Variations in control schemes based on pulse width modulation (PWM) techniques directly impact the losses and temperature profiles of converter power semiconductors, which are directly related to the reliability of power converters. This paper presents a simplified modeling approach to show how PWM-based control strategies impact power converter losses in EV drivetrains. The investigated drivetrain system consists of a two-level inverter interfacing with a bidirectional current boost chopper and supplying power to a permanent magnet synchronous machine. Based on electrothermal models, the influence of the sinusoidal PWM and space vector PWM methods on power converter losses and thermal stresses in the power semiconductors is evaluated and analyzed. A performance comparison of the PWM methods is performed through PLECS software-based numerical simulations.
Microgrids (MGs) constitute a promising solution for enhancing power distribution system (PDS) resilience against natural disasters. Yet, in existing research, the scale of MGs, which can be quantified by the number of nodes an MG covers, has not been optimized. An MG containing too many nodes can prolong the restoration process, because loads are typically picked up step by step for transient stability reasons. To this end, this paper proposes a load restoration strategy based on balanced graph partition. The problem is formulated as a multi-objective optimization problem: one objective minimizes the total weighted load shed, while the other accounts for MG scale by minimizing the variance of MG scales. To speed up the computation, a decomposition-based objective approximation is presented; the optimality gap is derived and proven to be bounded. Lastly, the proposed strategy is evaluated on the IEEE 37-node and 123-node test feeders. The simulation results demonstrate that minimizing the variance of MG scales yields a more resilient load restoration.
This paper presents an innovative secondary control scheme that takes into account the switching topology of the communication network. Based on the absolute values of distributed energy output variables, we propose an observer-based event-triggered cooperative secondary control scheme to regulate voltage and frequency simultaneously in stand-alone microgrids (MGs). In the proposed scheme, we provide sufficient conditions to ensure the boundedness of formation errors, and Zeno behavior is excluded by calculating a lower bound on the sampling intervals. Simulation results validate the proficiency and efficiency of the proposed control scheme subject to switching topologies. Furthermore, we undertake a comparative analysis between our proposed method and a contemporaneously developed control approach.
Electric vehicles (EVs) have become popular due to significant developments in the electric transportation industry. The rising number of EVs drives a surge in demand for residential charging infrastructure and may negatively impact power system stability. This paper evaluates the impact of EV integration on the power grid. The impact of EV penetration on power distribution systems is evaluated by integrating EV charging profiles and base demand into a load flow model, and the impact of uncontrolled EV charging load is assessed through transformer loading and the voltage drop at customers' houses. For verification, the base load and EV charging profiles are analyzed based on real data from Saskatchewan, Canada. This study can help utilities assess the power distribution system's design standards, such as transformer capacity and cable sizes, for future urban residential areas, especially during on-peak demand with electric vehicle integration.
A new control for operating a Battery Energy Storage System (BESS) as a STATCOM, termed BESS-STATCOM, is proposed to stabilize a critical 5 hp induction motor against large disturbances on a 24/7 basis and prevent significant financial losses to the motor facility. Simulation studies are performed in PSCAD on a realistic distribution feeder with a 5 kW BESS to demonstrate the effectiveness of the proposed control. When a large disturbance occurs, even if the BESS is in a high state of charge or discharging, the control system curtails the active power to release the entire inverter capacity for STATCOM operation. This active power curtailment lasts less than a minute, during which the induction motor is stabilized. The proposed BESS-STATCOM technology is expected to be ten times more economical than installing a new STATCOM of equivalent size for stabilizing the same critical motor. The proposed BESS-STATCOM control will soon be field-demonstrated in the utility network of Elexicon Energy, Ajax, Ontario.
Technological advancement has provided many conveniences and made life easier, but every coin has two sides: as human life has become more comfortable, many new problems have arisen. In recent years, humans have been fighting fatal diseases such as COVID-19. The pandemic has changed life drastically, and as an ill effect, social distancing and masks have become societal norms. The proposed research focuses on the use of Artificial Intelligence and image processing to fight such deadly viruses; deep learning can help stop the spread of the virus by enforcing the use of face masks. Vision transformers are used to identify whether a human face is masked or unmasked. The study is carried out on a dataset of fewer than a thousand images, on which the vision transformer achieves an accuracy of 0.86. A comparative analysis of the vision transformer with different image patch sizes is also carried out, and it is inferred that increasing the image patch size reduces the accuracy of the vision transformer. This paper presents vision transformers as a novel method for face mask recognition.
Deep learning has been widely used in computer vision applications, and one of the recent breakthroughs in this field is the use of attention modules. Present models, to the best of our knowledge, are not accurate enough at distinguishing difficult object classes such as pedestrians and bicycles in street scenes. In this paper, we propose the use of self-attention blocks in the encoder section (instead of the decoder section) of UNet and FCN with the aim of improving the models' performance in segmenting difficult object classes. We tested our proposed models on the Cityscapes dataset, and the experimental results show that deploying self-attention improved the IoU score of FCN-32 by 0.1; similarly, UNet's IoU improved by 5 percent with the attention block. The visual output also shows how the self-attention block in the encoder improves accuracy in detecting occluded yet important classes such as pedestrians.
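A generic self-attention block that can be dropped after an encoder stage is sketched below in PyTorch. This is a non-local-style block with a residual connection; the head count and normalization are our assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class EncoderSelfAttention(nn.Module):
    """Self-attention over spatial positions of an encoder feature map,
    with a residual connection back to the input."""
    def __init__(self, channels, heads=4):
        super().__init__()
        self.norm = nn.GroupNorm(1, channels)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, x):                               # x: (B, C, H, W)
        b, c, h, w = x.shape
        seq = self.norm(x).flatten(2).transpose(1, 2)   # (B, H*W, C)
        out, _ = self.attn(seq, seq, seq)
        return x + out.transpose(1, 2).reshape(b, c, h, w)

feat = torch.randn(2, 64, 32, 32)                 # an encoder feature map
print(EncoderSelfAttention(64)(feat).shape)       # torch.Size([2, 64, 32, 32])
```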
Real-time and online action localization in videos poses a critical and formidable challenge. Achieving accurate action localization necessitates the integration of both temporal and spatial information. However, existing approaches rely on computationally intensive 3D convolutional neural network (CNN) architectures or redundant two-stream architectures with optical flow, rendering them unsuitable for real-time, online applications. To address this, we propose a novel approach that leverages fast and efficient key-point-based bounding box prediction for spatial action localization. Additionally, we introduce a tube-linking algorithm that ensures the temporal continuity of action tubes even in the presence of occlusions. By combining temporal and spatial information into a cascaded input for a single network, we eliminate the need for a two-stream architecture, enabling the network to effectively learn from both types of information. Instead of using computationally demanding optical flow, we extract temporal information efficiently using a structural similarity index map. Despite the simplicity of our approach, our lightweight end-to-end architecture achieves state-of-the-art frame mean average precision (mAP) of 74.7% on the challenging UCF101-24 dataset, demonstrating a notable performance gain of 6.4% over previous online methods. Moreover, we achieve state-of-the-art video mAP results compared to both online and offline methods. Furthermore, our model achieves a frame rate of 41.8 FPS, representing a 10.7% improvement over contemporary real-time methods.
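To illustrate the temporal-information step, the sketch below computes a per-pixel structural similarity map between consecutive grayscale frames and stacks it with the current RGB frame as a cascaded network input; the paper's exact preprocessing may differ.

```python
# Minimal sketch: per-pixel SSIM map between consecutive frames as a cheap
# temporal channel, stacked with the current RGB frame. Illustrative only.
import numpy as np
from skimage.metrics import structural_similarity

def cascaded_input(prev_gray: np.ndarray, curr_gray: np.ndarray,
                   curr_rgb: np.ndarray) -> np.ndarray:
    # full=True returns the per-pixel SSIM map alongside the mean score
    _, ssim_map = structural_similarity(prev_gray, curr_gray,
                                        data_range=1.0, full=True)
    # low SSIM = motion/change; append as a 4th channel of the RGB frame
    return np.concatenate([curr_rgb, ssim_map[..., None]], axis=-1)

prev = np.random.rand(240, 320)
curr = np.random.rand(240, 320)
rgb = np.random.rand(240, 320, 3)
print(cascaded_input(prev, curr, rgb).shape)  # (240, 320, 4)
```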
Sugarcane is one of the main economic crops in the world, moving financial markets through the sale of the products resulting from its cultivation. The number of images that can be extracted from this crop has increased exponentially due to advances in remote sensing technologies such as Unmanned Aerial Vehicles (UAVs). When processed and analyzed, the images can provide valuable information about productivity, diseases, and water stress, among others. However, the collected images have low resolution, given the flight altitude of the UAVs. Therefore, our goal in this work is to improve the resolution of images of the sugarcane crop by applying deep learning techniques. To improve further processing by algorithms that extract data of interest, we experimented with Real-ESRGAN, a variation of ESRGAN (Enhanced Super-Resolution Generative Adversarial Networks). It is important to note that, although that model was originally designed to work with images of landscapes, people, cars, and even anime, our initial experiments with agricultural images are quite promising, with results 141.37% superior to classic algorithms used for upsampling images. Our proposal managed to visually improve the images significantly, proving to be an attractive alternative for extracting information about the crop. As future work, we intend to improve the accuracy of the proposal and extend the comparison to other algorithms.
Moving gracefully while being efficient is an ability that animals possess; many characteristics of animals' central nervous systems are shaped by locomotion. For robots, producing satisfactory locomotion skills that allow them to perform more efficiently is likewise of primary importance. In this study, a Central Pattern Generator (CPG) is utilized for the first time for trajectory planning on a parallel manipulator. Conventional methods have long been used for trajectory planning, most of which are highly passive and offline. In this paper, a method is proposed that enables a parallel manipulator to switch from one trajectory to another online, which was impossible with more conventional methods. Trajectory planning for industrial manipulators is often cyclic, for instance in machining, and the ultimate goal of a manipulator's industrial application is to traverse a trajectory as smoothly as possible with negligible acceleration and jerk. When a manipulator is to switch from one trajectory to another, it is normally expected to stop and then start the second trajectory. Inevitably, this halt and restart forces the whole system to tolerate a high rate of acceleration, and consequently jerk, which gradually leads to the system's breakdown. The technique proposed in this paper eliminates the jerk associated with resuming motion, since the motion never stops; this lets the whole system survive longer. In this study, the manipulator switches from one trajectory to another without ever stopping during the travel, which is an undeniable advantage over previous trajectory planning methods, in which the resumption of motion was indispensable.
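A common way to realize such an online-switchable pattern generator is a Hopf oscillator, whose limit-cycle amplitude and frequency can be retuned on the fly while the state converges smoothly, with no stop-and-restart transient. The sketch below is a generic illustration of this behaviour, not the paper's specific CPG.

```python
# Hopf-oscillator CPG sketch: mu sets the limit-cycle radius (sqrt(mu)) and
# omega the frequency. Changing them online lets the state glide to the new
# trajectory without halting, so no jerk spike from a stop/restart.
import numpy as np

def hopf_step(x, y, mu, omega, dt=1e-3, gamma=10.0):
    r2 = x * x + y * y
    dx = gamma * (mu - r2) * x - omega * y
    dy = gamma * (mu - r2) * y + omega * x
    return x + dx * dt, y + dy * dt

x, y, traj = 0.1, 0.0, []
for k in range(20000):
    # online switch at k = 10000: new amplitude and frequency
    mu, omega = (1.0, 2 * np.pi) if k < 10000 else (4.0, np.pi)
    x, y = hopf_step(x, y, mu, omega)
    traj.append(x)
# traj transitions continuously between the two limit cycles (radius 1 -> 2)
```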
Haptic devices have emerged as a promising technology for enhancing the experience of playing virtual musical instruments by providing tactile feedback to the user. This paper presents a novel haptic device that emulates plucked musical instruments as part of an augmented reality (AR) system. This wearable device features a pair of parallel 5-bar mechanisms that provide haptic feedback. While initially developed to simulate harp playing, this technology could also be applied to emulate fingerstyle guitars, banjos, and other plucked instruments. The system can detect the position of a finger relative to a virtual string's projected position and provide haptic feedback by moving a string piece against the fingertip. Upon detecting a plucking motion, the device plays a musical note based on the projected virtual string closest to the finger and moves the string piece back to its rest position. The device has been designed, developed, and tested to ensure functionality and user comfort.
This paper presents backstepping constraint control approaches for a quadrotor unmanned aerial vehicle (UAV) control system. The proposed methods are applied to a Parrot Mambo drone model to control rotational motion along the x, y, and z axes during hovering and trajectory tracking. The backstepping constraint control method, based on barrier Lyapunov functions, is designed not only to track the desired trajectory but also to guarantee no violation of the position and angle constraints. Symmetric and asymmetric barrier Lyapunov functions are introduced in the design of the controller. A nonlinear mathematical model is considered in this study. Based on Lyapunov stability theory, it can be concluded that the proposed controllers can guarantee the stability of the UAV system and the state converges asymptotically to the desired trajectory. Validation of the proposed controllers was performed by simulation on a flying UAV system.
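For context, a standard symmetric barrier Lyapunov function used in this style of constraint control is shown below (the textbook form; the paper's exact construction may differ). It grows without bound as the tracking error approaches the constraint, so keeping its derivative negative keeps the state inside the bound.

```latex
% Symmetric barrier Lyapunov function (textbook form, not necessarily the
% paper's exact choice): z is the tracking error, k_b the constraint bound.
V(z) = \frac{1}{2}\ln\!\left(\frac{k_b^{2}}{k_b^{2}-z^{2}}\right),
\qquad |z| < k_b
```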
Parallel-Kinematic Machines (PKMs) are widely popular for the precision they can guarantee in industrial applications such as welding and pick-and-place. A kinematically redundant manipulator can conduct a task that needs fewer degrees of freedom (DOF) than the manipulator provides, and the additional DOF can be used to improve the performance indexes of the manipulator. To gauge the performance of a manipulator, indexes such as dexterity, sensitivity, and manipulability are used. In this study, we propose a method for attaining the best performance of a manipulator by exploiting a redundant DOF. This novel method is based on obtaining an optimum rotation of the end-effector (EE), allowing a manipulator to reach its best kinematic performance. To evaluate the validity of the method, several performance indexes are employed, and the results clearly show that they improve considerably. Case studies are also conducted to demonstrate the ability of the proposed method to exploit the best performance of a three-DOF parallel manipulator. Finally, a PKM in the lab is used to verify the results, testing the validity of the method and demonstrating the improvement of the indexes in application.
The ITALICS facility was developed to collect data using a suite of sensors in a flight-like environment without using a multi-rotor unmanned aerial vehicle (MRUAV). This testbed does not have the weight or flight time restrictions typically associated with an MRUAV. The facility is being used to evaluate how the choice of optical sensors can influence MRUAV system performance and to test the effects of sensor fusion on navigation. To fuse the individual sensor data into a single model, sensor-to-sensor calibration is used to obtain the relative poses for multi-sensor configurations. Preliminary testing has been performed with the testbed to verify the individual sensor capabilities using localization techniques and the accuracy of the inter-sensor calibration.
Honeybees have a significant impact on agriculture, and their ability to pollinate is crucial for the economic viability of farms. The decrease in honeybee populations in recent years, coupled with the laborious task of manually inspecting beehives, has led to a growing interest in the automated remote monitoring of beehives. Of the different modalities used to monitor honeybee colonies, acoustics has demonstrated great versatility: it has been shown that beehive audio can be used to detect, e.g., swarming, queen absence, and hive strength. Nevertheless, there are numerous external and environmental factors, such as rain, wind, traffic noise, and the presence of beekeepers' voices in the background, which can significantly degrade the recorded bee audio quality and beehive monitoring performance. In this paper, we investigate the potential of three voice activity detectors (i.e., short-time energy thresholding, WebRTC, and a recent method based on a convolutional recurrent deep neural network) in detecting human speech within beehive audio recordings. We evaluate the performance of each method on two different datasets, one publicly available and another collected in-house. Experimental results show the superiority of WebRTC in detecting speech within bee buzzing audio, achieving F1-scores of approximately 0.7 and 0.8 on the two datasets.
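For reference, the WebRTC detector is available in Python through the py-webrtcvad bindings; the sketch below flags speech frames in a hypothetical hive recording. The file name, frame length, and aggressiveness mode are illustrative choices.

```python
# Minimal sketch using py-webrtcvad to flag speech frames in beehive audio.
# WebRTC VAD requires 16-bit mono PCM at 8/16/32/48 kHz and frames of
# exactly 10, 20, or 30 ms.
import wave
import webrtcvad

vad = webrtcvad.Vad(2)          # aggressiveness 0 (lenient) .. 3 (strict)
FRAME_MS, RATE = 30, 16000
frame_bytes = int(RATE * FRAME_MS / 1000) * 2  # 2 bytes per 16-bit sample

with wave.open("hive_recording.wav", "rb") as wf:   # hypothetical file
    assert wf.getframerate() == RATE and wf.getnchannels() == 1
    pcm = wf.readframes(wf.getnframes())

speech_flags = [
    vad.is_speech(pcm[i:i + frame_bytes], RATE)
    for i in range(0, len(pcm) - frame_bytes + 1, frame_bytes)
]
print(f"{sum(speech_flags)}/{len(speech_flags)} frames flagged as speech")
```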
This paper introduces CBEx, a novel Cloud-Based knowledge Extraction system that efficiently and effectively ingests data from various sensor platforms across disparate domain Command and Control (C2) constructs and sensors. It employs pre-processing techniques tailored to each data type, ensuring consistent analysis and structuring according to a unified taxonomy. The structured data is stored in a unified format in a database, enabling real-time decision support based on Essential Elements of Information (EEIs). This solution offers tools to enhance multi-domain situational awareness by integrating data sources from multiple domains and by employing advanced Artificial Intelligence (AI)/Machine Learning (ML) analytics for accurate and rapid decision-making throughout the mission cycle. The feasibility of CBEx is demonstrated through three realistic scenarios, showcasing its ability to integrate a variety of data sources effectively while providing decision support in real-time.
This paper presents a study on the application of supervised machine learning algorithms for the purpose of distinguishing and categorizing Virtual Private Network (VPN) and The Onion Router (TOR) traffic on the dark web. The dark web, characterized by its anonymity and inaccessibility, has become a popular platform for illicit activities such as drug trafficking, money laundering, and cybercrime. While VPNs and TOR can be used for legitimate purposes such as privacy protection and bypassing internet censorship, they can also be exploited by cybercriminals. The CIC-Darknet2020 dataset, which includes a comprehensive collection of network traffic captures from the dark web incorporating traffic features from both VPN and TOR technologies, is used for this study. We employ classification algorithms such as Random Forest, Support Vector Machine, Naive Bayes, and Decision Tree classifiers to construct our model. The performance of the model is evaluated using metrics such as execution time, accuracy, precision, recall, and F-measure, utilizing five-fold and ten-fold cross-validation as well as 66/34 and 80/20 percentage splits. Our results show that the Decision Tree (J48) classifier outperforms other classifiers, achieving 99.6% accuracy with an execution time of 15 seconds for ten-fold cross-validation. The findings of this study have implications for enhancing cybersecurity measures in identifying and mitigating threats associated with VPN and TOR traffic on the dark web.
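A minimal sketch of this evaluation protocol, using scikit-learn's decision tree (the closest analogue of WEKA's J48) under ten-fold cross-validation, is shown below; the file and column names for CIC-Darknet2020 are placeholders.

```python
# Minimal sketch of the evaluation protocol: a decision tree under ten-fold
# cross-validation. Feature/label column names are placeholders.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("cic_darknet2020.csv")   # hypothetical local copy of the dataset
X = df.drop(columns=["label"])            # flow-level traffic features
y = df["label"]                           # e.g., VPN / TOR / benign

clf = DecisionTreeClassifier(random_state=0)
scores = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
print(f"10-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```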
This research introduces an innovative approach to microwave filter design, utilizing an artificial neural network-based model that learns the relationship between geometric parameters and the microwave filter's response. In particular, the proposed model is applied to the design of a 3-pole Chebyshev capacitively lossless bandpass microwave filter. Two examples with different center frequencies and percentage bandwidths are considered to validate the model's effectiveness. Distinct datasets are generated for each center frequency and percentage bandwidth to ensure accuracy. The trained ANN models are used for parameter extraction at each aggressive space mapping iteration, offering a fast and accurate solution for microwave filter design. This approach provides a simple yet promising alternative to traditional design methods.
The combination of catalytic and electronic properties with experimental data provides more information for the analysis of catalysts, improving innovation and design. Here, we compute electronic properties, including the band gap, Fermi energy, and magnetic moment, of known catalysts of the oxidative coupling of methane (OCM) reaction. In combination with available data on experimental conditions for OCM, we are able to predict catalytic performance and reaction outcomes in the form of methane, ethene, ethane, and carbon dioxide yields. A comparison of different machine learning models suggests Extreme Gradient Boosting (XGBoost) regression is an ideal model for predicting catalytic performance with great accuracy. The Fermi energy of the catalyst promoter, its atomic number, and the active metal oxide band gap have been found to be good electronic descriptors of the catalytic performance of the OCM reaction. Transition metals, including platinum, rhodium, ruthenium, and iridium, have been predicted to promote catalyst performance in the OCM reaction. The study proposes 79 novel bimetallic combinations for metal dioxides and 616 novel catalytic materials for methane conversion at a low temperature of 700 °C as effective catalysts for the OCM reaction. These new catalysts were predicted to enhance methane yield in the range of ±30% to ±95%, an increase over the prior research's maximum methane conversion of 36%.
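The modelling step can be sketched as a gradient-boosted regression from electronic descriptors to a yield target, as below; the descriptor table, column names, and hyperparameters are illustrative, not those used in the study.

```python
# Minimal sketch: gradient-boosted regression from electronic descriptors
# (promoter Fermi energy, atomic number, oxide band gap) to a yield target.
# File, columns, and hyperparameters are placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

df = pd.read_csv("ocm_catalysts.csv")     # hypothetical descriptor table
X = df[["fermi_energy", "atomic_number", "band_gap"]]
y = df["c2h4_yield"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = XGBRegressor(n_estimators=500, learning_rate=0.05, max_depth=4)
model.fit(X_tr, y_tr)
print("R^2 on held-out catalysts:", model.score(X_te, y_te))
```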
Electric machines are essential components of modern society, and their failure can have serious consequences. Eccentricity is a common type of fault in these machines; dynamic eccentricity occurs when the rotor center is not aligned with the center of rotation. This can cause several problems, including increased vibration, decreased efficiency, and even catastrophic failure. In this paper, a new method is proposed for detecting dynamic eccentricity faults in synchronous machines. The proposed method uses principal component analysis (PCA) of the sideband frequencies of motor current signals to detect and isolate dynamic eccentricity from static eccentricity faults, irrespective of the load condition of the machine.
This paper develops a signal-processing-based method for stator inter-turn fault detection in brushless direct current (BLDC) motors. In the proposed approach, the current waveforms are first transformed into the dq frame using the Park transformation. The faults are then identified by means of a Savitzky-Golay smoothing filter, a modified cumulative-sum method, and a novel ratio-based index. In addition to being simple and efficient, the proposed technique is highly capable of functioning under different BLDC motor conditions without changing its threshold settings. To assess the developed scheme, datasets from a laboratory BLDC motor setup are considered. The results confirm the speed and high accuracy of the proposed technique. Moreover, to validate the efficiency of the suggested approach, it is compared with similar methods from various aspects.
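The signal chain can be sketched as follows: a Park (dq) transform of the phase currents, Savitzky-Golay smoothing, then a cumulative-sum change index. The paper's modified CUSUM and ratio-based index are not reproduced; this uses a plain CUSUM, a synthetic fault, and an illustrative threshold.

```python
# Sketch of the chain: Park transform -> Savitzky-Golay -> CUSUM index.
import numpy as np
from scipy.signal import savgol_filter

def park_dq(ia, ib, ic, theta):
    d = (2/3) * (ia*np.cos(theta) + ib*np.cos(theta - 2*np.pi/3)
                 + ic*np.cos(theta + 2*np.pi/3))
    q = -(2/3) * (ia*np.sin(theta) + ib*np.sin(theta - 2*np.pi/3)
                  + ic*np.sin(theta + 2*np.pi/3))
    return d, q

def cusum(x, drift=0.01):
    s, out = 0.0, []
    for v in x - x.mean():
        s = max(0.0, s + v - drift)   # one-sided cumulative sum
        out.append(s)
    return np.array(out)

t = np.linspace(0, 1, 5000)
theta = 2*np.pi*50*t
ia, ib, ic = np.sin(theta), np.sin(theta - 2*np.pi/3), np.sin(theta + 2*np.pi/3)
ia[2500:] *= 1.1                       # crude stand-in for an inter-turn fault
_, q = park_dq(ia, ib, ic, theta)
smooth_q = savgol_filter(q, window_length=101, polyorder=3)
idx = np.argmax(cusum(np.abs(smooth_q)) > 5.0)   # first crossing (toy threshold)
print("fault flagged near sample", idx)
```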
The early identification and diagnosis of AC machine defects are vital because they ensure dependable operation, avoid equipment damage, increase operational safety, and boost energy efficiency. Inter-turn faults and eccentricity faults are two frequent fault types that can have a major influence on the performance and dependability of AC machines. Even newly manufactured machines exhibit a small degree of static eccentricity, defined as a slight misalignment or asymmetry in the machine's air gap distribution; however, the precise location of the static eccentricity, i.e., the point of minimum air gap, is unknown. When a stator inter-turn fault occurs in such a machine, analysis is needed to determine the impact of the relative position between the point of minimum air gap and the physical location of the stator inter-turn fault. In this work, the authors have carried out such an analysis using Maxwell software, conducting Finite Element (FE) simulations to examine hybrid faults in a Reluctance Synchronous Machine (RSM). The impact of varying the relative position between the point of minimum air gap and the stator inter-turn fault on the frequency spectrum of the line currents has been monitored. Moreover, a data-based technique, Principal Component Analysis (PCA), has also been used to carry out the analysis. The data-based technique revealed the changes in the relative position of the two faults, paving the way for future research in fault localization.
Modern FPGAs (Field Programmable Gate Arrays), such as the Xilinx 7-series, incorporate DSP blocks that contain 18×25-bit two's complement embedded multipliers. When small FPGA-based signed multipliers are required, it is not practical to use these large embedded multipliers; instead, one can use LUTs (Look-Up Tables) in FPGAs to implement them. Since the target signed multipliers are assumed to be in two's complement, preprocessing is required for a LUT-based implementation. In this paper, Baugh-Wooley and sign-magnitude are used as preprocessing algorithms to realize two's complement 8×8-bit multipliers using LUTs in FPGAs. These two algorithms are used because they allow a parallel realization of the signed multipliers. We synthesize 8×8-bit two's complement multipliers on LUTs using both algorithms. As an application, we use the resulting synthesized designs to build 8-tap and 16-tap digital Finite Impulse Response (FIR) filters for input data and coefficients in two's complement. Experimental results on Xilinx Artix-7 FPGAs using the Vivado 2020.2 synthesis tool show that the designs synthesized using the Baugh-Wooley algorithm are better in terms of speed and area than those using sign-magnitude.
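The Baugh-Wooley preprocessing can be captured in a short bit-level reference model, useful for checking a LUT-based design: the partial products involving the sign bits are inverted and two correction ones are added, after which the result is taken modulo 2^16. The sketch below verifies the scheme against native multiplication.

```python
# Bit-level reference model of Baugh-Wooley 8x8 two's complement multiply,
# checked exhaustively against Python's native signed multiplication.
N = 8

def baugh_wooley(a: int, b: int) -> int:
    """a, b: 8-bit two's complement patterns (0..255). Returns signed product."""
    abits = [(a >> i) & 1 for i in range(N)]
    bbits = [(b >> j) & 1 for j in range(N)]
    acc = 0
    for i in range(N - 1):
        for j in range(N - 1):
            acc += (abits[i] & bbits[j]) << (i + j)            # positive PPs
    for i in range(N - 1):
        acc += (1 - (abits[i] & bbits[N - 1])) << (i + N - 1)  # inverted PPs
        acc += (1 - (abits[N - 1] & bbits[i])) << (i + N - 1)
    acc += (abits[N - 1] & bbits[N - 1]) << (2 * N - 2)        # sign-sign PP
    acc += (1 << N) + (1 << (2 * N - 1))                       # correction 1s
    acc &= (1 << 2 * N) - 1                                    # keep 16 bits
    return acc - (1 << 2 * N) if acc >> (2 * N - 1) else acc   # to signed

for a in range(-128, 128):
    for b in range(-128, 128):
        assert baugh_wooley(a & 0xFF, b & 0xFF) == a * b
print("Baugh-Wooley model matches native two's complement multiply")
```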
This paper presents the FPGA implementation of two different topologies of an Artificial Neural Network (ANN) on the Xilinx Zynq-7000 evaluation board. The engine dataset available in MATLAB is used to train the neural network, and the resulting network parameters are exported from MATLAB for the FPGA implementation. Two structures with different fixed-point precisions, sfix_24_8 and sfix_32_16, are implemented, and their clock frequencies and resource utilization are measured. The maximum achievable frequency measured is 83.33 MHz, and the minimum power is 0.203 W.
Field Programmable Gate Arrays (FPGAs) are used in the realization of real-life applications. Adders are required in data processing units and in the realization of other arithmetic operators. We compare three types of FPGA-based adders, propose optimizations, and use the non-optimized and optimized adders to realize a digital Finite Impulse Response (FIR) filter on an FPGA, comparing the implementations. As an FPGA platform, we used the Altera Cyclone IV; the synthesis tool used is Altera's Quartus Prime 19.1. Experimental results are provided and discussed.
This paper presents the design of a 56 GHz voltage-controlled oscillator (VCO) in an Indium Phosphide (InP) 0.8 μm kit. InP (III-V) semiconductor technology was chosen because the fastest photodiodes currently on the market (for use at 112 GBaud) are InP PIN diodes. The goal is to integrate everything in one die and one InP technology (including the PIN diode, front-end transimpedance amplifier (TIA), and CDR blocks along with the VCO), and to assess whether such a VCO can be designed for use in 112 Gbit/s NRZ (or 224 Gbit/s PAM-4) receiving CDR units. The 56 GHz VCO can either be used directly in a half-rate CDR or in a full-rate 112 GBaud CDR with a frequency-doubler circuit. Post-layout electromagnetic simulations showed a tuning range of 55-58 GHz. The small tuning range is due to the absence of varactors in the InP kit; hence, makeshift HBTs with their limited tuning capacitances were used as the core varactors. The VCO drew 11.1 mA (including periphery circuitry: buffers and current mirrors) from a 3.3 V supply, for a total power consumption of 36.6 mW. The output signal had a phase noise of -98 dBc/Hz at 10 MHz offset, which is comparable to similar InP works. The VCO design used a -gm cross-coupled pair that can provide differential oscillation to recover the clock signal from a differential data input. The core of the VCO (meant to be used as part of a CDR unit) required only 113 μm × 94 μm of layout space.
Power System State Estimation (PSSE) is the backbone of monitoring in modern power systems; hence, any deficiency in the operation of the PSSE algorithm may result in wrong control and protection decisions. It has been shown in the literature that PSSE can be targeted by cyberattackers who aim to manipulate the PSSE output by injecting false data into system measurements. In this paper, a measurement classification-based method is proposed to protect the PSSE against False Data Injection (FDI) attacks. Power flow measurements are classified based on their redundancy into critical and essential sets. The proposed method depends mainly on securing the critical subset and considering different essential subsets for running the PSSE, which helps to identify the attacked and non-attacked measurements. A sensitivity analysis has been carried out to show the reliability and the probability of failure of the proposed method.
The rapid growth of the Internet of Things (IoT) has resulted in a heightened risk of security breaches, as cybercriminals have begun to target IoT devices and networks with increasingly sophisticated techniques. However, IoT security monitoring platforms face several challenges, including the inability to identify unknown threats, limited real-time prediction capabilities stemming from signature-based threat identification, and standardization and integration issues. In this paper, we propose a Real-Time Security Monitoring (RSM) platform based on the results of deep learning models, which can predict attacks on IoT networks and visualize the prediction results in a custom-built Power BI dashboard in real time. To evaluate the proposed solution, we compare the effectiveness of three deep learning models: Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM), and Deep Neural Networks (DNN).
The Border Gateway Protocol (BGP) is a crucial component of the Internet's infrastructure that enables the exchange of routing information among Autonomous Systems so that data can flow from one network to another. However, rare anomalies in BGP, such as IP prefix hijacks, misconfigurations, and worm attacks, can cause significant disruptions to the network and threaten the stability and reliability of the Internet when they occur. Considerable efforts have been made to understand the nature of normal and abnormal BGP updates in order to identify and mitigate their disruptive consequences. Recent studies in the literature suggest that machine learning (ML) techniques can achieve a high level of accuracy and robustness in anomaly detection. To fully leverage the advantages of ML techniques, it is necessary to pre-process the data and choose a suitable model that helps identify and mitigate such BGP anomalies and improve the stability and reliability of the Internet. This paper evaluates multiple machine learning models for detecting BGP anomalies and comprehensively analyzes their effectiveness. Results reveal that AdaBoost achieves an impressive accuracy of 97.22%, making it the optimal choice for BGP anomaly detection.
The combination of network virtualization (NV) and wireless sensor networks (WSNs) represents a promising approach to address the challenges of scheduling network resources in delay-sensitive grid asset monitoring systems. Virtual network embedding (VNE) can efficiently allocate sensor device resources, considering Quality of Information (QoI), Quality of Service (QoS), and wireless interference handling. A higher acceptance ratio of VNE in this application translates to more virtual measurement requests (VMRs) being mapped onto the smart grid environment at a time, resulting in faster asset monitoring. However, VNE's shared and complex nature exposes these networks to security risks. Since secure, high-quality measurement with low delay is an indispensable aspect of asset monitoring in many smart grid applications, we implement a trust-aware virtual sensors network (TA-VSN) algorithm to maximize the acceptance rate of the VMRs and minimize the cost of embedding on the monitored environment while improving QoI, QoS, and security. The TA-VSN algorithm achieves a high-quality suboptimal solution quickly, which makes it suitable for monitoring smart grid assets. The simulation results show that adding security constraints limits the acceptance ratio but improves average network throughput and measurement error efficiency, and reduces delay, making the VNE algorithm more practical for delay-sensitive monitoring applications.
In this work, the effects of Time Delay (TD) and Distributed Denial-of-Service (DDoS) attacks on Automatic Generation Control (AGC) are investigated. For this purpose, a cyber-physical communication platform bridging a communication network simulator, Graphical Network Simulator-3 (GNS3), and a power system simulator, OPAL-RT, is utilized. It is shown that TD and DDoS attacks have a significant effect on frequency stability in the power system. To address this problem, three Multiprotocol Label Switching (MPLS) technologies are investigated to detect and mitigate DDoS attacks and delays in the communication network. A 2-area AGC system is simulated, and a comparative study of the three MPLS technologies is provided to show their ability to mitigate TD and DDoS attacks and their direct impact on AGC operation.
In view of the escalating electricity demand and the pervasive implementation of electrical appliances, the safe and efficient operation of smart grids has been recognized as of significant importance. In recent years, machine learning has been widely applied to smart grid core applications, i.e., anomaly detection and electric load forecasting. In order to achieve precise anomaly detection and accurate time series forecasting, a significant quantity of historical data is usually required for model training. In practice, however, acquiring such a sizable dataset is often accompanied by high costs and many challenges, making it impractical in real-world tasks. In this work, we employ data augmentation techniques to expand the training set size for smart grid anomaly detection and time series forecasting tasks. Specifically, we investigate the efficacy of noise injection and a generative adversarial network (GAN)-based augmentation method on various machine learning-based models. Extensive experimental results on two real-world datasets demonstrate the effectiveness of data augmentation for anomaly detection and time series forecasting, and provide guidelines for researchers and engineers.
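The noise-injection branch of such augmentation is straightforward to sketch: each training window is replicated with small Gaussian jitter scaled to the window's own spread. The GAN-based method is not shown; all values below are illustrative.

```python
# Minimal sketch of noise-injection augmentation for time series windows:
# jitter each window with Gaussian noise scaled to its standard deviation.
import numpy as np

def jitter(windows: np.ndarray, sigma_ratio: float = 0.05,
           copies: int = 4, seed: int = 0) -> np.ndarray:
    """windows: (n_samples, window_len) load/series segments."""
    rng = np.random.default_rng(seed)
    sigma = sigma_ratio * windows.std(axis=1, keepdims=True)
    aug = [windows + rng.normal(0.0, 1.0, windows.shape) * sigma
           for _ in range(copies)]
    return np.concatenate([windows, *aug], axis=0)

train = np.random.rand(100, 48)   # e.g., 100 two-day hourly load windows
print(jitter(train).shape)        # (500, 48): originals + 4 noisy copies each
```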
Face masks are crucial for controlling the spread of dangerous diseases, as the global epidemic has highlighted. In this study, we describe an innovative method for automatic face-mask identification in public areas that replaces MobileNet's activation functions with scaled exponential linear units (SELUs). Our suggested model is a realistic option for wider adoption because it is small, effective, and capable of real-time performance on edge devices. The MobileNet design, which strikes a compromise between computational complexity and precision, serves as the foundation of our model. By using SELUs, which address the vanishing and exploding gradient problems typical of deep learning models, we marginally enhanced the model's performance.
Ground Penetrating Radar (GPR) is a non-destructive geophysical technique that has been in use at Saskatchewan potash mines for over four decades. The GPR system is an innovative technology used to image salt beds above or below a mined room. The borer-mounted GPR application has proven to be a reliable tool for mapping the roof beam thickness, which is normally a meter from the mine roof to the immediate clay seam above. Utilizing an automated picking algorithm, real-time data interpretation is provided to borer operators so they can make informed safety decisions. Hence, it is important that an auto-picking algorithm is adequately tuned to declutter noise and identify geologic features within the mine roof. This paper presents a series of studies aimed at understanding and improving data interpretation of the GPR during active mining, as geologic variations within the mine roof can lead to GPR data degradation. Our approach to this challenge was to develop a robust and intelligent auto-picking algorithm called the Cluster Ratio Derivative (CRD), which utilizes a data reduction technique to improve the signal-to-noise ratio (SNR) and machine learning to pick the clay seam in the GPR data. Additional work was performed by developing numerical earth models of a potash mine using gprMax; the generated synthetic datasets also served as a testbed in developing the CRD algorithm. The success of this work has led to the implementation of the novel CRD auto-picking algorithm in borer GPR software. The goal is to continue to ensure that meaningful GPR interpretations are provided to operators during active mining.
The Internet of Things (IoT) actively transforms physical objects, including portable, wearable, and implantable sensors, into an information ecosystem that enriches the technology and data in every aspect of life. This paper examines two anomaly detection approaches, novelty detection and outlier detection, for IoT networks. In this respect, we leverage four unsupervised learning algorithms, namely Isolation Forest (IF), Local Outlier Factor (LOF), One-Class Support Vector Machine (OSVM), and the autoencoder (AE), on four publicly available IoT datasets. The experiments reveal that by embracing the novelty approach, i.e., training on purely benign data, the AE model achieves an F1-score of up to 97% and an AUC of up to 0.97.
Underwater acoustic communication networks often experience significant challenges that are induced by the oceanic medium. This medium often causes changes in communication topology that put advanced routing techniques out of reach for many current technologies. For this reason, Underwater Acoustic Sensor Networks (UASNs) often apply flooding, or restricted flooding, protocols to perform information diffusion tasks. The DFLOOD protocol is one of the most successful restricted flooding protocols used in UASNs. This work presents DFLOOD+, a protocol that improves on the duplicate counting and delayed forwarding methods of DFLOOD. DFLOOD+ is shown to produce a 42.3% mean reduction in duplicate messages compared to DFLOOD in a network-layer simulation.
Underwater communication across long distances remains a challenging task, primarily due to factors such as noise, geometric spreading, and multi-path propagation. These causes of signal distortion often result in significant channel condition variations, making reliable communication difficult. In an effort to address this challenge, Defence Research and Development Canada (DRDC) participated in a sea-trial experiment during the summer of 2021 to test the effectiveness of UWSPR, a communication scheme designed for reliable and narrowband communication. The experiment involved transmitting 16 UWSPR one-minute frames near a marginal ice zone, while an array of hydrophones placed 2.3 kilometers away from the projector recorded the transmissions. In this paper, the results of the analysis of the recordings are presented. The recordings were demodulated using a novel multi-frame and multichannel strategy that combines energy from several channels, where each hydrophone and acoustic frequency pair defines a channel. A performance assessment of the demodulated signals is provided, demonstrating the effectiveness of the enhanced UWSPR scheme in achieving reliable communication in the challenging underwater environment.
Wireless underground sensor networks (WUSNs) have practical applications in domains such as military operations, agriculture, and information science. However, the large attenuation of signals underground has always posed a challenge, especially in dynamic and changing environments. In large irrigation landscapes in torrid areas, irrigation is usually required multiple times per day, and the wireless signal is highly attenuated by soil moisture. Consequently, communication link disconnections can easily happen, wasting considerable time and energy. It is thus necessary to have a precise path loss model that considers soil moisture to ensure that the path loss does not exceed the allocated link budget. In this work, we present a comprehensive study on the impact of soil moisture on the communication link between underground and aboveground nodes and propose a mathematical long-range (LoRa) path loss model that considers the complex dielectric constant of the soil. Furthermore, we develop a self-detection stochastic gradient descent (SSGD) approach with a distributed clustering sensor network architecture that can self-detect disconnections caused by the irrigation schedule. Our case study demonstrates that the SSGD approach is more efficient and reliable than traditional stochastic gradient descent (SGD) algorithms in high-moisture conditions for smart irrigation applications.
The classical LEACH algorithm is a well-known clustering protocol for WSNs that helps to improve the network's energy efficiency and prolong its lifetime. However, LEACH and its variants have limitations, such as the likelihood of node failure due to uneven distribution of energy consumption among the nodes. This paper proposes iLEACH, a modified version of LEACH that improves network longevity. We include residual energy as a main parameter in the cluster head formula to improve the selection process, and we employ the best-fit statistical distribution to further ensure fairness and effectiveness in the selection; this approach helps to identify the nodes with the most suitable residual energy levels for the cluster head role. Finally, to enhance the efficiency of data collection, we use a mobile data collector (MDC) that can move around the network and collect data from cluster heads, improving the overall energy efficiency of the network. These modifications help to balance the energy consumption among the nodes and avoid the likelihood of node failure, hence improving the network's longevity and overall performance.
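One plausible reading of the residual-energy weighting is sketched below: the classic LEACH election threshold is scaled by each node's remaining energy, so energy-rich nodes are more likely to become cluster heads. The exact iLEACH formula and the best-fit distribution step are the paper's own and are not reproduced here.

```python
# Illustrative residual-energy weighting of the classic LEACH threshold.
import random

def ch_threshold(p: float, rnd: int, e_residual: float, e_max: float) -> float:
    t_leach = p / (1 - p * (rnd % int(1 / p)))   # classic LEACH T(n)
    return t_leach * (e_residual / e_max)        # favour energy-rich nodes

p, rnd, e_max = 0.05, 3, 2.0                     # illustrative parameters
for node_energy in (1.9, 1.0, 0.2):              # joules remaining per node
    t = ch_threshold(p, rnd, node_energy, e_max)
    elected = random.random() < t                # node elects itself if below T
    print(f"E={node_energy:.1f} J -> T={t:.3f}, cluster head: {elected}")
```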
The COVID-19 pandemic has revealed significant challenges in the structure, delivery, and financing of long-term care (LTC) in Canada. The high number of COVID-19-related deaths among LTC residents, who are primarily older adults with multiple underlying health conditions, has raised concerns about the ability of LTC sites to effectively respond to the crisis. Consequently, in Saskatchewan, where outbreaks have occurred, there is a growing interest in expanding home care (HC) services and making investments in this area. To establish HC as a viable option for helping to address the LTC crisis, it is crucial to assess the current state of people, processes, and technologies involved in HC operations. This assessment aims to identify areas for improvement and enhance the provision of HC at various levels. This paper examines the enhancements needed for current HC processes and technologies while exploring the potential benefits of utilizing an open-data and open-source software solution to improve HC operations in Saskatchewan. Additionally, future research and development directions are discussed.
Data deduplication is a technique for reducing storage space by identifying and eliminating redundant data. The division of files into chunks is one of the key steps in the deduplication process and directly impacts deduplication effectiveness. Despite the numerous algorithms available for chunking, there is a limited understanding of their strengths and weaknesses in virtual machine backup environments. We present DedupBench, a framework designed to assess the performance of different chunking algorithms for deduplication on user-specified data. DedupBench allows for the evaluation of chunking techniques by comparing their deduplication ratio and chunking throughput. DedupBench incorporates a generic design, allowing for the effortless integration of additional chunking techniques developed in the future. We evaluate four widely used chunking algorithms using a VM-based dataset with DedupBench. Our evaluation contrasts with earlier studies and demonstrates that Asymmetric Extremum (AE) has the best deduplication efficiency for VM-based datasets among the tested algorithms, highlighting the need to evaluate chunking techniques on user-specified data before designing deduplication systems.
As the front-end web frameworks ecosystem evolves, we have encountered problems managing client data. Not only are the solutions to this problem diverse, but the problem itself has split into two parts: client-side state and server-side state. The server-side state is not the same as the UI (client-side) state and should be managed differently, which leads to the problem of ensuring synchronization between the two states. Our goal is to provide a consolidated architecture that ensures full synchronization between the two states while being performant and developer-friendly. Based on our tests against the React Context API, we increased dispatch performance by over 400%, significantly reducing network calls and eliminating irrelevant re-renders.
Software non-functional properties (NFPs) play a dominant role in the acceptability of software in the market. As in single software systems, testing NFPs in software product lines is important to ensure the quality of software products. Research in the area of software product line testing has been very active over the past decade. However, most of this research has focused on testing software functional properties, while testing of NFPs has not received much attention. In this paper, we address non-functional requirements testing based on goal models. Specifically, we propose a methodology for reusable test case design during domain engineering that supports early testing at the domain analysis stage, helping to create testable non-functional requirements that can be used for designing effective test cases at the domain testing level. We focus on testing domain core components. A prototype testing system was also developed to support testing based on the proposed methodology.
The smart grid is a modern system that connects various elements and presents unique security issues, such as cascading failures caused by power system component failure or cyber attacks. To address this problem, machine learning algorithms are increasingly being employed to identify and predict such failures. In this paper, we propose a new mechanism that uses supervised machine learning algorithms to detect early-stage failures in smart grid networks. We use a realistic methodology to create a dataset to train the algorithms, and the mechanism can detect failures at the early stages of propagation. To improve detection accuracy, we use the eXtreme Gradient Boosting (XGBoost) algorithm and consider both power and communication network features. The mechanism's efficacy is evaluated using the IEEE 14-bus system.
The role of Vehicle-to-Everything (V2X) networks in smart grid applications has become increasingly important in recent years, largely due to the integration of electric vehicles (EVs) with grid entities such as charging stations, other EVs, smart homes, and grid control centers. However, the decentralized and complex nature of V2X networks raises security concerns. Previous research suggests using blockchain technology to mitigate these concerns. Therefore, we examine a V2X network scenario employing a Trust-based Access control Blockchain mechanism for IoT networks (TABI); applied to V2X networks, this mechanism is referred to as the Trust-based Blockchain mechanism for V2X networks (TBVX). We use the Hyperledger Fabric (HLF) blockchain network and Hyperledger Composer to analyze the V2X network scenario, and Hyperledger Caliper to evaluate the performance of TBVX in terms of throughput and latency. Our experimental analysis shows the effectiveness of the TBVX mechanism in the V2X network scenario.
The Internet of Things (IoT) has revolutionized the way people interact, communicate, and perform daily activities in various domains, ranging from households to industries and cities. MQTT is one of the most commonly adopted protocols for implementing IoT. However, IoT systems connected through MQTT are susceptible to security breaches, as MQTT was not originally designed with security as a priority: credentials and messages are transmitted in plaintext by default, compromising data confidentiality and integrity. This study presents a comprehensive analysis of the MQTT protocol, including experimentation on an MQTT system using various cryptographic implementations, such as AES-CBC, RSA, and an ECC-AES hybrid scheme, to assess processing time and message size. The findings indicate that payload encryption increases processing time and message bytes; among the cryptographic implementations, RSA incurs the highest processing time, followed by the ECC-AES hybrid scheme and AES-CBC. Furthermore, the study demonstrates the effectiveness of attack prevention between standard and secured MQTT implementations by simulating various IoT attacks, such as black-box penetration, identity spoofing, DoS, and MITM attacks. The results and subsequent discussion reveal which cryptographic algorithms impose the most overhead on the standard MQTT implementation and their capacity to resist common attacks.
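A minimal sketch of AES-CBC payload encryption over standard MQTT, in the spirit of the experiments above, is shown below using paho-mqtt and the cryptography package; the broker address, topic, and pre-shared key are placeholders, and key distribution is out of scope here.

```python
# Minimal sketch: AES-CBC encryption of an MQTT payload before publishing.
import os
import paho.mqtt.client as mqtt
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

KEY = os.urandom(32)                      # demo-only pre-shared 256-bit key

def encrypt_payload(plaintext: bytes) -> bytes:
    iv = os.urandom(16)                   # fresh IV per message
    padder = padding.PKCS7(128).padder()  # pad to the AES block size
    padded = padder.update(plaintext) + padder.finalize()
    enc = Cipher(algorithms.AES(KEY), modes.CBC(iv)).encryptor()
    return iv + enc.update(padded) + enc.finalize()  # prepend IV for receiver

client = mqtt.Client()                    # paho-mqtt 1.x style constructor
client.connect("broker.example.local", 1883)   # hypothetical broker
client.publish("sensors/temp", encrypt_payload(b"23.5C"))
client.disconnect()
```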
In an effort to modernize federal privacy laws, the Canadian government introduced Bill C-27 on June 16, 2022. This bill encompasses three acts, among them the noteworthy Artificial Intelligence and Data Act (AIDA). AIDA, along with the Consumer Privacy Protection Act (CPPA), forms part of Bill C-27 under the legislative initiative named the Digital Charter Implementation Act, 2022. This landmark legislation signifies Canada's inaugural step toward regulating artificial intelligence (AI). Consequently, it is imperative for Canadian researchers to stay informed about this nascent proposal. This paper delves into the salient aspects of Canada's proposed Artificial Intelligence and Data Act, shedding light on both its addressed and overlooked facets.
As e-banking networks have advanced rapidly, the majority of financial transactions are conducted through the use of credit cards. A number of qualitative indices, including error rate and response time, are influenced by the performance of each node in the transaction process. This study examines the issue of tuning the time-out between a sample bank's switch and core systems using statistical data analysis. By analyzing three years of transaction data and applying a statistical parameter tuning approach, resources can be allocated effectively to prevent errors and delays, thereby improving the QoS of e-banking. Moreover, this approach can be applied to other banks or payment systems to enhance performance without requiring significant hardware modifications. Results from the data-driven parameter tuning showed a considerable improvement in the error average and variance, and an increase in bank switch capacity, which was confirmed by statistical analysis and the central bank's reports.
Parkinson's disease is a complex neurological disorder that affects various neural, behavioural, and physiological systems. To provide optimal treatment and improve patient outcomes, an accurate and early diagnosis is essential. This study explores the use of Artificial Intelligence techniques to diagnose Parkinson's disease. The study utilizes four machine learning classifiers: Decision Tree, Logistic Regression, Random Forest, and K-Nearest Neighbors, along with a Genetic Algorithm (GA) for feature selection. The study highlights the effectiveness of GA in selecting the most relevant features from a large dataset. Comparative analysis of the classifiers reveals that the Random Forest classifier, combined with Genetic feature selection, performs the best in terms of accuracy, with an accuracy rate of 93.88%. This research contributes to the growing field of machine learning-based diagnostic tools for neurological disorders and provides valuable insights for the development of accurate, powerful, and patient-focused diagnostic tools for Parkinson's disease.
Effective traffic management plays a vital role in improving emergency response times and ensuring the efficient movement of vehicles on roadways. In this study, we propose an innovative approach to enhance traffic management through the implementation of a YOLOv5-based Ambulance Tracking System. The YOLOv5 algorithm, known for its high-speed and accurate object detection capabilities, is employed to track ambulances in real-time. By leveraging the power of computer vision and deep learning, our system provides precise and reliable tracking of ambulances, allowing traffic authorities to make informed decisions and take proactive measures to facilitate their smooth passage. The proposed system offers significant benefits such as reduced response times for emergency vehicles, minimized traffic congestion, and improved overall road safety. Through experimental evaluations, we demonstrate the effectiveness and efficiency of our YOLOv5-based Ambulance Tracking System in various traffic scenarios. The results highlight its potential to revolutionize traffic management and emergency services, ultimately saving valuable time and lives.
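The detection backbone of such a system can be sketched with the documented torch.hub interface to YOLOv5, as below. A deployed tracker would use a model fine-tuned on ambulance images; the COCO classes filtered here and the input frame are placeholders.

```python
# Minimal sketch: load a pretrained YOLOv5 model via torch.hub and filter
# detections by class. COCO has no "ambulance" class, so the closest classes
# ("truck", "bus") stand in purely for illustration.
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
results = model("traffic_frame.jpg")   # hypothetical video frame
det = results.pandas().xyxy[0]         # columns: xmin..ymax, confidence, name
candidates = det[det["name"].isin(["truck", "bus"]) & (det["confidence"] > 0.5)]
print(candidates[["xmin", "ymin", "xmax", "ymax", "confidence", "name"]])
```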
The banking industry is a frequent target of security attacks, and DDoS attacks are among the most common types that can cause significant financial losses. In this paper, we present a big data analytics approach to analyze 33.4 billion transactions of a sample bank over five years, identifying transaction types, acquiring terminals, and expected income. We estimate the demand load pattern during the downtime caused by DDoS attacks, and the lost opportunities, using pattern recognition. Our findings show that a DDoS attack can cost several thousand dollars per hour of downtime, which varies across different days and times. Our study contributes to the literature on the financial impact of security attacks on banks and has implications for developing more effective security measures. By providing a comprehensive and accurate approach to estimating the business cost of security attacks, big data analytics can help banks mitigate operational risks and improve their cybersecurity posture.
The large datasets related to network traffic flow classification on the internet allow machine learning (ML) and deep learning (DL) models to classify more accurately, which is useful in many applications such as detecting traffic anomalies to prevent potential cyber-attacks. Data parallelization allows for faster training times on large datasets, as shown in our results, and is also beneficial in the cloud-edge environment by allowing efficient distribution of computation and data across multiple nodes. We deployed Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), advanced hybrid Convolutional LSTM (ConvLSTM), Convolutional GRU (ConvGRU), and XGBoost algorithms using the data parallelization approach. The experimental setup was implemented in the cloud, and parallel training was executed using Nvidia Tesla Graphics Processing Units (GPUs). Lastly, a comparison of the performance metrics is presented between the non-parallel centralized (single-node) and data-parallel distributed approaches.
HTTPS is a widely used protocol for secure communication on the web, relying on the TLS protocol for encryption and authentication. In this study, we conducted a large-scale measurement on the entire IPv4 address space to analyze the TLS certificate ecosystem used in HTTPS. Over eight consecutive days, we found 46.80M hosts with an open 443 port, of which 33.36M (71.2%) successfully completed a TLS handshake, and we collected 27.88M unique SSL/TLS certificates. This paper presents an overview of the certificate status and distribution, including the prevalence of untrusted and expired certificates. We found that TLS 1.2 is still widely used, accounting for 53.90% of all TLS protocol usage, while TLS 1.3 has shown a significant increase in usage, reaching 43.21% of all TLS protocol usage. Our study also investigates the certificate authorities that issued the certificates, revealing a diverse set of organizations, with Let's Encrypt being the most prominent one. We compare our results with a study conducted a decade ago to examine the changes in the TLS certificate ecosystem. The findings have significant implications for internet security and highlight the need for improved certificate management and monitoring practices.
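A single probe of such a scan can be sketched in a few lines: complete a TLS handshake on port 443 (accepting untrusted certificates), record the negotiated protocol version, and parse the DER certificate. The target host below is a placeholder; an IPv4-scale scan would of course use asynchronous scanning tooling rather than one blocking socket.

```python
# Minimal sketch of one scan probe: handshake, protocol version, cert parse.
import socket
import ssl
from cryptography import x509

host = "example.com"                      # placeholder probe target
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE           # collect untrusted/expired certs too

with socket.create_connection((host, 443), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        print("negotiated protocol:", tls.version())   # e.g. 'TLSv1.3'
        der = tls.getpeercert(binary_form=True)        # raw DER certificate

cert = x509.load_der_x509_certificate(der)
print("issuer:", cert.issuer.rfc4514_string())
print("expires:", cert.not_valid_after)
```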
In this study, we present a phased array antenna system for trajectory-based user tracking and beam assignment in a hallway. We assume three predetermined paths in the hallway, all covered by the antenna array. To enhance communication quality and lessen interference, the system aims to identify which user is on which trajectory and assign a beam to each user. We combine a number of signal processing strategies, such as beamforming and user tracking algorithms, to accomplish this. While the user tracking algorithm uses data from the antenna to estimate the location and movement of users within the hallway, the beamforming technique is used to create directional beams directed toward each trajectory. Our evaluation of the proposed system demonstrates its capability to precisely track users and allocate beams depending on their placement within the predetermined trajectories. The findings show that, compared to current techniques, the suggested system is able to enhance signal quality and decrease interference. By precisely tracking users and allocating beams based on their location within the hallway, the system can help increase the dependability and effectiveness of wireless communication in these contexts.
Biometric authentication systems have a large range of applications in real-life situations. These systems receive various types of signals from biometric sensors and process them efficiently in order to determine their authenticity. However, these systems are often vulnerable to synthetically generated biometric spoof signals. In view of this, the design of efficient spoof detection methods for biometric authentication systems is of paramount importance. Various biological-signal-based biometric authentication systems use the combination of electrocardiogram (ECG) and photoplethysmogram (PPG) signals. In this paper, we propose a novel spoof detection method for ECG- and PPG-based biometric authentication systems that reliably distinguishes spoofed signals from authentic ones. Specifically, the proposed method operates in two stages: the first stage obtains the authenticity status of the ECG and PPG signals separately using convolutional neural networks and a Bayes classifier, while the second stage extracts spectro-temporal features of the ECG and PPG signals to further enhance spoof detection performance. We show that our spoof detection system provides high performance on a benchmark dataset for biometric applications.
This paper presents two different methods to increase the capacitance in a gap-coupled coaxial resonator for characterizing dielectric materials. The increased capacitance allows the dielectric characterization of materials over a wider range of dielectric constant and loss tangent. A resonator employing the methods to increase capacitance is designed using ANSYS Electronics Desktop. Simulation results are used to demonstrate the effect of these methods in the dielectric characterization of materials with dielectric constant in the range of 2.1 to 7.1 and loss tangent from 0.001 to 0.03. Testing is done to determine the dielectric constant and loss tangent of coffee and rice, extending the dielectric characterization to food materials.
Car sickness is anticipated to occur more frequently in self-driving vehicles because of their design, especially the electronics and seating arrangements optimized for work and entertainment. Therefore, mitigating motion sickness is a crucial research area that is essential to the effective use of electronics in autonomous vehicles and, ultimately, their broad adoption. An investigation using machine learning techniques in combination with physiological measures (electrocardiogram, electrodermal activity, and head movement) was conducted to detect and predict the severity of car sickness. A total of 40 adults (20 male and 20 female) were exposed to two 20-minute rides on a motion-base simulator, one while reading and one while performing no task. Car sickness incidence and severity were subjectively measured during the conditions using the Fast Motion Sickness Scale (FMS) questionnaire every two minutes and the Simulator Sickness Questionnaire (SSQ) at the beginning, midpoint, and end of the experiment. Car sickness symptoms were successfully elicited in 31 participants (77.5%) while avoiding simulator sickness. Head movement had the strongest relationship with car sickness, there was a moderate correlation between heart rate and skin conductance, and, for a subset of participants, heart rate had a moderate correlation with car sickness. A classification score of 77% distinguishing between motion-sick and non-motion-sick participants was found using the random forest model. Overall, the findings suggest that physiological measures alone cannot be relied upon to reliably detect or predict the onset or severity of car sickness in real time.
Renewable energy sources and energy storage systems have been considered promising solutions to improve the sustainability of the current society, where energy management is essential for ensuring the efficiency and reliability of energy systems. This paper investigates stochastic energy management of sustainable communities connected to smart distribution systems. The proposed sustainable community incorporates multiple renewable energy sources (RES), various energy storage devices, an innovative wastewater treatment plant, and a neighboring smart building. Considering the randomness of the electric load, wastewater flow, RES, and weather conditions, this optimal energy management problem is formulated based on the multi-timescale Markov decision process, where the objective is to minimize the total operating cost of the community while mitigating the impacts on the smart distribution system. The proposed energy management scheme is evaluated based on the IEEE 33-Bus Test Feeder, as well as real data of weather conditions, modeling for wastewater generation, photovoltaic (PV) and hydropower generation.
With global warming on the rise, the push for zero-emission transportation continues to grow. The transportation sector's answer to these concerns introduced society to electric vehicles (EVs) as a replacement for traditional internal combustion engine (ICE) vehicles. Although the idea of EVs seems obvious, the problem is more complex than it appears. EVs come with an undeniable problem: battery decommissioning and disposal. However, this may offer a unique opportunity if research continues in its current direction. A distinguishing characteristic of EV batteries is the requirement to deliver power in such a way that the vehicle can accelerate quickly and drive extended distances. These demanding applications mean the battery must be at a sufficient state of health (SOH) to deliver satisfactory results. Once a battery's SOH drops below an adequate level, it must be retired from the EV. The EV population has grown significantly and is forecasted to continue growing exponentially, bringing with it an accumulation of retired batteries. The handling of such batteries raises serious concerns. However, research shows promising repurposing pathways that can give retired EV batteries a second life as second-life batteries (SLBs). Research in this area is ongoing to address concerns about performance and cost compared to new batteries across a variety of applications and operating conditions.
The increasing popularity and rising number of electric vehicles have resulted in extensive demand for efficient, reliable, and effective electric vehicle charging station (EVCS) infrastructure. However, the development and implementation of such infrastructure pose severe challenges to the power quality, security, and stability of the power system. This paper presents a holistic understanding of the challenges, mitigation approaches, and available technologies and protocols related to EVCS network deployment. This review aims to provide insights for developing sustainable and efficient EVCS infrastructure while overcoming the challenges and optimizing the benefits.
Solar power is a widely used renewable energy technology that will play a key role in the clean energy transition. To address the intermittency of solar power, battery energy storage systems are integrated into the grid so that excess photovoltaic energy can be stored for later use. However, batteries suffer from energy degradation, so storing energy in the form of a chemical fuel helps overcome this challenge. Hydrogen storage is one option, but hydrogen suffers from high flammability, poor volumetric density, and high storage costs. Alternatively, ammonia addresses several of the issues associated with hydrogen. Excess solar energy can be stored as ammonia via a direct electrochemical ammonia synthesizer (EAS). Accordingly, it is crucial to assess the ammonia production requirements, as they directly determine the EAS capacity requirements and the overall system costs. Additionally, efficiency is lost as ammonia production increases. This study proposes using power smoothing filters to assess the ammonia production rates of the EAS, as well as the nitrogen and hydrogen input requirements. Sliding-window filters such as the moving average, moving mean, and moving regression (MR) filters are used to determine the excess solar power available for ammonia production. Simulation results show that the power tracking capability of the smoothing filters has a direct impact on the EAS ammonia production rates. Overall, the MR filter offers superior power tracking, resulting in lower EAS capacity requirements and thus reduced system costs.
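A minimal sketch of the sliding-window idea follows, assuming a synthetic PV profile and an illustrative window length: the smoothed curve is treated as the dispatched power, and the positive residual as the surplus routed to the EAS (the paper's filters, data, and sizing method will differ).

```python
# Sketch: smooth a noisy PV trace with sliding-window filters and treat the
# positive residual as surplus available to the ammonia synthesizer (EAS).
# The PV profile, noise level, and window length are assumptions.
import numpy as np

def moving_average(x, w):
    return np.convolve(x, np.ones(w) / w, mode="same")

def moving_regression(x, w):
    """Local linear fit over each window, evaluated at the window centre."""
    half, idx = w // 2, np.arange(len(x))
    out = np.empty(len(x))
    for i in range(len(x)):
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        slope, intercept = np.polyfit(idx[lo:hi], x[lo:hi], 1)
        out[i] = intercept + slope * i
    return out

t = np.linspace(0, 12, 289)                       # daylight hours, 2.5-min steps
rng = np.random.default_rng(1)
pv = np.maximum(np.sin(np.pi * t / 12), 0) + 0.1 * rng.standard_normal(t.size)
for name, smooth in [("moving average", moving_average(pv, 25)),
                     ("moving regression", moving_regression(pv, 25))]:
    excess = np.maximum(pv - smooth, 0)           # surplus routed to the EAS
    print(f"{name}: surplus fraction = {excess.sum() / pv.sum():.3f}")
```

The surplus fraction produced by each filter is a stand-in for how the filters' tracking behaviour would drive EAS capacity sizing in the study.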
Join us for an afternoon in the beautiful city of Saskatoon for a tour of the Canadian Light Source! The Canadian Light Source (CLS) is the only synchrotron in Canada and one of the largest scientific infrastructure investments in our country's history. The facility speeds up electrons to produce intensely bright synchrotron light that allows scientists to study materials at the molecular level. Over 1,000 researchers from around the world use the CLS every year. On your tour, you'll learn about the facility's history, how the machine works, and examples of how researchers have used it to conduct groundbreaking research in the fields of health, agriculture, the environment, and advanced materials.
This one-of-a-kind control centre, located in Regina, SK, employs approximately 65-75 people who are in charge of 14,000 kilometres of transmission lines in the province. The facility monitors and predicts power usage for the province of Saskatchewan and subsequently determines how the province receives the power it needs. The SaskPower "slip simulator" offers a unique experience for workers and is used to help individuals learn to walk safely on ice and slippery surfaces. This part of the tour will include a short presentation on the slip simulator and a discussion of SaskPower's experience with the training and the trends noted since its implementation. Tour participants will be invited to experience the slip simulator first-hand; please be aware that time may be limited.
There is no cost for this tour. There will be two opportunities to take it, once in the morning and once in the afternoon. Transportation will be provided for the SaskPower Grid Control Centre tour, departing from and returning to the Delta Hotel. The maximum capacity for this tour is 24 participants. Please take note of the hazard acknowledgment linked here for participants wishing to experience the slip simulator.