Preliminary Program
Rooms: TULIPE 1 / webex 1, TULIPE 2 / webex 2, TULIPE 3 / webex 3, MIMOSA, ORCHIDEE 4, ORCHIDEE 5, Webex 4, Webex 5. All times are local (Africa/Tunis).

Sunday, July 9

13:00-14:00 Registration
14:00-17:30 Tutorial 1: Breaking Boundaries: Discovering New Horizons in Association-Oriented Data Model, by Krystian Wojtkiewicz (Wrocław University of Science and Technology, Poland) (TULIPE 1 / webex 1)
14:00-16:00 OWC: Workshop on Optical Wireless Communication (TULIPE 3 / webex 3); ICTS4eHealth - S1: Deep Learning (TULIPE 2 / webex 2); MoCS: 13th Workshop on Management of Cloud and Smart City Systems (ORCHIDEE 4)
14:30-16:00 DistInSys: 3rd IEEE International Workshop on Distributed Intelligent Systems (ORCHIDEE 5)
16:00-16:30 Coffee Break
16:30-17:30 OWC Keynote: Visible light communication (VLC) for cars, by Takaya Yamazato (Japan) (TULIPE 3 / webex 3); ICTS4eHealth - Special Session: Technological Advancements in Artificial Intelligence for Medical and Healthcare Applications (TAAIMHA) (TULIPE 2 / webex 2)
16:30-18:30 MLCID: IEEE Workshop on Machine Learning from Class Imbalanced Data (ORCHIDEE 5)
20:00-21:00 Welcome Reception

Monday, July 10

09:00-09:30 Opening
09:30-10:30 Keynote: AI-Enabled 6G: Embracing Wisdoms from Classical Algorithms, by Khaled B. Letaief (Hong Kong)
10:30-11:00 Coffee Break
11:00-13:00 S2: Artificial Intelligence (AI) in Computers and Communications (onsite); S1: 5th & 6th Generation Networks and Beyond (hybrid); S3: Cloud and Edge Computing (online); S4: Emerging Topics in AI and Machine Learning (online); ICTS4eHealth - S2: IoT, Edge Computing, and Relational Agents
13:00-14:00 Lunch
14:00-16:00 S7: Services and Protocols (online); S8: Artificial Intelligence (AI) in Computers and Communications: Machine Learning (online)
14:00-15:00 Industrial Keynote: Decarbonising the built environment - a technology perspective, by Sohaib Qamar Sheikh (United Kingdom)
14:20-16:00 S5: Cloud and Edge Computing (onsite); S6: Security in Computers and Communications (hybrid)
15:00-16:00 ICTS4eHealth - Keynote: Artificial Intelligence for Diabetes, by Tomáš Koutný (Czech Republic)
16:00-16:30 PDS1: Poster Session and Coffee Break
16:30-18:10 S9: Services and Protocols (hybrid); S10: Security in Computers and Communications (hybrid); S11: 5th & 6th Generation Networks and Beyond (online); S12: Artificial Intelligence (AI) in Computers and Communications: Machine Learning (online); ICTS4eHealth - S3: eHealth
20:00-22:00 Steering committee dinner (by invitation)

Tuesday, July 11

08:30-10:30 S13: AI in Computers and Communications: Machine Learning (hybrid); S14: Security in Computers and Communications (onsite); S15: Wireless Networks (online); S16: Internet of Things (IoT) (online); ICTS4eHealth - S4: Machine Learning
10:30-11:00 PDS2: Poster Session and Coffee Break
11:00-12:00 Keynote: Leveraging Urban Computing with Smart Internet of Drones, by Azzedine Boukerche (Canada)
12:00-13:00 Keynote: Generative Artificial Intelligence: Opportunities and Challenges, by Fakhri Karray (Canada/UAE)
13:00-14:00 Lunch
14:00-16:00 Tutorial 2: Continuum Computing platforms for self-adaptive machine learning based IoT applications, by Nabil Abdennadher (Switzerland); S17: Vehicular & Space Communications (onsite); S18: AI in Computers and Communications: Machine Learning (online); S19: Security in Computers and Communications (online); ICTS4eHealth - S5: Security and Privacy
16:00-17:00 PDS3: Poster Session and Coffee Break
17:30-23:00 Touristic Tour and Gala Dinner

Wednesday, July 12

08:30-10:30 S20: Artificial Intelligence (AI) in Computers and Communications: Machine Learning (hybrid); S21: Services and Protocols (online); S22: Short Papers (online); S23: Security in Computers and Communications (online); S24: Artificial Intelligence (AI) in Computers and Communications (online)
10:30-11:00 Coffee Break
11:00-13:00 S25: Cyber Physical Systems and Internet of Things (IoT) (onsite); S29: AI in Computers and Communications (online); S26: Short Papers (online); S27: Security in Computers and Communications (online); S28: Artificial Intelligence (AI) in Computers and Communications (online)
13:00-14:00 Lunch
14:00-15:00 Keynote (Online): Some Methods to Improve IoT Performance and Cybersecurity, by Erol Gelenbe (UK)
15:00-15:30 Closing and Awards Ceremony

Sunday, July 9

Sunday, July 9 13:00 - 14:00 (Africa/Tunis)

Registration

Sunday, July 9 14:00 - 16:00 (Africa/Tunis)

ICTS4eHealth - S1: Deep Learning

Room: TULIPE 2 / webex 2
Chair: Antonio Celesti (University of Messina, Italy)
ICTS4eHealth - S1.1 COVID-19 Analysis in Canada Using Deep Learning and Multi-Factor Data-Driven Approach with a Novel Dataset
Shaon Bhatta Shuvo, Swastik Bagga and Ziad Kobti (University of Windsor, Canada)

As the world recovers from the COVID-19 pandemic, there is a growing need for effective strategies to prepare for future health crises. Artificial Intelligence (AI), driven by comprehensive and up-to-date data, can play a crucial role in addressing such challenges. Focusing on Canadian data, this study demonstrates the importance of extensive data collection and its implications for global health crisis management. Using feature extraction and deep learning-based regression techniques, we identified key predictors of COVID-19, achieving coefficients of determination of 0.93 and 0.80 for predicting new cases and deaths, respectively. The results emphasize AI's potential in guiding data-driven strategies, stressing the need for global collaboration in data collection and AI deployment to prepare for future health crises.

ICTS4eHealth - S1.2 A Deep Learning Approach to Remotely Monitor People's Frailty Status
Linda Senigagliesi (Università Politecnica delle Marche, Italy); Antonio Nocera (Università Politecnica Delle Marche, Italy); Gianluca Ciattaglia (Università Politecnica delle Marche, Italy); Matteo Angelini, Davide De Grazia, Fabiola Olivieri and Maria Rita Rippo (Università Politecnica Delle Marche, Italy); Ennio Gambi (Universita' Politecnica Delle Marche, Italy)

With the progressive aging of the population, monitoring a person's frailty status becomes increasingly important to prevent risk factors that can lead to loss of autonomy and to hospitalization. Hygiene care, in particular, represents a wake-up call for detecting a decline in physical and mental well-being. With the assistance of both environmental and localized sensors, measurements of hygiene-related activities can be made quickly and consistently over time. Here we propose to remotely monitor these activities using a fixed camera and deep learning algorithms. In particular, three activities are considered, i.e., washing the face, brushing teeth and arranging hair, together with a non-action class. On a dataset consisting of 11 healthy subjects of different ages and sexes, we show that, using a Long Short-Term Memory (LSTM) neural network, the selected activities can be distinguished with an accuracy of more than 92%, proving the validity of the proposed approach.

ICTS4eHealth - S1.3 Early Heart Disease Detection Using Mel-Spectrograms and Deep Learning
Sricharan Donkada, Seyedamin Pouriyeh, Reza M. Parizi, Chloe Yixin Xie and Hossain Shahriar (Kennesaw State University, USA)

Heart disease is a leading cause of morbidity and mortality worldwide, necessitating the development of innovative diagnostic methodologies for early detection. This study presents a novel deep convolutional neural network model that leverages Mel-spectrograms to accurately classify heart sounds. Our approach demonstrates significant advancements in heart disease detection, achieving high accuracy, specificity, and unweighted average recall (UAR) scores, which are critical factors for practical clinical applications. The comparison of our proposed model's performance with a PANN-based model from a previous study highlights the strengths of our approach, particularly in terms of specificity and UAR. Furthermore, we discuss potential avenues for future research to enhance the model's effectiveness, such as incorporating additional features and exploring alternative deep learning architectures. In conclusion, our deep convolutional neural network model offers a significant step forward in the field of heart sound classification and the early detection of heart diseases.

ICTS4eHealth - S1.4 Predicting Out-Of-Hospital Vital Sign Measurements Through Deep Learning
Khalid Alghatani (King Fahd Medical City, Saudi Arabia); Abdelmounaam Rezgui and Nariman Ammar (Illinois State University, USA)

People may want to monitor their health conditions while they conduct their daily activities. For this, they need solutions that perform continuous health monitoring. Today, most existing solutions on the market (e.g., the Apple Watch or SpO2 rings) only report current vital sign measurements; they do not predict future values of those vital signs. We developed two predictive models that give people (patients or non-patients) predictions of two critical vital signs, namely heart rate and oxygen saturation level. Our experimental results indicate that both models achieve good accuracy.

ICTS4eHealth - S1.5 Influence of Convolutional Neural Network Depth on the Efficacy of Automated Breast Cancer Screening Systems
Vineela Nalla, Seyedamin Pouriyeh, Reza M. Parizi, Inchan Hwang, Beatrice Brown-Mulry, Linglin Zhang and MinJae Woo (Kennesaw State University, USA)

Breast cancer is a global health concern for women. The detection of breast cancer in its early stages is crucial, and screening mammography serves as a vital leading-edge tool for achieving this goal. In this study, we explored the effectiveness of the ResNet50V2 and ResNet152V2 deep learning models for classifying mammograms using EMBED datasets for the first time. We preprocessed the datasets and utilized various techniques to enhance the performance of the models. Our results suggest that the choice of model architecture depends on the dataset used, with ResNet152V2 outperforming ResNet50V2 in terms of recall score. These findings have implications for cancer screening, where recall is an important metric. Our research highlights the potential of deep learning to improve breast cancer classification and underscores the importance of selecting the appropriate model architecture.

Sunday, July 9 14:00 - 16:00 (Africa/Tunis)

MoCS: 13th Workshop on Management of Cloud and Smart City Systems

Room: ORCHIDEE 4
Chairs: Javier Berrocal (University of Extremadura, Spain), Armando Ruggeri (University of Messina, Italy)
MoCS.1 A Multi-Cloud Observability Support Based on ElasticSearch for Cloud-Native Smart Cities Services
Sofia Montebugnoli and Luca Foschini (University of Bologna, Italy)

Effective communication and information sharing among different districts and cities are crucial for the management of utility flows, traffic, and emergencies in smart cities. In this scenario, a smart city requires cloud-native solutions to collect and analyze data from various sources, including traffic sensors and public transport vehicles. Thus, a multi-cloud observability approach is proposed to aggregate data from different localities. The solution aims to provide a complete suite for observability capable of collecting data across layers of a multi-cloud and integrating already existing open-source projects.

MoCS.2 Deploying Digital Twins over the Cloud-To-Thing Continuum
Sergio Laso (Global Process and Product Improvement SL & University of Extremadura, Spain); Lorenzo Toro-Gálvez (Universidad de Málaga, Spain); Javier Berrocal (University of Extremadura, Spain); Carlos Canal (University of Málaga, Spain); Juan Manuel Murillo Rodriguez (University of Extremadura, Spain)

Smart cities have deployed a myriad of devices to sense the status of the city and its citizens in order to reconfigure different elements to improve the quality of life. However, the analysis and reconfiguration of these elements on the fly can lead to unforeseen problems. The digital twin paradigm has emerged as a promising technology to analyze and test these reconfigurations before their execution. These digital twins are usually centralized in the cloud. However, the emulation of highly distributed systems can lead to scalability, response time, and security and privacy problems. In this paper, we propose a hierarchical and distributed architecture for digital twins, deployed over the Cloud-to-Thing Continuum. The proposal is illustrated by means of a case study about public transportation in smart cities.

MoCS.3 EFCC: A Flexible Emulation Framework to Evaluate Network, Computing and Application Deployments in the Cloud Continuum
Luis Jesús Martín León (University of Extremadura, Spain); Juan Luis Herrera (University of Bologna, Italy); Javier Berrocal and Jaime Galán-Jiménez (University of Extremadura, Spain)

In recent years, the number of devices connected to the Internet (and hence the data traffic) has significantly increased. The adoption of the Internet of Things paradigm, the use of the MicroServices Architecture for applications, and the possibility of deploying such applications at different layers (fog, edge, cloud) make the selection of an appropriate deployment a critical task for network operators and developers. In this paper, an emulation framework is proposed to allow them to choose the network, computing and application deployment in the cloud continuum while satisfying the required Quality of Service. The framework is compatible with both IP and SDN network paradigms and is extensible to different types of scenarios thanks to its approach based on Docker containers. The evaluation over a realistic network scenario shows that it is extensible to any scenario and deployment required by the research community working on the cloud continuum.

MoCS.4 NETEDGE MEP: A CNF-Based Multi-Access Edge Computing Platform
Vinicius Ferreira (University of Minho & DTx Colab, Portugal); João Bastos (DTx Colab & University of Minho, Portugal); André Martins, Paulo J. Araújo and Nicolás Lori (University of Minho, Portugal); João Faria (Portugal); Antonio D. Costa (Universidade do Minho & Centro ALGORITMI, Portugal); Helena Fernández López (University of Minho, Portugal)

Multi-access Edge Computing (MEC) is an active topic of research and standardization, with both industry and academia actively planning to develop proofs-of-concept (PoCs). Implementing a standards-based MEC infrastructure is expensive and time-consuming, leading to the use of partial implementations or simulation. This paper proposes NETEDGE MEP, an open-source MEC Platform (MEP) based on cloud-native technologies that can be easily deployed and scaled and is fully compliant with European Telecommunications Standards Institute (ETSI) MEC standards and recommendations. NETEDGE MEP meets the needs of researchers and operators who plan to implement realistic PoCs. The tests conducted show that NETEDGE MEP can be deployed as a network service and has a low compute and network resource consumption.

MoCS.5 A VNF-Chaining Approach for Enhancing Ground Network with UAVs in a Crowd-Based Environment
Davide Montagno Bozzone (University of Pisa, Italy); Stefano Chessa (Universita' di Pisa, Italy); Michele Girolami (ISTI-CNR, Italy); Federica Paganelli (University of Pisa, Italy)

We consider a 5G-and-beyond network operating in a smart city, in which the fixed network infrastructure is supported by a flock of unmanned aerial vehicles (UAVs) acting as carriers of Virtual Network Functions (VNFs). We propose a Mixed Integer Linear Programming (MILP) model to place chains of VNFs on this hybrid UAV-terrestrial infrastructure so as to maximize UAV lifetime, subject to resource constraints and taking into account the network traffic originated by crowds of people assembling at given hotspots in the city. We formalize the UAV deployment problem and evaluate our solution in a practical scenario based on a DoS detection system. The experimental results show that the proposed solution can effectively enhance the system's capability to process the input flows under a DoS attack.
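
The core placement problem can be illustrated with a toy exhaustive search (a hypothetical stand-in for the paper's MILP model; the function names, the capacity model, and the min-residual-energy objective as a lifetime proxy are all assumptions of this sketch):

```python
from itertools import product

def place_chain(chain_demands, uav_capacity, uav_energy, energy_per_unit):
    """Assign each VNF of a chain to a UAV so that capacity constraints
    hold and the minimum residual UAV energy (a rough proxy for fleet
    lifetime) is maximised. Brute force over all assignments."""
    n_uavs = len(uav_capacity)
    best, best_assignment = None, None
    for assignment in product(range(n_uavs), repeat=len(chain_demands)):
        load = [0] * n_uavs
        for vnf, u in zip(chain_demands, assignment):
            load[u] += vnf
        if any(l > c for l, c in zip(load, uav_capacity)):
            continue  # violates a capacity constraint
        residual = [e - energy_per_unit * l for e, l in zip(uav_energy, load)]
        if best is None or min(residual) > best:
            best, best_assignment = min(residual), assignment
    return best_assignment, best
```

A real instance would of course hand the same constraints and objective to a MILP solver rather than enumerate assignments.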

MoCS.6 Docflow: Supervised Multi-Method Document Anonymization Engine
Gabriele Morabito, Valeria Lukaj, Armando Ruggeri, Maria Fazio, Maria Annunziata Astone and Massimo Villari (University of Messina, Italy)

The anonymization of documents has been the subject of several studies and debates. By document anonymization, we mean the process of replacing sensitive data so as to preserve the confidentiality of documents without altering their content.

In this work, we introduce Docflow, an open-source document anonymization engine capable of anonymizing documents based on specific filters chosen by the user. We applied Docflow to anonymize a set of legal documents and performed a processing performance analysis. Given a Markdown input file to be anonymized, Docflow redacts all information according to the user's choices while preserving the document content. Future work will integrate Docflow with NLP algorithms that generate the Markdown source file from documents already processed in different formats, always with human supervision in the loop.

MoCS.7 Large-Scale Agent-Based Transport Model for the Metropolitan City of Messina
Annamaria Ficara, Maria Fazio, Antonino Galletta, Antonio Celesti and Massimo Villari (University of Messina, Italy)

Complex traffic dynamics can be modeled in real time through simulation models and methods, which are attracting more and more research effort. In particular, agent-based models built on agent behaviors with local plans can be useful for transportation study areas. These models can solve real-world policy problems by simulating certain regions or cities. In this paper, we implemented an agent-based transport model for analyzing traffic in the metropolitan city of Messina (Sicily, Italy). We created a scenario using the Messina road network information from OpenStreetMap, public transport supply data of the municipality of Messina from the General Transit Feed Specification, and census data related to the six districts of Messina. We then performed a preliminary analysis of the generated simulation output, computing average travel time by agent trip mode, average activity duration, and link volumes. Our scenario can be adapted to solve specific problems related to the mobility of individuals in Messina.

Sunday, July 9 14:00 - 16:00 (Africa/Tunis)

OWC: Workshop on Optical Wireless Communication

Room: TULIPE 3 / webex 3
Chairs: Chedlia Ben Naila (Nagoya University, Japan), Takaya Yamazato (Nagoya University, Japan)
OWC.1 Spatial 4PPM Correlation with Successive Interference Cancellation for Low-Luminance WDM/SDM Screen to Camera Uplink
Alisa Kawade, Wataru Chujo and Kentaro Kobayashi (Meijo University, Japan)

In a previous study on uplink optical wireless communication from a smartphone screen to an indoor telephoto camera, low-luminance space division multiplexing (SDM) using spatial 4 pulse position modulation (4PPM) was demonstrated without threshold learning. In this study, to increase the data rate further, low-luminance wavelength division multiplexing (WDM)/SDM is demonstrated by spatial 4PPM correlation and successive interference cancellation (SIC) without threshold learning. WDM/SDM causes both spatial inter-symbol interference (ISI) and inter-wavelength ISI. Even under the spatial ISI caused by SDM, spatial 4PPM with SIC can remove the inter-wavelength ISI caused by WDM. Furthermore, it is also demonstrated that low-luminance WDM/SDM using spatial 4PPM with SIC enhances physical layer security at wide angles.
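
For readers unfamiliar with 4PPM, the symbol mapping can be sketched in a few lines (a minimal illustration of the modulation principle, not the authors' implementation; function names are hypothetical):

```python
# 4-pulse-position modulation (4PPM): each pair of data bits selects
# which of 4 time slots in a symbol carries the single pulse.

def ppm4_encode(bits):
    """Map a bit string (length a multiple of 2) to a list of 4PPM symbols."""
    assert len(bits) % 2 == 0
    symbols = []
    for i in range(0, len(bits), 2):
        slot = int(bits[i:i + 2], 2)   # 2 bits -> slot index 0..3
        symbol = [0, 0, 0, 0]
        symbol[slot] = 1               # exactly one pulse per symbol
        symbols.append(symbol)
    return symbols

def ppm4_decode(symbols):
    """Recover the bit string from the pulse positions."""
    return "".join(f"{s.index(1):02b}" for s in symbols)
```

Because only the pulse position carries information, the average luminance stays constant, which is what makes low-luminance operation possible.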

OWC.2 Neonate Heart Rate Variability Monitoring Using Optical Wireless Link
Amel Chehbani (CNRS, XLIM, University of Limoges, France); Stéphanie Sahuguède (XLIM UMR CNRS 7252 - University of Limoges, France); Anne Julien-Vergonjanne (University of Limoges & XLIM CNRS 7252, France)

In this work, we investigate the quality of heart rate variability (HRV) features extracted from electrocardiogram (ECG) signals transmitted by optical wireless communication (OWC). The proposed solution exploits infrared links between a transmitter placed on the chest of a newborn lying in a closed incubator and optical receivers installed on the ceiling of a neonatal intensive care unit. The specific environment and the corresponding transmission channel were modeled and simulated using a ray-tracing Monte Carlo technique. Temporal HRV parameters were determined using the Pan-Tompkins algorithm and analyzed as a function of emitted optical power. The results obtained show that it is possible to guarantee good HRV measurements from ECG signals transmitted by OWC in the proposed context. An excellent quality of the HRV parameters could be obtained with an emitted optical power of 2.8-4.1 mW for OOK modulation.

OWC.3 A Testbed to Integrate Private 5G Networks with Visible Light Communication for Service Area Expansion
Jinxing Zheng and Takaya Yamazato (Nagoya University, Japan); Katsuhiro Naito (Aichi Institute of Technology, Japan)

This paper reports an extension of the private 5G service area beyond the licensed area by integrating visible light communication (VLC). As VLC is license-free, integrating a VLC function into private 5G networks allows a more flexible system design. For example, it becomes possible to provide services to mobile vehicles and robots operating outside the service area of a private 5G network; alternatively, private 5G networks in different locations can be interconnected. A key challenge is integrating the VLC function into a 5G core network. We designed a testbed and an experiment using open-source 5G projects to evaluate and compare methods for integrating VLC and private 5G, including the simulation of poor network environments. Experimental results demonstrate the feasibility of integrating private 5G with VLC; compared with image-based methods, the proposed integration shows better latency performance.

OWC.4 Performance of Intelligent Reflecting Surface Based-FSO Link Under Strong Turbulence and Spatial Jitter
Takumi Ishida, Chedlia Ben Naila, Hiraku Okada and Masaaki Katayama (Nagoya University, Japan)

Intelligent reflecting surfaces (IRSs) are considered an emerging technology that can be applied in free-space optical (FSO) communication systems to relax the strict line-of-sight (LOS) requirement between the transmitter and the receiver. In this paper, we investigate the performance of an IRS-assisted FSO system under strong atmospheric turbulence while considering pointing-error-induced fading due to both the transmitter jitter and the IRS jitter angles. A closed-form expression of the average BER has been derived, with the impact of strong atmospheric turbulence characterized using the K-distribution. The numerical results show that the system performance strongly depends on the system configuration as well as on the transmitter and IRS jitter. Furthermore, we show that an improvement in terms of average BER can be achieved by optimal placement of the IRS.

OWC.5 Wavelength Selection Considerations for Optical Wireless Positioning Systems
Jorik De Bruycker, Frédéric B. Leloup and Nobby Stevens (KU Leuven, Belgium)

Indoor Positioning Systems are an important technology for real-time location estimation, enabling a large variety of industrial applications. In this context, Optical Wireless Positioning employs the propagation characteristics of optical signals to calculate an accurate and precise position. While Visible Light Positioning focuses on the use of LED lighting to support positioning and illumination simultaneously, infrared-based systems also offer viable positioning solutions, with distinct advantages and drawbacks over their visible light counterparts. The choice of wavelength thus proves to be an important design consideration for an Optical Wireless Positioning system. This work summarises the main differences and trade-offs fundamental to wavelength selection and compares them to support an informed decision.

Sunday, July 9 14:00 - 17:30 (Africa/Tunis)

Tutorial 1: Breaking Boundaries: Discovering New Horizons in Association-Oriented Data Model, by Krystian Wojtkiewicz (Wrocław University of Science and Technology, Poland)

Room: TULIPE 1 / webex 1
Chair: Ali Wali (REGIM-Lab., Tunisia)

Krystian Wojtkiewicz (Wrocław University of Science and Technology, Wrocław, Poland) is an esteemed educator. With a profound background in computer science, engineering, and management, Krystian obtained his diplomas and a PhD from renowned Polish universities. Drawing upon over 20 years of expertise in IT systems modelling, he is a true specialist in business process modelling and optimization. His scholarly contributions include over 30 research papers and several edited books, and his insightful work is recognized through his service as a reviewer for top-class conferences and journals.

Sunday, July 9 14:30 - 16:00 (Africa/Tunis)

DistInSys: 3rd IEEE International Workshop on Distributed Intelligent Systems

Room: ORCHIDEE 5
Chair: Massimo Villari (University of Messina, Italy)
DistInSys.1 Exploring the Performance and Efficiency of Transformer Models for NLP on Mobile Devices
Ioannis Panopoulos and Sokratis Nikolaidis (National Technical University of Athens, Greece); Stylianos Venieris (Samsung AI, United Kingdom (Great Britain)); Iakovos S. Venieris (National Technical University of Athens, Greece)

Deep learning (DL) is characterised by its dynamic nature, with new deep neural network (DNN) architectures and approaches emerging every few years, driving the field's advancement. At the same time, the ever-increasing use of mobile devices (MDs) has resulted in a surge of DNN-based mobile applications. Although traditional architectures, like CNNs and RNNs, have been successfully integrated into MDs, this is not the case for Transformers, a relatively new model family that has achieved new levels of accuracy across AI tasks but poses significant computational challenges. In this work, we aim to take steps towards bridging this gap by examining the current state of Transformers' on-device execution. To this end, we construct a benchmark of representative models and thoroughly evaluate their performance across MDs with different computational capabilities. Our experimental results show that Transformers are not accelerator-friendly and indicate the need for software and hardware optimisations to achieve efficient deployment.

DistInSys.2 When Robotics Meets Distributed Learning: The Federated Learning Robotic Network Framework
Roberto Marino, Lorenzo Carnevale and Massimo Villari (University of Messina, Italy)

Federated Learning (FL) is a cutting-edge technology for distributed solving of large-scale problems using local data exclusively. The potential of Federated Learning is nowadays clear in different contexts: from automatic analysis of healthcare data to object recognition in public video streams, and from distributed search for data breaches and finance frauds to collaborative learning of hand typing on mobile phones. Multi-robot systems can also largely benefit from FL for problems like trajectory prediction, non-colliding trajectory generation, distributed localization and mapping, and distributed reinforcement learning. In this paper, we propose a multi-robot framework that includes distributed learning capabilities by using Decentralized Stochastic Gradient Descent on graphs. First of all, we motivate the position of the paper by discussing the privacy-preserving problem for multi-robot systems and the need for decentralized learning. Then, we build our methodology starting from a set of prior definitions. Finally, we discuss in detail possible applications in the robotics field.

DistInSys.3 TEMA: Event Driven Serverless Workflows Platform for Natural Disaster Management
Christian Sicari, Alessio Catalfamo, Lorenzo Carnevale and Antonino Galletta (University of Messina, Italy); Daniel Balouek-Thomert (IMT Atlantique - Nantes Université École Centrale Nantes - INRIA, France & SCI Institute, University of Utah, UT, USA); Manish Parashar (Scientific Computing Imaging Institute, USA & University of Utah, USA); Massimo Villari (University of Messina, Italy)

TEMA is a Horizon Europe-funded project that addresses Natural Disaster Management through sophisticated Cloud-Edge Continuum infrastructures: data analysis algorithms are wrapped in serverless functions and deployed on a distributed infrastructure by a Federated Learning scheduler that constantly monitors the infrastructure in search of the best way to satisfy the required QoS constraints. In this paper, we discuss the advantages of serverless workflows and how they can be used and monitored to natively trigger complex algorithm pipelines in the continuum, dynamically placing and relocating them taking into account incoming IoT data, QoS constraints, and the current status of the continuum infrastructure. We then present the Urgent Function Enabler (UFE) platform, a fully distributed architecture able to define, spread, and manage FaaS functions, using local IoT data managed through the FIWARE ecosystem and a computing infrastructure composed of mobile and stable nodes.

Sunday, July 9 16:00 - 16:30 (Africa/Tunis)

Coffee Break

Sunday, July 9 16:30 - 17:30 (Africa/Tunis)

ICTS4eHealth - Special Session: Technological Advancements in Artificial Intelligence for Medical and Healthcare Applications (TAAIMHA)

Room: TULIPE 2 / webex 2
Chair: Ziad Kobti (University of Windsor, Canada)
ICTS4eHealth - Special Session.1 SMOTE Oversampling and Near Miss Undersampling Based Diabetes Diagnosis from Imbalanced Dataset with XAI Visualization
Nasim Mahmud Nayan (University of Information Technology and Sciences, Bangladesh); Ashraful Islam (Independent University Bangladesh & Center for Computational and Data Sciences, Bangladesh); Muhammad Usama Islam (University of Louisiana at Lafayette, USA); Eshtiak Ahmed (Tampere University, Finland); Mohammad Mobarak Hossain (University of Information Technology and Sciences, Bangladesh); Md Zahangir Alam (Independent University, Bangladesh)

This study investigated the predictive ability of ten different machine learning (ML) models for diabetes using a dataset that was not evenly distributed. Additionally, the study evaluated the effectiveness of two oversampling and undersampling methods, namely the Synthetic Minority Oversampling Technique (SMOTE) and the Near-Miss algorithm. Explainable Artificial Intelligence (XAI) techniques were employed to enhance the interpretability of the model's predictions. The results indicate that the extreme gradient boosting (XGB) model combined with SMOTE oversampling technique exhibited the highest accuracy of 99% and an F1-score of 1.00. Furthermore, the utilization of XAI methods increased the dependability of the model's decision-making process, rendering it more appropriate for clinical use. These results imply that integrating XAI with ML and oversampling techniques can enhance the early detection and management of diabetes, leading to better diagnosis and intervention.
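
The SMOTE step used above can be sketched in a few lines (a simplified, pure-Python illustration of SMOTE's interpolation idea, not the study's pipeline; the function name and parameters are hypothetical):

```python
import random

def smote_like_oversample(minority, n_new, k=2, seed=0):
    """Generate synthetic minority samples by interpolating between a
    minority sample and one of its k nearest neighbours, which is the
    core idea of SMOTE."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest neighbours of x by squared Euclidean distance
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
        )[:k]
        nb = rng.choice(neighbours)
        t = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + t * (b - a) for a, b in zip(x, nb)))
    return synthetic
```

In practice one would use a maintained implementation such as `SMOTE` from the imbalanced-learn library rather than this sketch.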

ICTS4eHealth - Special Session.2 Comparative Study of LBP and HOG Feature Extraction Techniques for COVID-19 Pneumonia Classification
Nourin Ahmed, Namarta Vij and Ziad Kobti (University of Windsor, Canada)

In this research, we investigated the potential of effective feature extraction techniques in combination with traditional machine learning algorithms for classifying COVID-19 pneumonia from chest X-rays. In times of heavy pressure on the whole medical system, developing a reliable automated way to distinguish such images from normal X-rays and viral pneumonia is critical to aid physicians in diagnosing possible COVID-19 cases efficiently and reliably. In this work, we present a realistic machine learning-based model for classifying COVID-19 pneumonia-affected lungs, non-COVID pneumonia-affected lungs, and healthy lungs from chest X-ray images. A local binary pattern (LBP) with SVM-based model is shown to achieve 100% accuracy, better than all state-of-the-art methods. In multi-class classification, the model achieved 96% accuracy, which is competitive with the results of most deep learning methods.
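
The LBP features underlying such a model can be sketched as follows (a minimal pure-Python illustration of the basic 3x3 LBP operator and its histogram descriptor, not the authors' code; function names are hypothetical):

```python
def lbp_code(img, r, c):
    """8-neighbour local binary pattern code for pixel (r, c): each
    neighbour whose intensity is >= the centre contributes one bit,
    taken clockwise starting from the top-left neighbour."""
    centre = img[r][c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= centre:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over interior pixels -- the
    texture descriptor typically fed to a classifier such as an SVM."""
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_code(img, r, c)] += 1
    return hist
```

A production pipeline would normally use an optimized implementation such as `skimage.feature.local_binary_pattern`.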

ICTS4eHealth - Special Session.3 2-D Numerical Modeling of Fluid Structure Interaction Analysis and Absorption Study of Red Blood Cell Inside the Capillary
Mouna Dhmiri (ENIT, Tunisia); Yassine Manai (National Schools of Engineers of Tunis, Tunisia); Tahar Ezzedine (ENIT, Tunisia)

The Beer-Lambert method, based on the wavelength-absorption properties of blood elements, has become increasingly attractive for solving non-invasive medical analysis problems. In this paper, a two-dimensional geometric model of a Red Blood Cell (RBC) inside a capillary is developed; the blood is treated as a fluid with laminar, incompressible flow interacting with one of its elements, the RBC. This work aims to study the fluid-structure interaction between plasma and a single RBC. The Arbitrary Lagrangian-Eulerian (ALE) formulation is used to describe the geometrical changes of the biofluid domain. Furthermore, the attenuation of a photon beam passing through an RBC inside the micro-vessel is determined in order to investigate its wavelength absorptivity. The velocity and pressure fields in the bifurcation region are simulated with the COMSOL Multiphysics software.
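The Beer-Lambert attenuation underlying the absorption study can be sketched numerically. The coefficients below are illustrative placeholders, not values from the paper:

```python
import math

def transmitted_intensity(i0, mu_a, path_length):
    """Beer-Lambert law: I = I0 * exp(-mu_a * d), where mu_a is the
    absorption coefficient (1/cm) and d the optical path length (cm)."""
    return i0 * math.exp(-mu_a * path_length)

def absorbance(i0, i):
    """Absorbance A = log10(I0 / I)."""
    return math.log10(i0 / i)

# Illustrative values only: unit incident intensity, an assumed absorption
# coefficient of 2.0 cm^-1, and a path length on the order of an ~8 um RBC.
i = transmitted_intensity(1.0, 2.0, 8e-4)
print(round(absorbance(1.0, i), 6))  # 0.000695
```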

Sunday, July 9 16:30 - 18:30 (Africa/Tunis)

MLCID: IEEE Workshop on Machine Learning from Class Imbalanced Data

Room: ORCHIDEE 5
Chairs: Ghada Altarawneh (Mutah University, Jordan), Ahmad Hassanat (Mutah University, Jordan)
MLCID.1 The Effect of Feature Selection on Diabetes Prediction Using Machine Learning
Rania Alhalaseh (Mutah University, Jordan); Dhuha Ali Ghani AL-Mashhadany (Mutah, Jordan); Mohammad Abbadi (Mutah University, Jordan)

The primary goal of this work is to enhance the performance of diabetes prediction using machine learning. The Pima Indians Diabetes and Mendeley datasets were used to evaluate the performance of the classifiers. Class imbalance is the main problem in the Mendeley dataset; therefore, a new data balancing technique called Random Data Partitioning with Voting Rule (RDPVR) is used, which outperformed other balancing techniques and improved the accuracy of diabetes prediction. To improve the efficiency of the machine learning algorithms, the feature selection methods used in this work are Recursive Feature Elimination (RFE), Analysis of Variance, and Step Forward selection, applied with three machine learning algorithms, Logistic Regression, Naive Bayes, and Random Forest, plus an ensemble soft voting of the three classifiers. RFE with ensemble soft voting recorded the highest accuracy: 97% on the Mendeley dataset and 81% on Pima. Random Forest also recorded 97% accuracy when using RFE.
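A pipeline in the spirit of the described RFE-plus-soft-voting setup could look like the following scikit-learn sketch. Synthetic data stands in for the Pima and Mendeley datasets, and all hyperparameters are guesses rather than the authors' settings:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import Pipeline

# Synthetic stand-in for a diabetes dataset.
X, y = make_classification(n_samples=300, n_features=10,
                           n_informative=4, random_state=0)

# Recursive Feature Elimination, then soft voting over three classifiers.
pipeline = Pipeline([
    ("rfe", RFE(LogisticRegression(max_iter=1000), n_features_to_select=5)),
    ("vote", VotingClassifier(
        estimators=[
            ("lr", LogisticRegression(max_iter=1000)),
            ("nb", GaussianNB()),
            ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ],
        voting="soft",  # average the classifiers' predicted probabilities
    )),
])
pipeline.fit(X, y)
print(round(pipeline.score(X, y), 2))
```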

MLCID.2 A Framework for Few-Shot Network Threats Based on Generative Adversarial Networks
Long Chen (Beijing University of Chemical Technology, China & Qi'anxin Group, China); Yanqing Song and Jianguo Chen (China Applied Sciences and Technologies Research, China)

Malicious logs are scarce relative to the large volume of normal logs, which makes them confusing and highly imbalanced. We propose a framework for generating network-threat samples based on Generative Adversarial Networks (GANs). This paper addresses the imbalance of multi-dimensional sample data such as logs, traffic, programs, and feature spaces in the field of cyberspace security by means of generative adversarial networks. We carried out a large-scale adversarial generation experiment on security event logs based on SeqGAN and generated corresponding log text for data augmentation, which effectively mitigates the few-shot problem. The results show that an AC-GAN-augmented network traffic dataset improves the performance of supervised learning classification compared to both the original imbalanced dataset and a dataset synthesized with SMOTE. This has far-reaching implications for threat detection, offensive and defensive scenarios of various types, and cryptography algorithms.

MLCID.3 The Jeopardy of Learning from Over-Sampled Class-Imbalanced Medical Datasets
Ahmad Hassanat, Ghada Altarawneh and Ibraheem M. Alkhawaldeh (Mutah University, Jordan); Yasmeen Jamal Alabdallat (Hashemite University, Jordan); Amir Atiya (Cairo University, Egypt); Ahmad Abujaber (Hamad Medical Corp., Qatar); Ahmad S. Tarawneh (Eötvös Loránd University, Hungary)

This paper discusses the usefulness of the oversampling approach for class-imbalanced structured medical datasets. In particular, we examine the oversampling approach's prevailing assumption that synthesized instances actually belong to the minority class. We used an off-the-shelf oversampling validation system to test this assumption. According to the experimental results, for at least one of the three medical datasets used, the oversampling methods validated generated new samples that did not belong to the minority class. Additionally, the error rate varied based on the dataset and oversampling method tested. We therefore argue that synthesizing new instances without first confirming that they align with the minority class is a risky approach, especially in medical fields where misdiagnosis can have serious repercussions. As alternatives to oversampling, ensemble, data partitioning, and method-level approaches are advised, since they do not make false assumptions.

MLCID.4 Human Face Detection Improvement Using Subclass Learning and Low Variance Directions
Soumaya Nheri (Innov'COM Lab / Digital Security Lab Higher School of Communication of Tunis University of Carthage, Tunisia)

In order to increase the face detection rate in complicated images, a novel approach is presented in this work. The suggested method seeks to improve accuracy by utilizing low variance directions for data projection and one-class subclass learning. Previous studies have demonstrated that taking into account the information carried by low variance directions enhances the performance of one-class classification models. Subclass learning is also highly effective for dispersed data. The approach is assessed through a comparative study in a decontextualized framework and a contextualized evaluation specific to face detection. Results reveal that the suggested method outperforms competing methods, demonstrating its potential to advance face detection technologies.

MLCID.5 A Machine Learning Approach for Predicting Lung Metastases and Three-Month Prognostic Factors in Hepatocellular Carcinoma Patients Using SEER Data
Ibraheem M. Alkhawaldeh, Ghada Altarawneh and Mohammad Al-Jafari (Mutah University, Jordan); Mahmoud S. Abdelgalil (Ain Shams University, Egypt); Ahmad S. Tarawneh (Eötvös Loránd University, Hungary); Ahmad Hassanat (Mutah University, Jordan)

Using SEER data, this work seeks to create a machine learning (ML) model to predict lung metastases (LM) and three-month prognostic variables in hepatocellular carcinoma (HCC) patients. The study comprised 34,861 HCC patients, 1,783 (5.11%) of whom had lung metastases, and 859 were suitable for the 3-month prognostic model. The ML models were cross-validated twice, and with an AUC of 1 and an F1 score of 0.997, the random forest (RF) classifier was found to be the best choice for predicting LM. The Easy Ensemble (EE) classifier was utilized to address the dataset's class imbalance problem. The study also indicated that employing resampling approaches such as SMOTE can result in synthetic data, which can reduce model reliability, making EE the recommended choice for addressing the class imbalance. Overall, this study aims to improve clinical decision-making by providing a comprehensive predictive model for HCC patients with LM and a 3-month prognosis.

Sunday, July 9 16:30 - 17:30 (Africa/Tunis)

OWC Keynote: Visible light communication (VLC) for cars, by Takaya Yamazato (Japan)

Room: TULIPE 3 / webex 3
Chair: Chedlia Ben Naila (Nagoya University, Japan)

Sunday, July 9 20:00 - 21:00 (Africa/Tunis)

Welcome Reception

Monday, July 10

Monday, July 10 9:00 - 9:30 (Africa/Tunis)

Opening

Monday, July 10 9:30 - 10:30 (Africa/Tunis)

Keynote: AI-Enabled 6G: Embracing Wisdoms from Classical Algorithms, by Khaled B. Letaief (Hong Kong)

Chair: Azzedine Boukerche (University of Ottawa, Canada)

The past five years have witnessed ever-increasing research interest in artificial intelligence (AI) for the design of 6G wireless systems [1,2]. Despite the unprecedented performance gains, the black-box nature of existing AI algorithms has raised many crucial concerns, e.g., insufficient scalability, poor generalization, and the lack of theoretical guarantees, which contradict the stringent reliability requirements in practice. By contrast, classical algorithms mostly enjoy well-grounded theoretical analysis. However, built upon simplified signal and system models, their performance tends to be limited in complicated real-world deployments. In this talk, we begin by introducing the 6G vision, challenges, and opportunities. Then, by bridging AI with the wisdom of classical algorithms, we introduce two general frameworks that may offer the best of both worlds, i.e., both competitive performance and theoretical support. The first framework, called neural calibration, targets low-complexity non-iterative algorithms. Based on the permutation equivariance property, neural-calibrated algorithms can scale with the problem size and generalize across varying network settings, making them suitable for dynamic large-scale systems. The second framework, termed fixed point networks, is compatible with general iterative algorithms that are prevalent in wireless transceiver design. Based on fixed point theory, provably convergent and adaptive AI-enhanced iterative algorithms can be constructed in a unified manner. Along with the general frameworks, we also present their applications to CSI feedback, beamforming, and channel estimation, among others, in emerging 6G wireless systems.
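The fixed-point iteration underlying the second framework can be illustrated generically; the toy contraction below is a textbook example, not the wireless transceiver setting of the talk:

```python
import numpy as np

def fixed_point_iterate(f, x0, tol=1e-9, max_iter=1000):
    """Iterate x <- f(x) until successive iterates stop moving; for a
    contraction mapping f, Banach's theorem guarantees convergence to
    the unique fixed point."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if np.linalg.norm(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Toy example: f(x) = 0.5 * x + 1 is a contraction with fixed point x* = 2.
f = lambda x: 0.5 * x + np.array([1.0])
x_star = fixed_point_iterate(f, np.array([0.0]))
print(x_star)  # approximately [2.0]
```

Fixed point networks replace a hand-designed update map like `f` with a learned one while keeping this convergent iteration structure.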

[1] K. B. Letaief, Y. Shi, J. Lu, and J. Lu, "Edge Artificial Intelligence for 6G: Vision, enabling technologies, and applications," IEEE Journal on Selected Areas in Communications, vol. 40, no. 1, pp. 5-36, Jan. 2022.

[2] K. B. Letaief, W. Chen, Y. Shi, J. Zhang, and Y-J Zhang, "The roadmap to 6G: AI empowered wireless networks," IEEE Communications Magazine, vol. 57, no. 8, pp. 84-90, Aug. 2019.

Monday, July 10 10:30 - 11:00 (Africa/Tunis)

Coffee Break

Monday, July 10 11:00 - 13:00 (Africa/Tunis)

ICTS4eHealth - S2: IoT, Edge Computing, and Relational Agents

Room: TULIPE 2 / webex 2
Chair: Anna Maria Mandalari (University College London, United Kingdom (Great Britain))
ICTS4eHealth - S2.1 Internet of Things (IoT) for Elderly's Healthcare and Wellbeing: Applications, Prospects and Challenges
Achraf Othman (Mada Qatar Assistive Technology Center, Qatar); Ahmed Elsheikh (Hamad Bin Khalifa University, Qatar); Amnah Mohammed Al-Mutawaa (Mada - Qatar Assistive Technology Center, Qatar)

The Internet of Things (IoT) is an ever-evolving ecosystem that enables interactions between humans and interconnected objects, including healthcare domains. IoT addresses the lack of access to medical resources, the growing elderly population with chronic conditions, and the rising medical costs. This research analyzes 20 review articles on IoT applications for the well-being and healthcare of the elderly population. IoT can significantly improve well-being and healthcare through remote monitoring, communication, automating tasks, and access to various services and information. However, several limitations and challenges must be considered when employing IoT for the elderly's well-being and healthcare.

ICTS4eHealth - S2.2 Formal Analysis of an IoT-Based Healthcare Application
Maissa Elleuch (Digital Research Center of Sfax, Technopark of Sfax, Tunisia); Sofiene Tahar (Concordia University, Canada)

In the healthcare context, remote monitoring based on Internet of Things (IoT) technology is a widespread application. The underlying entities interact to provide various services, so their communication must be free of defects such as deadlocks. Correctly validating these IoT applications is a major concern because of their distributed and concurrent features, as well as the safety-critical nature of the health context. In this paper, we show how we use a model checking approach to accurately validate the behavior of an IoT-based healthcare application. We focus on verifying three important classes of properties, namely safety, liveness, and absence of deadlock. The verification is carried out by means of the UPPAAL model checker.
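The deadlock-freedom property being verified can be illustrated on a plain, untimed state graph. UPPAAL itself checks timed automata symbolically; the states and transitions below are hypothetical and serve only to show the idea of a reachable state with no outgoing transitions:

```python
from collections import deque

def has_deadlock(transitions, start):
    """BFS over an explicit state graph: report a deadlock if any
    reachable state has no outgoing transitions."""
    seen, queue = {start}, deque([start])
    while queue:
        s = queue.popleft()
        succ = transitions.get(s, [])
        if not succ:
            return True  # reachable state with no successors
        for t in succ:
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return False

# Hypothetical device/gateway protocol states.
ok = {"idle": ["send"], "send": ["ack"], "ack": ["idle"]}
bad = {"idle": ["send"], "send": ["wait"], "wait": []}
print(has_deadlock(ok, "idle"), has_deadlock(bad, "idle"))  # False True
```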

ICTS4eHealth - S2.3 Design of a Blockchain-Based Secure Storage Architecture for Resource-Constrained Healthcare
Laura Soares, Jéferson Nobre and Gabriel Kerschner (Federal University of Rio Grande do Sul, Brazil)

The Internet of Things (IoT) paradigm can improve a broad range of applications, such as medical sensors, smart cities, and industrial monitoring. In healthcare, IoT can aid in management tasks and improve the quality of life of intensive care patients. However, securing medical devices and data is a crucial task: not only do they handle Electronic Medical Records (EMRs), but disruptions can also affect a patient's treatment. Blockchain technologies applied in the healthcare context can provide privacy, immutability, decentralization, and easier access to and sharing of medical data. Despite the emergence of applications aiming to solve security issues in the Healthcare IoT (HIoT) scenario using blockchain, much remains to be addressed, mainly regarding throughput, data storage, and efficient use of resources. This work proposes a blockchain-based storage architecture for HIoT, using a private network and distributed data storage to achieve integrity, accountability, and availability of medical data.

ICTS4eHealth - S2.4 An IoT-Based Solution for Monitoring Young People with Emotional Behavioral Disorders in a Residential Childcare Setting
Bharat Paudyal and Chris Creed (Birmingham City University, United Kingdom (Great Britain)); Sherelle Knowles and Hilton Mutariswa (Sherlock Healthcare Services, United Kingdom (Great Britain)); Ian Williams (Birmingham City University, United Kingdom (Great Britain))

This paper focuses on the design, development, and preliminary evaluation of a cloud-based Internet of Things (IoT) system that utilises wearable sensors to remotely monitor the physiological patterns of young people with Emotional Behavioral Disorder (EBD). We report on an exploratory study with caregivers to understand the challenges exhibited by young people with EBD. This informed the design of a cloud-based IoT system that retrieves real-time physiological parameters, such as heart rate and body temperature, through non-invasive wearable sensors, enabling caregivers to monitor a young person's activity remotely and provide enhanced support based on the collected parameters. To validate the system's feasibility, an initial pilot study was conducted within a healthcare environment involving children with EBD. The study confirms the system's viability and identifies important areas for future improvement. By utilizing IoT technology, our work aims to enhance the healthcare sector's ability to support individuals with EBD and improve their overall well-being.

ICTS4eHealth - S2.5 Ambient Sound Analysis for Non-Invasive Indoor Activity Detection in Edge Computing Environments
Cheolhwan Lee, Homin Kang, Yeong Jun Jeon and Soon Ju Kang (Kyungpook National University, Korea (South))

Prior research on detecting residents' behavior from sounds generated in living spaces has sent the sound data to a server or cloud and used relatively large artificial intelligence models. However, this method generates excessive data traffic and carries a privacy risk by transmitting sounds unnecessary for behavior detection. In this paper, we explore data processing methods suitable for a non-invasive indoor ambient sound analysis system operating in an edge environment. To achieve this goal, we implemented Mel-spectrogram and Mel-Frequency Cepstral Coefficient (MFCC) based models for classifying environmental sounds, comparing their performance under different preprocessing parameters and optimizations. Furthermore, we evaluated the computational resource usage and performance of the models in both Raspberry Pi and microcontroller environments.

ICTS4eHealth - S2.6 Designing Healthcare Relational Agents: A Conceptual Framework with User-Centered Design Guidelines
Ashraful Islam (Independent University Bangladesh & Center for Computational and Data Sciences, Bangladesh); Beenish Moalla Chaudhry (University of Louisiana at Lafayette, USA); Aminul Islam (University of Louisiana at Lafayette, Bangladesh)

This paper presents a conceptual framework for designing relational agents (RAs) in healthcare contexts, developed from the findings of multiple user studies on the acceptance, efficacy, and usability of RAs. The framework emphasizes a user-centered design (UCD) approach that takes into account the unique needs and preferences of patients, non-patient users, and healthcare professionals. Based on the results of these studies, we analyzed and refined the RA designs and propose a UCD-based conceptual framework for designing effective and user-friendly healthcare RAs. The paper aims to provide an initial resource for researchers, designers, and developers interested in building RAs for healthcare contexts using UCD techniques.

Monday, July 10 11:00 - 13:00 (Africa/Tunis)

S1: 5th & 6th Generation Networks and Beyond (hybrid)

Room: TULIPE 3 / webex 3
Chair: Nawel Zangar (ESIEE PARIS, France)
S1.1 RL-CEALS: Reinforcement Learning for Collaborative Edge Assisted Live Streaming
Ilyes Mrad (Qatar University, Qatar); Emna Baccour (Hamad Bin Khalifa University, Qatar); Ridha Hamila (Qatar University, Qatar); Muhammad Asif Khan (Qatar Mobility Innovations Center, Qatar & Qatar University, Qatar); Aiman Erbad and Mounir Hamdi (Hamad Bin Khalifa University, Qatar)

Crowdsourced live streaming services (CLS) present significant challenges due to massive data size and dynamic user behavior. Service providers must accommodate personalized QoE requests, while managing computational burdens on edge servers. Existing CLS approaches use a single edge server for both transcoding and user service, potentially overwhelming the selected node with high computational demands. In response to these challenges, we propose the Reinforcement Learning-based-Collaborative Edge-Assisted Live Streaming (RL-CEALS) framework. This innovative approach fosters collaboration between edge servers, maintaining QoE demands and distributing computational burden cost-effectively. By sharing tasks across multiple edge servers, RL-CEALS makes smart decisions, efficiently scheduling serving and transcoding of CLS. The design aims to minimize the streaming delay, the bitrate mismatch, and the computational and bandwidth costs. Simulation results reveal substantial improvements in the performance of RL-CEALS compared to recent works and baselines, paving the way for a lower cost and higher quality of live streaming experience.

S1.2 A Hybrid Algorithm for Service Bursting Based on GA and BPSO in Hybrid Clouds
Wissem Abbes (REGIM, Tunisia & ENIS, Tunisia); Hamdi Kchaou (REGIM, University of Sfax, Tunisia); Zied Kechaou (REGIM-Lab, ENIS, University of Sfax, Tunisia); Adel M. Alimi (REGIM, University of Sfax, National School of Engineers, Tunisia)

Companies need to be creative and flexible, especially regarding customer-specific web applications, because competition is intense and the market changes quickly. The hybrid cloud is now a popular choice for businesses that want to make the most of their resources and get things done faster by combining private and public cloud deployments. At runtime, some components of new applications are assigned to the private cloud, while others are assigned to the public cloud. For this, a hybrid algorithm based on GA and BPSO is proposed, which supports effective optimization of service bursting on a hybrid cloud platform. Based on the IBM benchmark, the experimental results show that our algorithm incurred less cost than related works' experiments.

S1.3 Performance Improvements Through Recommendations for a PLC Network with Collaborative Caching in Remote Areas
Zunera Umar and Michela Meo (Politecnico di Torino, Italy)

The emergence of Power Line Communication (PLC) technology has facilitated the expansion of broadband access networks in remote areas by utilizing the existing wired power infrastructure. However, the growing demand for data, driven by the popularity of communication services, presents a formidable challenge to the underlying PLC technology. Collaborative caching involves the sharing of cached content among neighboring nodes, thereby improving the cache hit ratio (CHR), reducing network and backhaul congestion, and ultimately enhancing network performance. Our research proposes a recommendation system, integrated into the collaborative caching mechanism of a PLC network, that suggests relevant content to users based on their preferences and historical usage patterns, leading to an increase in CHR and a reduction in network congestion. The results indicate that the proposed system significantly improves network performance by reducing download delay and saving precious backhaul link resources, thus making PLC networks more effective for remote areas.

S1.4 Prediction of RTT Through Radio-Layer Parameters in 4G/5G Dual-Connectivity Mobile Network
Stefania Zinno (University Federico II of Naples, Italy); Antonia Affinito (University of Napoli Federico II, Italy); Nicola Pasquino and Giorgio Ventre (University of Naples Federico II, Italy); Alessio Botta (University of Napoli Federico II, Italy)

With E-UTRA-NR Dual Connectivity, terminals can connect to 4G Long-Term Evolution and 5G New Radio networks at the same time. This technology allows using multiple bandwidths belonging to the two radio layers, enhancing overall system performance. The system also adopts Multiple Input Multiple Output on top of the dual radio-layer access. The authors predict application-layer Round-Trip Time (RTT) with machine learning algorithms leveraging radio-layer parameters such as received power and signal quality. Binary classification techniques are adopted to predict whether RTT values are above or below a threshold. The prediction is tested with real data collected in two measurement campaigns. Results show that Random Forest and Decision Tree classifiers are the best algorithms, with precision scores of 0.84 and 0.92 respectively in both measurement setups. They also show that radio- and physical-layer information is the most important for predicting application-layer RTT.

Monday, July 10 11:00 - 13:00 (Africa/Tunis)

S2: Artificial Intelligence (AI) in Computers and Communications (onsite)

Room: TULIPE 1 / webex 1
Chair: Michael Kounavis (Meta Platforms Inc., USA)
S2.1 Modeling Digital Twins of Kubernetes-Based Applications
Davide Borsatti, Walter Cerroni and Luca Foschini (University of Bologna, Italy); Genady Ya. Grabarnik (St. John's University, USA); Filippo Poltronieri (University of Ferrara, Italy); Domenico Scotece (University of Bologna, Italy); Larisa Shwartz (IBM Research, USA); Cesare Stefanelli, Mauro Tortonesi and Mattia Zaccarini (University of Ferrara, Italy)

Kubernetes provides several functions that can help service providers to deal with the management of complex container-based applications. However, most of these functions need a time-consuming and costly customization process to address service-specific requirements. The adoption of Digital Twin (DT) solutions can ease the configuration process by enabling the evaluation of multiple configurations and custom policies by means of simulation-based what-if scenario analysis. To facilitate this process, this paper proposes KubeTwin, a framework to enable the definition and evaluation of DTs of Kubernetes applications. Specifically, this work presents an innovative simulation-based inference approach to define accurate DT models for a Kubernetes environment. We experimentally validate the proposed solution by implementing a DT model of an image recognition application that we tested under different conditions to verify the accuracy of the DT model. The soundness of these results demonstrates the validity of the KubeTwin approach and calls for further investigation.

S2.2 A Cognitive Module for Secondary Radios Operating at the 2.5 GHz LTE Band on Indoor Environments
Marilson Duarte Soares, Sr. (Universidade Federal Fluminense, Brazil); Diego Passos (Instituto Politécnico de Lisboa, Portugal & Laboratório MídiaCom, Brazil); Pedro Gonzalez Castellanos (Federal Fluminense University, Brazil)

The 2.5 GHz band is allocated to licensed LTE systems. Due to the propagation characteristics of this band, the signal is severely degraded by walls and other similar obstacles, making coverage in indoor environments difficult. While this is an issue for providing LTE indoor coverage, it may also produce spectrum opportunities for secondary users. In this paper, we introduce the design of a cognitive module for secondary radios operating on this band alongside the primary LTE users. This module leverages both statistical and machine learning methods to detect idle channel periods and to estimate for how long the secondary user may use the band. Based on real data of LTE channel usage in indoor environments, we propose a mathematical model for the length of idle periods and show that a small set of narrow-band energy sensors is enough to detect transmission opportunities.

S2.3 A Novel Approach of ESN Reservoir Structure Learning for Improved Predictive Performance
Samar Bouazizi (Research Groups in Intelligent Machines Lab, Tunisia); Emna Benmohamed (University of Sfax, Tunisia); Hela Ltifi (REGIM - University of Sfax - Tunisia, Tunisia)

This paper presents a novel method to enhance the predictive performance of the Echo State Network (ESN) model by adopting reservoir topology learning. ESNs are a type of Recurrent Neural Network (RNN) that have demonstrated considerable potential in various applications, but they can be challenging to train and optimize due to their random initialization. To improve the learning capabilities of ESNs and enhance their effectiveness in a broad range of predictive tasks, we utilize a structure learning algorithm. The proposed approach modifies the ESN reservoir's connectivity by applying techniques such as reversing, deleting, and adding connections. We evaluate the performance of our proposal using both synthetic and real datasets, and our results indicate that it can substantially improve predictive accuracy compared to traditional ESNs.
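A baseline ESN of the kind the paper starts from, with a fixed random reservoir and a trained linear readout, can be sketched as follows. The paper's contribution, learning the reservoir topology, is not shown; sizes and scalings here are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 50

# Random input and reservoir weights; in a standard ESN only the readout
# is trained, while the reservoir connectivity stays fixed.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # scale spectral radius below 1

def run_reservoir(inputs):
    """Collect reservoir states x(t+1) = tanh(W_in u(t) + W x(t))."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Train a linear readout by ridge regression to predict the next input.
u = np.sin(np.linspace(0, 8 * np.pi, 400))
X, y = run_reservoir(u[:-1]), u[1:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
print(round(float(np.mean((X @ W_out - y) ** 2)), 6))  # small training MSE
```

The structure learning the paper proposes would rewire `W` (reversing, deleting, adding connections) instead of leaving it at its random initialization.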

S2.4 Effects of Secured DNS Transport on Resolver Performance
Etienne Le Louët and Antoine Blin (Gandi SAS, France); Julien Sopena (Université Pierre et Marie Curie, France); Ahmed Amamou and Kamel Haddadou (GANDI SAS, France)

Designed 40 years ago, DNS is still a core component of the Internet: billions of DNS queries are processed each day to resolve domain names to IP addresses. Originally designed for performance and scalability, its transport protocol is unencrypted, leading to security flaws. Recently, secure transport protocols have emerged, but the question of their scalability and sustainability remains open. In this paper we study the cost of switching from the legacy DNS transport to the newer ones, first by characterizing the shape of the traffic between clients and secured public resolvers, and then by replaying that traffic to measure the added cost of each protocol. We found that, while connections usually stayed open, in some cases many closures and re-openings occurred. Comparing these profiles over different DNS transports, we observe that switching from the legacy protocol to a more secure one can incur a significant performance penalty.

S2.5 Improved Flaky Test Detection with Black-Box Approach and Test Smells
David J. A. Carmo, Luísa Gonçalves and Ana M Dias (Universidade da Beira Interior, Portugal); Nuno Pombo (University of Beira Interior, Portugal)

Flaky tests pose a challenge for software development, as they produce inconsistent results even when there are no changes to the code or test. This leads to unreliable outcomes and makes it difficult to diagnose and troubleshoot issues. In this study, we aim to identify flaky test cases in software development using a black-box approach. Flaky test cases are unreliable indicators of code quality and can cause issues in software development. Our proposed model, Fast-Flaky, achieved the best results under cross-validation. In the per-project validation, the results showed an overall increase in accuracy but a decrease in other metrics. However, there were some projects where the results improved with the proposed pre-processing techniques. These results provide practitioners with a method for identifying flaky test cases and may inspire further research on the effectiveness of different pre-processing techniques or the use of additional test smells.

Monday, July 10 11:00 - 13:00 (Africa/Tunis)

S3: Cloud and Edge Computing (online)

Room: Webex 5
Chair: Chaima Ben Rabah (High School of Communications of Tunis, Tunisia & IMT Atlantique, France)
S3.1 Multi-Stage Flow Table Caching: From Theory to Algorithm
Ying Wan (China Mobile (Suzhou) Software Technology, China); Haoyu Song (Futurewei Technologies, USA); Tian Pan (Beijing University of Posts and Telecommunications, China); Bin Liu (Tsinghua University, China); Yu Jia (China Mobile (Suzhou) Software Technology Co. Ltd, China); Ling Qian (China Mobile (Suzhou) Software Technology Co., Ltd, China)

Flow table capacity in programmable switches is constrained due to the limited on-chip hardware resource. The current mainstream approach is to cache only the popular rules in hardware. Existing works focus on selecting the cache entries for a single flow table to achieve high cache hit-rate, which cannot adapt to multi-stage flow tables. Due to hardware constraints as well as service requirements, it is often necessary to decompose a single flow table to a multi-stage flow table or directly create multiple stages of tables in hardware. For the first time, we abstract and analyze the multi-stage flow table caching problem OMFC, and prove its NP-hardness. Further, we propose a Greedy Caching Algorithm (GCA) for OMFC, which considers both the rule popularity across multiple stages of flow tables and entry popularity within the same stage. The simulation results show GCA achieves a 10-30% higher cache hit-rate than the existing algorithms.
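The popularity-driven greedy selection at the heart of GCA can be illustrated in miniature. The sketch below handles a single stage only; the paper's GCA additionally weighs popularity across stages, and the hit counts here are hypothetical:

```python
def greedy_cache(rules, capacity):
    """Greedy caching sketch: repeatedly pick the rule with the highest
    hit count until the hardware flow table is full."""
    chosen, remaining = [], dict(rules)
    used = 0
    while remaining and used < capacity:
        rule = max(remaining, key=remaining.get)  # most popular remaining rule
        chosen.append(rule)
        del remaining[rule]
        used += 1
    return chosen

# Hypothetical per-rule hit counts observed in software.
popularity = {"r1": 500, "r2": 120, "r3": 900, "r4": 40}
print(greedy_cache(popularity, capacity=2))  # ['r3', 'r1']
```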

S3.2 Proactive Resource Orchestration Framework for Cloud/Fog Platform
Somnath Mazumdar (Copenhagen Business School, Denmark); Thomas Dreibholz (Simula Metropolitan Centre for Digital Engineering, Norway)

Cloud computing makes complex processing an off-premise activity by offering software- and hardware-based services using standard security protocols over the Internet. However, the cloud is not ideal for latency-sensitive applications. Thanks to the current growth of network communication and infrastructure, fog adds a computing-resource delegation model between the user and the cloud, aiming to improve support for latency-sensitive applications. Here, we propose a unified, proactive resource orchestration framework from a cloud/fog service provider's perspective. The framework consists of a predictor and a resource allocator module. Users subscribe to these resources to execute their applications. The framework is modular and does not require application-specific information; a service provider can customise each module. We present a prototype of the framework by showing each module's simulated performance results using the parameters of our cloud/fog research testbed.

S3.3 Online Bargaining Scheme Based Dynamic Resource Allocation for Soft-Deadline Tasks in Edge Computing
Xuebo Sun, Hui Wang, Sheng Pan and Tongfei Liu (Zhejiang Normal University, China)

To meet the low-latency requirements of many Internet of Things (IoT) applications, edge computing has been proposed to migrate computation from the cloud to the network's edge. This paper addresses the resource allocation problem of edge computing, where edge nodes allocate computing and storage resources at the network's edge to satisfy various user computing tasks with soft deadlines. We propose an efficient two-period bargaining algorithm to optimize resource allocation in edge computing networks. We introduce task value functions to model the varying sensitivities of different tasks to latency. We then model the resource allocation problem as an online bargaining problem based on a three-layer edge computing model and propose a specific two-period bargaining scheme. Corresponding pricing strategies are formulated for the different periods to achieve optimal resource allocation and maximize average utility. Experimental results demonstrate that our algorithm outperforms other bargaining strategies and effectively improves system response speed.

S3.4 Cross-Regional Task Offloading with Multi-Agent Reinforcement Learning for Hierarchical Vehicular Fog Computing
Yukai Hou (Tongji University, China); Zhiwei Wei, Shiyang Liu, Bing Li and Rongqing Zhang (Tongji University, China); Xiang Cheng (Peking University, China); Liuqing Yang (Hong Kong University of Science and Technology, China)

Vehicular fog computing (VFC) can make full use of the computing resources of idle vehicles to increase computing capability. However, most current VFC architectures focus only on the local region and ignore the spatio-temporal distribution of computing resources, so that some regions have idle computing resources while others cannot satisfy the requirements of their tasks. Therefore, we propose a hierarchical VFC architecture in which neighboring regions can share their idle computing resources. Considering that the existing centralized offloading mode is not scalable enough and that cooperative task offloading has high complexity, we put forward a distributed task offloading strategy based on multi-agent reinforcement learning. Moreover, to tackle the inefficiency caused by the multi-agent credit assignment problem, we adopt a counterfactual multi-agent reinforcement learning approach that exploits a counterfactual baseline to evaluate the action of each agent. Simulation results validate that the hierarchical architecture and the distributed algorithm improve global performance.

S3.5 A URL-Based Computing Power Scheduling Method and System
Wenjuan Xing (China Telecom Corporation Limited Research Institute, China)

A computing power network needs to manage and allocate various computing power resources. Given the ubiquitous, multi-level, heterogeneous, and diverse characteristics of computing power resources, and the problem of matching and scheduling computing power across different business scenarios and user requirements, we propose a URL-based computing power scheduling method. The method achieves unified management and maintenance of heterogeneous computing power based on computing power URL identification, and extracts user requirement features to match appropriate computing power resources.

S3.6 Collecting Sensor Data from WSNs on the Ground by UAVs: Assessing Mismatches from Real-World Experiments and Their Corresponding Simulations
Bruno Jose Olivieri de Souza (Pontifícia Universidade Católica Do Rio de Janeiro & Laboratory of Advanced Collaboration, Brazil); Thiago de Souza Lamenza (Pontifícia Universidade Católica do Rio de Janeiro, Brazil); Marcelo Paulon J. V. (Pontificia Universidade Catolica do Rio de Janeiro (PUC-Rio), Brazil); Victor Bastos (IME, Brazil); Vitor G. A. Carneiro (Instituto Militar de Engenharia & Brazilian Army, Brazil); Markus Endler (Pontifícia Universidade Católica do Rio de Janeiro, Brazil)

Communication approaches for autonomous robots in surveillance missions, acting remotely or collecting point-of-interest data, are widely researched. Most works in this line address unmanned aerial vehicles because of their mobility and flexibility in covering an area. However, these proposals are verified almost exclusively through network simulations. Simulations are efficient for speeding up experiments, and most experiments are simulated because of the difficulty of validating a proposal in the real world. Real-world experiments are doubly important: they provide much more robust validation of the proposals, and real-world tests can be compared to simulated tests, with the gaps between the results used to enrich the simulated environments employed for validations without real-world tests. In this line, this paper presents tests performed in simulated and real-world environments, compares the results of both experiments, and shows how enhancements can be applied.

Monday, July 10 11:00 - 13:00 (Africa/Tunis)

S4: Emerging Topics in AI and Machine Learning (online)

Room: Webex 4
Chair: Yassine Hadjadj-Aoul (University of Rennes, France)
S4.1 Attack Analysis on Two-Party Signature and Threshold Signature Based on Dilithium
Xiang Wu, Bohao Li and Boyang Zhang (China University of Geosciences, Wuhan, China); Xiaofan Liu (Huazhong University of Science and Technology, China); Wei Ren (China University of Geosciences (Wuhan), China); Kim-Kwang Raymond Choo (University of Texas at San Antonio, USA)

With the rapid development of post-quantum cryptography research, the lattice-based Crystals-Dilithium digital signature algorithm has received much attention, and many extended studies have built on it. In this paper, we focus on the two-party signature protocol and the (t, n)-threshold signature protocol based on Dilithium, and analyze their feasibility and security. Repeated experiments with the official rejection sampling code of the Aigis-sig scheme show that the two-party signature protocol essentially cannot pass rejection sampling verification and is therefore not feasible. At the same time, both protocols carry the security risk that the private key is divulged and that internal members can forge signatures. Through security analysis and experiments, we found that internal members can recover the y value and private key information to complete a signature forgery with close to 100% probability, so the security of the signatures cannot be guaranteed.

S4.2 A Robust Prototype-Free Retrieval Method for Automatic Check-Out
Huijie Huangfu, Ziyuan Yang, Maosong Ran, Weihua Zhang, Jingfeng Lu and Yi Zhang (Sichuan University, China)

In recent years, automatic check-out (ACO) has gained increasing interest and has been widely used in daily life. However, current works mainly rely on both counter and product prototype images in the training phase, and it is hard to maintain their performance in an incremental setting. To deal with this problem, we propose a robust prototype-free retrieval method (ROPREM) for ACO, a cascaded framework composed of a product detector module and a product retrieval module. We use the product detector module, which needs no product class information, to locate products. Additionally, we are the first to treat the check-out process as a retrieval process rather than a classification process: the retrieval result is taken as the product class by comparing the feature similarity between a query image and gallery templates. As a result, our method requires far fewer training samples and achieves state-of-the-art performance on the public Retail Product Checkout (RPC) dataset.

S4.3 Streaming Session Recommendation Based on User's Global Attributes
Xuechang Zhao, Qing Yu and Yifan Wang (Tianjin University of Technology, China)

Session-based recommendation aims to predict the next item a user will click on while maintaining the session structure of the session data. In practical scenarios, session data is dynamic and rapidly generated, which reflects its streaming nature. Recent studies have shown that graph neural network (GNN) based approaches mainly focus on the current session and cannot process the latest session data. In this paper, we propose a streaming session-based recommendation system (UGNN). We use the Wasserstein model for data sampling. Then, the user configuration is embedded as a global attribute in the session graph and incorporated into the embedding vector of the recommendation system. Meanwhile, since the residual network of the Transformer model can effectively alleviate the pressure during model updates, we innovatively add Transformers to the model. Experiments on two real-world datasets show that our model outperforms state-of-the-art models.

S4.4 Finding Potential Pneumoconiosis Patients with Commercial Acoustic Device
Xuehan Zhang, Zhongxu Bao, Yuqing Yin, Xu Yang, Xiao Xu and Qiang Niu (China University of Mining and Technology, China)

Early symptom monitoring is an essential measure for pneumoconiosis prevention. However, one severe limitation is the need for a dedicated device. This paper proposes P3Warning to realize low-cost warnings for potential pneumoconiosis patients via contactless sensing. For the first time, the designed framework uses an inaudible acoustic signal with a commercial speaker and microphone pair to monitor early symptoms of pneumoconiosis, including abnormal respiration and cough. We introduce and address unique technical challenges, such as designing a delay elimination method to synchronize transceiver signals and providing a search-based signal variation amplification strategy to support highly accurate, long-distance vital sign sensing. Comprehensive experiments show that P3Warning achieves a median error of 0.52 bpm for abnormal respiration pattern monitoring, an overall accuracy of 95% for cough detection, and a sensing range of up to 4 m.

S4.5 Traffic Matrix Estimation Based on Denoising Diffusion Probabilistic Model
Xinyu Yuan, Yan Qiao, Pei Zhao, Rongyao Hu and Benchu Zhang (Hefei University of Technology, China)

The traffic matrix estimation (TME) problem has been researched for decades. Recent progress in deep generative models offers new opportunities to tackle TME problems in a more advanced way. In this paper, we leverage the powerful distribution-learning ability of denoising diffusion probabilistic models (DDPMs) and, for the first time, adopt a DDPM to address the TME problem. To ensure good performance of the DDPM in learning the distributions of TMs, we design a preprocessing module to reduce the dimensions of TMs while keeping the data variety of each OD flow. To improve estimation accuracy, we parameterize the noise factors in the DDPM and transform the TME problem into a gradient-descent optimization problem. Finally, we compared our method with state-of-the-art TME methods on two real-world TM datasets; the experimental results strongly demonstrate the superiority of our method in both TM synthesis and TM estimation.

S4.6 Learning-Based Congestion Control Assisted by Recurrent Neural Networks for Real-Time Communication
Jingshun Du, Chaokun Zhang, Shen He and Wenyu Qu (Tianjin University, China)

In recent years, Real-Time Communication (RTC) has been widely used in many scenarios, and Congestion Control (CC) is one of the important ways to improve the experience of such applications. Accurate bandwidth prediction is the key to CC schemes. However, designing an efficient congestion control scheme with accurate bandwidth prediction is challenging, largely because it is essentially a Partially Observable MDP (POMDP) problem, which is difficult to solve with traditional hand-crafted methods. We propose LRCC, a novel hybrid CC scheme that combines attention-based Long Short-Term Memory (LSTM) and Reinforcement Learning (RL), realizing more accurate bandwidth prediction and congestion control by adding bandwidth memory information provided by the recurrent neural network to the RL decision-making process. Trace-driven experiments show that our proposed method significantly reduces packet loss and improves bandwidth utilization in various network scenarios, outperforming baseline methods on overall QoE.

Monday, July 10 13:00 - 14:00 (Africa/Tunis)

Lunch

Monday, July 10 14:00 - 15:00 (Africa/Tunis)

Industrial Keynote: Decarbonising the built environment - a technology perspective, by Sohaib Qamar Sheikh (United Kingdom)

Room: TULIPE 2 / webex 2
Chair: Ilhem Kallel (University of Sfax, Tunisia & Regim-Lab., Tunisia)

Climate change is a defining issue for our generation and for generations to come. The built environment contributes around 39% of global energy-related carbon emissions. This session explores the ways in which the industry is trying to tackle this behemoth through the use of advanced IoT and communication systems, the gaps that still remain, and the next steps needed to reach net zero carbon.

Monday, July 10 14:00 - 16:00 (Africa/Tunis)

S7: Services and Protocols (online)

Room: Webex 5
Chair: Adel S Elmaghraby (University of Louisville, USA)
S7.1 TAMCQF: Hybrid Traffic Scheduling Mechanism Integrating TAS and Multi-CQF in TSN
Hongrui Nie, Shaosheng Li and Yong Liu (Beijing University of Posts and Telecommunications, China)

Time-sensitive networking (TSN) is considered one of the most promising solutions for hybrid traffic scheduling. The TSN working group has proposed various shaping mechanisms (e.g., the time-aware shaper (TAS), the credit-based shaper (CBS), and cyclic queuing and forwarding (CQF)). However, scheduling hybrid traffic with different quality of service (QoS) requirements is still not effectively solved, since QoS requirements are hard to meet with a standalone mechanism or with combined mechanisms using fixed time slot divisions. In this paper, we propose the TAMCQF model to achieve deterministic hybrid traffic scheduling. We formally define the problem of configuring TAMCQF-based networks with different scheduling constraints and time slot division granularities. We develop a mixed-integer linear programming (MILP) formulation and solve it with a state-of-the-art ILP solver. Simulation results show that TAMCQF achieves zero jitter for TT traffic compared to CQF, and reduces the time cost by roughly three orders of magnitude compared to TAS when handling over 250 hybrid flows.

S7.2 DTRadar: Accelerating Search Process of Decision Trees in Packet Classification
Jiashuo Yu, Long Huang, Longlong Zhu and Dong Zhang (Fuzhou University & Quan Cheng Laboratory, China); Chunming Wu (College of Computer Science, Zhejiang University, China)

Packet classification is an essential part of computer networks. Existing algorithms introduce a partition process to address the memory explosion of decision tree algorithms caused by the huge number of rules with multiple fields. However, the search process then requires traversing the multiple trees generated by the partition, which reduces search efficiency. Existing algorithms take simple approaches to optimizing the search process, which are either inefficient or incur high hardware overhead. In this paper, we propose DTRadar, a framework for expediting the decision tree packet lookup process. Its key idea is to build an abstract One-Big-Tree (OBT) over multiple decision trees by establishing an intermediate data structure. DTRadar treats each decision tree as a splittable tree and organizes the resulting subtrees through intermediate data structures. Extensive experiments show that DTRadar reduces the classification time of existing decision-tree-based solutions by 61.60%, while the memory footprint increases by only 4.21% on average.

S7.3 P4CTM: Compressed Traffic Pattern Matching Based on Programmable Data Plane
Hang Lin (Fuzhou University & Quan Cheng Laboratory, China); Weiwei Lin and Jing Lin (Fuzhou University, China); Longlong Zhu and Dong Zhang (Fuzhou University & Quan Cheng Laboratory, China); Chunming Wu (College of Computer Science, Zhejiang University, China)

Pattern matching is an important technology applied in many security applications. Most network service providers compress network traffic for better transmission, which raises the challenge of matching compressed traffic. However, existing works either focus on improving the performance of uncompressed traffic matching, or only realize compressed traffic matching on end-hosts, which cannot keep pace with the dramatic increase in traffic. In this paper, we present P4CTM, a proof-of-concept method for efficient compressed traffic matching on the programmable data plane. P4CTM uses a two-stage scan scheme to skip some bytes of compressed traffic, a 2-stride DFA combined with the compression algorithm to condense the state space, and wildcard matching to downsize the match-action tables in the programmable data plane. Experiments indicate that P4CTM skips 83.10% of the bytes of compressed traffic, condenses the state space by an order of magnitude, and eliminates most of the table entries.

S7.4 FACC: Flow-Size-Aware Congestion Control in Data Center Networks
Guanglei Chen, Jiangping Han and Xiwen Jie (University of Science and Technology of China, China); Peilin Hong (Dept. EEIS & USTC, China); Kaiping Xue (University of Science and Technology of China, China)

Traffic in Data Center Networks (DCNs) is characterized by different flow sizes, which entail diverse demands for data transmission. However, most existing congestion control schemes treat all flows equivalently and cannot meet the diverse demands of applications. In this paper, we propose FACC, a flow-size-aware congestion control scheme. In FACC, we design a distinguished congestion control logic to assign the transmission demands of different kinds of flows. To meet the diverse demands, FACC provides adaptable congestion window (cwnd) adjustment by assigning customized weights with a well-designed flow-size-aware reward function. Simulation results show that FACC reduces the average FCT and the 99th-percentile FCT slowdown of short flows by 35% and 23%, respectively, compared to state-of-the-art congestion control schemes in DCNs.

S7.5 F2-HPCC: Achieve Faster Convergence and Better Fairness for HPCC
Xiwen Jie, Hang Wang, Runzhou Li and Guanglei Chen (University of Science and Technology of China, China); Peilin Hong (Dept. EEIS & USTC, China)

In recent years, Remote Direct Memory Access (RDMA) has been widely deployed in data centers to provide low-latency and high-bandwidth services. To ensure high performance in RDMA networks, congestion control manages queue depth on switches to minimize queueing delays. Although HPCC, the state-of-the-art scheme, can significantly reduce the flow completion time (FCT) of short flows, it still suffers from slow convergence and unfairness, which affect the tail FCT of large flows. In this paper, we first analyze the causes of these defects and then propose an improved scheme called F2-HPCC, which introduces a self-adjusting additive-increase algorithm to accelerate convergence and a sliding window algorithm to improve fairness. In our evaluation, F2-HPCC achieves faster convergence and fairer allocations without sacrificing queue length, and shortens the tail FCT of large flows by up to 33% under real data center workloads.

S7.6 Examining the Centralization of Email Industry: A Landscape Analysis for IPv4 and IPv6
Luciano Zembruzki and Arthur Jacobs (Federal University of Rio Grande do Sul, Brazil); Ricardo J. Pfitscher (Federal University of Santa Catarina, Brazil); Lisandro Z Granville (Federal University of Rio Grande do Sul, Brazil)

Centralization of key Internet services, including email, can result in privacy and security concerns and increase the number of single points of failure. This paper measures and analyzes a large-scale dataset of email providers gathered from MX records of top-level domains. The findings reveal the concentration of email infrastructure providers for each TLD and identify the most significant providers in the market. The paper also demonstrates that IPv6 adoption has increased the centralization of email servers. The research contributes to the state-of-the-art by thoroughly examining email infrastructure centralization and identifying potential areas for future research.

Monday, July 10 14:00 - 16:00 (Africa/Tunis)

S8: Artificial Intelligence (AI) in Computers and Communications: Machine Learning (online)

Room: Webex 4
Chair: Nawel Zangar (ESIEE PARIS, France)
S8.1 An E-Commerce Conversational Virtual Assistant in Service of Mild Cognitive Impairment Patients
Ioannis - Aris Kostis (My Company Projects O. E., Greece); Dimitrios Sarafis (My Company Projects OE, Greece); Konstantinos Karamitsios (My Company Projects O. E., Greece); Magda Tsolaki and Anthoula Tsolaki (Aristotle University of Thessaloniki, Greece)

Conversational Agent-based Virtual Assistants have seen an increase in functionality in e-commerce in recent years. Basing their functionality on advanced technologies (NLP, ML, DL), they are able to satisfy the majority of the customer service needs of an e-shop, increasing user satisfaction while reducing operational costs. In our work, we propose and implement a Conversational Virtual Assistant addressing consumers diagnosed with Mild Cognitive Impairment. We design its features through the prism of their specific needs while navigating e-shop websites, in order to ameliorate their commercial journey. The results drawn from the psychiatric trial conducted indicate a significant improvement in key indicators and metrics, along with a stated enhancement of the users' experience. To the best of our knowledge, this is the first research work that addresses the subject within the aforementioned context.

S8.2 BVSNO: Binary Code Vulnerability Detection Based on Slice Semantic and Node Order
Ningning Cui (School of Cyber Security, University of Chinese Academy of Sciences, China); Liwei Chen (Institute of Information Engineering, China); Gang Shi (Institute of Information Engineering,Chinese Academy of Sciences, China)

The proliferation of code reuse and the diversity of CPU architectures and compilation environments inevitably lead to many similar cross-platform binary vulnerability codes. This paper designs a deep learning model based on slice semantics and node order to detect similar vulnerabilities. First, it traverses the program dependence graph (PDG) forward and backward from the library/API function node to generate a binary slice, and then uses a bidirectional long short-term memory (BLSTM) network with an attention mechanism to form the semantic feature vector of the slice. Second, it extracts the order information of the slice nodes in the PDG to form an adjacency matrix, which is fed into a convolutional neural network (CNN) to form the order feature vector. Finally, the semantic and order feature vectors are fused and input into a siamese network for similarity-based vulnerability detection. The detection results show that our method can effectively detect vulnerabilities.

S8.3 Improving Scenic Traffic Prediction Based on Spatio-Temporal Correlation and Causality
Longxiang Xiong, Yuchun Guo, Yishuai Chen and Dongxia Zheng (Beijing Jiaotong University, China)

This study introduces two innovative traffic prediction models: the Spatio-temporal Graph Convolutional Network model with a Correlation-based Graph (ST-GCN-CO) and the Spatio-temporal Diffusion Convolutional model with a Causality-based Graph (ST-DC-CA). Aimed at predicting traffic patterns in scenic areas, these models address the difficulties of characterizing non-neighboring correlations and handling sparse, discrete data. The models use a GRU to manage temporal dependencies, while deploying GCN and DC to capture spatial dependencies in undirected and directed graphs, respectively. This marks the first application of time series correlation and causality to predicting traffic in scenic spots. Through empirical testing on real datasets, these models have demonstrated superior performance over existing methods, with the ST-DC-CA model showing a significant 2.089-26.26% improvement in NRMSE compared to baseline methods.

S8.4 A Discriminative Multi-Task Learning for Autism Classification Based on Speech Signals
Xiaotian Yin and Chao Zhang (Anhui University, China); Wei Wang (Xiaomi Corporation, China)

About 70 million people around the world suffer from autism, affecting about one in every 160 children. The causes of autism are complex, and there is no specific drug treatment; however, the earlier an individual with autism begins treatment, the greater the improvement. In this paper, we collected an autism speech dataset and conducted a study on speech feature classification of autistic and normal children. We built a deep neural network, using a Convolutional Recurrent Neural Network as the front-end encoder with a Convolutional Block Attention Module added, and used recurrent neural networks to integrate local features. To prevent overfitting, we added a Connectionist Temporal Classification based speech recognition auxiliary task during training. After introducing a loss function from the field of face recognition, the best classification accuracy reached 94.76%.

S8.5 Optimal Walks in Contact Sequence Temporal Graphs with No Zero Duration Cycle
Anuj Jain (University of Florida & Adobe Systems Inc., USA); Sartaj Sahni (University of Florida, USA)

We develop an algorithm to find walks in contact sequence temporal graphs that have no zero-duration cycle. These walks minimize any specified linear combination of optimization criteria such as arrival time, travel duration, hops, and cost. The algorithm also accommodates waiting time constraints. When both minimum and maximum waiting time constraints are specified, the complexity of our algorithm is O(|V| + |E|·delta), where |V| is the number of vertices, |E| is the number of edges, and delta is the maximum out-degree of a vertex in the contact sequence temporal graph. When there are no maximum waiting time constraints, the complexity is O(|V| + |E|). On the test data used by Bentert et al., our optimal walks algorithm provides a speedup of up to 77x over their algorithm and a memory reduction of up to 3.2x.

S8.6 WELID: A Weighted Ensemble Learning Method for Network Intrusion Detection
Yuanchen Gao (BUPT (Beijing University of Posts and Telecommunications), China); Guosheng Xu and Guoai Xu (Beijing University of Posts and Telecommunications, China)

With the rapid expansion of network applications, the requirements for intrusion detection technology keep growing. There have been many studies on intrusion detection; however, existing models are not accurate enough and are time-consuming, which makes them impractical. In this paper, we propose WELID, a novel weighted ensemble learning method for network intrusion detection. First, data preprocessing and feature selection algorithms filter out redundant and unrelated features. Next, anomaly detection is performed on the dataset using different base classifiers, and stratified ten-fold cross-validation is used to prevent overfitting. Then, the best classifiers are selected for use in a multi-classifier fusion algorithm based on probability-weighted voting. We compare the proposed model with many efficient classifiers and state-of-the-art intrusion detection models. The results show that the proposed model is superior to these models in terms of both accuracy and time consumption.

Monday, July 10 14:20 - 16:00 (Africa/Tunis)

S5: Cloud and Edge Computing (onsite)

Room: TULIPE 1 / webex 1
Chair: Antonio Celesti (University of Messina, Italy)
S5.1 AGE: Automatic Performance Evaluation of API Gateways
Pedro M. Moreira (University of Minho & INESC TEC, Portugal); António Ribeiro (University of Minho, Portugal); João Marco C. Silva (University of Minho & INESC TEC, Portugal)

The increasing use of microservice architectures has been accompanied by a profusion of tools for their design and operation. One relevant tool is the API Gateway, which works as a proxy for microservices, hiding their internal APIs, providing load balancing, and supporting multiple encodings. Particularly in cloud environments, where the inherent flexibility allows on-demand resource deployment, API Gateways play a key role in ensuring quality of service. Although multiple solutions are currently available, a comparative performance assessment under real workloads to select the most suitable one for a specific service is time-consuming. To this end, the present work introduces AGE, a service capable of automatically deploying multiple API Gateway scenarios and providing a simple comparative performance indicator for a defined workload and infrastructure. The designed proof of concept shows that AGE can speed up API Gateway deployment and testing in multiple environments.

S5.2 MultiTASC: A Multi-Tenancy-Aware Scheduler for Cascaded DNN Inference at the Consumer Edge
Sokratis Nikolaidis (National Technical University of Athens, Greece); Stylianos Venieris (Samsung AI, United Kingdom (Great Britain)); Iakovos S. Venieris (National Technical University of Athens, Greece)

Cascade systems comprise a two-model sequence: a lightweight model processes all samples, and a heavier model conditionally refines harder samples to improve accuracy. By placing the light model on the device side and the heavy model on a server, model cascades constitute a widely used distributed inference approach. With the rapid expansion of intelligent indoor environments, a new setting of Multi-Device Cascades is emerging, where multiple, diverse devices simultaneously use a shared heavy model on the same server, typically located close to the consumer environment. This work presents MultiTASC, a multi-tenancy-aware scheduler that adaptively controls the forwarding decision functions of the devices in order to maximize system throughput while sustaining high accuracy and low latency. By explicitly considering device heterogeneity, our scheduler improves the latency service-level objective (SLO) satisfaction rate over state-of-the-art cascade methods in highly heterogeneous setups while serving over 40 devices, showcasing its scalability.

S5.3 AdapPF: Self-Adaptive Scrape Interval for Monitoring in Geo-Distributed Cluster Federations
Chih-Kai Huang and Guillaume Pierre (Univ Rennes, Inria, CNRS, IRISA, France)

Monitoring plays a vital role in geo-distributed cluster federation environments for accurately scheduling applications across geographically dispersed computing resources. However, collecting monitoring data from clusters at a fixed frequency may waste network bandwidth and is not necessary for ensuring accurate scheduling. In this paper, we propose Adaptive Prometheus Federation (AdapPF), an extension of the widely used open-source monitoring tool Prometheus and its Prometheus Federation feature. AdapPF dynamically adjusts the collection frequency of monitoring data for each cluster in geo-distributed cluster federations. Based on an actual deployment in the geo-distributed Grid'5000 testbed, our evaluations demonstrate that AdapPF achieves results comparable to Prometheus Federation with a 5-second scrape interval while reducing cross-cluster network traffic by 36%.

S5.4 Collaborative Fuzzy Clustering Approach for Scientific Cloud Workflows
Hamdi Kchaou (REGIM, University of Sfax, Tunisia); Wissem Abbes (REGIM, Tunisia & ENIS, Tunisia); Zied Kechaou (REGIM-Lab, ENIS, University of Sfax, Tunisia); Adel M. Alimi (REGIM, University of Sfax, National School of Engineers, Tunisia)

Cloud computing has enabled the sharing of data-intensive applications such as scientific workflows. Using scientific workflows to process big data is expensive in terms of data transfer, execution time, and bandwidth costs. A data placement method based on fuzzy sets is used to cut these costs: it helps optimize data placement and reduce the cost of processing big data. This paper presents a new method for scientific cloud workflow data placement that uses fuzzy sets to realize collaborative clustering. The proposed method explores each data center's datasets through data dependencies, clusters them with the Fuzzy C-Means (FCM) clustering algorithm, and re-clusters them based on data collaboration. Using fuzzy sets to realize collaborative clustering helps cope with uncertainties in the data and thus reduces the overall amount of data movement, with better results than previous approaches.

S5.5 A NoSQL DBMS Transparent Data Encryption Approach for Cloud/Edge Continuum
Valeria Lukaj, Alessio Catalfamo, Francesco Martella, Maria Fazio, Massimo Villari and Antonio Celeste (University of Messina, Italy)

Edge systems are increasingly popular for data collection and processing. Typically, due to their limited storage capacity, pieces of data are continuously exchanged with Cloud systems, which store them in distributed Database Management Systems (DBMSs). This scenario, known as the Cloud/Edge Continuum, is critical from a data security point of view, as it is exposed to many risks. Transparent Data Encryption (TDE) has been proposed as a possible solution for encrypting database files. However, current solutions do not suit the requirements of the Cloud/Edge Continuum. In this paper, we aim to fill this gap by proposing a solution that encrypts data locally at the Edge and transfers it to a distributed database in the Cloud. Our approach allows us to perform queries directly on encrypted data in the Cloud and to retrieve the data at the Edge for decryption. Experiments performed on different NoSQL DBMS solutions demonstrate the feasibility of our approach.

Monday, July 10 14:20 - 16:00 (Africa/Tunis)

S6: Security in Computers and Communications (hybrid)

Room: TULIPE 3 / webex 3
Chair: Michael Kounavis (Meta Platforms Inc., USA)
S6.1 Accelerating IDS Using TLS Pre-Filter in FPGA
Vlastimil Kosar (Brno University of Technology, Czech Republic); Lukáš Šišmiš (CESNET, Czech Republic); Jiří Matoušek and Jan Korenek (Brno University of Technology & CESNET, Czech Republic)

Intrusion Detection Systems (IDSes) are a widely used network security tool. However, achieving sufficient throughput is challenging as network link speeds increase to 100 or 400 Gbps. Despite the large number of papers focusing on the hardware acceleration of IDSes, the approaches are mostly limited to the acceleration of pattern matching or do not support all types of IDS rules. Therefore, we propose hardware acceleration that significantly increases the throughput of IDSes without limiting the functionality or the types of rules supported. As the IDSes cannot match signatures in encrypted network traffic, we propose a hardware TLS pre-filter that removes encrypted TLS traffic from IDS processing and doubles the average processing speed. Implemented on an acceleration card with an Intel Agilex FPGA, the pre-filter supports 100 and 400 Gbps throughput. The hardware design is optimized to achieve a high frequency and to utilize only a few hardware resources.
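
The pre-filtering idea can be illustrated in software (an illustrative sketch, not the authors' FPGA design; the function name is hypothetical): encrypted TLS application-data records can be recognized from their record header and excluded from signature matching, since an IDS cannot match patterns in ciphertext anyway.

```python
# Illustrative sketch only -- not the FPGA pre-filter. A TLS record begins
# with a one-byte content type (0x17 = application data, i.e. encrypted
# payload) followed by a two-byte legacy protocol version (0x03, 0x01-0x03).
# Payloads matching this shape can be skipped by the IDS.

def is_tls_application_data(payload: bytes) -> bool:
    """Heuristically decide whether a TCP payload starts a TLS
    application-data record that signature matching could skip."""
    if len(payload) < 5:           # record header is 5 bytes
        return False
    content_type, major, minor = payload[0], payload[1], payload[2]
    return content_type == 0x17 and major == 0x03 and minor in (0x01, 0x02, 0x03)
```

A hardware pre-filter would additionally track connection state so that the unencrypted handshake (content type 0x16) still reaches the IDS, but the header test above captures the core classification step.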

S6.2 Federated Byzantine Agreement Protocol Robustness to Targeted Network Attacks
Vytautas Tumas (Ripple, United Kingdom (Great Britain)); Sean Rivera (University of Luxembourg, Luxembourg); Damien Magoni (University of Bordeaux, France); Radu State (University of Luxembourg, Luxembourg)

Federated Byzantine Agreement protocols applied in the XRP Ledger and Stellar use voting to reach a consensus. Participants of these protocols select whom to trust in the network and effectively communicate with the trustees to reach an agreement on transactions. Most trustees, for example 80% in the XRP Ledger, must agree on the same transactions for them to appear in the blockchain. However, disruptions to the communication between the trustees can prevent them from reaching an agreement, thus halting the blockchain.

In this paper, we propose a novel robustness metric to measure the Federated Byzantine Agreement protocol tolerance to node failures. We show that the XRP Ledger Consensus Protocol is vulnerable to targeted attacks. An attacker has to disconnect only 9% of the highest-degree nodes to halt the blockchain. We propose a mitigation strategy which maintains critical XRP Ledger network topology properties whilst increasing the robustness up to 45%.

S6.3 AppBox: A Black-Box Application Sandboxing Technique for Mobile App Management Solutions
Maqsood Ahmad (Università di Trento, Italy); Francesco Bergadano and Valerio Costamagna (Università di Torino, Italy); Bruno Crispo (Università di Trento, Italy); Giovanni Russello (University of Auckland, New Zealand)

Several Mobile Device Management (MDM) and Mobile Application Management (MAM) services have been launched on the market. However, these services suffer from two important limitations: reduced granularity and the need for app developers to include third-party SDKs. We present AppBox, a novel black-box app-sandboxing solution for app customisation on stock Android devices. AppBox enables enterprises to select any app, even a highly obfuscated one, from any market and perform a set of target customisations by means of fine-grained security policies. We have implemented and tested AppBox on various smartphones and Android versions. The evaluation shows that AppBox can effectively enforce fine-grained policies on a wide set of existing apps with an acceptable overhead.

S6.4 Quick Notification of Block Generation Using Bloom Filter in a Blockchain
Tsuyoshi Hasegawa, Akira Sakurai and Kazuyuki Shudo (Kyoto University, Japan)

Forks in a blockchain sacrifice security. In this paper, we propose a protocol for quickly propagating block generation notifications in the blockchain network to reduce the fork rate. Block generation notifications contain a Bloom filter that represents the transactions in the generated block. Thus, when nodes receive a block generation notification, they can start mining the next block. In simulator-based experiments, we compared the propagation time of a block generation notification with that of a block in the existing protocol. As a result, the 50th-percentile propagation time is 41.1% of that of the existing protocol, the 90th percentile is 39.2%, and the fork rate calculated from the average propagation time is 40.8% of that of the existing protocol.
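
The notification mechanism can be sketched as follows (an illustrative sketch, not the authors' implementation; the class, filter size, and hash count are assumptions): the miner summarizes the new block's transactions in a Bloom filter, and a receiving node tests its mempool against the filter to decide which transactions to exclude when it starts mining the next block.

```python
import hashlib

# Illustrative sketch only -- a minimal Bloom filter over transaction IDs.
# The miner ships the filter in a block generation notification; a node
# tests its mempool entries to learn which transactions the new block
# (probably) contains. False positives are possible, false negatives are not.

class BloomFilter:
    def __init__(self, size=1024, hashes=3):
        self.size = size
        self.hashes = hashes
        self.bits = [False] * size

    def _positions(self, item):
        # Derive `hashes` bit positions from salted SHA-256 digests.
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def probably_contains(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

# Miner side: summarize the generated block's transactions.
notification = BloomFilter()
for tx in ("tx1", "tx2", "tx3"):
    notification.add(tx)

# Node side: drop (probably) included transactions before mining the next block.
mempool = {"tx1", "tx2", "tx3", "tx4"}
candidates = {tx for tx in mempool if not notification.probably_contains(tx)}
```

Because the filter is far smaller than the block itself, the notification propagates faster, which is what lets nodes switch to the next block earlier and reduces the fork rate.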

S6.5 Automated WiFi Incident Detection Attack Tool on 802.11 Networks
Dimitris Koutras (University of Piraeus & UPRC, Greece); Panos Dimitrellos, Panayiotis Kotzanikolaou and Christos Douligeris (University of Piraeus, Greece)

In this paper we propose a methodology for intrusion detection of attacks originating from WiFi networks, along with WiFi-NID, a WiFi Network Intrusion Detection tool developed to automate the detection of such attacks on 802.11 networks. In particular, WiFi-NID can detect and trace possible illegal network scanning attacks that originate from attacks at the WiFi access layer. A penetration testing methodology is defined in order to discover the environmental security characteristics related to the current configuration of the devices connected to the 802.11 network. The methodology covers known WiFi attacks such as deauthentication attacks, capturing and cracking WPA/WPA2 handshakes, and captive portal and WPA attacks, mostly based on various open-source software tools as well as on specialized hardware. For the validation process, a testbed is set up based on realistic scenarios of WiFi network topologies.

Monday, July 10 15:00 - 16:00 (Africa/Tunis)

ICTS4eHealth - Keynote: Artificial Intelligence for Diabetes, by Tomáš Koutný (Czech Republic)

Room: TULIPE 2 / webex 2
Chair: Achraf Othman (Mada Qatar Assistive Technology Center, Qatar)

Diabetes mellitus is a group of heterogeneous, civilization diseases. It is the eighth most common cause of death. It manifests with elevated blood glucose, which continuously damages various organs and contributes to the development of additional diseases. To treat an advanced form of the disease, we need to manage the blood glucose level with insulin. As this management is a hard problem, artificial intelligence can explore many decisions to find the most effective ones, which is unfeasible for a patient without a smart device. Currently, there are three areas in which artificial intelligence can help: glucose level prediction, artificial pancreas construction, and metabolic simulation. Using selected studies, we will present state-of-the-art artificial-intelligence approaches to each of these three topics to demonstrate how to deal with their design, implementation, and verification.

Monday, July 10 16:00 - 16:30 (Africa/Tunis)

PDS1: Poster Session and Coffee Break

Room: MIMOSA
Chair: Ali Wali (REGIM-Lab., Tunisia)
A GRASP-Based Algorithm for Virtual Network Embedding
Amine Rguez (Rennes 1 University, France & EXFO Solutions, France); Yassine Hadjadj-Aoul (University of Rennes, France); Farah Slim (EXFO Solutions, France); Gerardo Rubino (INRIA, France); Asma Selmi (EXFO Europe Limited, United Kingdom (Great Britain))

With the rise of network virtualization, network slicing is becoming a hot research topic. Indeed, network operators must deal with capacity-limited resources while ensuring extreme availability of services. Several approaches exist in the literature to tackle this problem; some converge quickly to a local minimum, while others are not explainable and therefore do not provide the guarantees necessary for deployment in a real network. In this context, we propose a new approach to Virtual Network Embedding (VNE) based on the Greedy Randomized Adaptive Search Procedure (GRASP). Using the GRASP meta-heuristic ensures the robustness of the solution to changing constraints and environments. Moreover, the proposed approach allows a more efficient and directed exploration of the solution space, in contrast to existing techniques. The simulation results show the potential of the proposed method for solving service placement problems and its superiority over existing approaches.
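
The GRASP meta-heuristic can be sketched generically (a minimal sketch under assumed names and parameters, not the paper's VNE algorithm): each iteration ranks candidates greedily, picks at random from a restricted candidate list (RCL) to diversify, and keeps the best solution found.

```python
import random

# Minimal GRASP construction skeleton (illustrative only; not the paper's
# VNE embedding). Each iteration builds a solution greedily but selects at
# random among the rcl_size cheapest remaining candidates, so repeated
# iterations explore different solutions. A full GRASP would also apply a
# local-search phase after each construction.

def grasp_select(candidates, cost, k, iterations=50, rcl_size=3, seed=0):
    """Pick k candidates approximately minimizing total cost via GRASP."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(iterations):
        remaining = sorted(candidates, key=cost)   # greedy ranking
        solution = []
        for _ in range(k):
            rcl = remaining[:rcl_size]             # restricted candidate list
            choice = rng.choice(rcl)               # randomized greedy pick
            solution.append(choice)
            remaining.remove(choice)
        total = sum(cost(c) for c in solution)
        if total < best_cost:
            best, best_cost = solution, total
    return best, best_cost

# Toy usage: choose 2 of 5 "placements" with the smallest combined cost.
best, best_cost = grasp_select([5, 1, 4, 2, 3], cost=lambda x: x, k=2)
```

The randomized RCL step is what distinguishes GRASP from a plain greedy heuristic: it avoids converging to the same local minimum on every run while keeping each construction cheap.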

Device Behavioral Profiling for Autonomous Protection Using Deep Neural Networks
Sandeep Gupta (University of Trento, Italy); Bruno Crispo (Universita di Trento & IT00340520220, Italy)

With the enormous increase in cyber attacks, the demand for autonomous protection in computing devices cannot go unnoticed. Consequently, cybersecurity measures that continuously monitor and analyze device-critical activity, identify suspicious behavior, and proactively mitigate security risks are highly desirable. In this article, a concept of behavioral profiling is described to distinguish between benign and malicious software by observing a system's internal resource usage on Windows devices. We rely on the Windows built-in event tracing mechanism to log processes' critical interactions for a given amount of time; these logs are converted into structured data using a graph data structure. We then extract features from the generated graphs to analyze process behavior using a deep neural network. Finally, we evaluate our prototype on a collected dataset containing one thousand benign and one thousand malicious samples, achieving an accuracy of 90%.

A Novel Framework for Distribution Power Lines Detection
Damos Ayobo Abongo (ESPRIT Engineering School, Tunisia); Mohamed Gaha (Institut de Recherche d'Hydro-Québec (IREQ) & Hydro-Québec, Canada); Safa Cherif (Esprit School of Engineering, Tunisia); Wael Jaafar (École de Technologie Supérieure, Canada); Guillaume Houle and Christian Buteau (Hydro-Quebec, Canada)

Millions of dollars are spent yearly to trim trees along rights-of-way and guarantee reliable distribution line systems. To reduce these costs, power utilities are embracing a new approach based on light detection and ranging (LiDAR) data. They aim to automatically detect the locations of critical branches/trees and assess their risks. In this paper, we propose a novel and robust power line detection framework with several LiDAR data processing steps, which combines machine learning and geometric approaches. By combining these methods, we efficiently detect distribution lines with Intersection-over-Union performance superior to that of deep-learning-based benchmarks, at lower complexity than most of them. The benefit is that by prescribing the use of geometrical/mathematical approaches for the post-processing of deep-learning/machine-learning outputs, we are able to further improve line detection. Finally, we expect our framework to generalize to the detection of various LiDAR objects such as poles, cars, buildings, and roads.

IoT Service Composition: Refinement and Verification
Sarah Hussein Toman (University of Monastir, Tunisia); Lazhar Hamel (University of Monastir & ISIMM, France); Mohamed Graiet (University of Monastir, Tunisia)

The Internet of Things (IoT) is a finite set of interconnected devices that can cooperate and interact with each other through the Internet. As the number of IoT devices has increased, the number of services has increased as well, further complicating the process of service composition. In this paper, an Event-B formal model is presented to provide verification of the IoT service composition system based on its specifications. In addition, the proposed model satisfies critical properties such as availability and interoperability to fulfil the requirements of IoT service composition. Our model is developed incrementally from the abstract level to the target level using the refinement mechanism. A fire alarm detection system is used as a case study for our model. Finally, we use proof obligations and the Rodin platform to validate and prove the correctness of the proposed formal model.

Path Loss Modeling at 26 GHz in Indoor and Outdoor Tunisian Environments for B5G/6G Wireless Networks
Mamadou Bagayogo (University of Carthage, Tunisia); Soumaya Hamouda (Mediatron Lab., Sup'Com, Tunisia & University of Carthage, Tunisia); Rim Barrak (Higher School of Communications of Tunis, Tunisia)

Today, the functionalities of 5G are still not completely implemented by operators. Many think that 5G will reach its full potential within B5G/6G. One of the key technologies that will help achieve full 5G is millimeter waves (mmW). Although many contributions have already investigated propagation channel modeling at some mmW frequency bands (e.g., 28 GHz, 38 GHz, and 60 GHz) during the past decade, other mmW frequency bands still need special focus in certain countries where these bands will be deployed. In this paper, we present the results of indoor and outdoor measurement campaigns at 26 GHz in Tunisia. We use the single-frequency Close-In (CI) model to study the propagation channel characteristics. The path loss exponent (PLE), the shadowing factor, and the root mean square (RMS) delay spread were estimated for each environment in the LOS (Line of Sight) and NLOS (Non-Line of Sight) scenarios.

Performance Enhancement of Stream-Based Decompression Process by Notifying Compression Buffer Size
Taiki Kato, Shinichi Yamagiwa and Koichi Marumo (University of Tsukuba, Japan)

This paper focuses on enhancing communication performance by compressing data streams. ASE coding is an effective lossless data compression method for data streams. The software implementation of the coding/decoding method inevitably meets a performance mismatch in memory and storage devices. On the compressor side, the size of an original data block can be predicted, so a flexible buffer memory can be used. However, the decompressor cannot predict the buffer size because the original data size is not known before decompression. This causes a performance mismatch at the filesystem level. This paper proposes a novel method that addresses the mismatch by having the compressor notify the decompressor of the compressed size. The paper describes the mechanism with a focus on system call usage. Through experimental evaluations, we show the improvement in decompression performance for handling data streams.

A Hybrid 2D-1D CNN for Scanner Device Linking Based on Scanning Noise
Chaima Ben Rabah (High School of Communications of Tunis, Tunisia & IMT Atlantique, France); Gouenou Coatrieux (IMT Atlantique, France); Riadh Abdelfattah (SupCom, Tunisia)

Ensuring the authenticity of scanned documents is of major concern, as they are often admitted as evidence by organizations. "Is there any way to verify that a document was scanned by a device without having physical access to the source device itself?" is a wide-open question. In this paper, we aim to answer it in the affirmative by means of the first data-driven hybrid machine learning framework that compares image noise features to check whether two documents have been digitized with the same scanner. Such a problem is known as the device linking problem. Comparative experiments conducted on the same and different scanner models over a broad set of administrative documents demonstrate that our method is efficient in linking scanned images even if the scanner devices are unknown to the investigator. Our success rate of 96% appears to be the new state-of-the-art reference in this application domain.

Mobile LoRa Gateway for Communication and Sensing on the Railway
Miguel Luis (Instituto Superior Técnico & Instituto de Telecomunicacoes, Portugal); João Correia Soares (Instituto de Telecomunicações, Portugal); Susana Sargento (Instituto de Telecomunicações, Universidade de Aveiro, Portugal)

This work presents a LoRa channel access strategy that allows a single-channel, high-speed moving LoRa gateway to discover and receive sensing data from end devices installed along the railroad. This medium access is based on control messages that coordinate the communication between the mobile gateway and static sensors. The proposed approach is tested in a real mobile environment and compared with the well-established LoRaWAN protocol. The results show that our solution outperforms LoRaWAN in both packet error rate and throughput, revealing the differences between a non-aware solution such as LoRaWAN and a solution where the end node transmits only when receiving a control message alerting that a LoRa gateway is nearby.

Monday, July 10 16:30 - 18:30 (Africa/Tunis)

ICTS4eHealth - S3: eHealth

Room: TULIPE 2 / webex 2
Chair: Linda Senigagliesi (Università Politecnica delle Marche, Italy)
ICTS4eHealth - S3.1 E-Health in Tuscany Inner Areas: The PROXIMITY-CARE Approach
Alessandro Pacini, Francesca Pennucci and Giorgio Leonarduzzi (Scuola Superiore Sant'Anna, Italy); Andrea Sgambelluri (Scuola Superiore Sant'Anna Pisa, Italy); Luca Valcarenghi, Molka Gharbaoui, Piero Castoldi, Gianluca Paparatto, Erica De Vita and Alberto Arcuri (Scuola Superiore Sant'Anna, Italy); Claudio Passino (Scuola Superiore Sant'Anna e Fondazione Toscana Gabriele Monasterio, Italy); Stefano Dalmiani (Fondazione Toscana Gabriele Monasterio, Italy); Michele Emdin (Fondazione Toscana Gabriele Monasterio & Scuola Superiore Sant'Anna, Italy); Sabina Nuti (Scuola Superiore Sant'Anna, Italy)

The widespread connectivity provided by fixed and mobile communication technologies can facilitate the utilization of e-health and mobile-health services that are radically changing the way healthcare may be provided. Such services are particularly important for inner areas, where it is difficult to guarantee physical availability and proximity of services while ensuring the sustainability of the system. However, e-health and m-health service deployment requires careful planning, as connectivity in inner areas can be spotty. This paper reports how multiple e/m-health solutions have been planned in some inner areas of Tuscany within the PROXIMITY-CARE project. In particular, it introduces a newly developed QGIS-based analysis tool, which allows correlating connectivity data with patient needs and the location of healthcare facilities. Moreover, the paper presents a tele-tutoring system for the emergency service and a mobile application to monitor patient vital parameters. Such tools help maximize the population reached, thus improving benefits for patients.

ICTS4eHealth - S3.2 Low-Mobility Complementary Tool for Patient Follow-Up: A Proof of Concept for E-Health
Aime Cedric Muhoza, Emmanuel Bergeret, Corinne Brdys and Francis Gary (Université Clermont Auvergne, France)

This paper presents the work conducted as part of the E-health, Mobility, and Big Data project (EMOB), which aims to provide a platform for analyzing physical and sedentary activity features and providing an inter-patient interpretation of their healthcare conditions based on signature similarities in their sedentary behavior. Our focus is on designing a dedicated microcontroller-based device to record patients' daily activities. We provide a detailed description of the hardware developed and its role in the project. We also discuss future perspectives on AI integration at the device level and the impact of this project on understanding the relationship between physical activities and improving health conditions, especially for patients with chronic pain and diseases.

ICTS4eHealth - S3.3 Honoring Heritage, Managing Health: A Mobile Diabetes Self-Management App for Native Americans with Cultural Sensitivity and Local Factors
Wordh Ul Hasan, Juan Li, Shadi Alian, Tianyi Liang, Vikram Pandey, Kimia Tuz Zaman and Jun Kong (North Dakota State University, USA); Cui Tao (University of Texas Health Science Center at Houston, USA)

Diabetes has a disproportionate impact on Native Americans (NAs) as a chronic health condition, yet there is a dearth of mobile apps specifically designed for this population. In this paper, we present the design and development of a culturally tailored mobile app for NAs, taking into account their cultural traditions. Our app incorporates NA's traditional foods, food availability, the importance of family and community, cultural practices and beliefs, local resources, and heritage heroes into the app interface and self-management design. The app includes personalized nutrition guidance, family and community-based support, seamless connection to tribal health providers, access to local resources, and integration of cultural elements. By considering the cultural context of NAs, the developed app has the potential to provide culturally sensitive and relevant features that address the unique needs and preferences of NA users, facilitating effective self-management of diabetes.

ICTS4eHealth - S3.4 A Web-Based Application for Screening Alzheimer's Disease in the Preclinical Phase
Flavio Bertini (University of Parma, Italy); Daniela Beltrami (Clinical Neuropsychology Cognitive Disorders and Dyslexia Unit Neurology, Italy); Pegah Barakati and Laura Calza (University of Bologna, Italy); Enrico Ghidoni (Clinical Neuropsychology Cognitive Disorders and Dyslexia Unit Neurology, Italy); Danilo Montesi (University of Bologna, Italy)

As a result of an increasing elderly population, the number of people with age-related diseases is increasing worldwide. Alzheimer's disease is thus becoming an emergency health and social problem. Neuropsychological evaluation and biomarker identification represent the two main approaches to identifying subjects with Alzheimer's. In this paper, we propose a web application designed to be sensitive to the cognitive changes distinctive of early Mild Cognitive Impairment, a condition in which someone experiences minor cognitive problems, and of the preclinical phase of Alzheimer's disease. The application is conceived to be self-administered in a comfortable and non-stressful environment. It was designed to be quick to administer, automatic to score, and able to preserve privacy because of the highly sensitive data collected. The preliminary evaluation of the application was done by enrolling 518 subjects characterised by several risk factors and the presence of a family history, who underwent standard neuropsychological screening.

ICTS4eHealth - S3.5 Business Models in Digital Health: Bibliometric Analysis and Systematic Literature Review
Claudio Pascarelli (University of Salento, Italy); Chiara Colucci (National Interuniversity Consortium for Informatics, Italy); Gianvito Mitrano and Angelo Corallo (University of Salento, Italy)

Digital Health is an emerging topic that relates to the use of digital technologies in the healthcare domain. Although in recent years digital transformation in healthcare has led to an increasing application of such technologies, partly due to the recent COVID-19 pandemic, the inadequacy of current business strategies has been recognized as one of the reasons for their limited wide-scale adoption. This paper highlights the characteristics of current business models adopted in Digital Health through a bibliometric analysis and a systematic literature review. The study revealed several features for the main elements of a business model framework, namely Value Proposition, Value Capture, Value Network and Value Delivery.

ICTS4eHealth - S3.6 Empowering Caregivers of Alzheimer's Disease and Related Dementias (ADRD) with a GPT-Powered Voice Assistant: Leveraging Peer Insights from Social Media
Kimia Tuz Zaman, Wordh Ul Hasan and Juan Li (North Dakota State University, USA); Cui Tao (University of Texas Health Science Center at Houston, USA)

Caring for individuals with Alzheimer's Disease and Related Dementias (ADRD) is a complex and challenging task, especially for non-professional caregivers who often lack the necessary training and resources. While online peer support groups have been shown to be useful in providing caregivers with information and emotional support, many caregivers are unable to benefit from them due to time constraints and limited knowledge of social media platforms. To address this issue, we propose the development of a voice assistant app that can collect relevant information and discussions from online peer support groups on social media. This app will use the collected information as a knowledge base and fine-tune a Generative Pre-trained Transformers (GPT) model to facilitate caregivers in accessing shared experiences and practical tips from peers. Initial evaluation of the app has shown promising results in terms of feasibility and potential impact on caregivers.

Monday, July 10 16:30 - 18:10 (Africa/Tunis)

S10: Security in Computers and Communications (hybrid)

Room: TULIPE 3 / webex 3
Chair: Adel S Elmaghraby (University of Louisville, USA)
S10.1 IBAM: IPFS and Blockchain Based Authentication for MQTT Protocol in IoT
Bahadır Karadas and Kubra Kalkan (Ozyegin University, Turkey)

Decentralized systems have proven themselves a dominant authentication and storage paradigm for IoT systems in which smartwatches are integrated with the MQTT messaging framework for transmitting medical data to doctors. However, security concerns with centralized frameworks present vital challenges regarding data privacy and network security for healthcare systems. This paper presents an integrated framework involving a reliable and lightweight e-health data-sharing framework that combines the decentralized InterPlanetary File System (IPFS) and blockchain on a smartwatch platform. In particular, this framework enables trustworthy control mechanisms that use smart contracts to achieve authentication and storage for both subscribers and publishers. We present a simulation using the Ethereum blockchain and IPFS in a real data-sharing scenario with the MQTT protocol. Our analysis shows that our approach satisfies lightweight access control requirements, demonstrating low latency and optimized energy consumption with high security and data privacy levels.

S10.2 Locating the Diffusion Source in Networks by Critical Observers
Hamouma Moumen, Badreddine Benreguia and Leila Saadi (University of Batna 2, Algeria); Ahcene Bounceur (University of Sharjah, United Arab Emirates)

This paper tackles the problem of locating the diffusion source in networks. Designating all nodes of the network as observers for locating the diffusion source is very complex and impracticable. To make this problem solvable in a simple manner, only a subset of nodes is used as observers. Choosing these observers can be a challenging task, as their effectiveness depends on various factors, including the network topology, the diffusion process, and the resources available for observing the diffusion source. Several techniques have been proposed for selecting observers, including betweenness centrality, closeness centrality, high-degree nodes, and randomization. This paper advances the state of the art by proposing the first solution, to our knowledge, to the diffusion source location problem in which the observers are selected as critical nodes.

S10.3 On the Efficacy of Differential Cryptanalysis Attacks on K-Cipher
Michael Kounavis (Meta Platforms Inc., USA)

K-Cipher is a hardware-efficient bit-length parameterizable cipher, which has been designed to be a flexible component of computing and communication systems. K-Cipher is latency- and area-efficient and can operate on all block lengths from 24 up to 1024 bits. In the paper, we show that the recently published Mahzoun-Kraleva-Posteuca-Ashur attack on K-Cipher [M. Mahzoun, L. Kraleva, R. Posteuca and T. Ashur, Differential Cryptanalysis of K-Cipher, IEEE ISCC 2022] is characterized by complexity significantly higher than 2^n, with n being the block length. This holds for all block lengths specified by the cipher. Whereas the developers of the attack suggest that the key of the 24-bit version of K-Cipher can be successfully recovered at complexity 2^29.7, we show that, in reality, the complexity of this attack is at least 2^46. Similarly, the complexity of attacking the 32-bit version of K-Cipher is at least 2^54 and the complexity of attacking the 64-bit version of K-Cipher is at least 2^86. Our conclusion is that unless the attack is redesigned, K-Cipher cannot be considered broken at this time, and its security needs to be further investigated by the community.

S10.4 Formal Modeling and Verification of ERC Smart Contracts: Application to NFT
Rim Ben Fekih (University of Sousse, Tunisia); Mariam Lahami (University of Sfax & National School of Engineering of Sfax, Tunisia); Mohamed Jmaiel (ENIS, Tunisia); Salma Bradai (National School of Engineering of Sfax & ReDCAD, Tunisia)

Blockchain-based applications are essentially built on smart contracts, which differ widely in their encoded logic and the standards they use. Among Ethereum standards, ERC-721 is a well-known standard interface developed for Non-Fungible Tokens. Even though standard-based contracts are increasingly exploited, prior work on smart contract verification mostly targets specific vulnerabilities. To address this gap, this paper introduces a formal modeling and verification approach for Ethereum smart contracts, including standard-based ones. We propose a model checking framework that, given a Solidity smart contract as input, uses ERC guidelines as a standard template to extract the related security properties. Another benefit of our proposal consists in modeling ERC contracts using the extended finite state machine formalism. As a proof of concept, we illustrate our model checking approach on an NFT contract.

S10.5 Natural Face Anonymization via Latent Space Layers Swapping
Emna BenSaid (REGIM, University of Sfax, National School of Engineers, Tunisia); Mohamed Neji (University of Sfax, Tunisia); Adel M. Alimi (REGIM, University of Sfax, National School of Engineers, Tunisia)

Machine learning is widely recognized as a key driver of technological progress. Artificial Intelligence (AI) applications that interact with humans require access to vast quantities of human image data. However, the use of large, real-world image datasets containing faces raises serious concerns about privacy. In this paper, we examine the issue of anonymizing image datasets that include faces. Our approach modifies the facial features that contribute to personal identification, resulting in an altered facial appearance that conceals the person's identity. This is achieved without compromising other visual features such as posture, facial expression, and hairstyle, while maintaining a natural-looking appearance. Finally, our method offers adjustable levels of privacy, is computationally efficient, and demonstrates superior performance compared to existing methods.

S10.6 Every Time Can Be Different: A Data Dynamic Protection Method Based on Moving Target Defense
Zhimin Tang (Chinese Academy of Sciences & School of Cyber Security, University of Chinese Academy of Sciences, China); Duohe Ma (Chinese Academy of Sciences & State Key Laboratory Of Information Security, China); Xiaoyan Sun (Worcester Polytechnic Institute, USA); Kai Chen and Liming Wang (Chinese Academy of Sciences, China); Junye Jiang (Chinese Academy of Sciences Institute of Information Engineering, China)

Traditional defense methods can hardly change the inherent vulnerabilities of static data storage, single data access, and deterministic data content, leading to frequent data leakage incidents. Moving target defense (MTD) techniques can increase data diversity and unpredictability by dynamically shifting the data attack surface. However, in existing methods, the data lacks sufficient dynamics because the shifting space and shifting frequency of the attack surface are insufficient, and legitimate users are inevitably and greatly affected. This study proposes a data MTD method in which the data changes dynamically based on real-time multi-source user access information. Through a multidimensional user stratification mechanism, we establish a novel dynamic data model that uses a combination of random deception strategies to convert the metadata properties and content of data based on user risk levels, while the data remains unchanged for legitimate users. Multiple sets of experiments demonstrate the effectiveness and low overhead of our data dynamic defense approach.

Monday, July 10 16:30 - 18:10 (Africa/Tunis)

S11: 5th & 6th Generation Networks and Beyond (online)

Room: Webex 5
Chair: Stefano Chessa (Universita' di Pisa, Italy)
S11.1 Deep Reinforcement Learning-Based Intelligent Task Offloading and Dynamic Resource Allocation in 6G Smart City
Wang Li (Beijing Information Science and Technology University, China); Xin Chen and Libo Jiao (Beijing Information Science & Technology University, China); Yijie Wang (Beijing Information Science and Technology University, China)

With the successful commercialization of 5G technology and the accelerating research on 6G technology, smart cities are entering the 3.0 era. In 6G smart cities, Multi-access Edge Computing (MEC) can provide computing support for a large number of computation-intensive applications. However, the randomness of the wireless network environment and the mobility of nodes make designing optimal offloading schemes challenging. In this article, we investigate the dynamic offloading optimization problem of base station (BS) selection and computational resource allocation for mobile users (MUs). We first envision a MEC-enabled 6G Smart City Network architecture, then formulate the problem of minimizing the average system user cost as a Markov Decision Process (MDP), and propose a deep reinforcement learning-based offloading optimization and resource allocation algorithm (DOORA). Numerical results illustrate that the DOORA scheme significantly outperforms the benchmarks and remarkably improves the quality of experience (QoE) of MUs.

S11.2 Sky's the Limit: Navigating 6G with ASTAR-RIS for UAVs Optimal Path Planning
Shakil Ahmed and Ahmed E. Kamal (Iowa State University, USA)

Using unmanned aerial vehicles (UAVs) enhances network coverage in areas with limited infrastructure, but optimal operation faces challenges such as resource allocation and path planning. Researchers have proposed simultaneously transmitting and reflecting reconfigurable intelligent surfaces (STAR-RISs) to serve multiple users, but their potential is limited to only reflecting incident signals, especially in intricate channel conditions with substantial node distances. Motivated by this, we introduce a novel concept named actively simultaneously transmitting and reflecting (ASTAR)-RISs, which can amplify incident signals in addition to reflecting them. When mounted on UAVs, they can provide an improved signal-to-noise ratio. We aim to find the optimal path planning for UAVs to maximize the rate of ASTAR-RIS-assisted wireless networks. The formulated problem is non-convex. An iterative algorithm finds the optimal solution, followed by a heuristic approach. Results show that the proposed model outperforms existing approaches in terms of network performance, highlighting the potential of adding ASTAR-RISs to UAV-assisted systems.

S11.3 A Transmission Power Distribution Method Based on Lyapunov for Scraper Chain Tension Monitoring Network
Xiaodong Yan, Gongbo Zhou, Ping Zhou, Wei Wang and Lianfeng Han (China University of Mining and Technology, China); Zhenzhi He (Jiangsu Normal University, China)

Using WSN technology to monitor the tension of the scraper chain makes it possible to detect failures such as chain jamming and chain breakage. However, the scraper and scraper chain are constantly moving, which makes continuously monitoring the chain tension with energy-limited monitoring nodes a challenge. To reduce node transmission power consumption and extend the life of the monitoring network, this paper first establishes a tension monitoring network model for scraper chains. Then, based on the motion characteristics of the scraper conveyor, a monitoring node transmission power allocation method based on Lyapunov optimization theory (TPAL) is proposed. The simulation results indicate that the proposed method can ensure the stability of the monitoring network data queue and minimize the transmission power of the monitoring nodes. Compared with the full power allocation method, the proposed TPAL method reduces transmission power consumption by more than 15.6%.

S11.4 Intelligent and Stable Resource Allocation for Delay-Sensitive MEC in 6G Networks
Yalin Zhang (Chongqing University of Posts and Telecommunications, China); Hui Gao (Beijing University of Posts and Telecommunications, China); Bei Liu, Xin Su and Xibin Xu (Tsinghua University, China)

In order to meet the strict quality-of-service requirements of delay-sensitive networks, this paper studies resource-autonomous decision-making algorithms for 6G MEC networks to improve key performance indicators (KPIs) such as delay, computing rate, and system stability. This paper considers task offloading and resource allocation decisions in multi-user MEC networks with time-varying channels, where user task data arrive randomly. We design a Lyapunov-assisted deep reinforcement learning (Ly-DRL) autonomous decision-making algorithm that satisfies the data queue stability and average power constraints and maximizes the network computing rate. We construct a dynamic queue of user task data through queueing theory and apply Lyapunov optimization theory to decouple the MINLP problem into subproblems for each time slot. Combining DRL and traditional numerical optimization, the subproblems of each slot are solved with low computational complexity. Simulations show that the algorithm performs best while stabilizing the data queue.
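The per-slot decoupling described above follows the standard drift-plus-penalty pattern of Lyapunov optimization. The snippet below is a generic textbook sketch of that pattern, not the paper's Ly-DRL formulation; the tunable parameter V trades the utility objective (here, computing rate) against queue backlog.

```python
def queue_update(q, arrival, service):
    """Standard data-queue dynamics: q(t+1) = max(q(t) - b(t), 0) + a(t)."""
    return max(q - service, 0.0) + arrival

def best_action(q, actions, V):
    """Per-slot drift-plus-penalty subproblem. Each action is a tuple
    (utility, arrival, service); minimizing -V*utility + q*(arrival - service)
    pursues long-term utility while keeping the data queue stable."""
    return min(actions, key=lambda a: -V * a[0] + q * (a[1] - a[2]))
```

A larger V favours the utility objective at the cost of longer queues; a smaller V keeps queues short at some loss of utility.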

S11.5 A Low-Overhead Network Monitoring for SDN-Based Edge Computing
Hou-Yeh Tao (National Taiwan University of Science and Technology, Taiwan); Chih-Kai Huang (Univ Rennes, Inria, CNRS, IRISA, France); Shan-Hsiang Shen (National Taiwan University of Science and Technology, Taiwan)

Using Software-Defined Networking (SDN) in edge computing environments allows for more flexible flow monitoring than traditional networking methods. In SDN, the controller collects statistics from all switches and can communicate with switches to dynamically manage the entire network. However, per-flow or per-switch monitoring mechanisms for obtaining flow statistics from switches may significantly increase bandwidth costs between the switches and the control plane. In this paper, we propose a Bandwidth Cost First (BCF) algorithm to reduce the number of monitored switches and therefore lower the monitoring cost. The experiment results show that our algorithm outperforms the existing technique, reducing the number of monitored switches by 56%, bandwidth overhead by 41%, and switch processing delay by 25%.

S11.6 Multi-Objective Optimization for Dynamic Service Placement Strategy for Real-Time Applications in Fog Infrastructure
Mayssa Trabelsi and Nadjib Mohamed Mehdi Bendaoud (University of Tunis El Manar, Tunisia); Samir Ben Ahmed (Faculte des Sciences de Tunis, Tunisia)

Fog computing is rising as a dominant and modern computing model that thrives on delivering Internet of Things (IoT) computations. It is an extension of cloud computing, helping it handle services that need a faster response. A well-built dynamic service placement strategy can improve fog performance. This paper proposes a dynamic service placement strategy in a fog infrastructure for real-time applications. The main idea of this work is to perform a multi-objective optimization on response time and available resources in fog networks. Hence, depending on each fog node's response time and available computational resources, the algorithm chooses the fittest node to which to send the service in real time. Finally, we model and evaluate our proposed strategy in the iFogSim-simulated fog infrastructure. Results of the simulation studies demonstrate significant improvements in response time and resource utilization over several other strategies, improving the fog network's performance.
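One simple way to pick the "fittest" node from the two objectives is a normalized weighted score. The sketch below is our own hypothetical illustration of such a scalarization; the field names and equal weights are assumptions, not the paper's algorithm.

```python
def fittest_node(nodes, w_rt=0.5, w_res=0.5):
    """Choose a fog node by combining response time (lower is better) and
    free resources (higher is better). `nodes` is a list of dicts with
    'response_time' and 'free_resources' keys (hypothetical field names)."""
    rts = [n["response_time"] for n in nodes]
    ress = [n["free_resources"] for n in nodes]

    def score(n):
        # Normalize each objective to [0, 1], then combine with weights.
        rt = (n["response_time"] - min(rts)) / ((max(rts) - min(rts)) or 1)
        res = (n["free_resources"] - min(ress)) / ((max(ress) - min(ress)) or 1)
        return w_rt * (1 - rt) + w_res * res

    return max(nodes, key=score)
```

Adjusting the weights shifts the trade-off between fast response and balanced resource usage across the fog network.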

Monday, July 10 16:30 - 18:30 (Africa/Tunis)

S12: Artificial Intelligence (AI) in Computers and Communications: Machine Learning (online)

Room: Webex 4
Chair: Dimitris Koutras (University of Piraeus & UPRC, Greece)
S12.1 DepthWise Attention: Towards Individual Channels Attention
Zhilei Zhu, Wanli Dong, Xiaoming Gao and Anjie Peng (Southwest University of Science and Technology, China)

Human keypoint detection requires capturing long-range spatial constraints and fusing channel information. Many studies adopt attention mechanisms to generate feature weights, thereby enhancing information interaction capability and improving the accuracy of keypoint detection. However, most attention mechanisms currently in use redundantly fuse information across all channel levels, which not only increases the computational cost of the network but also weakens the feature differences between different keypoints, affecting the prediction of heatmaps. In this study, we propose a plug-and-play attention module based on separable convolution, called the DWA module, which avoids the redundant use of information from different channels and improves the network's ability to capture long-range spatial relationships. Additionally, we adopt a novel feature compression method to reduce errors in single-dimensional compression. Experimental results indicate that our DWA model performs well on the COCO and MPII datasets, achieving good accuracy improvements at relatively small computational cost.

S12.2 FedEF: Federated Learning for Heterogeneous and Class Imbalance Data
Hongyan Peng, Tongtong Wu, Zhenkui Shi and Xianxian Li (Guangxi Normal University, China)

Federated learning (FL) is a scheme that enables multiple clients to cooperate to train a high-performance machine learning model. However, in real FL applications, the class imbalance problem usually arises along with data heterogeneity across clients, resulting in poor performance of the global model. In this paper, a novel FL method, which we call FedEF, is designed for heterogeneous data and local class imbalance. FedEF optimizes the local feature extractor representation of individual clients through contrastive learning to maximize the consistency between the feature representations trained by the local client and the central server. Meanwhile, we modify the cross-entropy loss in the model to pay more attention to classes with fewer samples during training and to correct the biased classifier, thus improving the performance of the global model. Experiments show that FedEF is an effective solution for FL under heterogeneous data and local class imbalance.

S12.3 Cluster, Reconstruct and Prune: Equivalent Filter Pruning for CNNs Without Fine-Tuning
Tao Niu, Yinglei Teng, Panpan Zou and Yiding Liu (Beijing University of Posts and Telecommunications, China)

Network pruning is effective in reducing memory usage and time complexity. However, current approaches face two common limitations. 1) Pruned filters cannot contribute to the final outputs, resulting in severe performance drops, especially at large pruning rates. 2) It requires time-consuming and computationally expensive fine-tuning to recover accuracy. To address these limitations, we propose a novel filter pruning method called Cluster Pruning (CP). Instead of directly deleting filters, CP reconstructs them based on their intra-similarity and removes them using the proposed channel addition operation. CP preserves all learned features and eliminates the need for fine-tuning. Specifically, each filter is distinguished by clustering and reconstructed as the centroid to which it belongs. Reconstructed filters are updated to prevent erroneous selections. After convergence, filters can be safely removed through the channel addition operation. Experiments on various datasets show that CP achieves the best trade-off between performance and complexity compared with other algorithms.
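The equivalence behind the channel addition operation can be illustrated in the linear case: once every filter is reconstructed as its cluster centroid, filters in the same cluster produce identical output channels, so the next layer's weights over each cluster can be summed and the duplicate channels dropped without changing the output. Below is a minimal NumPy sketch of this idea under our own simplifying assumptions (fully connected layers, no nonlinearity, cluster labels given); it is not the paper's implementation.

```python
import numpy as np

def cluster_reconstruct_prune(W1, W2, labels):
    """W1: (c_out, d) filters of layer L; W2: (c_next, c_out) weights of
    layer L+1 over W1's output channels; labels[i]: cluster id of filter i.
    Returns a pruned pair (W1p, W2p), one filter per cluster, computing the
    same function as the reconstructed (centroid-replaced) network."""
    clusters = sorted(set(labels))
    members = {c: [i for i, l in enumerate(labels) if l == c] for c in clusters}
    # Reconstruct: every filter becomes the centroid of its cluster.
    W1p = np.stack([W1[members[c]].mean(axis=0) for c in clusters])
    # Channel addition: identical output channels let us sum the next
    # layer's weights over each cluster instead of keeping duplicates.
    W2p = np.stack([W2[:, members[c]].sum(axis=1) for c in clusters], axis=1)
    return W1p, W2p
```

Because the pruned network is exactly equivalent to the reconstructed one in this setting, no fine-tuning is needed to recover its accuracy.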

S12.4 Detecting Ethereum Phishing Scams with Temporal Motif Features of Subgraph
Yunfei Wang, Hao Wang, Xiaozhen Lu and Lu Zhou (Nanjing University of Aeronautics and Astronautics, China); Liang Liu (College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, China)

In recent years, Ethereum has become a hotspot for criminal activities such as phishing scams that seriously compromise Ethereum transaction security. However, existing methods cannot accurately model Ethereum transaction data or make full use of the temporal structure information and basic account features. In this paper, we propose an Ethereum phishing detection framework based on temporal motif features. By designing a sampling method, we convert labeled Ethereum addresses into multi-directed transaction subgraphs with time and amount information to avoid losing structure and attribute information. To learn representations of subgraphs, we define and extract temporal motif features and general transaction features. Extensive experiments with Support Vector Machine, Random Forest, Logistic Regression, and XGBoost demonstrate that our method significantly outperforms all baselines and provides effective phishing scam detection for Ethereum.

S12.5 Deep Reinforcement Learning for Joint Service Placement and Request Scheduling in Mobile Edge Computing Networks
Yuxuan Deng and Xiuhua Li (Chongqing University, China); Chuan Sun (Nanyang Technological University, Singapore); Jinlong Hao (Chongqing University, China); Xiaofei Wang (Tianjin University, China); Victor C.M. Leung (SMBU, China & The University of British Columbia, Canada)

Mobile edge computing aims to provide cloud-like services on edge servers located near Mobile Devices (MDs) with higher Quality of Service (QoS). However, the mobility of MDs makes it difficult to find a globally optimal solution for the coupled service placement and request scheduling problem. To address these issues, we consider a three-tier MEC network with vertical and horizontal cooperation. We then formulate the joint service placement and request scheduling problem in a mobile scenario with heterogeneous services and resource limits, and convert it into two Markov decision processes to decouple decisions across successive time slots. We propose a Cyclic Deep Q-network-based Service placement and Request scheduling (CDSR) framework to find a long-term optimal solution despite the unavailability of future information. Specifically, to cope with the enormous action space, we decompose the system agent and train the resulting agents cyclically. Evaluation results demonstrate the effectiveness of our proposed CDSR framework in terms of user-perceived QoS.

S12.6 A Model-Driven Quasi-ResNet Belief Propagation Neural Network Decoder for LDPC Codes
Liangsi Ma (Chongqing University of Posts and Telecommunications, China); Bei Liu, Xin Su and Xibin Xu (Tsinghua University, China)

For the Belief Propagation (BP) algorithm of low-density parity-check (LDPC) codes, existing deep learning methods offer limited performance improvement, and deep networks are difficult to train. In this paper, a model-driven quasi-residual network (Quasi-ResNet) BP decoding architecture is proposed for LDPC codes to further improve the performance of standard BP decoding. This method feeds the reliable messages calculated in the current iteration into the next iteration through a shortcut connection, and adjusts the weight of the shortcut connection with the error back-propagation algorithm of the neural network to determine the optimal genetic proportion of reliable messages. The decoding architecture is composed of a model-driven deep neural network (DNN) and shortcut connections. Simulation results show that the decoder not only unfolds more layers quickly compared with the DNN-based BP decoder, but also further improves the decoding performance and mitigates the error floor of LDPC codes to some extent.

Monday, July 10 16:30 - 18:30 (Africa/Tunis)

S9: Services and Protocols (hybrid)

Room: TULIPE 1 / webex 1
Chair: Antonio Celesti (University of Messina, Italy)
S9.1 Subcarrier-Index Modulation for OFDM-Based PLC Systems
Aymen Omri (Iberdrola Innovation Middle East, Qatar Science & Technology Park, Qatar); Javier Hernandez Fernandez (Iberdrola, Spain); Roberto Di Pietro (King Abdullah University of Science and Technology (KAUST), Saudi Arabia)

In this paper, we investigate and evaluate the performance of a subcarrier-index modulation (SIM) technique within an orthogonal frequency division multiplexing (OFDM)-based narrow-band (NB) power line communication (PLC) system. The SIM technique has been proposed and used mainly in wireless communications to enhance energy and spectral efficiency. To evaluate the advantages of this technique in PLC, Monte Carlo simulations were performed using field measurements of PLC noise and channel frequency response (CFR). The results show significant advantages in terms of improving the overall system energy and spectral efficiencies, especially for single-level modulation. For instance, when using the SIM-OFDM technique with binary phase-shift keying (BPSK) modulation, an energy gain of 66.66% and a bit gain of 50% with respect to the standard modulation can be observed.

S9.2 Metis: Detecting Fake AS-PATHs Based on Link Prediction
Chengwan Zhang, Congcong Miao, Changqing An and Allen Hong (Tsinghua University, China); Ning Wang (University of Surrey, United Kingdom (Great Britain)); Zhiquan Wang and Jilong Wang (Tsinghua University, China)

BGP route hijacking is a critical threat to the Internet. Existing works on path hijacking detection first monitor the routes of the whole network and then directly trigger a suspicious alarm if a link has not been seen before. However, these naive approaches cause false positives and introduce unnecessary verification overhead. In this work, we propose Metis, a matching-and-prediction system that filters out normal unseen links. We first use a matching method with three rules to identify suspicious links when an unseen AS is present. Otherwise, we use a neural network to make a prediction based on the AS information at each end of the link and further quantify the suspicion level. Our large-scale simulation results show that Metis can achieve precision and recall of over 80% for detecting fake AS-PATHs. Moreover, our deployment experience shows that, compared to the state-of-the-art system, Metis can save 80% of the overhead.

S9.3 Limiting the Spread of Fake News on Social Networks by Users as Witnesses
Hamouma Moumen and Badreddine Benreguia (University of Batna 2, Algeria); Ahcene Bounceur (University of Sharjah, United Arab Emirates); Leila Saadi (University of Batna 2, Algeria)

In this paper, we study how users can act as witnesses to limit the spread of fake news. We present a new technique based on consulting a set of users, called witnesses. Before a user A re-shares a content M, a set of k users (witnesses) is selected randomly by the system from A's set of friends. The witnesses are asked to validate the content M before user A is allowed to re-share it. If an authenticated user (AU) among this set of witnesses validates M, user A is allowed to re-share it regardless of the other k − 1 responses. In the case where all witnesses are non-authenticated, if at least one witness rejects the content M, user A is not allowed to re-share it. Note that A has no knowledge of the set of selected witnesses.
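The rule can be condensed into a small decision function. The sketch below is our own reading of the rule as stated; since the abstract does not specify the outcome when an authenticated witness is present but does not validate M, the sketch rejects conservatively in that case (an assumption).

```python
import random

def may_reshare(friends_of_a, k, validates, is_authenticated):
    """Decide whether user A may re-share content M.
    validates(u): True if witness u approves M;
    is_authenticated(u): True if u is an authenticated user (AU)."""
    witnesses = random.sample(friends_of_a, k)  # A never learns this set
    # An AU that validates M overrides the other k - 1 responses.
    if any(is_authenticated(u) and validates(u) for u in witnesses):
        return True
    # All witnesses non-authenticated: one rejection blocks re-sharing.
    if all(not is_authenticated(u) for u in witnesses):
        return all(validates(u) for u in witnesses)
    # AUs present but none validated: reject (our conservative assumption).
    return False
```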

S9.4 Downlink Traffic Demand-Based Gateway Activation in LoRaWAN
Shahzeb Javed (Czech Technical University in Prague, Czech Republic); Dimitrios Zorbas (Nazarbayev University, Kazakhstan)

Due to the radio duty-cycle restrictions imposed in many regions in the sub-GHz unlicensed spectrum, the downlink resources of LoRaWAN gateways are depleted quickly even in the presence of moderate uplink traffic. To handle the high demand for downlink traffic, additional gateways, drawn from a pool of available positions, can be deployed. Not all gateways can initially be activated because of the extra operating costs this would incur. Instead, this paper studies the problem of dynamically activating the minimum estimated number of gateways in order to adapt the network to traffic demands and, thus, improve network reliability and energy consumption. The approach is based on theoretical foundations of downlink traffic as well as on empirical results, and uses simple heuristics to select the gateway positions to be dynamically activated without taking the positions of the end-devices into account. Simulation results show that the proposed methodology, in combination with the proposed heuristics, exhibits a packet delivery ratio of over 90% even with a single retransmission. The average energy consumption also decreases considerably. Finally, the proposed approach yields results similar to an existing approach that, however, assumes the positions of the end-devices are known.

S9.5 Solving Band Diagonally Dominant Linear Systems Using Gaussian Elimination: Shared-Memory Parallel Programming with OpenMP
Sirine Marrakchi and Heni Kaaniche (University of Sfax, Tunisia)

Gaussian elimination (GE) is an important direct method that transforms the initial linear system into an equivalent upper triangular system, which is straightforward to solve. To ensure numerical stability and reduce the effects of round-off errors that can corrupt the solution, most direct methods include a pivoting strategy. Diagonally dominant (DD) matrices are numerically stable during the GE method, so there is no need to incorporate pivoting. In this paper, we propose a new GE scheduling approach for band DD systems based on allocating tasks to suitable cores. All cores carry out the tasks assigned to them in parallel, with dependencies taken into consideration. Our goals are to obtain a high degree of parallelism by balancing the load among cores and to decrease the parallel execution time. The effectiveness of our investigation is demonstrated by several experiments carried out on a shared-memory multicore architecture using OpenMP.

S9.6 Network Traffic Classification for Detecting Multi-Activity Situations
Ahcene Boumhand (University of Rennes 1 & Orange Labs, France); Kamal Singh (Université Jean Monnet, France); Yassine Hadjadj-Aoul (University of Rennes, France); Matthieu Liewig (Orange Labs, France); César Viho (IRISA / INRIA Rennes & University of Rennes I, France)

Network traffic classification is an active research field that acts as an enabler for various applications in network management and cybersecurity. Numerous studies in this field have targeted the case of classifying network traffic into a set of single activities (e.g., chatting, streaming). However, the proliferation of internet services and devices has led to the emergence of new consumption patterns such as multi-tasking, which consists of performing several activities simultaneously. Recognizing the occurrence of such multi-activity situations may help service providers design quality-of-service solutions that better fit users' requirements. In this paper, we propose a framework that is able to recognize multi-activity situations based on network traces. Our experiments showed that our solution achieves promising results despite the complexity of the task we target. Indeed, the obtained multi-activity detection performance equals, and often surpasses, that of state-of-the-art techniques dealing with only a single activity.

Monday, July 10 20:00 - 22:00 (Africa/Tunis)

Tuesday, July 11

Tuesday, July 11 8:30 - 10:30 (Africa/Tunis)

ICTS4eHealth - S4: Machine Learning

Room: TULIPE 2 / webex 2
Chair: Flavio Bertini (University of Parma, Italy)
ICTS4eHealth - S4.1 Bayesian-Based Symptom Screening for Medical Dialogue Diagnosis
Zhong Cao, Yuchun Guo, Yishuai Chen and Daoqin Lin (Beijing Jiaotong University, China)

In a medical dialogue diagnosis system, the selection of symptoms for inquiry has a significant impact on diagnostic accuracy and dialogue efficiency. In a typical diagnosis process, the symptoms initially reported by users are often insufficient to support an accurate diagnosis, making it necessary to ask users about other symptoms through dialogue to reach a conclusive diagnosis. We propose a disease diagnosis algorithm based on naive Bayesian classification, which simulates the process of a doctor's inquiry and diagnosis by dynamically updating the list of candidate diseases, increasing the interpretability of the diagnosis results. For symptom interrogation, we propose a symptom screening algorithm based on the differences between symptom sets to exclude low-probability diseases. Through the intersection and union of disease symptom sets, we can screen out the symptoms that distinguish diseases in fewer inquiry rounds. The experimental results demonstrate that the proposed method performs more efficiently than existing state-of-the-art algorithms.
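The screening idea of using intersections and unions of disease symptom sets can be sketched as follows: symptoms shared by all candidate diseases cannot distinguish them, so the next question should target a symptom in the union but outside the intersection. This is a hypothetical illustration of ours (the function name, data layout, and half-split tie-breaker are assumptions), not the paper's algorithm.

```python
def next_symptom(candidates, disease_symptoms, asked):
    """Pick the next symptom to ask about. `candidates`: candidate disease
    names; `disease_symptoms`: dict disease -> set of symptoms; `asked`:
    symptoms already covered in the dialogue."""
    union = set().union(*(disease_symptoms[d] for d in candidates))
    common = set.intersection(*(disease_symptoms[d] for d in candidates))
    # Only symptoms in the union but not in the intersection can
    # distinguish the candidate diseases.
    discriminative = (union - common) - set(asked)
    if not discriminative:
        return None  # nothing left to ask; diagnose from current evidence
    # Prefer the symptom that splits the candidates most evenly, so either
    # answer prunes as many low-probability diseases as possible.
    return min(discriminative,
               key=lambda s: abs(sum(s in disease_symptoms[d]
                                     for d in candidates) - len(candidates) / 2))
```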

ICTS4eHealth - S4.2 Glioma Tumor's Detection and Classification Using Joint YOLOv7 and Active Contour Model
Amal Jlassi (University of Tunis El Manar & Limtic-Equipe SIIVA, Tunisia); Khaoula El Bedoui and Walid Barhoumi (LIMTIC, Tunisia)

In this paper, a multi-stage deep learning model is proposed for brain glioma tumor detection and segmentation from MRI scans. The model consists of two stages: object detection using YOLOv7 with an EfficientNet-B0 backbone, and an active contour (snake) model for boundary refinement and segmentation. The proposed method also includes a customized CNN with feature selection and a GRU layer for accurate class label prediction. The proposed model is trained on the BraTS 2020 dataset and achieves state-of-the-art performance in terms of accuracy and efficiency. The proposed method can potentially assist radiologists and clinicians in detecting and segmenting brain tumors in medical images, leading to better diagnosis and treatment planning for patients.

ICTS4eHealth - S4.3 Adopting Machine Learning-Based Pose Estimation as Digital Biomarker in Motor Tele-Rehabilitation
Antonio Celeste, Maria Fazio and Armando Ruggeri (University of Messina, Italy); Fabrizio Celesti (Università degli Studi dell'Insubria, Italy); Massimo Villari (University of Messina, Italy); Mirjam Bonanno and Rocco Salvatore Calabrò (IRCSS Centro Neurolesi Bonino Pulejo, Italy)

Tele-rehabilitation has recently emerged as an effective approach for providing assisted living, improving clinical outcomes, positively enhancing patients' Quality of Life (QoL) and fostering the reintegration of patients into society, while also pushing down clinical costs. Cloud computing, in combination with Edge Computing and Artificial Intelligence (AI), is the main enabler for tele-rehabilitation. In particular, Edge rehabilitation devices can act as smart digital biomarkers, sending quantifiable physiological and behavioural patient data to the Hospital Cloud. However, due to hardware limitations, it is not clear which Machine Learning (ML) models can be executed on cheap Edge devices. In this paper, we aim to answer this question. In particular, several ML models (i.e., PoseNet, MoveNet and BlazePose) have been tested and assessed on the Edge, identifying the best one and demonstrating the feasibility of such an approach.

ICTS4eHealth - S4.4 Neuro Intel: A System for Clinical Diagnosis of Attention Deficit Hyperactivity Disorder (ADHD) Using Artificial Intelligence
Sotirios Batsakis (Technical University of Crete & University of Huddersfield, Greece); Emmanuel Papadakis (University of Huddersfield, United Kingdom (Great Britain)); Ilias Tachmazidis (University of Huddersfield, Greece); Tianhua Chen, Grigoris Antoniou and Marios Adamou (University of Huddersfield, United Kingdom (Great Britain))

Attention-Deficit Hyperactivity Disorder (ADHD) is a mental condition characterised by a pattern of inattention, hyperactivity, and/or impulsivity that causes significant impairment across various domains. Delayed diagnosis and treatment of ADHD can be harmful, leading to broader mental health conditions. This paper presents a fully functional system for diagnosing ADHD using an Artificial Intelligence (AI) system called Neuro Intel. Positive results from our research have led to the development of Neuro Intel, which incorporates both expert clinician knowledge and historical clinical data, using machine learning to assist clinicians in the diagnosis of ADHD in adults.

ICTS4eHealth - S4.5 Predicting Patient Sexual Function After Prostate Surgery Using Machine Learning
Sayna Rotbei and Luigi Napolitano (University of Naples Federico II, Italy); Stefania Zinno (University Federico II of Naples, Italy); Paolo Verza (University of Salerno, Italy); Alessio Botta (University of Napoli Federico II, Italy)

A major health concern for men is prostate cancer. Accurate prediction of patients' conditions after surgery is essential for understanding their quality of life. To improve patient care, medical and healthcare professionals use machine learning for a variety of purposes. Using supervised machine learning algorithms, we aim to identify the most reliable predictors of patient sexual function one year after surgery. We used the EPIC-26 (Expanded Prostate Index Composite-26) questionnaire to assess the patients' quality of life and sexual function. A sample of approximately 500 patients was used in our case study to test the effectiveness of this methodology. Based on demographic and clinical data collected prior to surgery, our model predicts patients' self-assessed sexual function one year after surgery with high accuracy. The presented methodology can support clinical decisions aimed at improving the quality of the patient experience.

ICTS4eHealth - S4.6 Emotional Artificial Intelligence Enabled Facial Expression Recognition for Tele-Rehabilitation: A Preliminary Study
Davide Ciraolo, Antonio Celeste and Maria Fazio (University of Messina, Italy); Mirjam Bonanno (IRCSS Centro Neurolesi Bonino Pulejo, Italy); Massimo Villari (University of Messina, Italy); Rocco Salvatore Calabrò (IRCSS Centro Neurolesi Bonino Pulejo, Italy)

Tele-rehabilitation has recently emerged as an effective approach for providing assisted living, improving clinical outcomes, enhancing patients' Quality of Life (QoL), and fostering patients' reintegration into society, while also pushing down clinical costs. Nowadays, tele-rehabilitation faces two main challenges: motor and cognitive rehabilitation. In this paper, we focus on the latter. Our idea is to monitor the patient's cognitive rehabilitation by analysing his/her facial expressions during motor rehabilitation exercises, with the objective of understanding whether there is a correlation between motor and cognitive outcomes. The aim of this preliminary study is therefore to leverage Emotional Artificial Intelligence (AI) in a Facial Expression Recognition (FER) system that uses the face mesh generated by the MediaPipe suite of libraries to train a Machine Learning (ML) model to identify facial expressions, according to Ekman's model, in images or videos captured during motor rehabilitation exercises performed at home. In particular, different AffectNet datasets, face mesh maps, and ML models are tested, providing an advancement of the state of the art.

Tuesday, July 11 8:30 - 10:30 (Africa/Tunis)

S13: AI in Computers and Communications: Machine Learning (hybrid)

Room: TULIPE 1 / webex 1
Chair: Ilhem Kallel (University of Sfax, Tunisia & Regim-Lab., Tunisia)
S13.1 The Effect of Non-Reference Point Cloud Quality Assessment (NR-PCQA) Loss on 3D Scene Reconstruction from a Single Image
Mohamed Medhat Zayton (University of Alexandria, Egypt); Marwan Torki (Alexandria University, Egypt)

This paper proposes a two-stage approach for 3D scene reconstruction from a single image. The first stage involves a monocular depth estimation model, and the second involves a point cloud model that recovers the depth shift and focal length from the generated depth map. The paper investigates the use of various pre-trained state-of-the-art transformer models and compares them to existing work without transformers. The loss function is improved by adding a No-Reference Point Cloud Quality Assessment (NR-PCQA) term to account for the quality of the generated point cloud structure. The paper reports results on four datasets using Locally Scale-Invariant RMSE (LSIV) as the evaluation metric. The paper shows that transformer models outperform previous methods, and that transformer models that take NR-PCQA into account outperform those that do not.

S13.2 An Automatic Vision Transformer Pruning Method Based on Binary Particle Swarm Optimization
Jihene Tmamna (ReGIM-Lab); Emna Ben Ayed (ReGIM-Lab, Tunisia); Rahma Fourati (University of Sfax, Tunisia); Mounir Ben Ayed (REGIM-Lab)

This paper presents an automatic vision transformer pruning method that alleviates the difficulty of deploying vision transformer models on resource-constrained devices. The proposed method automatically searches for the optimal pruned model by removing irrelevant units while maintaining the original accuracy. Specifically, model pruning is formulated as an optimization problem solved with binary particle swarm optimization. To demonstrate its effectiveness, our method was tested on the DeiT transformer model with the CIFAR-10 and CIFAR-100 datasets. Experimental results demonstrate that our method achieves a significant reduction in computational cost with only slight performance degradation.

S13.3 A Hybrid P4/NFV Architecture for Cloud Gaming Traffic Detection with Unsupervised ML
Joël Roman Ky (Orange Innovation & Universite de Lorraine, France); Philippe Graff (Universite de Lorraine, CNRS, Inria, LORIA, France); Bertrand Mathieu (Orange Innovation, France); Thibault Cholez (Universite de Lorraine, CNRS, Inria, LORIA, France)

Low-latency (LL) applications, such as the increasingly popular cloud gaming (CG) services, have stringent latency requirements. Recent network technologies such as L4S (Low Latency Low Loss Scalable throughput) propose to optimize the transport of LL traffic and require efficient ways to identify it. A previous work proposed a supervised machine learning model to identify CG traffic, but it suffers from a limited processing rate, due to a pure software approach, and a lack of generalization. In this paper, we propose a hybrid P4/NFV architecture, where a hardware Tofino-based P4 implementation of the feature extraction functionality is deployed in the data plane and an unsupervised model is used to improve classification results. Our solution has a better processing rate while maintaining excellent identification accuracy, thanks to model adaptations that cope with P4 limitations, and can be deployed at the ISP level to reliably identify CG traffic at line rate.

S13.4 Deep Learning Approach for Tunisian Hate Speech Detection on Facebook
Mariem Abbes (ISITCOM & REGIM Lab, Tunisia); Zied Kechaou (REGIM-Lab, ENIS, University of Sfax, Tunisia); Adel M. Alimi (REGIM, University of Sfax, National School of Engineers, Tunisia)

We have witnessed a sharp increase in violence in Tunisia over the past few years. Violence affecting households, minorities, political parties, and public figures has spread widely on social media. As a result, it has become easier for extremist, racist, misogynistic, and offensive articles, posts, and comments to be shared. Today, various international and governmental groups have vowed to fight internet hate speech. This paper proposes a deep-learning solution for finding hateful and offensive speech on Arabic social media sites such as Facebook. We introduce two models for Facebook comment classification toward hate speech detection: a Bi-LSTM based on an attention mechanism, and a variant integrating BERT. For this task, we collected 2k Tunisian-dialect comments from Facebook. The proposed approach has been evaluated on three datasets, and the obtained results demonstrate that the proposed models can improve Arabic hate speech detection, with an accuracy of 98.89%.

S13.5 An eXplainable Artificial Intelligence Method for Deep Learning-Based Face Detection in Paintings
Siwar Bengamra (University of Tunis El Manar & University of the Littoral Opal Coast, Tunisia); Ezzeddine Zagrouba (University Tunis El Manar & Higher Institute of Computer Science & LIMTIC Lab., Tunisia); André Bigand (University of the Littoral Opal Coast, France)

Recently, despite the impressive success of deep learning, eXplainable Artificial Intelligence (XAI) has become an increasingly important research area for ensuring transparency and trust in deep models, especially in the field of artwork analysis. In this paper, we analyse the major research milestones in perturbation-based XAI methods and propose a novel iterative method based on guided perturbations to explain face detection in Tenebrism painting images. Our method is independent of the model's architecture, outperforms the state-of-the-art method, and requires very little computational resources (no need for GPUs). Quantitative and qualitative evaluation shows the effectiveness of the proposed method.

S13.6 The Importance of Robust Features in Mitigating Catastrophic Forgetting
Hikmat Khan and Nidhal Bouaynaya (Rowan University, USA); Ghulam Rasool (Moffitt Cancer Center, USA)

Continual learning (CL) is an approach to address catastrophic forgetting, which refers to neural networks forgetting previously learned knowledge when trained on new tasks or data distributions. Work on adversarial robustness has decomposed features into robust and non-robust types and demonstrated that models trained on robust features significantly enhance adversarial robustness. However, no study has examined the efficacy of robust features, through the lens of CL, in mitigating catastrophic forgetting. In this paper, we introduce the CL robust dataset and train four baseline models on both the standard and CL robust datasets. Our results demonstrate that CL models trained on the CL robust dataset experience less catastrophic forgetting of previously learned tasks than when trained on the standard dataset. Our observations highlight the significance of the features provided to the underlying CL models, showing that CL robust features can alleviate catastrophic forgetting.

Tuesday, July 11 8:30 - 10:30 (Africa/Tunis)

S14: Security in Computers and Communications (onsite)

Room: TULIPE 3 / webex 3
Chair: Dimitrios Zorbas (Nazarbayev University, Kazakhstan)
S14.1 Autoencoder-SAD: An Autoencoder-Based Model for Security Attacks Detection
Diana Gratiela Berbecaru and Stefano Giannuzzi (Politecnico di Torino, Italy); Daniele Canavese (CNRS, Italy)

In recent years, a variety of cybersecurity attacks have affected national infrastructures, big companies, and even medium-sized organizations. As countermeasures are implemented, new attack variants appear. Historically, signature-based and anomaly-based Intrusion Detection Systems (IDSs) have been used for detecting abnormal network behavior. The signature-based IDS is typically effective against attacks for which attack signatures exist. The anomaly-based IDS instead performs traffic analysis and raises alerts when encountering suspicious network patterns; it can detect attacks without registered signatures through different mechanisms, including machine learning (ML) techniques. We propose Autoencoder-SAD, an anomaly-based detection model that can identify new cybersecurity attacks by exploiting the autoencoder model. Through empirical tests with two datasets (CIC-IDS2017 and TORSEC), we evaluated Autoencoder-SAD against two supervised ML models (Random Forest and Extreme Gradient Boosting) and one semi-supervised autoencoder-based model. The results are promising, since our approach shows an AUC of 0.94 for known attacks and 0.68 for unknown attacks.

S14.2 Pump Up the JARM: Studying the Evolution of Botnets Using Active TLS Fingerprinting
Eva Papadogiannaki (Telecommunication Systems Research Institute, Technical University of Crete, Greece & FORTH, Greece); Sotiris Ioannidis (Technical University of Crete, Greece)

The growing adoption of network encryption protocols, like TLS, has altered the scene of network traffic monitoring. With the increase in network encryption, typical DPI systems that monitor network packet payload contents are becoming obsolete, while, in the meantime, adversaries abuse the TLS protocol to bypass them. In this paper, aiming to understand the botnet ecosystem in the wild, we contact IP addresses known to participate in malicious activities using the JARM tool for active probing. Based on packets acquired from TLS handshakes, server fingerprints are constructed over a period of 7 months. We investigate whether it is feasible to detect suspicious servers and re-identify similar ones within blocklists with no prior knowledge of their activities. We show that it is important to update fingerprints often, or to follow a more effective fingerprinting approach, since the ratio of overlap with legitimate servers rises over time.

S14.3 Exploiting Emercoin Blockchain and Trusted Computing for IoT Scenarios: A Practical Approach
Diana Gratiela Berbecaru and Lorenzo Pintaldi (Politecnico di Torino, Italy)

Traditional Public Key Infrastructures (PKIs) seem inadequate for some Internet of Things (IoT) environments that require fast, flexible, and secure solutions. Alternatively, IoT devices could generate asymmetric key pairs on their own and store the corresponding public keys or X.509 certificates in a blockchain, e.g., Emercoin. Nevertheless, in some contexts, reliable device identification is still required. We extended an Emercoin-based decentralized PKI solution for IoT scenarios by delegating TPM (Trusted Platform Module) 2.0-based device identification to a specific trusted node in the IoT network, named the Device Manager (DM). Through experimental tests performed with a TPM 2.0-equipped Raspberry Pi 4 device, we evaluated the time spent registering IoT devices in the blockchain and establishing secure (TLS) channels. Even though the Emercoin-based TLS handshake time is higher than the standard one, the proposed solution remains a viable alternative in scenarios requiring flexibility and device identification.

S14.4 Performance Analysis of Physical Layer Security in Power Line Communication Networks
Javier Hernandez Fernandez (Iberdrola, Spain); Aymen Omri (Iberdrola Innovation Middle East, Qatar Science & Technology Park, Qatar); Roberto Di Pietro (King Abdullah University of Science and Technology (KAUST), Saudi Arabia)

Due to the broadcast nature of power line communication (PLC) channels, confidential information exchanged on the power grid is prone to malicious exploitation by any PLC device connected to the same power grid. To combat the ever-growing security threats, physical layer security (PLS) has been proposed as a viable safeguard for, or complement to, existing security mechanisms. In this paper, the security of a typical PLC adversary system model is analysed. In particular, we derive expressions for the corresponding average secrecy capacity (ASC) and secrecy outage probability (SOP) of the considered PLC system. In addition, numerical results are presented to validate the obtained analytical expressions and to assess the relevant PLS performance. The results show a significant impact of the transmission distances and the carrier frequency used on the overall transmission security.

S14.5 Evaluation of PTP Security Controls on gPTP
Mahdi Fotouhi and Alessio Buscemi (University of Luxembourg, Luxembourg); Florian Jomrich (Honda R&D Europe Germany, Germany); Christian Köbel (Honda R&D Europe, Germany); Thomas Engel (University of Luxemburg, Luxembourg)

In recent years, the scientific community has been focusing on deterministic Ethernet, which has helped drive the adoption of Time-Sensitive Networking (TSN) standards. Precision Time Protocol (PTP), specified in IEEE 1588 [1], is a TSN standard that enables network devices to be synchronized with a degree of precision noticeably higher than that of other Ethernet synchronization protocols [2]. Generic Precision Time Protocol (gPTP) [3], a profile of PTP, is designed for low latency and jitter, which makes it suitable for industrial applications. However, like PTP, gPTP has no built-in security measures. In this work, we assess the efficacy of the additional security mechanisms suggested for inclusion in IEEE 1588 (PTP) 2019 [1]. The analysis consists of implementing these security mechanisms on a physical gPTP-capable testbed and evaluating them against several high-risk attacks on gPTP [4].

Tuesday, July 11 8:30 - 10:30 (Africa/Tunis)

S15: Wireless Networks (online)

Room: Webex 5
Chair: Lobna Hsairi (University of Jeddah, Saudi Arabia)
S15.1 Knocking Cells: Latency-Based Identification of IPv6 Cellular Addresses on the Internet
Ming Wang, Yahui Li, Han Zhang, Allen Hong, Jun He and Jilong Wang (Tsinghua University, China)

IPv6 mobile networks are becoming increasingly important. Many tasks rely on understanding IPv6 mobile networks at the IP level. Previous works on cellular identification suffer from coarse identification granularity, proprietary data, or not working for IPv6. The high latency in mobile networks makes it possible to identify cellular addresses using the Round-Trip Time (RTT) difference. However, due to the impact of packet loss on the measurement of the RTT difference, identifying cellular addresses with low overhead is challenging. In this paper, by triggering a non-zero RTT difference in cellular /48 subnets with probes, we propose an accurate latency-based method to distinguish cellular /48 subnets from fixed subnets. Experiments demonstrate that the method can identify cellular subnets with a precision of 93.52% to 99.95% and a recall of 99.96% on a worldwide dataset. The overhead of measuring the RTT difference is reduced by at least 10x compared with existing methods, while remaining robust to packet loss.

S15.2 Seed Node Selection Algorithm Based on Node Influence in Opportunistic Offloading
Qi Tang, Ruijie Hang, Gang Xu, Gaofeng Zhang, Shuai Li and Baoqi Huang (Inner Mongolia University, China)

As a kind of mobile traffic offloading technology, opportunistic offloading downloads and distributes data through seed nodes. How to efficiently and accurately select suitable seed nodes is therefore a key problem in opportunistic offloading. To address the lack of research on seed node content coverage in existing studies, this paper combines the characteristics of opportunistic networks with the solution idea of the influence maximization problem and proposes, for the first time, an evaluation model of node influence. On this basis, we propose a seed node selection algorithm based on node influence (SNSNI). Experimental results show that, compared with a random algorithm, the set of seed nodes selected by the SNSNI algorithm obtains a smaller average message transmission delay and covers more nodes in the offloading scenario, and fewer seed nodes are required to achieve the same effect.

S15.3 Communication Interference Recognition Based on Improved Deep Residual Shrinkage Network
Xiaojun Wu, Yaya Lu and Zhenghan Tang (Xi'an Jiaotong University, China); Daolong Wu (China Electronics Technology Group Corporation 20th Institute, China); Haitao Xiao and Zhongzheng Sun (Xi'an Jiaotong University, China)

In complex battlefield environments, Flying Ad-hoc NETworks (FANETs) face the challenge of a low recognition rate of communication interference under a low Jamming-to-Noise Ratio (JNR). To solve this problem, a Simple Non-local Correction Shrinkage (SNCS) module is constructed, which modifies the soft threshold function and embeds it into the neural network. Local Importance-based Pooling (LIP) is introduced to enhance the useful features, and a joint loss function is constructed by combining cross-entropy and center loss. To achieve new-class rejection, an acceptance factor is proposed, and the One-Class Support Vector Machine Simplified Non-local Residual Shrinkage Network (OCSVM-SNRSN) model is constructed. Experimental results show that the accuracy of the OCSVM-SNRSN model is the highest under low JNR. The accuracy is increased by about 4%-9% compared with other methods, and reaches 99% when the JNR is -6 dB. At the same time, the False Positive Rate (FPR) drops to 9%.

S15.4 SCON: A Secure Cooperative Framework Against Gossip Dissemination in Opportunistic Network
Jinlong E (Renmin University of China, China); Chaokun Zhang (Tianjin University, China)

As a proper supplement to traditional wireless communication, opportunistic networks provide a feasible and inexpensive way to achieve message delivery, especially in extreme environments. However, the gossip dissemination problem severely affects network performance and is hard to tackle due to the network's characteristics of longer transmission delays and higher mobility. To address this problem, we propose a robust and efficient framework named SCON, which contains a flexible region-based cluster routing algorithm to relieve the impact of gossip; crowd-sourced prosecution, attacker judgment schemes, and node reward and punishment mechanisms to discover and eliminate attackers that disseminate gossip; as well as several buffer maintenance mechanisms to further improve network performance. Comprehensive evaluations demonstrate the high performance and robustness of our framework compared with state-of-the-art approaches when gossip dissemination occurs.

S15.5 FindSpy: A Wireless Camera Detection System Based on Pre-Trained Transformers
Zhixin Shi (Chinese Academy of Sciences, China); Hao Wu (Institute of Information Engineering Chinese Academy of Science, China); Jing Zhang (Institute of Information Engineering Chinese Academy of Sciences, China); Meng Zhang (Chinese Academy of Sciences, China); Weiqing Huang (Institute of Information Engineering, Chinese Academy of Sciences, China)

Wireless cameras have become a major cybersecurity concern due to privacy breaches. Recent flow-based methods for wireless camera detection have achieved promising results. However, these methods require specialized equipment for deployment and massive amounts of labeled data for training, which makes them impractical in real-world scenarios. In this paper, we propose FindSpy, a lightweight wireless camera detection method based on pre-trained Transformers, to address this challenge. By utilizing the air interface technique, FindSpy can obtain data without connecting to the wireless network where the camera is located. Additionally, FindSpy learns an air interface WiFi traffic representation by pre-training a traffic representation model on large-scale unlabeled data and fine-tuning it on a small amount of labeled data. FindSpy can accurately detect wireless cameras with a CNN-LSTM classifier. Extensive experiments show that FindSpy outperforms the state-of-the-art methods with little data. Concretely, FindSpy achieves a detection accuracy of over 98% by analyzing just five data packets.

Tuesday, July 11 8:30 - 10:30 (Africa/Tunis)

S16: Internet of Things (IoT) (online)

Room: Webex 4
Chair: Hana Krichene (CEA, France)
S16.1 Energy-Efficient IoT Communications: A Comparative Study of Long-Term Evolution for Machines (LTE-M) and Narrowband Internet of Things (NB-IoT) Technologies
Nassim Labdaoui (IETR & Watteco, France); Fabienne Nouvel (INSA IETR RENNES, France); Stéphane Dutertre (Watteco, France)

This article investigates the power consumption of LTE-M and NB-IoT devices using u-blox SARA-R422S modules complying with the standards of two French operators. Our findings suggest that, under certain conditions, these technologies can achieve a 5-year operational lifespan. The size of the transmitted data does not have a significant impact on power consumption under favourable coverage conditions, but can quickly affect battery life under harsh coverage conditions. This article offers insights into the power consumption of LTE-M and NB-IoT devices and provides useful information for those considering these technologies for IoT applications.

S16.2 LoFall: LoRa-Based Long-Range Through-Wall Fall Detection
Xuehan Zhang, Zhongxu Bao, Yuqing Yin, Xu Yang, Xiao Xu and Qiang Niu (China University of Mining and Technology, China)

Fall detection is an essential measure for the safety of the elderly. While traditional contact-based methods offer acceptable detection performance, recent advances in wireless sensing could enable contact-free fall detection. However, two severe limitations are short sensing range and weak through-wall capability, which hamper wide application in smart homes. This paper proposes a novel system, LoFall, which is the first to utilize the LoRa signal to realize contact-free, long-range, through-wall fall detection. We address unique technical challenges, such as proposing a novel candidate signal search strategy to reduce the calculation time of fall detection, and designing a weighted feature fusion algorithm based on fuzzy entropy to improve the accuracy of through-wall fall detection. Comprehensive experiments are conducted to evaluate LoFall. Results show that it achieves a total accuracy of 93.3% for through-wall fall detection and supports a detection range of up to 10 m.

S16.3 Physical Layer Identity Information Protection Against Malicious Millimeter Wave Sensing
Zhen Meng, Yiming Shi, Yumeng Liang, Xinzhe Wen, Anfu Zhou and Huadong Ma (Beijing University of Posts and Telecommunications, China)

Gait recognition based on millimeter waves (mmWave) can recognize people's identity by sensing their walking posture, and has found versatile uses in many fields, such as smart homes, intelligent security, and health monitoring. While this technology has gained extensive attention in recent years, the possibility of its misuse is also increasing. A snooper who misuses the technology could monitor a victim's identity information imperceptibly, due to the characteristics of mmWave-based gait recognition. In this paper, we propose an identity protector called WW-IDguard, which disrupts the snooper at the physical level. The key idea is that the protector sends a unique signal that interferes not only with the signal but also with the gait features of the person "seen" by the snooper. Experiments demonstrate that WW-IDguard can significantly reduce the accuracy of mmWave-based gait recognition used by snoopers. We also perform a measurement analysis of the basic method.

S16.4 BPCluster: An Anomaly Detection Algorithm for RFID Trajectory Based on Probability
Fei Liang (University of Chinese Academy of Sciences, China); Siye Wang (Chinese Academy of Sciences & School of Computer and Information Technology, Beijing Jiaotong University, China); Ziwen Cao (University of Chinese Academy of Sciences, China); Yue Feng (University of Chinese Academy of Sciences Institute of Information Engineering, China); Shang Jiang (University of Chinese Academy of Sciences & Institute of Information Engineering, Chinese Academy of Sciences, China); Yanfang Zhang (Chinese Academy of Sciences, China)

Indoor public places face more and more security risks and need to be monitored to find potential anomalies. Benefiting from the advantages of low cost and high privacy, RFID is widely used in indoor monitoring. At present, a common solution is to construct time-sequence trajectories from raw RFID data and then perform preprocessing and cluster analysis. However, the raw RFID data contain redundant and uncertain factors, which affect the efficiency of anomaly detection. In this paper, we propose BPCluster, a probability-based anomaly detection algorithm for indoor RFID trajectories. The algorithm incorporates a probabilistic trajectory model, which reduces redundancy and uncertainty through the contextual information of trajectories, and then clusters trajectories with an improved LCS algorithm to find abnormal ones. Experiments show that BPCluster performs better in effectiveness and environmental adaptability, and its average accuracy in various environments reaches 91%.
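[Editor's illustration] The clustering step in this abstract rests on an LCS-based similarity between trajectories. As a baseline sketch only (the paper's improved LCS variant is not described here, and the reader-zone IDs below are hypothetical), the standard dynamic-programming LCS can score how much two reader-zone sequences overlap:

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of two sequences."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def lcs_similarity(a, b):
    """Normalized similarity in [0, 1] between two trajectories."""
    return lcs_length(a, b) / max(len(a), len(b)) if a and b else 0.0

# Trajectories as sequences of reader-zone IDs (hypothetical data).
t1 = ["A", "B", "C", "D", "E"]
t2 = ["A", "B", "X", "D", "E"]   # close to t1: one zone differs
t3 = ["E", "D", "C", "B", "A"]   # reversed path: low overlap
assert lcs_similarity(t1, t2) == 0.8
assert lcs_similarity(t1, t3) < lcs_similarity(t1, t2)
```

A trajectory with low similarity to every cluster representative would then be a candidate anomaly.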

S16.5 Antenna for Early Detection of Skin Sarcoma by Resonant Frequency Shift
Silue Dozohoua (Innov'Com Laboratory, Sup'Com, University of Carthage & Lastic, ESATIc, Abidjan, Côte d'Ivoire, Tunisia); Mondher Labidi (University of Carthage, Tunisia); Fethi Choubani (Innov'Com Laboratory, SUPCOM, University of Carthage, Tunisia)

In this paper, a small antenna is proposed to diagnose skin sarcoma at an embryonic stage. The antenna has an area of 30.54 x 15.27 mm2, and resonates at 1429 MHz with a reflection coefficient of -17.64 dB. The structure consists of a 35 µm copper sheet etched on a 1.6 mm FR-4 substrate. The diagnosis is based on the resonance frequency shift, and the SAR (Specific Absorption Rate) variation when the antenna is positioned on malignant tissue. For the simulations, a three-layer body phantom (skin, fat, and muscle), and a half-sphere tumor phantom were considered. Simulations of the antenna performance showed that for a tumor of 100 µm, the resonant frequency, and the SAR decrease by 2 MHz and 1.09 mW/Kg, respectively. In addition to sarcoma detection, the antenna's 3.6 dBi gain allows for 124.47 m biomedical communication links in a complex environment.

S16.6 Efficient Privacy-Preserving Multi-Functional Data Aggregation Scheme for Multi-Tier IoT System
Yunting Tao (College of Information Engineering, Binzhou Polytechnic, China); Fanyu Kong and Yuliang Shi (Shandong University, China); Jia Yu and Hanlin Zhang (Qingdao University, China); Huiyi Liu (Shandong Sansec Information Technology Company Limited, China)

The proliferation of Internet of Things (IoT) devices has led to the generation of massive amounts of data that require efficient aggregation for analysis and decision-making. However, multi-tier IoT systems, which involve multiple layers of devices and gateways, face more complex security challenges in data aggregation than ordinary IoT systems. In this paper, we propose an efficient privacy-preserving multi-functional data aggregation scheme for a multi-tier IoT architecture. The scheme supports privacy-preserving calculation of the mean, variance, and anomaly proportion. It uses the Paillier cryptosystem and the BLS algorithm for encryption and signing, and uses blinding techniques to keep the size of the IoT system secret. To make the Paillier algorithm more suitable for the IoT scenario, we also improve its encryption and decryption efficiency. The performance evaluation shows that the scheme improves encryption efficiency by 43.7% and decryption efficiency by 45% compared to the existing scheme.
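[Editor's illustration] The additive homomorphism of the Paillier cryptosystem is what enables this kind of aggregation: multiplying ciphertexts yields an encryption of the sum, so a gateway can aggregate readings without ever decrypting them. A minimal textbook sketch with toy parameters (insecure key sizes, and not the paper's optimized variant):

```python
import random
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

# Toy primes for demonstration only; real deployments use >= 2048-bit moduli.
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1                       # standard choice g = n + 1
lam = lcm(p - 1, q - 1)

def L(x):                       # L(x) = (x - 1) / n, per the Paillier scheme
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # modular inverse (Python 3.8+)

def encrypt(m):
    """c = g^m * r^n mod n^2, with random r coprime to n."""
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """m = L(c^lam mod n^2) * mu mod n."""
    return (L(pow(c, lam, n2)) * mu) % n

# Additive homomorphism: multiplying ciphertexts adds plaintexts.
readings = [17, 25, 8]
aggregate = 1
for m in readings:
    aggregate = (aggregate * encrypt(m)) % n2
assert decrypt(aggregate) == sum(readings)
```

The gateway only ever sees `aggregate`; the mean follows by dividing the decrypted sum by the (blinded) count, and variance can be derived by also aggregating encrypted squares.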

Tuesday, July 11 10:30 - 11:00 (Africa/Tunis)

PDS2: Poster Session and Coffee Break

Room: MIMOSA
Chair: Lobna Hsairi (University of Jeddah, Saudi Arabia)
Square Tessellation for Stochastic Connected k-Coverage in Planar Wireless Sensor Networks
Kalyan Nakka and Habib M. Ammari (Texas A&M University-Kingsville, USA)

In this paper, we focus on the problem of connected k-coverage in planar wireless sensor networks (PWSNs), in which every point in a field of interest (FoI) is covered by at least k sensors simultaneously, with k > 1, while all the participating sensors are mutually connected. To this end, we develop a global framework using a square tessellation that considers both deterministic and stochastic sensing models. Initially, we tessellate a planar FoI into adjacent and congruent square tiles. In each tile of this tessellation, we construct a cusp-square area for sensor placement to achieve k-coverage. Based on this cusp-squared square tile configuration, we compute the minimum sensor density required for deterministic and stochastic k-coverage in PWSNs. Then, we establish the relationship that must hold between the sensing and communication ranges of the sensors to maintain network connectivity in k-covered PWSNs. Finally, we propose our stochastic k-coverage protocol for sensor scheduling and substantiate our theoretical analysis with simulation results.

Auto-Scalable Software Defined Networking Control Plane for Internet of Things
Intidhar Bedhief (University of Tunis El Manar, Tunisia & ENIT, Tunisia); Meriem Kassar (University Tunis El Manar, ENIT, Communication Systems Laboratory, Tunisia); Taoufik Aguili (ENIT, Tunisia)

The Internet of Things (IoT) connects billions of heterogeneous things via the Internet. The IoT network therefore requires network control methods for dynamic and efficient management. The Software Defined Networking (SDN) paradigm decouples the control plane from the data plane, enabling programmability and centralized control and management of the network, and thus provides efficient and reliable IoT network management. However, the dynamicity, variability, and scalability of the IoT network topology remain a challenge, despite the proposal of distributed SDN controllers as a solution for scalability. More precisely, the distributed controller introduces the distributed state management problem and is ineffective at automating the highly dynamic and variable IoT network. In this paper, we propose an auto-scalable distributed SDN control plane to manage and control the IoT network. The proposed distributed controller is deployed using Kubernetes. The test results validate the feasibility of our new approach.

RGB-2-Hyper-Spectral Image Reconstruction for Food Science Using Encoder/Decoder Neural Architectures
Robert Alexander Williamson and Jesus Martinez del Rincon (Queen's University Belfast, United Kingdom (Great Britain)); Carlos Reaño (Universitat de València, Spain); Anastasios Koidis (Queen's University Belfast, United Kingdom (Great Britain))

Hyper-spectral imaging captures both spatial and spectral information of a subject and is used for the identification of substances within a scene and for food analysis. We present an investigation into the capabilities of encoder/decoder deep learning architectures for hyper-spectral image reconstruction from RGB images. For this analysis, state-of-the-art (SOTA) techniques for hyper-spectral image reconstruction and other architectures from different fields have been used. Our approach examines a food science case study, using a CPU-based server and different accelerators. An in-house multi-sensor setup was used to capture the dataset, which contains hyper-spectral images of twenty slices of different Spanish ham in the range of 400-1000 nm and their analogous RGB images. The results show no degradation in the output when moving outside of the visual range. This study shows that the SOTA methods for reconstructing from RGB do not produce the most accurate reconstruction of the spectral domain within the range of 400-1000 nm.

Beta Multi-Objective Whale Optimization Algorithm
Ahlem Aboud (REGIM Research Groups in Intelligent Machines, University of Sfax & University of Sousse, ISITCom, and High Institute of Applied Science and Technology, Tunisia); Nizar Rokbani (ISSAT of Sousse, University of Sousse, Tunisia); Adel M. Alimi (REGIM, University of Sfax, National School of Engineers, Tunisia)

This paper presents a new β-Multi-Objective Whale Optimization Algorithm, β-MOWOA. The β-MOWOA algorithm uses two profiles, based on the beta function, to control the exploration and exploitation phases. The exploitation step follows a narrow beta distribution, while the exploration phase uses a broad, Gaussian-like beta. The experimental study focused on 13 Dynamic Multi-Objective Optimization Problems (DMOPs). Comparative results are based on the Wilcoxon signed-rank test and one-way ANOVA. The results prove the statistical significance of the β-MOWOA algorithm against state-of-the-art methods for solving DMOPs: on 9/13 problems using Inverted Generational Distance and on 10/13 using Hypervolume Difference.
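As a toy illustration of the two-profile idea (not the authors' implementation; the shape parameters below are hypothetical), beta-distributed step multipliers can be drawn with Python's standard library. A narrow beta concentrates steps for exploitation, while a broad, Gaussian-like beta spreads them out for exploration:

```python
import random

def beta_step(phase: str) -> float:
    """Draw a step-size multiplier in [0, 1] from a beta profile.

    Hypothetical shape parameters: a narrow beta for exploitation
    (mass concentrated near the mean) and a broad, Gaussian-like
    beta for exploration (wider spread around the same mean).
    """
    if phase == "exploitation":
        a, b = 20.0, 20.0   # narrow, sharply peaked around 0.5
    else:                    # exploration
        a, b = 2.0, 2.0     # broad, Gaussian-like spread
    return random.betavariate(a, b)

def spread(xs):
    """Sample standard deviation (population form) of a list."""
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

random.seed(0)
explore = [beta_step("exploration") for _ in range(10_000)]
exploit = [beta_step("exploitation") for _ in range(10_000)]

# Exploration steps spread far more widely than exploitation steps.
print(spread(explore) > spread(exploit))
```

Both profiles share the same mean (0.5), so only the dispersion of the moves changes between phases, which is the behaviour the abstract describes.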

Backhaul Assessment in Dual Band WiFi Mesh
João Francisco Soares (Instituto de Telecomunicações, Portugal); Miguel Luis (Instituto Superior Técnico & Instituto de Telecomunicacoes, Portugal); Duarte Raposo (Instituto de Telecomunicações, Portugal); Pedro Rito and Susana Sargento (Instituto de Telecomunicações, Universidade de Aveiro, Portugal)

Over the years, WiFi has become an essential technology. The successful introduction of wireless devices with WiFi connectivity has increased the demand for better WiFi networks. Such networks need better service and better coverage, whether in mobile or residential environments. To address this challenge, the WiFi Alliance developed WiFi EasyMesh, a standard for WiFi networks that uses multiple access points and allows easy setup and compatibility with WiFi-certified devices. This work studies the performance of a mesh wireless network with different frequencies in use by the backhaul links (2.4 GHz and 5 GHz). The results can then be used to derive better backhaul steering algorithms for better Quality-of-Service.

Data Analysis of Electromyostimulation Training Effect on Muscles and Sports Performance
Syrine Kallel and Rahma Fourati (University of Sfax, Tunisia)

This paper investigates the effect of modern lifestyles on physical activity and the rise of time-saving exercise protocols to improve health and performance. Two such protocols, Bio Impedance Analysis (BIA) and Electrical Muscle Stimulation (EMS), are explored in detail. BIA is a non-invasive and painless technology that accurately evaluates the amount of water, proteins, minerals, and fat mass in the body. EMS involves applying electrical currents to muscles through electrodes to induce involuntary contractions and is being used for muscle rehabilitation, treating obesity, and improving body shape. The manufacturers of EMS devices claim that 20 minutes of electro-stimulation work is equivalent to 4 hours of traditional sports. The emergence of new technologies such as INTEGRAL-EMS, a combination of portable electro-stimulation with an integrated, wireless, high-performance generator, is making it increasingly accessible for people to achieve their health and fitness goals in a more efficient and convenient manner.

An Integrated Framework for Bird Recognition Using Dynamic Machine Learning-Based Classification
Wiam Rabhi, Fatima El Jaimi, Walid Amara and Zakaria Charouh (Majal Berkane, Morocco); Amal Ezzouhri (Mohammed V University, Morocco); Houssam Benaboud, Moudathirou Ben Saindou and Fatima Ouardi (Majal Berkane, Morocco)

Bird recognition in computer vision poses two main challenges: high intra-class variance and low inter-class variance. High intra-class variance refers to the significant variation in the appearance of individual birds within the same species. Low inter-class variance refers to the limited visual differences between distinct bird species. In this paper, we propose a robust integrated framework for bird recognition using a dynamic machine learning-based technique. Our system is designed to identify over 11,000 species of birds based on multiple components. As part of this work, we propose two public datasets. The first (E-Moulouya BDD) contains over 13k images of birds for detection tasks, while the second (LaSBiRD) contains about 5M labelled images of 11k species. Our experiments yielded promising results, indicating the impressive performance of our system in detecting and classifying birds, with a mAP of 0.715 for detection and an accuracy rate of 96% for classification.

A GNN-Based Rate Limiting Framework for DDoS Attack Mitigation in Multi-Controller SDN
EL Kamel Ali (Research Lab PRINCE (Tunisia) & Institut Supérieur d'Informatique et des Techniques de Communication - Hammam Sousse, Tunisia)

This paper proposes proactive protection against DDoS attacks in SDN based on dynamically monitoring host rates and penalizing misbehaving hosts through a weight-based rate limiting mechanism. Basically, this approach relies on the power of Graph Neural Networks (GNN) to leverage online deep learning. First, an encoder-decoder function converts a time-series vector of a host's features to an embedding representation. Then, GraphSAGE uses the hosts' embedding vectors to learn latent features of switches, which are used to forecast next time-step values. The predicted values are input to a multi-loss DNN model to compute two discounts that are applied to the weights associated with source edges using multi-hop SDG-based backpropagation. Realistic experiments show that the proposed solution succeeds in minimizing the impact of DDoS attacks on both the controllers and the switches regarding the PacketIn arrival rate at the controller and the rate of accepted requests.

Tuesday, July 11 11:00 - 12:00 (Africa/Tunis)

Keynote: Leveraging Urban Computing with Smart Internet of Drones, by Azzedine Boukerche (Canada)

Chair: Adel M. Alimi (REGIM, University of Sfax, National School of Engineers, Tunisia)

Urban computing (UC) is an interdisciplinary field that seeks to improve people's lives in urban areas. To achieve this objective, UC collects and analyzes data from several sources. In recent years, the Internet of Drones (IoD) has received significant attention from the academic community and has emerged as a potential data source for UC applications. The goal of this talk is to examine how IoD can connect and leverage UC in a variety of applications, including public safety and security, the environment, traffic improvement, and drone-assisted networks, just to mention a few. In this context, data acquired by IoD can fill gaps in data collected from other sources and provide new data for UC given the aerial view of drones. Thus, we shall first introduce the relationship between the concepts of UC and IoD, and then discuss our proposed general framework considering the perspective of IoD for UC, followed by design guidelines for Internet of Drones location privacy protocols. Last but not least, we shall discuss some key challenges in this emerging area.

Tuesday, July 11 12:00 - 13:00 (Africa/Tunis)

Keynote: Generative Artificial Intelligence: Opportunities and Challenges, by Fakhri Karray (Canada/UAE)

Chair: Adel M. Alimi (REGIM, University of Sfax, National School of Engineers, Tunisia)

The talk presents recent trends and significant advances in Artificial Intelligence (AI), namely Generative Artificial Intelligence (GAI). As demonstrated by impressive accomplishments made in the field (such as ChatGPT, BARD, LLaMA and other generative AI-based engines) and due to fundamental advances in machine learning and artificial intelligence, many predict we are at the cusp of a new technological revolution, the impact of which will affect all humanity. AI is expected to grow the world GDP by up to 20% by 2025, amounting to more than 15 trillion dollars of growth over the next few years. These developments have significantly impacted technological innovations in the fields of the Internet of Things, self-driving machines, powerful chatbots, virtual assistants, human-machine intelligent interfaces, large language models, real-time translators, cognitive robotics, high-quality disease diagnosis, remote health care monitoring, financial market prediction, and Fintech, to name a few. Although AI constitutes an umbrella of several interrelated technologies, all of which are aimed at imitating to a certain degree intelligent human behavior or decision making, deep learning algorithms are considered to be the driving force behind the explosive growth of AI and its applications in almost every sector of the modern and global economy. The talk outlines major milestones that led to the current growth in AI and GAI, the role of academic institutions, industry, and government, and discusses some of the significant achievements in the field. It also highlights real challenges when these innovations are misused, leading to potential adverse effects on society and end-users.

Tuesday, July 11 13:00 - 14:00 (Africa/Tunis)

Tuesday, July 11 14:00 - 16:00 (Africa/Tunis)

ICTS4eHealth - S5: Security and Privacy

Room: TULIPE 2 / webex 2
Chair: Armando Ruggeri (University of Messina, Italy)
ICTS4eHealth - S5.1 A Scalable Intrusion Detection Approach for Industrial Internet of Things Based on Federated Learning and Attention Mechanism
Mudhafar Fadhil Nuaimi (National School of Electronics and Telecommunications of Sfax & Digital Research Center of Sfax (CRNS), Tunisia); Lamia Chaari (University of Computer Science and Multimedia of Sfax & Laboratory of Technologies for Smart Systems at Digital Research Center of Sfax (CRNS), Tunisia); Bassem Ben Hamed (ENET'Com- Sfax - Tunisia, Tunisia)

The widespread adoption of the Industrial Internet of Things (IIoT) has prompted multiple breaches of IIoT devices by attackers, which threatens the security of end-user data. Recurrent neural networks (RNNs) have been used for intruder detection in IIoT because IIoT traffic is generated sequentially, but they are unable to represent long traffic sequences and cannot be parallelized either. To solve these problems, in this paper we introduce the attention technique, which is applied to the encoding layer. We also use a federated learning (FL) approach to reduce the communication overhead of collecting data from each worker node and storing it in the cloud server, as in a centralized model, thus preserving network scalability. We test our suggested methodology on the Edge-IIoT dataset. The outcomes of our FL experiment enable the system to scale.

ICTS4eHealth - S5.2 An Improved GIFT Lightweight Encryption Algorithm to Protect Medical Data in IoT
Aymen Badr (University of Diyala, Iraq); Lamia Chaari (University of Computer Science and Multimedia of Sfax & Laboratory of Technologies for Smart Systems at Digital Research Center of Sfax (CRNS), Tunisia)

The IoT is made possible by the development of the latest technologies and allows the interconnection of different devices that may collect massive amounts of data. As a result, IoT security requirements are essential. Network authentication, confidentiality, data integrity, and access control are secured via encryption. Traditional encryption protocols are no longer suitable for all IoT scenarios, such as smart cities, due to the many limitations of IoT devices. In order to secure data across IoT networks, academics have proposed a variety of lightweight encryption methods and protocols. In this article, we propose a new lightweight encryption method, improved to meet the requirements of protecting patient information in the field of health care while taking into account the limited capacity of portable medical devices. The results were compared with the traditional method and show better performance and a high ability to protect patients' medical files.

ICTS4eHealth - S5.3 Light Automatic Authentication of Data Transmission in 6G/IoT Healthcare System
Sarra Jebri (University of Gabes, Tunisia); Arij Ben Amor (University of Tunis-Elmanar, Tunisia); Mohamed Abid (University of Gabes, Tunisia); Ammar Bouallegue (National School of Engineers of Tunis, Tunisia)

IoT healthcare systems aim to help patients monitor their own condition and to connect and interact with healthcare professionals. However, patients' privacy and the security of sensitive data are major issues which should be considered. Healthcare systems increasingly depend on 6G technologies due to their important characteristics. This paper provides light mutual authentication and secure data transmission in a 6G/IoT healthcare system. Our proposal takes into account both emergency and non-emergency cases. We introduce light authentication and secure health data transmission between the health service provider and the health user. Then, we apply a mutual authentication algorithm between the health user and the health collaborator, followed by a light key agreement establishment. We prove the security of the protocol against several known attacks. Finally, we compare our proposed solution with other related works; the findings show its performance in terms of communication and computational costs.

ICTS4eHealth - S5.4 A Blockchain-Based Personal Health Knowledge Graph for Secure Integrated Health Data Management
Juan Li, Vikram Pandey and Rasha Hendawi (North Dakota State University, USA)

The increasing use of electronic health records (EHRs) and wearable devices has led to the creation of massive amounts of personal health data (PHD) that can be utilized for research and patient care. However, managing and integrating various types of PHD from different sources poses significant challenges, including data interoperability, data privacy, and data security. To address these challenges, this paper proposes a blockchain-based personal health knowledge graph for integrated health data management. The proposed approach utilizes knowledge graphs to structure and integrate various types of PHD, such as EHR, sensing, and insurance data, to provide a comprehensive view of an individual's health. It utilizes blockchain to ensure data privacy and security: by storing PHD on a decentralized blockchain platform, patients have full control over their data and can grant access to specific entities as needed, providing enhanced privacy and security.

ICTS4eHealth - S5.5 PRISM: Privacy Preserving Healthcare Internet of Things Security Management
Savvas Hadjixenophontos (Imperial College London, United Kingdom (Great Britain)); Anna Maria Mandalari (University College London, United Kingdom (Great Britain)); Yuchen Zhao (University of York, United Kingdom (Great Britain)); Hamed Haddadi (Imperial College London, United Kingdom (Great Britain))

Consumer healthcare Internet of Things (IoT) devices are gaining popularity in our homes and hospitals. These devices provide continuous monitoring at a low cost and can be used to augment high-precision medical equipment. However, major challenges remain in applying pre-trained global models for anomaly detection on smart health monitoring, for a diverse set of individuals that they provide care for. In this paper, we propose PRISM, an edge-based system for experimenting with in-home smart healthcare devices.
We develop a rigorous methodology that relies on automated IoT experimentation. We use a rich real-world dataset from in-home patient monitoring from 44 households of people living with dementia over two years. Our results indicate that anomalies can be identified with accuracy up to 99% and mean training times as low as 0.88 seconds. While all models achieve high accuracy when trained on the same patient, their accuracy degrades when evaluated on different patients.

ICTS4eHealth - S5.6 Assessing the Security Risks of Medical Mobile Applications
George Chatzisofroniou, Chris Markellos and Panayiotis Kotzanikolaou (University of Piraeus, Greece)

We conducted a security analysis of 140 medical mobile apps on the Android and iOS platforms. Our methodology involved looking for side-channel leaks, assessing support for old OS versions, evaluating device and app integrity protections, and searching for insecure data storage. We also performed traffic analysis to observe API communication. Our findings revealed that significant risks still exist, as the majority of apps lacked standard security measures such as root detection and secure local data storage. Four apps communicated over plain HTTP, risking the confidentiality and integrity of patient data. Most apps had side-channel leaks, exposing sensitive information. These results underscore the need for better security measures in medical mobile apps, from both technical and regulatory perspectives.

Tuesday, July 11 14:00 - 16:00 (Africa/Tunis)

S17: Vehicular & Space Communications (onsite)

Room: TULIPE 3 / webex 3
Chair: Christos Douligeris (University of Piraeus, Greece)
S17.1 High Gain and Compact Microstrip Patch Antenna Array Design for 26 GHz Broadband Wireless Systems
Sirine Ghenjeti (University of Carthage Tunis, Tunisia); Rim Barrak (Higher School of Communications of Tunis, Tunisia); Soumaya Hamouda (Mediatron Lab., Sup'Com, Tunisia & University of Carthage, Tunisia)

Millimeter waves pave the way to new mobile network domains due to their expected very high speed. Several applications such as Vehicle-to-everything (V2X) will benefit from using millimeter bands to offer very high transmission rates with low latency. However, given the high attenuation of millimeter waves in free space, special attention should be paid to antenna design to allow high gain and wide bandwidth with a compact and integrable antenna structure. This paper presents a new 4x8 microstrip patch array antenna operating in the 26 GHz band. The proposed antenna is designed on a Rogers Duroid RT5880 substrate with a dielectric constant of 2.2 and a height of 0.508 mm. The simulations are conducted with the HFSS CAD tool. The antenna simulation results show good performance with a bandwidth of 1.1 GHz, a gain of 21.26 dBi and a compact size of 110x80 mm², which is very promising for 5G V2X communications.

S17.2 Throughput Enhancement in Hybrid Vehicular Networks Using Deep Reinforcement Learning
Badreddine Yacine Yacheur (CNRS-LaBRI U M R 5800, University Bordeaux, Bordeaux-INP, France); Toufik Ahmed and Mohamed Mosbah (CNRS-LaBRI UMR 5800, University Bordeaux, Bordeaux-INP, France)

Cooperative intelligent transportation systems are now being widely investigated along with the emergence of vehicular communication. Services such as collective perception require a robust communication system with high throughput and reliability. However, a single communication technology is unlikely to support the required throughput, especially under mobility and coverage constraints. Thus, we propose in this paper a hybrid vehicular communication architecture that leverages multiple Radio Access Technologies (RATs) to enhance the communication throughput. We developed a Deep Reinforcement Learning (DRL) algorithm to select the optimal hybrid transmission strategy according to the channel quality parameters. We assess the effectiveness of our hybrid transmission strategy by a simulation scenario that shows about 20% throughput enhancement and a 10% reduction of channel busy ratio.

S17.3 On the Network Characterization of Nano-Satellite Swarms
Evelyne Akopyan (INP Toulouse & TeSA, France); Riadh Dhaou (IRIT/ENSEEIHT, University of Toulouse, France); Emmanuel Lochin (ENAC & Université de Toulouse, France); Bernard Pontet (CNES, France); Jacques Sombrin (TéSA Laboratory, France)

Low-frequency radio interferometry is crucial to understanding the universe and its very early days. Unfortunately, most of the current instruments are ground-based and thus impacted by the interference massively produced by the Earth. To alleviate this issue, scientific missions aim at using Moon-orbiting nano-satellite swarms as distributed radio-telescopes in outer space, keeping them out of Earth interference range. However, swarms of nano-satellites are systems with complex dynamics and need to be appropriately characterized to achieve their scientific mission. This paper presents a methodology based on graph theory for characterizing the swarm network system by computing graph metrics around three properties: node density, network connectivity and ISL availability. We show that these properties are well suited to highlighting possible heterogeneity in a network and adapting a routing strategy accordingly. This work is the first milestone in defining the best-suited routing strategy within the swarm from the derived network properties.
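As a toy sketch of the kind of graph metrics involved (not the paper's methodology; the 5-node topology and the per-node link budget are made up), the three properties can be computed from a plain adjacency list:

```python
from collections import deque

# Hypothetical 5-node swarm snapshot: adjacency sets of available
# inter-satellite links (ISLs), undirected.
links = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {3}}

def avg_degree(g):
    """Node density proxy: mean number of ISLs per satellite."""
    return sum(len(nbrs) for nbrs in g.values()) / len(g)

def is_connected(g):
    """Network connectivity: BFS from node 0 reaches every node."""
    seen, queue = {0}, deque([0])
    while queue:
        for nxt in g[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return len(seen) == len(g)

def link_availability(g, capacity_per_node=4):
    """ISL availability proxy: fraction of a (hypothetical)
    per-node link budget currently in use."""
    return avg_degree(g) / capacity_per_node

print(avg_degree(links))    # 2.0
print(is_connected(links))  # True
```

In this snapshot node 2 is far denser than node 4, which is the kind of heterogeneity the abstract suggests a routing strategy should adapt to.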

S17.4 Evaluation of a Collision Prediction System for VRUs Using V2X and Machine Learning: Intersection Collision Avoidance for Motorcycles
Bruno Ribeiro (University of Minho, Portugal & Algoritmi Centre UMinho VAT502011378, Portugal); Alexandre Santos (University of Minho & Centro Algoritmi, Portugal); Maria João M. R. da C. Nicolau (Universidade do Minho & Centro ALGORITMI, Portugal)

The safety factor of ITS is particularly important for VRUs, as they are typically more prone to accidents and fatalities than other road users. The implementation of safety systems for these users is challenging, especially due to their agility and hard-to-predict intentions. Still, using ML mechanisms on data collected from V2X communications has the potential to implement such systems in an intelligent and automatic way. This paper evaluates the performance of a collision prediction system for VRUs (motorcycles at intersections), by using LSTMs on V2X data generated using the VEINS simulation framework. Results show that the proposed system is able to prevent at least 74% of the collisions of Scenario A and 69% of Scenario B in the worst case of perception-reaction times; in the best cases, the system is able to prevent 94% of the collisions of Scenario A and 96% of Scenario B.

S17.5 Infrastructure-Less Long-Range Train-Arrival Notification System
Aida Eduard, Dnislam Urazayev, Aruzhan Sabyrbek, Yermakhan Magzym and Dimitrios Zorbas (Nazarbayev University, Kazakhstan)

This paper presents a portable, inexpensive, wireless and long-range train arrival notification system for railway safety applications. The purpose of the system is to notify workers servicing the rails about the arrival of a train in either direction of a railway in order to avoid accidents. The system consists of several components: the train coordinates system, the portable station component, the worker's wearable, and an Android application. A description of each component of the system is given, with a focus on the communication mechanisms between those components. The system has been deployed in an experimental environment in Kazakhstan, and the first experiments showed a communication range of several kilometers as well as a low number of repeated packet losses. Moreover, the proposed protocol for the wearables exhibited an over 99.5% packet reception ratio (PRR) in scenarios without the presence of major obstacles.

S17.6 Edge-Based IPFS for Content Distribution of City Services
José Vicente de Oliveira (Instituto de Telecomunicações & Universidade de Aveiro, Portugal); Duarte Raposo (Instituto de Telecomunicações, Portugal); Susana Sargento (Instituto de Telecomunicações, Universidade de Aveiro, Portugal)

Over the last years, there has been enormous growth in internet and mobile users. In urban regions, the combination of these events has resulted in a massive flow of network traffic to the city infrastructure and cellular operators. Content Delivery Networks (CDNs) appear as a promising solution: by distributing content through multiple nodes, they are capable of delivering content with low latency over intermittent connections, leading to a reduction in the use of the 5G backhaul. This paper proposes a CDN for a smart city, deployed in a private cellular network with CUPS and MEC, using the IPFS protocol. A comparison between the CDN and direct streaming is performed, in controlled and real-world environments, on services already deployed on the Aveiro Tech City Living Lab (ATCLL). Lastly, the impact of IPFS distribution on the edge and core, in CUPS and MEC scenarios, is assessed.

Tuesday, July 11 14:00 - 16:00 (Africa/Tunis)

S18: AI in Computers and Communications: Machine Learning (online)

Room: Webex 5
Chair: Michael Kounavis (Meta Platforms Inc., USA)
S18.1 Which2learn: A Vulnerability Dataset Complexity Measurement Method for Data-Driven Detectors
Huozhu Wang (University of Chinese Academy of Sciences, China)

The increasing number of software vulnerabilities in complex programs has posed potential threats to cyberspace security. Recently, many data-driven methods have been proposed to detect such a large number of vulnerabilities. However, most of these data-driven detectors mainly focus on developing different models to improve classification performance, ignoring the important research question of what prior knowledge is learned from vulnerability datasets when training models. In this work, we propose a novel method, called Which2learn for short, to determine which dataset is relatively high-complexity for a data-driven detector to learn prior knowledge from. Our dataset complexity measure is based on each sample's Program Dependence Graph. Experiments show that our dataset measurement method can improve the state-of-the-art GNN-based model's F1-score by about 9.5% in popular memory-related vulnerability detection. Moreover, our dataset measurement method can be easily extended to select training samples in most graph-embedding machine learning tasks.

S18.2 Multiple Information Extraction and Interaction for Emotion Recognition in Multi-Party Conversation
Feifei Xu and Guangzhen Li (Shanghai University of Electric Power, China)

Emotion recognition in multi-party conversation (ERMC) has garnered attention in the field of natural language processing (NLP) due to its wide range of applications. Its objective is to identify the emotion of each utterance. Existing models mainly focus on context modeling, while ignoring the emotional interaction and dependency between utterances. In this work, we put forward a Multiple Information Extraction and Interaction network (MIEI) for ERMC that captures emotions by integrating emotional interaction, speaker-aware context, and discourse dependency in a conversation. Emotional interaction is simulated by the proposed commonsense emotion modeling. Speaker-aware context is obtained by the proposed speaker-aware context modeling with multi-head attention. Discourse dependency modeling is improved to better depict discourse structures. We verify the superiority of our proposed method by comparing it with existing models and validate the effectiveness of each module through ablation experiments.

S18.3 UAV-Assisted Mobile Edge Computing Task Offloading Strategy for Minimizing Terminal Energy Consumption
Wenjiao Wu, Rongzuo Guo and Xiangkui Fan (Sichuan Normal University, China)

In recent years, the use of Unmanned Aerial Vehicles (UAVs) equipped with Mobile Edge Computing (MEC) servers to provide computational resources to mobile devices (MDs) has emerged as a promising technology. This paper investigates a UAV-assisted MEC system in dynamic scenarios with stochastic computing tasks. Our goal is to minimize the total energy consumption of MDs by optimizing user association, resource allocation, and the UAV trajectory. Considering the nonconvexity of the problem and the coupling among variables, we propose a novel deep reinforcement learning algorithm called improved-DDPG. In this algorithm, we employ improved Prioritized Experience Replay (PER) to enhance the convergence of the training process, and we introduce the annealing concept to enhance the algorithm's exploration capability. Simulation results demonstrate that the improved-DDPG algorithm exhibits good convergence and stability. Compared to baseline approaches, the improved-DDPG algorithm effectively reduces the energy consumption of terminal devices.

S18.4 Lightweight Video Frame Interpolation Based on Bidirectional Attention Module
Yige Li and Han Yang (Hunan University, China)

Video frame interpolation can enhance the frame rate and improve video quality by synthesizing non-existing intermediate frames between two consecutive frames. Recently, remarkable advances have been made due to the employment of convolutional neural networks. However, most existing methods suffer from motion blur and artifacts when handling cases of large motion and occlusion. To solve this problem, we propose a lightweight but effective deep neural network which is trained end-to-end. Specifically, the bidirectional attention module is first devised to enhance motion-related feature representation in both channel and spatial dimensions. Then the synthesis network estimates kernel weights, a visibility map and offset vectors to finally generate the interpolation results. Moreover, to compress the model, we introduce the Ghost module to the synthesis network, which is verified to be highly effective. Experimental results demonstrate that our proposed architecture performs favorably against state-of-the-art frame interpolation methods on various public video datasets.

S18.5 Driving into Danger: Adversarial Patch Attack on End-To-End Autonomous Driving Systems Using Deep Learning
Tong Wang (National Key Laboratory of Science and Technology on Information System Security, China); Xiaohui Kuang (Academy of Military Science, China); Hu Li (National Key Laboratory of Science and Technology on Information System Security, China); Qianjin Du (Tsinghua University & Computer Science, Taiwan); Zhanhao Hu (Tsinghua University, China); Huan Deng and Gang Zhao (National Key Laboratory of Science and Technology on Information System Security, China)

Deep learning-based autonomous driving systems have been extensively researched due to their superior performance compared to traditional methods. Specifically, end-to-end deep learning systems have been developed, which directly output control signals for vehicles using various sensor inputs. However, deep learning techniques are vulnerable to security issues, generating adversarial examples that can attack the output of the relevant model. This paper proposes an adversarial example generation method that applies a patch to pedestrians' clothing, which can generate dangerous behaviors when the pedestrian appears within the camera lens, thereby attacking the end-to-end autonomous driving system. The proposed method is validated using the CARLA simulator, and the results demonstrate successful attacks in various weather and lighting conditions, exposing the security vulnerabilities of this type of system. This study highlights the need for further research to address these vulnerabilities and ensure the safety of autonomous driving systems.

S18.6 How Does Oversampling Affect the Performance of Classification Algorithms?
Zhizhen Xiang, Yingying Xu and Zhenzhou Tang (Wenzhou University, China)

To address the problem of imbalanced dataset classification, this study explores how different oversampling algorithms and imbalance ratios affect the performance of classification algorithms. Two oversampling algorithms, random oversampling and the Synthetic Minority Oversampling Technique (SMOTE), are used to adjust the imbalance ratio of the training dataset to 999:1, 99:1, 9:1, 3:1, and 1:1. Four classification methods, the Convolutional Neural Network, Vision Transformer, XGBoost, and CatBoost, are evaluated using performance metrics such as precision, recall, AUC, and F2-Score. We conduct more than 240 experiments and observe that the oversampling ratio has a significant positive impact on AUC and recall, but a negative impact on precision. The study also identifies the best oversampling algorithm and imbalance ratio for each classification algorithm. Notably, the Vision Transformer has not been employed in previous research on imbalanced data classification.
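The interpolation step at the heart of SMOTE, which the study uses to raise the minority class toward the target ratios, can be sketched in a few lines (a minimal pure-Python illustration, not the authors' pipeline; the function name and toy data are hypothetical, and a real experiment would use a library implementation such as imbalanced-learn's SMOTE):

```python
import random

def smote_like_oversample(minority, target_count, seed=0):
    """Generate synthetic minority samples by interpolating between a
    sample and its nearest minority-class neighbour (core SMOTE idea)."""
    rng = random.Random(seed)
    synthetic = list(minority)
    while len(synthetic) < target_count:
        x = rng.choice(minority)
        # nearest neighbour among the other minority samples (Euclidean)
        nn = min((m for m in minority if m is not x),
                 key=lambda m: sum((a - b) ** 2 for a, b in zip(x, m)))
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nn)))
    return synthetic

minority = [(1.0, 1.0), (1.2, 0.8), (0.9, 1.1)]
augmented = smote_like_oversample(minority, target_count=9)
```

Each synthetic point lies on the segment between a minority sample and its nearest minority neighbour, so the augmented set stays inside the minority region rather than duplicating points, as random oversampling does.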

Tuesday, July 11 14:00 - 16:00 (Africa/Tunis)

S19: Security in Computers and Communications (online)

Room: Webex 4
Chairs: Lobna Hsairi (University of Jeddah, Saudi Arabia), Dimitrios Zorbas (Nazarbayev University, Kazakhstan)
S19.1 Using Long-Short-Term Memory to Effectively Identify Persistent Routes Under Stealthy Link Flooding Attacks in Software-Defined Networks
Wenjie Yu (Hangzhou Institute for Advanced Study, UCAS, China); Boyang Zhou (Zhejiang Lab, China)

On the Internet, a defender facing a stealthy link flooding attack (LFA) needs to accurately identify the victim persistent routes (PRs) among links. This issue has received little attention and is challenging because flood flow features are difficult to distinguish precisely from benign ones in time series. To address it, we propose a novel PR identification mechanism (PIM). PIM periodically extracts the flow features of each link and then identifies each PR by determining the flood proportion in the link via our classification model, an LSTM or RNN with customized inner-layer parameters that is efficiently trained offline. To evaluate accuracy, we simulate an LFA on a synthesized topology, with benign flows reproduced from real traces. The results demonstrate that, compared with the RNN, PIM with LSTM is more effective at identifying PRs, with 99.0% accuracy and a 0.991% loss rate, in one epoch of convergence time on average.

S19.2 RVDSE: Efficient Result Verifiable Searchable Encryption in Redactable Blockchain
Ruizhong Du (HeBei University, China); Na Liu (Hebei University, China); Mingyue Li (NanKai University, China); Junfeng Tian (Hebei University, China)

To solve the inefficiency, inflexibility in updates, and high storage costs associated with current result-verifiable searchable encryption schemes, we propose an efficient result-verifiable dynamic searchable encryption scheme in a redactable blockchain (RVDSE). By dividing the inverted index into blocks, uploading the corresponding verification tags to the blockchain, and using smart contracts to verify query results, we improve query and verification performance. Additionally, we employ blockchain rewriting technology to update tags in the result checklist, thereby improving blockchain data update performance and scalability while maintaining constant storage overhead. Security analysis confirms that our solution guarantees query result accuracy and completeness. Experimental results demonstrate that our approach enhances query and result-verification efficiency while keeping blockchain data growth low, particularly as data collections scale up.

S19.3 HUND: Enhancing Hardware Performance Counter Based Malware Detection Under System Resource Competition Using Explanation Method
Yanfei Hu (Chinese Academy of Sciences & University of Chinese Academy of Sciences, China); Yu Wen (Chinese Academy of Sciences, China); Xu Cheng (Peking University, China); Shuailou Li (Institute of Information Engineering, Chinese Academy of Sciences, China); Boyang Zhang (Institute of Information Engineering Chinese Academy of Sciences, China)

Hardware performance counters (HPCs) have been widely used in malware detection because of their low access overhead and their ability to reveal dynamic behavior during a program's execution. However, HPC-based malware detection (HMD) suffers performance decline due to HPC non-determinism caused by resource competition. Current work enables malware detection under resource competition but still leaves misclassifications. In this paper, we propose HUND, a framework for improving the detection ability of HMD models (HMDMs) under resource competition. Specifically, we first introduce an explanation module to make a program's prediction interpretable and accurate as a whole. We then design a rectification module that troubleshoots HMDMs' errors by generating modified samples and lowering the effect of falsely classified instances on model decisions. We evaluate HUND by applying HMD models to two datasets of HPC-level behaviors. The experimental results show that HUND explains HMDMs with high fidelity and effectively troubleshoots their errors.

S19.4 UAG: User Action Graph Based on System Logs for Insider Threat Detection
Xu Wang (University of Chinese Academy of Science & Institute of Information Engineering, Chinese Academy of Science, China); Jianguo Jiang (Institute of Information Engineering, Chinese Academy of Sciences, China); Yan Wang (Chinese Academy of Sciences, China); Qiujian Lv (Institute of Information Engineering, Chinese Academy of Sciences,China); Leiqi Wang (University of Chinese Academy of Science & Institute of Information Engineering, Chinese Academy of Science, China)

Insider threats pose significant risks to the network systems of organizations. Users have diverse behavioral habits within an organization, leading to variations in their activity patterns. Hence, data analysis and mining techniques are essential for modeling user behavior. Current methods analyze system logs and extract user action sequence features; however, they overlook the relationships between different actions, reducing detection accuracy. To address this issue, we propose a novel method called UAG (User Action Graph). UAG transforms user actions into a graph representing their chronological order and interrelationships, facilitating a more accurate and comprehensive understanding of user behavior. By extracting global and local features from the user action graph, UAG offers an extensive and detailed perspective of user behaviors. Ultimately, we develop a lightweight ensemble autoencoder model to detect insider threats. Comprehensive experiments demonstrate that UAG delivers outstanding performance and surpasses existing methods.
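The paper's central idea, turning a user's chronological action sequence into a graph whose structure can be featurized, can be illustrated with a minimal sketch (pure Python; the action names, feature choices, and function are illustrative assumptions, not the authors' actual UAG features or detection model):

```python
from collections import Counter, defaultdict

def build_action_graph(actions):
    """Turn a chronological action sequence into a directed, weighted
    graph: nodes are action types, edge weights count transitions."""
    edges = Counter(zip(actions, actions[1:]))
    out_deg = defaultdict(int)
    for (src, _dst), w in edges.items():
        out_deg[src] += w
    features = {
        "num_nodes": len(set(actions)),                      # global: distinct action types
        "num_edges": len(edges),                             # global: distinct transitions
        "max_out_degree": max(out_deg.values(), default=0),  # local: busiest action
    }
    return edges, features

seq = ["logon", "open_file", "copy_usb", "open_file", "copy_usb", "logoff"]
graph, feats = build_action_graph(seq)
```

Edge weights capture how often one action follows another, which is exactly the inter-action relationship that flat sequence features miss.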

S19.5 GCFL: Blockchain-Based Efficient Federated Learning for Heterogeneous Devices
Xiang Ying (School of Computer Software, Tianjin University, China); Chenxi Liu and Dengcheng Hu (Tianjin University, China)

Federated Learning (FL) has emerged as a promising machine learning paradigm for protecting data privacy. However, the differences between heterogeneous clients and the performance bottleneck of the central server limit the efficiency of FL. As a typical decentralized solution, the combination of blockchain and FL has been studied in recent years. However, the use of a single-chain blockchain and traditional consensus algorithms in these studies has drawbacks such as high resource consumption, low TPS, and low scalability. This paper proposes an efficient solution that combines a DAG blockchain and FL, called GCFL (Graph with Coordinator Federated Learning). GCFL introduces a new block structure that reduces data redundancy. For DAG blockchains, we propose a two-phase tip-selection consensus algorithm that reduces resource consumption and tolerates a certain proportion of malicious nodes. Simulation experiments show that GCFL has higher stability and faster convergence to a target accuracy than traditional on-device FL systems.

S19.6 RegexClassifier: A GNN-Based Recognition Method for State-Explosive Regular Expressions
Yuhai Lu (Institute of Information Engineering Chinese Academy of Sciences, China); Xiaolin Wang (Luoyang Institute of Electro-Optical Equipment AVIC, China); Fangfang Yuan (Institute of Information Engineering,Chinese Academy of Sciences, China); Cong Cao (Chinese Academy of Sciences, China); Xiaoliang Zhang (Institute of Information Engineering Chinese Academy of Sciences, China); Yanbing Liu (Chinese Academy of Sciences, China)

Regular expression matching technology has been widely used in various applications. For its low time complexity and stable performance, the Deterministic Finite Automaton (DFA) has become the first choice for fast regular expression matching. However, DFAs suffer from the state explosion problem: the number of DFA states may increase exponentially when compiling certain regexes to a DFA, and the resulting memory consumption restricts practical applications. Many works have addressed the DFA state explosion problem, but none has met the twin requirements of fast recognition and a small memory image. In this paper, we propose RegexClassifier to recognize state-explosive regexes intelligently and efficiently. It first transforms regexes into Non-deterministic Finite Automata (NFAs), then uses Graph Neural Network (GNN) models to classify the NFAs in order to recognize regexes that may cause DFA state explosion. Experiments on typical rule sets show that the classification accuracy of the proposed model is up to 98%.

Tuesday, July 11 14:00 - 17:00 (Africa/Tunis)

Tutorial 2: Continuum Computing platforms for self-adaptive machine learning based IoT applications, by Nabil Abdennadher (Switzerland)

Room: TULIPE 1 / webex 1
Chair: Ali Wali (REGIM-Lab., Tunisia)

This tutorial presents a comparative study of five technologies used to develop and deploy self-adaptive Machine Learning (ML) based IoT applications on continuum computing platforms (edge-to-cloud solutions). The five technologies are AWS IoT Greengrass, Azure IoT Edge, Google, Balena and Nuvla/NuvlaEdge.

The comparative study focuses on three aspects:

  1. The services supported by the five solutions to develop and deploy self-adaptive IoT applications on edge-to-cloud technologies;
  2. The performance of the five technologies from an application and developer point of view;
  3. The operating costs incurred by the deployment of a given self-adaptive ML based application on a given edge-to-cloud solution.

The last part of the tutorial is dedicated to presentations of two examples of large-scale deployments of self-adaptive ML based IoT applications on an edge-to-cloud platform. The focus will be on the edge devices used and how the edge intelligence is adapted and improved according to the context.

Brief schedule:

Intended audience: PhD students, engineers, scientists, industry practitioners, and researchers interested in edge-to-cloud IoT applications/platforms and the deployment and development of IoT applications.

Prerequisite knowledge or skills required for attendees:

  1. Basic cloud knowledge
  2. Python

Tuesday, July 11 16:00 - 17:00 (Africa/Tunis)

PDS3: Poster Session and Coffee Break

Room: MIMOSA
Chair: Ilhem Kallel (University of Sfax, Tunisia & Regim-Lab., Tunisia)
A Novel Method for Arabic Text Detection with Interactive Visualization
Imene Ouali (REGIM-Lab, Tunisia); Rahma Fourati (University of Sfax, Tunisia); Ali Wali (REGIM-Lab., Tunisia)

Text detection, recognition, and visualization from natural scenes are considered significant research topics in the field of information and communication technologies. Written text serves as a crucial source of information that humans rely on in their daily lives. However, detecting text poses several challenges, including variations in writing style, color, size, orientation, and complex backgrounds. In this study, we present a comprehensive application named ATDRV, which aims to address these challenges. For text detection and recognition, we propose a real-time multi-oriented text deep fully convolutional network system trained end-to-end. Additionally, we implement a visualization module based on Augmented Reality to display the results more clearly to users. The output of our application is a large and clear 3D object that enables easy reading of the text. Our approach improves the user experience and facilitates the reading of Arabic text regardless of color, orientation, writing style, or complex backgrounds.

Poster: Conceptual Design for FPGA Based Artificial Intelligence Model for HIL Applications
Farshideh Kordi (Laval University, Canada); Paul Fortier (Laval, Canada); Amine Miled (Laval University, Canada)

Hardware-in-the-Loop (HIL) simulators play a critical role in the automotive industry by providing extensive testing and validation capabilities for electronic control units (ECUs). One of the main challenges faced by HIL simulators is constructing a virtual environment that accurately replicates the behavior of the actual system. Artificial intelligence (AI) algorithms can help generate precise virtual environments for HIL simulations of complex systems. Moreover, minimal latency is essential for establishing a reliable virtual environment. FPGAs (Field Programmable Gate Arrays) can effectively reduce latency in HIL simulations by providing high-performance computing resources. This paper addresses these challenges by introducing a machine learning-driven HIL simulator implemented on an FPGA. The proposed architecture employs FPGA technology to enhance the computational speed of a temporal convolutional neural network (TCN).

Enhancing Fingerprinting Indoor Positioning Systems Through Hierarchical Clustering and GAN-Based CNN
Hajer Grira (ISTIC LaRINa ENSTAB University of Carthage, Tunisia); Ikbal Chammakhi Msadaa (University of Carthage & LaRINa ENSTAB, Tunisia); Khaled Grayaa (University of Carthage, Tunisia)

The importance of locating objects or people indoors has grown due to the need for asset tracking, worker location in industrial settings, and monitoring passenger flow in transportation hubs. Machine Learning (ML) can enhance the accuracy of indoor positioning systems (IPS) and simplify the laborious offline phase of fingerprinting. Our study aims to explore this potential. We propose dividing the indoor environment into zones using Hierarchical Clustering Analysis (HCA) and applying data augmentation (DA) through Generative Adversarial Networks (GANs) and Convolutional Neural Network (CNN) on transformed RSSI measurements. Converting digital entries into images enables better exploration and analysis of complex data, revealing patterns and relationships in radio properties. We demonstrate the effectiveness and accuracy of this novel IPS approach using a dataset from Western Michigan University's Waldo Library [Moh+18]. Our results show that this approach excels in relatively large indoor environments.

A Stackelberg Game for Multi-Tenant RAN Slicing in 5G Networks
Zeina Awada (Saint Joseph University, Lebanon); Kinda Khawam (Université de Versailles, France); Samer Lahoud (Dalhousie University, Canada); Melhem El Helou (Saint Joseph University of Beirut, Lebanon)

This paper addresses the multi-tenant radio access network slicing in 5G networks. The infrastructure provider (InP) slices the physical radio resources so as to meet differentiated service requirements, and the mobile virtual network operators (MVNOs) then dynamically request and lease isolated resources (from the slices) to their services. In this context, we propose a two-level single-leader multi-follower Stackelberg game to jointly solve the resource allocation and pricing problem. The InP prices its radio resources taking into account MVNO allocations, which in turn depend on the resource cost. Simulation results show that, in comparison with the Static Slicing approach, our solution achieves an efficient trade-off between MVNO satisfaction and InP revenue, while accounting for 5G service diversity and requirements.
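The leader-follower structure of such a game can be sketched as follows (an illustrative single-leader multi-follower toy with linear follower demand; the coefficients, price grid, and function names are hypothetical and much simpler than the paper's actual utility functions):

```python
def follower_demand(a, b, price):
    """Follower's best response: linear demand truncated at zero (illustrative)."""
    return max(0.0, a - b * price)

def leader_best_price(followers, prices):
    """Leader anticipates follower best responses and picks the
    revenue-maximising price (Stackelberg leader's problem)."""
    def revenue(p):
        return p * sum(follower_demand(a, b, p) for a, b in followers)
    return max(prices, key=revenue)

followers = [(10.0, 1.0), (8.0, 2.0)]   # hypothetical (a_i, b_i) per MVNO
grid = [p / 10 for p in range(1, 101)]  # candidate prices 0.1 .. 10.0
p_star = leader_best_price(followers, grid)
```

The defining feature of a Stackelberg equilibrium appears in `leader_best_price`: the leader commits first, but only after anticipating how each follower will respond to every candidate price.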

Aggregating Multiple Embeddings: A Novel Approach to Enhance Reliability and Reduce Complexity in Facial Recognition
Houssam Benaboud and Walid Amara (Majal Berkane, Morocco); Amal Ezzouhri (Mohammed V University, Morocco); Fatima El Jaimi, Wiam Rabhi and Zakaria Charouh (Majal Berkane, Morocco)

Facial recognition is widely used, but the reliability of the embeddings extracted by most computer vision-based approaches remains a challenge due to the high similarity among human faces and the effects of facial expressions and lighting. Our proposed approach aggregates multiple embeddings to generate a more robust reference for facial embedding comparison, and explores which distance metrics optimize comparison efficiency while containing complexity. We also apply our method to a state-of-the-art algorithm that extracts embeddings from faces in an image. Compared with several existing approaches, ours raises the accuracy of Resnet to 99.77%, Facenet to 99.79%, and Inception-ResnetV1 to 99.16%. Our approach preserves the inference time of the model while increasing its reliability, since the number of comparisons is kept to a minimum. These results demonstrate that our proposed approach offers an effective solution for facial recognition in real-world environments.
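The aggregation idea can be sketched in a few lines (pure Python with hypothetical 3-D vectors; real face embeddings are typically 128- or 512-dimensional, and this is an illustrative reading of the approach, not the paper's exact aggregation rule):

```python
import math

def aggregate(embeddings):
    """Average several embeddings of the same identity into one reference
    vector, so each probe needs a single comparison instead of many."""
    n = len(embeddings)
    return [sum(v[i] for v in embeddings) / n for i in range(len(embeddings[0]))]

def cosine_similarity(u, v):
    """Standard cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# hypothetical embeddings of one person under different expressions/lighting
refs = [[0.9, 0.1, 0.0], [1.0, 0.0, 0.1], [0.8, 0.2, 0.0]]
reference = aggregate(refs)
probe = [0.95, 0.05, 0.05]
score = cosine_similarity(reference, probe)
```

Averaging smooths out per-image noise from expression and lighting, and comparing against one reference instead of every enrolled image keeps inference time flat as the gallery grows.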

A Deep Learning Approach for Airport Runway Detection and Localization from Satellite Imagery
Amine Khelifi (Rowan University, Tunisia); Mahmut Gemici and Giuseppina Carannante (Rowan University, USA); Charles Johnson (Federal Aviation Administration, USA); Nidhal Bouaynaya (Rowan University, USA)

The US lacks a complete national database of private prior-permission-required airports due to insufficient federal requirements for regular updates. The initial data entry into the system is usually not refreshed by the Federal Aviation Administration (FAA) or the local state Department of Transportation, yet outdated or inaccurate information poses risks to aviation safety. This paper proposes a deep learning (DL) approach using Google Earth satellite imagery to identify and locate airport landing sites. The study aims to demonstrate the potential of DL algorithms for processing satellite imagery and improving the precision of the FAA's runway database. We evaluate the performance of Faster Region-based Convolutional Neural Networks using advanced backbone architectures, namely Resnet101 and Resnet-X152, in the detection of airport runways. We incorporate negative samples, i.e., highway images, to enhance the performance of the model. Our simulations reveal that Resnet-X152 outperforms Resnet101, achieving a mean average precision of 76%.

Energy-Efficient Wireless Mesh Networks with IEEE 802.11ba: A New Architecture
Roger Sanchez Vital (Universitat Politècnica de Catalunya, Spain); Carles Gomez (UPC, Spain); Eduard Garcia-Villegas (Universitat Politècnica de Catalunya, Spain)

In traditional IoT applications, energy saving is crucial, whereas the demand for high bandwidth is infrequent. Nonetheless, a new generation of IoT applications shows a wider and more varied range of requirements, where bandwidth or delay become key performance indicators that can be satisfied by technologies like Wi-Fi. However, Wi-Fi does not provide a long link range, and it exhibits high energy consumption. To solve these issues, we propose a Wi-Fi mesh architecture where devices are equipped with a secondary Wake-up Radio (WuR) interface based on the new IEEE 802.11ba amendment. A WuR allows the main radio to remain in a power-saving state for long periods. Simulation results demonstrate that this architecture drastically reduces energy consumption while keeping delay figures similar to those of traditional, single-interface approaches. To the best of our knowledge, this work pioneers the exploration of WuR's practicality for optimizing energy consumption in Wi-Fi-based mesh networks.

Adaptive Deep Reinforcement Learning Approach for Service Migration in MEC-Enabled Vehicular Networks
Sabri Khamari (CNRS-LaBRI UMR 5800, University Bordeaux, Bordeaux-INP, France); Abdennour Rachedi (Ecole Nationale Supérieure d'Informatique (ESI), France); Toufik Ahmed and Mohamed Mosbah (CNRS-LaBRI UMR 5800, University Bordeaux, Bordeaux-INP, France)

Multi-access edge computing (MEC) has emerged as a promising technology for time-sensitive and computation-intensive tasks. However, user mobility, particularly in vehicular networks, and the limited coverage of edge servers result in service interruptions and a decrease in Quality of Service (QoS). Service migration has the potential to effectively resolve this issue. In this paper, we investigate the problem of service migration in a MEC-enabled vehicular network to minimize the total service latency and migration cost. To this end, we formulate the service migration problem as a Markov decision process (MDP). We contribute optimal adaptive migration strategies that consider vehicle mobility, server load, and different service profiles, and we solve the problem using the Double Deep Q-Network (DDQN) algorithm. Simulation results show that the proposed DDQN scheme achieves a better tradeoff between latency and migration cost than other approaches.

Towards Microservices-Aware Autoscaling: A Review
Mohamed Hedi Fourati (ReDCAD Laboratory, Tunisia); Soumaya Marzouk (ReDCAD Laboratory Tunisia, Tunisia); Mohamed Jmaiel (ENIS, Tunisia)

This paper provides an overview of autoscaling solutions for microservices-based applications deployed with containers. Two main features characterize the efficiency of an autoscaler: the analysis strategy, which identifies the root cause of resource saturation, and the resource allocation strategy, which selects the components eligible for scaling and calculates the required amount of resources. However, existing solutions do not consider the specificity of the microservice architecture in their analysis and resource allocation strategies, which may lead to wrong root-cause identification and unnecessary resource allocation.

In this paper, we investigate and classify existing autoscalers dealing with containers in a microservice context, and we specify the strengths and shortcomings of each category. In conclusion, we report the challenges facing such solutions and provide recommendations for future work enabling the development of microservices-aware autoscalers.

(POSTER) Advanced LTCC-Integrated Technologies for mmWave 5G/Satellite Communication Antennas
Ammar Kouki (École de Technologie Supérieure, Canada)

Ultra-low-loss materials, wave-guiding structures, and circuit fabrication technologies are critical for high-performance mmWave antenna arrays destined for 5G and satellite communications. This poster presents recent advances on all three fronts through the use of (i) ceramic materials with high-conductivity silver metallization, (ii) substrate-embedded guiding structures such as rectangular and ridge waveguides, and (iii) a Low Temperature Co-fired Ceramics (LTCC) multi-layer fabrication process. The combination of these technologies is leveraged to design and fabricate array antennas with low-loss beamforming networks and very high ceramic resonator radiating elements. Simulation and measurement results covering the 27-31 GHz frequency band are presented to illustrate the achievable performance.

Accelerating Block Propagation with Sender Switchover in a Blockchain
Akira Sakurai and Kazuyuki Shudo (Kyoto University, Japan)

In a public blockchain, the block propagation time has a significant impact on the performance, security, and fairness of mining. Reducing the propagation time can increase transaction processing performance, reduce the fork rate, and improve security. We propose a method that improves block propagation by switching the block sender, even while a node is still receiving a block. The method is not vulnerable to eclipse attacks because the neighboring nodes are not changed. Our simulation shows that the proposed method improves the 90th percentile of the propagation time by up to 18% and the fork rate by up to 7.9%.

A Fast Failure Recovery Mechanism Using On-Premise/Cloud-Based NAS in SDN
EL kamel Ali (Research Lab PRINCE (Tunisia) & Institut Superieur D'informatique Et Des Techniques de Communication- Hammem Sousse, Tunisia)

This paper proposes a fast failure recovery mechanism that uses a set of Backup Assistants (BAs) deployed in an SDN network to back up packets while the recovery path is being established. BAs can be either on-premise or cloud-based Network-Attached Storage (NAS) devices. The recovery path is set up using a Fast Parallel Path Computation (FPPC) algorithm based on the Breadth-First Search (BFS) approach. Packets already stored are sent back once the recovery path is established. Simulations show that the proposed approach efficiently reduces the packet loss ratio compared to existing solutions while maintaining acceptable delay bounds for loss-sensitive applications.
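The BFS core of such a recovery-path computation can be sketched as follows (a minimal single-path illustration; FPPC computes paths in parallel, and the switch names, topology, and failed link here are hypothetical):

```python
from collections import deque

def bfs_recovery_path(adj, src, dst, failed_link):
    """Breadth-first search for a minimum-hop recovery path that
    avoids the failed link (in both directions)."""
    bad = {failed_link, failed_link[::-1]}
    parent = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:  # reconstruct path by walking parents back to src
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in adj.get(u, []):
            if v not in parent and (u, v) not in bad:
                parent[v] = u
                q.append(v)
    return None  # no recovery path exists

# hypothetical 4-switch topology; link s1-s2 has failed
adj = {"s1": ["s2", "s3"], "s2": ["s1", "s4"], "s3": ["s1", "s4"], "s4": ["s2", "s3"]}
path = bfs_recovery_path(adj, "s1", "s4", failed_link=("s1", "s2"))
```

Because BFS explores hop by hop, the first time the destination is reached the path is guaranteed to be minimum-hop, which keeps the recovery detour short.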

Large-Scale Virtual BBU Pool Implementation: The First Step Towards a Realistic RAN Virtualization
Mihia Kassi (University of Carthage, Tunisia); Soumaya Hamouda (Mediatron Lab., Sup'Com, Tunisia & University of Carthage, Tunisia)

Radio Access Network (RAN) virtualization is a key concept of 5G networks and beyond. It provides a solution to energy cost problems as well as to material and radio resource management. Despite the research devoted to it, RAN virtualization is still not a reality, and an important step toward achieving it is the realization of a virtual BBU pool. In this paper, we propose a large-scale virtual BBU pool implementation for a realistic V-RAN. More precisely, we introduce new building blocks that enhance the OpenAirInterface (OAI) platform and increase the previously limited number of virtual BBUs (vBBUs). We then propose an orchestrator based on Docker Swarm technology to achieve full orchestration among these vBBUs. The whole system is implemented in a new, flexible way, which is an advantage for further research on RAN virtualization and the OAI platform. Simulation results show the vBBU pool in operation and its material resource consumption.

Tuesday, July 11 17:30 - 23:00 (Africa/Tunis)

Wednesday, July 12

Wednesday, July 12 8:30 - 10:30 (Africa/Tunis)

S20: Artificial Intelligence (AI) in Computers and Communications: Machine Learning (hybrid)

Room: TULIPE 1 / webex 1
Chair: Antonio Celesti (University of Messina, Italy)
S20.1 Reinforcement Learning-Based Approach for Microservices-Based Application Placement in Edge Environment
Kaouther Gasmi (El Manar University & ENIT, Tunisia); Abbassi Kamel (University Tunis El Manar, Tunisia); Lamia Romdhani (Qatar University, Qatar); Olivier Debauche (University of Mons & University of Liège, Belgium)

Edge computing allows applications to be deployed near end-users, enabling low-latency real-time applications. The adoption of the microservices architecture in modern applications has made this possible: a microservices architecture describes an application as a collection of separate but interconnected entities that can be built, tested, and deployed individually, with each microservice running in its own process and exchanging data with the others. Edge nodes can thus deploy microservices-based IoT applications independently. Consistently meeting application service-level objectives while also optimizing service placement delay and resource utilization in an edge environment is non-trivial. This paper introduces a dynamic placement strategy that aims to fulfill application constraints and minimize infrastructure resource usage while ensuring service availability to all end-users (UEs) in the edge network.
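As an illustration of the kind of constraint-aware decision such a placement strategy must make, here is a minimal greedy sketch (pure Python; the node capacities, latencies, service demands, and heuristic are hypothetical and far simpler than the paper's reinforcement-learning approach):

```python
def greedy_place(microservices, nodes):
    """Place each microservice on the lowest-latency node that still has
    enough free capacity, handling the largest demands first."""
    placement = {}
    free = {n: cap for n, (cap, _lat) in nodes.items()}
    latency = {n: lat for n, (_cap, lat) in nodes.items()}
    for ms, demand in sorted(microservices.items(), key=lambda kv: -kv[1]):
        candidates = [n for n in nodes if free[n] >= demand]
        if not candidates:
            return None  # no feasible placement for this service
        best = min(candidates, key=lambda n: latency[n])
        placement[ms] = best
        free[best] -= demand
    return placement

# (capacity units, latency to users in ms) per node -- illustrative values
nodes = {"edge1": (4, 5), "edge2": (8, 12), "cloud": (100, 40)}
services = {"auth": 2, "video": 6, "db": 3}
plan = greedy_place(services, nodes)
```

Even this toy shows the tension the paper targets: the large `video` service cannot fit on the fastest edge node, so latency and capacity must be traded off jointly rather than optimized in isolation.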

S20.2 Robustness Analysis of Hybrid Machine Learning Model for Anomaly Forecasting in Radio Access Networks
Sara Kassan (Orange Labs, France & Orange, France); Imed Hadj-Kacem (Orange, France); Sana Ben Jemaa (Orange Labs, France); Sylvain Allio (Orange, France)

Quality of Service (QoS) in mobile networks is a pressing necessity, driven by growing traffic demand and the emergence of several new services and technologies. It can be improved by reducing network failures and avoiding congestion. To this end, a hybrid model can be used for proactive traffic congestion avoidance, alerting the operator and thereby enhancing the end-user perceived QoS. The model combines a co-clustering algorithm, which groups cells with similar behaviour based on key performance indicators, with a logistic regression model that predicts congestion. The hybrid model is compared with the best-known deep learning models in the literature: a Long Short-Term Memory recurrent neural network approach and a Temporal Convolutional Network approach. The models are compared using real field data from operational Long Term Evolution networks.

S20.3 Variational Auto-Encoder Model and Federated Approach for Non-Intrusive Load Monitoring in Smart Homes
Shamisa Kaspour and Abdulsalam Yassine (Lakehead University, Canada)

Non-Intrusive Load Monitoring (NILM) is a technique used for identifying individual appliances' energy consumption from a household's total power usage. This study examines a novel energy disaggregation model called Variational Auto-Encoder (VAE) with Federated Learning (FL). Specifically, VAE has a complex structure that resolves the issues in Short Sequence-to-Point (Short S2P) with fewer samples as input windows for each appliance. Short S2P cannot be generalized and might confront some challenges while disaggregating multi-state appliances. To this end, we examine a series of experiments using a real-life dataset of appliance-level power from the UK: UK-DALE. We also investigate additional protection of model parameters using Differential Privacy (DP). The findings show that FL with the VAE model achieves comparable performance to its centralized counterpart and improves all the metrics significantly compared to the Short S2P model.

S20.4 TEBAKA: Territorial Basic Knowledge Acquisition, an Agritech Project for Italy: Results on Self-Supervised Semantic Segmentation
Lorenzo Epifani (Università del Salento, Italy); Vincenzo D'Avino (Distretto Tecnologico Aerospaziale scarl & Università del Salento, Italy); Antonio Caruso (University of Salento, Italy)

Emerging technologies such as remote sensing from satellites and drones, the Internet of Things (IoT), and deep learning models could all be utilized to make informed, smart decisions aimed at increasing crop production. We provide an overview of TEBAKA, an Italian national project on Smart Farming, and discuss its relevance in the overall scenario of similar projects. We emphasize the project's originality, in particular its research activity on new data-driven ML models that better extract relevant knowledge from observations. We present the task of semantic segmentation of images of olive trees and rows of grape plants, and show an original self-supervised deep learning network that produces the segmentation with high accuracy. Furthermore, we discuss some ideas that will be part of the project's activities for the next year.

S20.5 HiNoVa: A Novel Open-Set Detection Method for Automating RF Device Authentication
Luke Puppo, Weng-Keen Wong, Bechir Hamdaoui and Abdurrahman Elmaghbub (Oregon State University, USA)

New capabilities in wireless network security have been enabled by deep learning, which leverages patterns in radio frequency (RF) data to identify and authenticate devices. Open-set detection is an area of deep learning that identifies samples captured from new devices during deployment that were not part of the training set. Past work in open-set detection has mostly been applied to independent and identically distributed data such as images. In contrast, RF signal data present a unique set of challenges as the data forms a time series with non-linear time dependencies among the samples. We introduce a novel open-set detection approach based on the patterns of the hidden state values within a Convolutional Neural Network Long Short-Term Memory model. Our approach greatly improves the Area Under the Precision-Recall Curve on LoRa, Wireless-WiFi, and Wired-WiFi datasets, and hence, can be used successfully to monitor and control unauthorized network access of wireless devices.

Wednesday, July 12 8:30 - 10:30 (Africa/Tunis)

S21: Services and protocols (online)

Room: TULIPE 3 / webex 3
Chair: Panayiotis Kotzanikolaou (University of Piraeus, Greece)
S21.1 High Throughput Routing Path Selection for Payment Channel Network
Qingqing Cai, Gang Sun, Hongfang Yu and Long Luo (University of Electronic Science and Technology of China, China)

The Payment Channel Network (PCN) has emerged as a prominent solution for addressing the scalability limitations of cryptocurrencies. A critical aspect of PCN transactions is the selection of appropriate routing paths between parties. However, the success rate of payment routing in PCNs is significantly influenced by channel capacity and other payments, making path selection in PCNs more challenging than in traditional networks. In this paper, we present a novel path selection algorithm for PCNs that jointly considers path delay, channel capacity, and inter-path impact to achieve high throughput. Our simulation results demonstrate that the proposed algorithm improves throughput by 24.2% compared to the traditional widest path algorithm, and by 5.3% compared to the traditional shortest path algorithm.
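The abstract does not give the algorithm's cost model, so the sketch below only illustrates the general joint-selection idea on a toy payment graph: among all simple paths, keep those whose bottleneck channel capacity covers the payment amount and whose total delay stays within a budget, then prefer the largest bottleneck. The graph data, the delay budget, and the tie-breaking rule are illustrative assumptions, not the authors' design.

```python
def find_paths(graph, src, dst, path=None):
    """Enumerate all simple paths from src to dst (depth-first)."""
    path = (path or []) + [src]
    if src == dst:
        yield path
        return
    for nxt in graph.get(src, {}):
        if nxt not in path:
            yield from find_paths(graph, nxt, dst, path)

def select_path(graph, src, dst, amount, delay_budget):
    """Among feasible paths (bottleneck capacity >= amount, total delay
    <= budget), pick the largest bottleneck, breaking ties by delay."""
    best, best_key = None, None
    for p in find_paths(graph, src, dst):
        hops = list(zip(p, p[1:]))
        cap = min(graph[u][v][0] for u, v in hops)    # bottleneck capacity
        delay = sum(graph[u][v][1] for u, v in hops)  # end-to-end delay
        if cap >= amount and delay <= delay_budget:
            key = (cap, -delay)
            if best_key is None or key > best_key:
                best, best_key = p, key
    return best

# Toy payment graph: graph[u][v] = (channel capacity, hop delay)
pcn = {
    "A": {"B": (10, 1), "C": (6, 1)},
    "B": {"D": (4, 1)},
    "C": {"D": (6, 2)},
}
print(select_path(pcn, "A", "D", amount=5, delay_budget=5))  # ['A', 'C', 'D']
```

Here the direct route via B is rejected because its bottleneck channel (capacity 4) cannot carry the payment; the paper's actual algorithm additionally models inter-path impact, which this sketch omits.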

S21.2 S-PFC: Enabling Semi-Lossless RDMA Network with Selective Response to PFC
Jiang Shao, Xinyi Li, Minglin Li, Sen Liu and Yang Xu (Fudan University, China)

RoCEv2 (RDMA over Converged Ethernet version 2) is typically used in a PFC-enabled lossless network for high performance, but PFC can cause side effects such as head-of-line (HoL) blocking and congestion spreading. Optimizing packet loss recovery mechanisms in lossy networks can enhance RDMA network performance, but packet loss can increase the flow completion time (FCT) of short flows and waste network resources. This paper proposes the novel concept of semi-lossless networks to exploit the advantages of lossless and lossy networks while reducing their negative effects. Our proposed solution, Selective PFC (S-PFC), implements semi-lossless networks along two dimensions. First, S-PFC ensures no packet loss for short flows while dropping long flows in a timely manner. Second, S-PFC guarantees no packet loss at the network edge to prevent premature packet loss and unnecessary resource waste. Typical data center network scenarios and large-scale simulations show that S-PFC can accommodate different traffic demands effectively.

S21.3 6Former: Transformer-Based IPv6 Address Generation
Qiankun Liu and Xing Li (Tsinghua University, China)

Active network scanning in IPv6 is hindered by the vast IPv6 address space. Researchers have proposed various target generation methods, which have proven effective at reducing the scanning space, to solve this problem. However, current address generation methods are characterized by either low hit rates or limited applicability. To overcome these limitations, we propose 6Former, a novel target generation system based on the Transformer. 6Former integrates a discriminator and a generator to improve hit rates and overcome usage-scenario limitations. Our experimental findings demonstrate that 6Former improves hit rates by at least 38.6% over state-of-the-art generation approaches, while reducing time consumption by 31.6% in comparison to other language-model-based methods.

S21.4 Before Toasters Rise Up: A View into the Emerging DoH Resolver's Deployment Risk
Yuqi Qiu (Chinese Academy of Sciences & School of Cyber Security, University of Chinese Academy of Sciences, China); Baiyang Li (Institute of Information Engineering, Chinese Academy of Sciences, China); Zhiqian Li (Institute of Information Engineering Chinese Academy of Sciences, China); Liang Jiao and Yujia Zhu (Chinese Academy of Sciences, China); Qingyun Liu (Institute of Information Engineering, Chinese Academy of Sciences, China)

As an encryption protocol for DNS queries, DNS-over-HTTPS (DoH) is becoming increasingly popular, and it mainly addresses the last-mile privacy protection problem. However, the security of DoH is in urgent need of measurement and analysis due to its reliance on certificates and upstream servers. In this paper, we focus on the DoH ecosystem and conduct a one-month measurement to analyze the current deployment of DoH resolvers. Our findings indicate that some of these resolvers use invalid certificates, which can compromise the security and privacy advantages of the protocol. Furthermore, we found that many providers are at risk of certificate outages, which could cause significant disruptions to the DoH ecosystem. Additionally, we observed that the centralization of DoH resolvers and upstream DNS servers is a potential issue that needs addressing to ensure the stability of the ecosystem.

S21.5 Effective Coflow Scheduling in Hybrid Circuit and Packet Switching Networks
RenJie Jiang, Tong Zhang and Changyan Yi (Nanjing University of Aeronautics and Astronautics, China)

Hybrid circuit and packet switching networks combine optical circuit switching with electrical packet switching technologies. They can provide higher bandwidth at a lower cost than pure optical or electrical networks and can meet different performance goals. Coflow is a superior traffic abstraction that captures applications' networking semantics. To improve transmission performance at the application level, we focus on reducing coflow completion time (CCT). Unlike the problem of minimizing CCT in traditional networks, a hybrid network offers a larger coflow scheduling space, since the scheduler must also decide through which network each internal flow is transmitted, in addition to setting coflow priorities and bandwidth allocations. To solve this problem, this paper proposes an Online Network State based coflow scheduling algorithm (ONS) for hybrid switching networks, which minimizes CCT by combining coflow priorities with internal flow path planning, fully considering the characteristics of the different switches. Extensive simulations show that ONS achieves smaller CCT and higher throughput under different load levels.

S21.6 AlgoID: A Blockchain Reliant Self-Sovereign Identity Framework on Algorand
Andrea De Salve (National Research Council (CNR), Italy); Damiano Di Francesco Maesa (University of Pisa & University of Cambridge, United Kingdom (Great Britain)); Fabio Federico (University of Pisa, Italy); Paolo Mori (IIT, CNR, Italy); Laura Emilia Maria Ricci (University of Pisa, Italy)

Self-Sovereign Identity (SSI) is a novel paradigm aimed at giving users back sovereignty over their digital identities. Adopting the SSI approach spares users from maintaining a distinct identity for each service they use; instead, they use a single decentralised identity for all the services they need to access. However, to really benefit from the SSI advantages, an actual decentralised implementation is needed that fits the specific requirements and limits of decentralised architectures, such as blockchain. To this aim, this paper proposes Algorand Identity (AlgoID), a new SSI framework for the Algorand blockchain which differs from the already existing one in that it is fully blockchain based, i.e., it exploits Algorand itself both for the storage of the identity data and as the registry location. The proposed framework has been fully implemented and validated through experiments, showing that the time required to execute the framework operations is acceptably low in realistic use cases.

Wednesday, July 12 8:30 - 10:30 (Africa/Tunis)

S22: Short Papers (online)

Room: Webex 5
Chair: Armando Ruggeri (University of Messina, Italy)
S22.1 A Dual Self-Supervised Deep Trajectory Clustering Method
Junjun Si (Peking University, China); Yang Xiang (DataMajor, China); Jin Yang (Hezhixin (Shandong) Big Data Technology Co., Ltd, China); Li Li (Beijing Information Science and Technology University, China); Bo Tu (Hezhixin (Shandong) Big Data Technology Co., Ltd, China); Xiangqun Chen (Peking University, China); Rongqing Zhang (Tongji University, China)

Trajectory clustering is a cornerstone task in trajectory mining. Sparse and noisy trajectories like Call Detail Records (CDR) have become popular with the rapid development of mobile applications. However, existing trajectory clustering methods' performance is limited on these trajectories. Therefore, we propose a dual Self-supervised Deep Trajectory Clustering (SDTC) method, to optimize trajectory representation and clustering jointly. First, we leverage the BERT model to learn spatial-temporal mobility patterns and incorporate them into the embeddings of location IDs. Second, we fine-tune the BERT model to learn cluster-friendly representations of trajectories by designing a dual self-supervised cluster layer, which improves the intra-cluster similarities and inter-cluster dissimilarities. Third, we conduct extensive experiments with two real-world datasets. Results show that SDTC improves the clustering accuracy by 12.1% (on a noisy and sparse dataset) and 3.8% (on a very sparse dataset) compared with SOTA deep clustering methods.

S22.2 An Assistant Diagnosis System for Parkinson Disease Based on Mutual Information and Genetic Algorithm
Junhong Guo (Hubei University, China); Peiran Wu (South China Agricultural University, China); Lingyun Xiao (China)

With an increasingly aging population, Parkinson's disease has become a neurodegenerative disease affecting millions of elderly people; it is therefore crucial to establish a companion diagnostic system that can provide timely and accurate diagnostic results for patients with Parkinson's disease. To address the problems of misdiagnosis and under-diagnosis of Parkinson's disease, an auxiliary diagnosis system (MI-GA) based on mutual information and a genetic algorithm is proposed. In the feature extraction process, the genetic algorithm is used to prune unimportant features from the patients' clinical features. Mutual information is then introduced to assign a weight to each clinical feature, and these weights are applied in the KNN algorithm to improve the accuracy of the diagnosis results. Experimental results indicate that the method can significantly alleviate noise disturbance in the clinical features of Parkinson's disease and improve the accuracy of Parkinson's disease diagnosis.
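As a rough illustration of the general recipe described above (mutual-information feature weights feeding a KNN classifier), here is a self-contained sketch on toy data. The MI estimator, the weighting scheme, and the dataset are assumptions for illustration only; the sketch does not reproduce MI-GA or its genetic-algorithm stage.

```python
import math
from collections import Counter

def mutual_info(xs, ys):
    """Empirical mutual information (nats) between two discrete sequences."""
    n = len(xs)
    pxy = Counter(zip(xs, ys)); px = Counter(xs); py = Counter(ys)
    mi = 0.0
    for (x, y), c in pxy.items():
        p = c / n
        mi += p * math.log(p * n * n / (px[x] * py[y]))
    return mi

def knn_predict(train_X, train_y, query, weights, k=3):
    """Weighted-Euclidean k-NN: features with higher MI count more."""
    dists = []
    for row, label in zip(train_X, train_y):
        d = sum(w * (a - b) ** 2 for w, a, b in zip(weights, row, query))
        dists.append((d, label))
    dists.sort()
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy data: feature 0 is informative, feature 1 is noise.
X = [[0, 1], [0, 0], [1, 1], [1, 0], [0, 1], [1, 0]]
y = [0, 0, 1, 1, 0, 1]
weights = [mutual_info([r[i] for r in X], y) for i in range(2)]
print(knn_predict(X, y, [1, 1], weights))  # 1
```

Because feature 0 carries almost all of the mutual information with the label, the weighted distance effectively ignores the noisy feature, which is the noise-suppression effect the abstract claims.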

S22.3 Semi-Supervised Multivariate Time Series Classification by Subsample Correlation Prediction
Yun Su (Inner Mongolia University of Technology, China)

This paper introduces a novel Semi-supervised learning (SSL) model based on subsample correlation (SSC) to address the challenge of Multivariate Time Series (MTS) classification tasks due to the scarcity of labeled data. The proposed method utilizes the coherence property of sub-samples from identical subjects to devise the pretext task. For labeled time series, SSC conducts supervised classification under the supervision of annotated class labels. For unlabeled time series, SSC uses two subsampling techniques and considers subsamples from the same time series candidate as having a positive relationship and subsamples from different candidates as having a negative relationship. By jointly classifying labeled data and predicting the subsample correlation of unlabeled data, SSC captures useful representations of unlabeled time series. The experimental results on several Multivariate Time Series classification (TSC) datasets demonstrate the effectiveness of the proposed algorithm.

S22.4 Feature Relevance in NAT Detection Using Explainable AI
Reem Nassar, Imad H Elhajj and Ayman Kayssi (American University of Beirut, Lebanon); Samer Salam (Cisco, USA)

Network Address Translation (NAT) was developed to overcome the IPv4 address exhaustion problem. NAT allows multiple devices on a local network to share a single public IP address. However, NAT can result in communication concerns, security flaws, network administration issues, and difficulties locating network faults. This paper addresses the limitations of existing NAT detection techniques through adoption of an innovative machine learning-based approach. The NAT detection model achieves promising results by utilizing novel traffic features. The current techniques used in NAT detection face challenges in terms of explainability and transparency. To address this issue, our proposed method integrates explainable artificial intelligence (XAI) techniques to increase the transparency and interpretability of NAT detection models, thereby improving their efficiency and effectiveness. The results obtained from explainability highlight the significance of incorporating new features in NAT detection. Furthermore, explainable results indicate that the presence of NAT is closely correlated with specific features.

S22.5 Improving Traffic Scheduling Based on Per-Flow Virtual Queues in Time-Sensitive Networking
HaoKai Jing and Tong Zhang (Nanjing University of Aeronautics and Astronautics, China)

Time-sensitive networking (TSN) is a set of standards designed to enhance reliable and real-time transmission, ensuring Quality of Service (QoS) for time-critical applications. The switching architecture is a fundamental component of TSN switches for meeting different requirements. However, in the existing TSN switching architecture, each port has at most 8 queues. This limited number of queues can make traffic scheduling more complex and constrained, degrading scheduling performance. In this paper, we propose a traffic scheduling method based on dynamic hashing per-flow virtual queues (HPFS) in TSN. On this basis, we develop a new traffic scheduling strategy that fully utilizes the per-flow virtual queues. Extensive simulations show that HPFS is effective for schedulability enhancement and latency reduction while providing more concise scheduling.
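A minimal sketch of the per-flow virtual queue idea, assuming a plain static hash from flow IDs onto the 8 physical queues and round-robin service. The dynamic rehashing and the scheduling strategy of HPFS itself are not reproduced here; every name below is hypothetical.

```python
class VirtualQueueSwitch:
    """One port with n_physical queues multiplexing per-flow virtual
    queues via hashing (HPFS rehashes dynamically; this mapping is a
    plain static hash for illustration)."""
    def __init__(self, n_physical=8):
        self.n = n_physical
        self.vqueues = {}          # flow_id -> list of buffered packets

    def enqueue(self, flow_id, pkt):
        self.vqueues.setdefault(flow_id, []).append(pkt)

    def physical_queue(self, flow_id):
        return hash(flow_id) % self.n   # which of the 8 queues serves it

    def dequeue_round(self):
        """Serve one packet per virtual queue, round-robin: per-flow
        isolation that 8 shared FIFO queues alone cannot give."""
        out = []
        for fid in list(self.vqueues):
            q = self.vqueues[fid]
            if q:
                out.append((fid, q.pop(0)))
            if not q:
                del self.vqueues[fid]
        return out

sw = VirtualQueueSwitch()
for i in range(3):
    sw.enqueue("flowA", f"a{i}")
sw.enqueue("flowB", "b0")
print(sw.dequeue_round())  # one packet from each flow
```

The point of the sketch is the isolation property: a burst on flowA cannot starve flowB of its turn, even though both may hash to the same physical queue.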

S22.6 Deep Reinforcement Learning-Based SFC Deployment Scheme for 6G IoT Scenario
Shuting Long (Chongqing University of Posts and Telecommunications, China); Bei Liu (Tsinghua University, China); Hui Gao (Beijing University of Posts and Telecommunications, China); Xin Su and Xibin Xu (Tsinghua University, China)

To meet the extremely low latency requirements of 6G Internet of Things (IoT) services, the 6G network should be able to allocate network resources intelligently. Based on mobile edge computing (MEC) and network function virtualization (NFV), the 6G NFV/MEC-enabled IoT architecture is a viable architecture for flexible and efficient resource allocation. The architecture enables the deployment of service function chains (SFCs) on NFV-enabled network edge nodes. However, due to the heterogeneous and dynamic nature of 6G IoT, it is challenging to deploy SFCs rationally. Therefore, this paper proposes a knowledge-assisted deep reinforcement learning (KADRL) based SFC deployment scheme. The scheme achieves flexible and efficient resource allocation by deploying SFCs at edge nodes appropriate for the requirements of 6G IoT services. Simulation results demonstrate that KADRL achieves better convergence performance and can meet the requirements of delay-sensitive IoT services.

S22.7 A Network Function Virtualization Resource Allocation Model Based on Heterogeneous Computing
Hanze Chen and Feng Xiao (Fuzhou University, China); Lingfei Cheng (Zhejiang University, China); Longlong Zhu and Dong Zhang (Fuzhou University & Quan Cheng Laboratory, China)

With the continuous increase in the speed and volume of network traffic, higher performance requirements are placed on NFV systems. Traditional virtualization technology is limited by the slowing growth of CPU performance. There has been some research on using hardware to accelerate network functions. However, existing methods only consider using a single type of hardware for acceleration, and each type of hardware has its own limitations. It is difficult to match network functions to hardware characteristics by considering each hardware-CPU combination in isolation. In this paper, we propose a resource allocation model for network function virtualization (NFV) based on heterogeneous computing, which can maximize resource utilization and obtain a globally optimal solution. Our experiments show that the genetic algorithm solution method we propose balances solution accuracy and solution speed, and obtains an accurate Pareto curve under the multi-objective optimization model.

S22.8 Demo: Implementation of a Train-Arrival Notification System
Aida Eduard, Dnislam Urazayev, Yermakhan Magzym and Dimitrios Zorbas (Nazarbayev University, Kazakhstan)

The goal of a train-arrival notification system is to keep personnel working on remotely located railroads safe when a train is approaching. This demo paper provides technical insights into the open hardware and software developments of this system, whose scientific merit is presented in [1]. Its main components, circuitry, and design are discussed, with a main focus on the radio communication parts.

S22.9 Poster: A Simulator for Time-Slotted LoRa Networks
Zhansaya Bagdauletkyzy, Madiyar Galymgereyev, Assel Abzalova, Abylay Kairatbek and Dimitrios Zorbas (Nazarbayev University, Kazakhstan)

In this poster paper, we present a simulator for Time-Slotted (TS) LoRa networks for the Industrial Internet of Things (IIoT). The purpose of the simulator is to assess performance under conditions that cannot easily be assessed with prototypes, which usually consist of a limited number of devices. It allows the user to enter different parameters such as the number of nodes, the simulation time, and the packet size. The simulator implements several functionalities such as registration, data transmissions in slots, desynchronizations due to clock drift, and a path-loss model. Detailed statistics are generated, such as the number of packets lost, the energy consumption, and the average registration time. Its functionality is validated using a small-scale real-world experiment that shows similar results.
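To give a flavor of what such a simulator computes, here is a heavily simplified slot-based sketch. The parameters, the Bernoulli loss model (standing in for the path-loss and clock-drift models), and the reported statistics are assumptions, not the poster's implementation.

```python
import random

def simulate_ts_lora(n_nodes, n_frames, slots_per_frame, p_loss, seed=0):
    """Minimal time-slotted LoRa sketch: each node owns one slot per
    frame and sends one packet; a packet is lost with probability
    p_loss. Returns (packets sent, packets lost)."""
    rng = random.Random(seed)           # seeded for reproducible runs
    sent = lost = 0
    for _ in range(n_frames):
        for node in range(min(n_nodes, slots_per_frame)):
            sent += 1
            if rng.random() < p_loss:
                lost += 1
    return sent, lost

sent, lost = simulate_ts_lora(n_nodes=10, n_frames=100,
                              slots_per_frame=12, p_loss=0.05)
print(sent, lost)
```

A real TS-LoRa simulator would replace the flat loss probability with a path-loss model, add the registration phase, and track energy per transmission; this sketch only shows the slot-ownership bookkeeping.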

S22.10 End-To-End Deep Learning Assisted by Reconfigurable Intelligent Surface for Uplink MU Massive MIMO Under PA Non-Linearities
Ahlem Arfaoui (Innov'Com@SUP'COM, Tunisia)

This paper investigates a novel high-performance autoencoder-based deep learning approach for multi-user massive MIMO uplink systems assisted by a Reconfigurable Intelligent Surface (RIS), in which the users are equipped with power amplifiers (PAs) and aim to communicate with the base station. The communication process is formulated as a Deep Neural Network (DNN). To handle these scenarios, we have designed a DNN that includes two components. The first is an encoder intended to process the nonlinear distortions of the PA. The second is a decoder consisting of two fundamental steps: 1) a classic Minimum Mean Square Error linear decoder to decode the information transmitted by the users, and 2) a neural network decoder to minimize interference. The results of the numerical simulation illustrate that the proposed method offers a significant improvement in error performance in comparison with the different baseline schemes.

Wednesday, July 12 8:30 - 10:30 (Africa/Tunis)

S23: Security in Computers and Communications (online)

Room: Webex 4
Chair: Intidhar Bedhief (University of Tunis El Manar, Tunisia & ENIT, Tunisia)
S23.1 Semi-CT: Certificates Transparent to Identity Owners but Opaque to Snoopers
Aozhuo Sun (Institute of Information Engineering, Chinese Academy of Sciences, China); Bingyu Li (Beihang University, China); Qiongxiao Wang (Beijing Certificate Authority Co. Ltd., China); Huiqing Wan (Chinese Academy of Sciences, China); Jingqiang Lin (School of Cyber Security, University of Science and Technology of China, Hefei, China); Wei Wang (Chinese Academy of Sciences, China)

Certificate Transparency (CT) enables timely detection of problematic certification authorities (CAs) by publicly recording all CA-issued certificates. This transparency inevitably leaks the privacy of identity owners (IdOs) through the identity information bound in certificates. In response, several privacy-preserving schemes have been proposed that transform/hash/encrypt the privacy-carrying part of certificates. However, these schemes conceal the identity not only from snoopers but also from the IdO itself, which defeats the purpose of CT. To address the contradiction between transparency and privacy, we propose Semi-CT, a semi-transparency mechanism that makes certificates transparent to IdOs but opaque to snoopers. Inspired by public-key encryption with keyword search (PEKS), Semi-CT is based on bilinear pairing and enables trapdoor-holding IdOs to retrieve certificates associated with their identity. Semi-CT also addresses protocol deviation detection and trapdoor protection in the malicious model. Finally, through theoretical and experimental analysis, we prove the security and feasibility of Semi-CT for practical applications.

S23.2 An Enhanced Vulnerability Detection in Software Using a Heterogeneous Encoding Ensemble
Hao Sun (Chinese Academy of Sciences & School of Cyber Security, University of Chinese Academy of Sciences, China); Yongji Liu (Chinese Academy of Sciences, China); Zhenquan Ding (School of Cyber Security, University of Chinese Academy of Sciences & Institute of Information Engineering, China); Yang Xiao (Institute of Information Engineering Chinese Academy of Sciences, China); Zhiyu Hao (Zhongguancun Laboratory, China); Hongsong Zhu (Institute of Information Engineering, Chinese Academy of Sciences, China)

Detecting vulnerabilities in source code is essential to prevent cybersecurity attacks. Deep learning-based vulnerability detection is an active research topic in software security. However, existing deep learning-based vulnerability detectors are limited to either serialization-based or graph-based methods, which do not combine serialized global and structured local information at the same time. As a result, a single method cannot handle the semantic information present in complex source code well, leading to low detection accuracy. We present EL-VDetect, a stacked ensemble learning approach for vulnerability detection that eliminates these issues. EL-VDetect enhances feature selection techniques to represent the most relevant vulnerability features with sliced code and subgraphs, reducing redundant vulnerability information. Our model combines serialization-based and graph-based neural networks to successfully capture the context information of source code, effectively understand code semantics, and focus on vulnerable nodes via an attention mechanism to accurately detect vulnerabilities.

S23.3 Komorebi: A DAG-Based Asynchronous BFT Consensus via Sharding
Song Peng, Yang Liu, Jingwen Chen, Jinlong He and Yaoqi Wang (Henan University of Technology, China)

We propose an asynchronous Byzantine fault-tolerant consensus protocol, Komorebi, based on DAG and sharding. The protocol divides the blockchain network into multiple shards, allowing for parallel processing of different transaction sets to enhance the scalability and network partition tolerance of the blockchain, thus avoiding the bottleneck problem of a single chain in the blockchain. Komorebi utilizes structured DAG inside each shard to achieve parallel broadcasting and transaction processing. Nodes only broadcast and store transaction blocks of their local shards, which improves transaction processing efficiency and reduces storage overhead. For inter-shard communication, nodes transmit block events instead of blocks, significantly reducing communication overhead between shards. Furthermore, through inter-shard communication, nodes in different shards can maintain a consistent global block event status, solving security degradation issues caused by sharding. Komorebi achieves a throughput of over 190,000 tx/s with a delay of less than 2 seconds in a 64-node network with 16 shards.

S23.4 CuckChain: A Cuckoo Rule Based Secure, High Incentive and Low Latency Blockchain System via Sharding
Junfeng Tian, Caishi Jing and Jin Tian (Hebei University, China)

Sharding is one of the dominant techniques for designing an efficient blockchain system. However, previous sharding designs have two major limitations. First, sharding systems based on reconstructing all committees can incur significant time overhead during reconfiguration. Second, random-based sharding systems suffer from poor security. In this paper, we propose CuckChain, a cuckoo-rule-based secure, high-incentive and low-latency blockchain system via sharding. CuckChain is reconstructed through an optimization combining a genetic algorithm with the cuckoo rule, which enables new nodes to join in parallel, solving the problem of long reconstruction latency. CuckChain utilizes a reputation scheme based on a reward and punishment mechanism to detect and mark malicious nodes early, so that nodes are evenly distributed. We implemented CuckChain and evaluated its performance. The results show that our solution reduces reconstruction latency and improves both the efficiency and security of sharding.

S23.5 Dynamic PBFT with Active Removal
Yulun Wu (University of Chinese Academy of Sciences & Institute of Information Engineering, Chinese Academy of Sciences, China); Zhujun Zhang, Dali Zhu and Wei Fan (Chinese Academy of Sciences, China)

The Practical Byzantine Fault Tolerance (PBFT) consensus in permissioned blockchains is known for its strong consistency and engineering feasibility. However, research on dynamic node join/leave processes and robustness is limited. This study presents an active removal dynamic PBFT algorithm, utilizing atomic broadcast technology for a provably secure dynamic broadcast primitive and a K-Nearest Neighbors (KNN) based malicious node classification and removal protocol. This enables PBFT consensus to accommodate dynamic requirements while mitigating voting power attacks and malicious nodes. Results show maintained system performance during dynamic node changes, accurate malicious behavior detection, and prompt node removal, addressing current PBFT limitations and enhancing blockchain network robustness.

S23.6 Malicious Relay Detection for Tor Network Using Hybrid Multi-Scale CNN-LSTM with Attention
Qiaozhi Feng, Xia Yamei, Wenbin Yao, Tianbo Lu and Xiaoyan Zhang (Beijing University of Posts and Telecommunications, China)

With the widespread use of the Tor network, attackers who control malicious relays pose a serious threat to user privacy. Therefore, identifying malicious relays is crucial for ensuring the security of the Tor network. We propose a malicious relay detection model called hybrid multi-scale CNN-LSTM with Attention model (MSC-L-A) for the Tor network. The MSC layer uses one-dimensional convolutional neural networks with different convolution kernels to capture complex multi-scale local features and fuse them. The LSTM layer leverages memory cells and gate mechanisms to control the transmission of sequence information and extract temporal correlation information. The attention mechanism automatically learns feature importance and strengthens the weights of parameters that have a substantial impact on the results. Finally, the Sigmoid function is used to classify the data. Experimental results demonstrate that our proposed model achieves higher prediction accuracy and more accurate classification of Tor relays compared to other baseline models.

Wednesday, July 12 8:30 - 10:30 (Africa/Tunis)

S24: Artificial Intelligence (AI) in Computers and Communications (online)

Room: TULIPE 2 / webex 2
Chair: Mohamed Neji (University of Sfax, Tunisia)
S24.1 DRDoSHunter: A Novel Approach Based on FDA and Inter-Flow Features for DRDoS Detection
Yifei Cheng (Institute of Information Engineering & University of Chinese Academy of Sciences, China); Yujia Zhu (Chinese Academy of Sciences, China); Rui Qin (Zhongguancun Laboratory, China); Jiang Xie (Chinese Academy of Sciences, China); Yitong Cai (Institute of Information Engineering, China)

In recent years, Distributed Reflective Denial of Service (DRDoS) attacks have emerged as a major threat to network security, utilizing IP spoofing and amplification mechanisms to drain network bandwidth. Existing approaches for DRDoS detection lack sophistication in feature selection and focus primarily on detection rather than fine-grained classification and targeted mitigation. In this paper, we propose DRDoSHunter, a novel approach that addresses these limitations. DRDoSHunter employs Frequency Domain Analysis (FDA) and inter-flow features to extract effective and robust features from continuous time series data. By utilizing a deep residual network model, our approach achieves accurate and efficient classification of DRDoS attacks at a fine-grained level. Experimental results on the CIC-DDoS2019 public dataset demonstrate that DRDoSHunter outperforms popular detection models, achieving an F1-Score of over 98.44% for DRDoS attack detection and classification.

S24.2 Facing Unknown: Open-World Encrypted Traffic Classification Based on Contrastive Pre-Training
Xiang Li (School of Cyberspace Security, University of Chinese Academy of Sciences, China); Beibei Feng (University of Chinese Academy of Sciences, China); Tianning Zang (Institute of Information Engineering Chinese Academy of Sciences, China); Xiaolin XU (CNCERT/CC, China); Shuyuan Zhao (Chinese Academy of Sciences, China); Jingrun Ma (University of Chinese Academy of Sciences, China)

Traditional Encrypted Traffic Classification (ETC) methods face a significant challenge in classifying large volumes of encrypted traffic under the open-world assumption, i.e., simultaneously classifying known applications and detecting unknown ones. We propose a novel Open-World Contrastive Pre-training (OWCP) framework to address this. OWCP performs contrastive pre-training to obtain a robust feature representation. Based on this, we determine a spherical mapping space to find the marginal flows for each known class, which are used to train GANs to synthesize new flows that are similar to the known parts but do not belong to any known class. These synthetic flows are assigned to Softmax's unknown node to modify the classifier, effectively enhancing sensitivity towards known flows and significantly suppressing unknown ones. Extensive experiments on three datasets show that OWCP significantly outperforms existing ETC and generic open-world classification methods. Furthermore, we conduct comprehensive ablation studies and sensitivity analyses to validate each integral component of OWCP.

S24.3 Revisiting Data Poisoning Attacks on Deep Learning Based Recommender Systems
Zhiye Wang (NingBo University, China); Chennan Lin (Ningbo University, China); Xueyuan Zhang (NingBo University, China)

Deep learning based recommender systems (DLRS) are among the most promising recommender systems, and their robustness is crucial for building trustworthy recommender systems. However, recent studies have demonstrated that DLRS are vulnerable to data poisoning attacks. Specifically, an unpopular item can be promoted to regular users by injecting well-crafted fake user profiles into the victim recommender system. In this paper, we revisit data poisoning attacks on DLRS and find that state-of-the-art attacks suffer from two issues: they are user-agnostic, and they are fake-user-unitary or target-item-agnostic, reducing the effectiveness of promotion attacks. To bridge these two limitations, we propose our improved method, Generate Targeted Attacks (GTA), to implement targeted attacks on vulnerable users defined by user intent and sensitivity. We initialize the fake users by adding seed items to address their cold-start problem, so that we can implement targeted attacks. Our extensive experiments on two real-world datasets demonstrate the effectiveness of GTA.

S24.4 Spatiotemporal-Enhanced Recurrent Neural Network for Network Traffic Prediction
Zhiyong Chen (University of Electronic Science and Technology of China, China); Junyu Lai (University of Electronic Science and Technology of China & Aircraft Swarm Intelligent Sensing and Cooperative Control Key Laboratory Sichuan Province, China); Junhong Zhu, Wanyi Ma, Lianqianng Gan and Tian Xia (University of Electronic Science and Technology of China, China)

Network traffic prediction can serve as a proactive approach for network resource planning, allocation, and management. Besides, it can also be applied for load generation in digital twin networks (DTNs). This paper focuses on background traffic prediction for typical local area networks (LANs), which is vital for synchronous traffic generation in DTNs. Conventional traffic prediction models are first reviewed, and the challenges of DTN traffic prediction are analyzed. On that basis, a spatiotemporal-enhanced recurrent neural network (RNN) based approach is elaborated to accurately predict the background traffic matrices of target LANs. Experiments compare the proposed model with four baseline models: LSTM, CNN-LSTM, ConvLSTM, and PredRNN. The results show that the spatiotemporal-enhanced RNN model outperforms the baselines in accuracy. In particular, it decreases the MSE of PredRNN by more than 18%, with acceptable efficiency degradation.
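The reported relative MSE reduction follows the standard definitions, which can be sketched as below; the sequences are hypothetical placeholders, not the paper's data:

```python
def mse(pred, actual):
    """Mean squared error between two equal-length sequences."""
    return sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(pred)

def relative_improvement(mse_baseline, mse_model):
    """Fractional MSE reduction of a model over a baseline."""
    return (mse_baseline - mse_model) / mse_baseline

actual = [10.0, 12.0, 11.0, 13.0]
baseline_pred = [9.0, 13.5, 10.0, 14.0]   # hypothetical baseline outputs
model_pred = [9.5, 12.5, 10.8, 13.4]      # hypothetical enhanced-RNN outputs

impr = relative_improvement(mse(baseline_pred, actual),
                            mse(model_pred, actual))
```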

S24.5 MESCAL: Malicious Login Detection Based on Heterogeneous Graph Embedding with Supervised Contrastive Learning
Weiqing Huang (Institute of Information Engineering, Chinese Academy of Sciences, China); Yangyang Zong (University of Chinese Academy of Sciences & Institute of Information Engineering, Chinese Academy of Sciences, China); Zhixin Shi (Chinese Academy of Sciences, China); Puzhuo Liu (School of Cyber Security, University of Chinese Academy of Sciences & Institute of Information Engineering, CAS, China)

Malicious logins via stolen credentials have become a primary threat in cybersecurity due to their stealthy nature. Recent malicious login detection methods based on graph learning techniques have made progress thanks to their ability to capture interconnected relationships among log entries. However, limited malicious samples pose a critical challenge to the detection performance of existing methods. In this paper, we propose MESCAL, a novel approach based on heterogeneous graph embedding with supervised contrastive learning to solve this challenge. Concretely, we construct authentication heterogeneous graphs to represent multiple and interconnected log events. Then, we pre-train a feature extractor with supervised contrastive learning to capture rich semantics on the graphs from limited malicious samples. Based on this, cost-sensitive learning is adopted to distinguish malicious logins in imbalanced data. Extensive evaluations show that the F1 score of MESCAL on the imbalanced dataset is 94.63%, which outperforms state-of-the-art approaches.

S24.6 Anomaly Detection in Heterogeneous Time Series Data for Server-Monitoring Tasks
Rui Yu (Tsinghua University, China); Fei Xiao (State Grid Shanghai Electric Power Company Qingpu Power Supply Company, China); Zhiliang Wang and Jiahai Yang (Tsinghua University, China); Dongqi Han (Beijing University of Posts and Telecommunications, China); Zhihua Wang and Minghui Jin (State Grid Shanghai Municipal Electric Power Company, China); Chenglong Li, Enhuan Dong and Shutao Xia (Tsinghua University, China)

When conducting anomaly detection on server monitoring data, it is important to consider the heterogeneity of the data, which is characterized by the diverse and irregular nature of events. The event values can vary widely, encompassing both continuous and discrete values, and there may be a multitude of randomly occurring events. However, many commonly used anomaly detection methods tend to overlook or discard this heterogeneous data, resulting in a significant loss of valuable information. As such, we propose a novel method, called Heterogeneous Time Series Anomaly Detection (HTSAD), to overcome this difficulty. The approach introduces event gates into the Long Short-Term Memory (LSTM) model and uses unsupervised learning to overcome the challenges mentioned above. The results of our experiments on real-world datasets show that HTSAD achieves an F-score of 0.958, which demonstrates the effectiveness of our approach in detecting anomalies in heterogeneous time series data.

Wednesday, July 12 10:30 - 11:00 (Africa/Tunis)

Wednesday, July 12 11:00 - 13:00 (Africa/Tunis)

S25: Cyber Physical Systems and Internet of Things (IoT) (onsite)

Room: TULIPE 1 / webex 1
Chair: Michael Kounavis (Meta Platforms Inc., USA)
S25.1 A Novel Low Power and High Speed 9-Transistors Dynamic Full-Adder Cell Simulation and Design
Myasar Tabany (University of Hertfordshire, United Kingdom (Great Britain))

In this paper, a novel Full-Adder cell, named pseudo-dynamic, is proposed and designed through intensive simulation. The circuit has only 9 transistors and no internal nodes connected to ground. It is designed around a "floating full-adder cell," which has 3 inputs and a clock signal. Simulations were performed using HSPICE 2008.03 with 90nm CMOS technology and the BSIM4 (level 54) version 4.4 model. The cell layout was designed and extracted with Tanner Research's L-Edit version 13.00. Results show that the proposed Full-Adder cell has a considerably low power-delay product (PDP). The PDP of a Multi-Output design with a 1.2V power supply, 10fF load capacitors, and a maximum input frequency of 200 MHz was 15.3405fJ. The maximum propagation delay was 25ps, which shows the cell would work properly at high speeds. The proposed scheme outperforms conventional dynamic logic across logic functions and loading conditions.
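Independently of the transistor-level design, any full-adder cell must realize the standard one-bit truth table, which can be checked behaviorally:

```python
def full_adder(a, b, cin):
    """One-bit full adder: returns (sum, carry-out) for bits a, b, cin."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

# Exhaustively verify all 8 input combinations against integer addition.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, cout = full_adder(a, b, cin)
            assert s + 2 * cout == a + b + cin
```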

S25.2 CANL LoRa: Collision Avoidance by Neighbor Listening for Dense LoRa Networks
Guillaume Gaillard (Universite de Pau et des Pays de l'Adour, E2S UPPA, LIUPPA, Anglet, France); CongDuc Pham (Universite de Pau et des Pays de l'Adour, E2S UPPA, LIUPPA, Pau, France)

The current medium access in LoRa, based on strategies very similar to early ALOHA systems, does not scale to future denser LoRa networks, which are subject to many collisions. Semtech's Channel Activity Detection (CAD) feature makes it possible to implement carrier sense (CS) in LoRa WANs, but its unreliability at short distance dramatically decreases its efficiency for classical CS strategies. We present CANL, a novel LoRa channel access approach based on an asynchronous collision avoidance (CA) mechanism that operates without the CAD procedure. Extensive simulations using an extended LoRaSim confirm the performance of CANL in a wide range of configurations. The results are promising and show that the proposed CA approach can greatly increase the delivery ratio in dense LoRa networks compared to a classical CS strategy while keeping energy consumption at a reasonable level.
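The ALOHA-like scaling problem the abstract refers to can be illustrated with the classical pure-ALOHA throughput formula S = G * exp(-2G), an analytic simplification rather than the paper's LoRaSim model:

```python
import math

def pure_aloha_throughput(g):
    """Classical pure-ALOHA throughput S = G * exp(-2G),
    where G is the offered load in packets per packet-time."""
    return g * math.exp(-2 * g)

# Throughput peaks at G = 0.5 (about 18.4%) and collapses as load grows,
# which is why denser LoRa deployments need collision avoidance.
peak = pure_aloha_throughput(0.5)
dense = pure_aloha_throughput(3.0)
```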

S25.3 A Correct by Construction Model for CBPS Systems Verification
Sarah Hussein Toman (University of Monastir, Tunisia); Aida Lahouij (ISIMM University of Monastir & Polytechnique Sousse, Tunisia); Lazhar Hamel (University of Monastir & ISIMM, France); Zinah Hussein Toman (University of Al-Qadisiyah, Iraq); Mohamed Graiet (ISIMM Monastir University, Tunisia)

The Internet of Things (IoT) comprises a group of interconnected devices that communicate through the internet, necessitating a robust infrastructure and security protocols. Within this ecosystem, the Content-based Publish-Subscribe (CBPS) messaging paradigm enables devices to subscribe to specific data or events. However, as the number of messages transmitted increases, ensuring reliable transmission and accurate event matching becomes a challenging task. To tackle this issue, this study proposes an Event-B formal model for verifying the correctness of IoT CBPS communication. The model aims to guarantee that messages are delivered to the intended devices while ensuring the appropriate use of context information for filtering messages. Furthermore, the model's consistency has been verified using Event-B tools.

S25.4 The Sight for Hearing: An IoT-Based System to Assist Drivers with Hearing Disability
Osman Salem (University of Paris Cité, France); Ahmed Mehaoua (Universite Paris Cite, France); Raouf Boutaba (University of Waterloo, Canada)

The objective of this paper is to propose a new system that assists drivers with hearing disabilities, as well as deaf or unfocused persons, by recognizing audible signals, such as emergency vehicle sirens or honks, and transforming them into alerts displayed on the dashboard. Such conversion from audio to alert messages draws the attention of unfocused drivers who do not hear the honking of other cars, and enhances safety and quality of life for deaf or hard-of-hearing drivers. We develop an IoT-based system to identify, denoise and translate any significant sound signal around the driver's car into alert messages. The signal acquired by sensors is processed to identify its source and display the associated message, using several machine learning models with majority voting. Our experimental results show that the proposed solution achieves 95% accuracy when trained and validated on a real dataset of 600 files.

S25.5 Asynchronous and Heterogeneous Wake-Up Schedules for IoT Neighbor Communication
Alexandre de Faria Cardoso (Universidade Federal Fluminense, Brazil); Diego Passos (Instituto Politécnico de Lisboa, Portugal & Laboratório MídiaCom, Brazil); Célio Vinicius Neves de Albuquerque and Cledson O de Sousa (Universidade Federal Fluminense, Brazil)

For Internet of Things devices, one of the main power demands comes from the radio interfaces. Hence, duty cycling, i.e., activating and deactivating the radio, becomes essential for energy savings. In asynchronous scenarios, schedule-based duty cycle methods stand out for their low deployment cost. Recently, the literature has reported many studies on asymmetric schedule-based methods, i.e., different nodes operating under different duty cycles. In this work we propose an extension of this concept: heterogeneous duty cycling. It allows nodes to operate under schedules generated by distinct methods, resulting in a wider range of duty cycle choices and also better coexistence between devices from different manufacturers. In particular, we study which combinations of methods present the rotation closure property, and what the average and maximum latencies are for those pairs. We also show that heterogeneous duty cycling can improve performance if schedules are properly selected.

S25.6 Simulation of Optimized Cluster Based PBFT Blockchain Validation Process
Rabeb Ben Othmen (University of Manouba, Tunisia); Wassim Abbessi (University of Manouba & University of Carthage, Tunisia); Sofiane Ouni (University of Manouba, Tunisia); Wafa Badreddine (UPJV, France); Gilles Dequen (Université de Picardie - Jules Verne, France)

Recently, blockchain technology has emerged as a revolutionary innovation: its distributed ledger allows applications to store and transmit data in a secure, transparent, and immutable manner. One of the main ideas of blockchain technology is the consensus mechanism for reaching an agreement on the state of the distributed ledger. In this context, Practical Byzantine Fault Tolerance (PBFT) is one of the most popular algorithms. However, it has high communication overhead and poor scalability. To overcome these limits, this paper presents our algorithm, Random Cluster PBFT (RC-PBFT), and demonstrates its effectiveness through an evaluation of its performance under various network conditions. Our algorithm runs consensus on randomly selected clusters, and the results are then broadcast to the blockchain network. The effectiveness of RC-PBFT is demonstrated by an implementation on the NS-3 network simulator, and the tests performed show that our approach achieves significant improvements compared to the original PBFT algorithm.
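For context, the fault-tolerance arithmetic that PBFT variants such as RC-PBFT inherit, and the quadratic message cost that motivates clustering, can be sketched as:

```python
def pbft_fault_tolerance(n):
    """Maximum Byzantine faults f tolerated by n replicas (n >= 3f + 1)."""
    return (n - 1) // 3

def pbft_quorum(n):
    """Votes needed to commit: 2f + 1 matching replies."""
    return 2 * pbft_fault_tolerance(n) + 1

def messages_all_to_all(n):
    """O(n^2) message count of an all-to-all PBFT prepare/commit phase,
    which motivates running consensus on smaller random clusters."""
    return n * (n - 1)
```

For example, a 100-node network exchanges 9900 messages per all-to-all phase, while a 10-node cluster exchanges only 90; this gap is the scalability argument behind cluster-based PBFT designs.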

Wednesday, July 12 11:00 - 13:00 (Africa/Tunis)

S26: Short Papers (online)

Room: Webex 5
Chair: Ali El Kamel (Research Lab PRINCE & Institut Supérieur d'Informatique et des Techniques de Communication, Hammam Sousse, Tunisia)
S26.1 Threat Modeling with Mitre ATT&CK Framework Mapping for SD-IOT Security Assessment and Mitigation
Wissem Chorfa (University of Carthage, Tunisia); Nihel Ben Youssef (IEEE Member GresCom Laboratory, Tunisia); Abderrazak Jemai (INSAT Institute, Tunisia & Researcher at SERCOM/EPT, Tunisia)

The integration of software-defined networking (SDN) and Internet of Things (IoT) networks offers solutions to IoT network issues. However, this integration also introduces new security challenges and increases the attack surface of IoT networks. Existing studies on the security of SD-IoT networks lack structure and real-world descriptions of attack vectors. To address these limitations, our paper proposes a formal methodology for evaluating SD-IoT framework security through threat modeling. Our approach classifies attack vectors using the STRIDE model, describes their Tactics, Techniques, and Procedures (TTPs) using the Mitre ATT&CK framework, and proposes countermeasures. We demonstrate the potential of our methodology by applying it to a use case of modeling security for Software-Defined Vehicle Networks (SDVNs) in intelligent transport systems (ITS). Our preliminary findings are very promising for a formal methodology for evaluating SD-IoT framework security, which could serve as a foundation for a new SD-IoT security standard.

S26.2 CCSv6: A Detection Model for DNS-Over-HTTPS Tunnel Using Attention Mechanism over IPv6
Liang Jiao and Yujia Zhu (Chinese Academy of Sciences, China); Xingyu Fu (Institute of Information Engineering, China); Yi Zhou (National Computer Network Emergency Response Technical Coordination Center, China); Fenglin Qin (Shandong University, China); Qingyun Liu (Institute of Information Engineering, Chinese Academy of Sciences, China)

In this paper, we first show that DNS-over-HTTPS (DoH) tunneling detection methods verified to be effective over IPv4 can be applied to IPv6, and then propose a new model called CCSv6, which uses an attention-based convolutional neural network to build classifiers with flow-based features to detect DoH tunneling over IPv6, achieving 99.99% accuracy on the IPv6 dataset. In addition, we discuss in detail the influence of various factors, such as locations or DoH resolvers, on the detection results over IPv6. More importantly, our model shows better transfer learning ability: it achieves an F1-score of 96% when trained on the IPv6 dataset and tested on the IPv4 dataset.
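The quoted accuracy and F1-score follow the standard confusion-matrix definitions; a minimal sketch with hypothetical counts:

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def accuracy(tp, tn, fp, fn):
    """Fraction of all predictions that are correct."""
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical confusion counts for a tunnel-detection run.
score = f1_score(tp=960, fp=40, fn=40)
```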

S26.3 Still Not Aware of the Loophole of Unintentional Access to Docker? A Proof of Concept
Luyi Li (Xi'an Jiaotong-Liverpool University, China); Yueyang Li (National University of Singapore, Singapore); Ruxue Luo (Georgia Institute of Technology, China); Yuzhen Chen and Wenjun Fan (Xi'an Jiaotong-Liverpool University, China)

Due to the ease of management and high performance of containerization, many services are deployed in containers, e.g., web servers running in Docker. However, the Docker implementation suffers from several serious loopholes. In this paper, we study a persistent security problem of Docker: a port mapping statement results in a wrong IPTABLES rule, an issue that has been disclosed for a while but is still not solved. We are therefore motivated to provide a technical primer as well as a proof of concept for this issue. Furthermore, we discuss several methods to mitigate the security problem and use our network testbed to demonstrate both the loophole and the effectiveness of the defense methods. The experimental results show that our approach not only increases the time cost for an attacker to identify the target but also brings negligible overhead for deploying the countermeasures.

S26.4 Contrastive Learning with Attention Mechanism and Multi-Scale Sample Network for Unpaired Image-To-Image Translation
Yunhao Liu (Shanghai University, China); Songyi Zhong (Artificial Intelligence Research Institute, Shanghai University, China); Zhenglin Li and Yang Zhou (Shanghai University, China)

The aim of unpaired image translation is to learn how to transform images from a source to a target domain, while preserving as many domain-invariant features as possible. Previous methods have not been able to separate foreground and background well, resulting in texture being added to the background. Moreover, these methods often fail to distinguish different objects or different parts of the same object. In this paper, we propose an attention-based generator (AG) that can redistribute the weights of visual features, significantly enhancing the network's performance in separating foreground and background. We also embed a multi-scale multilayer perceptron (MSMLP) into the framework to capture features across a broader range of scales, which improves the discrimination of various parts of objects. Our method outperforms existing methods on various datasets in terms of Fréchet inception distance. We further analyze the impact of different modules in our approach through subsequent ablation experiments.

S26.5 ESCORT: Efficient Status Check and Revocation Transparency for Linkage-Based Pseudonym Certificates in VANETs
Huiqing Wan (Chinese Academy of Sciences, China); Qiongxiao Wang (Beijing Certificate Authority Co. Ltd., China); Cunqing Ma and Yajun Teng (Chinese Academy of Sciences, China); Jingqiang Lin (School of Cyber Security, University of Science and Technology of China, Hefei, China); Dingfeng Ye (Chinese Academy of Sciences, China)

Security, privacy, and trust are critical concerns in Vehicular Ad Hoc Networks (VANETs). The Security Credential Management System (SCMS) is one of the most promising PKI-based solutions; it adopts linkage-based pseudonym certificates and has been standardized by IEEE. This paper addresses the issues of certificate revocation in the SCMS architecture. We propose ESCORT, which achieves: (i) privacy-preserving revocation transparency for linkage-based pseudonym certificates, enhancing the reliability of the SCMS architecture; and (ii) an efficient certificate status check for vehicles, eliminating the complicated computations involved in checking CRLs. We analyze the security and feasibility of ESCORT, and experimental results indicate its superior efficiency.

S26.6 A Covert TLS Encryption Transmission Method Based on Network Covert Channel
Weikang Yao and Tian Song (Beijing Institute of Technology, China)

The TLS 1.2 protocol, one of the most essential secure communication protocols, is widely used for web services. However, many vulnerabilities in it have been exposed so far. To exploit these vulnerabilities and carry out attacks, an attacker must possess the necessary information. Based on this principle, we propose in this paper a covert TLS encryption transmission method that uses a storage-based network covert channel to transmit important handshake information. The network covert channel hides the true TLS handshake information, thereby improving the security of the entire transmission process. We conducted extensive experiments to evaluate its performance. The experimental results show that our covert channel guarantees high covertness without added delay. Meanwhile, vulnerability testing shows that our scheme resists most attacks.

S26.7 Optical Intra- and Inter-Rack Switching Architecture for Scalable, Low-Latency Data Center Networks
Georgios Drainakis (National Technical University of Athens, Greece); Peristera A. Baziana (University of Thessaly, Greece); Adonis Bogris (University of West Attica, Greece)

In this paper we propose a data center network (DCN) architecture that interconnects servers in the intra-rack and inter-rack domains, utilizing optical switching in each domain. The proposed interconnection techniques are studied as an intermediate step before migrating the entire DCN to all-optical schemes. Unlike other studies, we study server-to-server communication across the whole DCN. For the performance evaluation we produce numerical results for throughput and end-to-end delay for three traffic classes co-existing in DCNs. The numerical analysis reveals that bandwidth utilization reaches 90% and 100% in the intra- and inter-rack domains, respectively. Meanwhile, the maximum end-to-end delay for the highest-priority packets under congested load is lower than 0.56 and 0.41 μs for the two examined intra-rack capacity scenarios of 400 and 600 Gbps, respectively. A comparative study shows that our solution can effectively interconnect up to 10000 servers with a lower environmental footprint and end-to-end delay than other DCNs.

S26.8 Poster: Link-Level Performance Analysis of a LoRa System
Christos Milarokostas, Katerina Giannopoulou and Ioannis Fotis (National and Kapodistrian University of Athens, Greece); Dimitris Tsolkas (Fogus Innovations and Services & National and Kapodistrian University of Athens, Greece); Nikos Passas and Lazaros Merakos (University of Athens, Greece)

During the last decade, various Internet of Things (IoT) networks have emerged in an effort to support the constantly increasing demand for IoT applications. Among IoT networks that use unlicensed spectrum bands, Low Power Wide Area Networks (LPWANs) are being deployed widely. For these networks, one of the most effective technologies is the Long Range (LoRa) / LoRa Wide Area Network (LoRaWAN) protocol stack. In this context, we set up a LoRa/LoRaWAN network in a campus environment and conducted thorough measurements that allowed us to produce a link-level performance analysis. More precisely, the relation between key link-level performance metrics, namely the Received Signal Strength Indication (RSSI) and the Signal-to-Noise Ratio (SNR), was derived, considering i) the heterogeneous terrain where the measurements were conducted, and ii) the configuration parameters selected, such as the Spreading Factor (SF).
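One commonly used link-level relation between RSSI and SNR in LoRa: for packets decoded below the noise floor (negative SNR), Semtech's SX127x documentation suggests correcting the reported RSSI by a quarter of the SNR. The sketch below is an illustration of that rule of thumb, treated here as an assumption rather than the authors' methodology:

```python
def estimated_signal_strength(rssi_dbm, snr_db):
    """Approximate received signal strength for a LoRa packet.
    For negative SNR (signal below the noise floor), correct the
    reported RSSI by SNR/4, per the SX127x rule of thumb (assumed
    here for illustration); otherwise use the RSSI as-is."""
    if snr_db >= 0:
        return rssi_dbm
    return rssi_dbm + snr_db * 0.25

weak = estimated_signal_strength(-120.0, -12.0)   # below the noise floor
```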

S26.9 Demo: A LoRaWAN Emulator Testbed
Inabat Akanova, Dnislam Urazayev, Yerassyl Kadirzhanov and Dimitrios Zorbas (Nazarbayev University, Kazakhstan)

LoRa technology is attracting great interest in Internet of Things (IoT) research. Due to the versatility of its technical characteristics, its range and functionality can be calibrated. However, while many existing LoRa testbed designs embrace the low-cost, low-power characteristics of the radio technology, many fail to recognize the need for scalability and ease of use for IoT research and development. In this demo, a LoRaWAN emulator testbed is introduced and its main components and functionalities are briefly explained.

S26.10 Spatio-Temporal Behavior in Cyber-Physical Systems from a Natural Phenomena Perspective
Houda Khlif (Faculty of Economics and Management of Sfax Tunisia, Tunisia); Hatem HadjKacem (REDCAD & FSEG, University of Sfax, Tunisia); Saul Eduardo Pomares Hernandez (Instituto Nacional de Astrofísica, Óptica y Electrónica & LAAS-CNRS, Mexico); Ojilvie Avila-Cortes (INAOE, Mexico)

A new type of system, called Cyber-Physical Systems (CPS), is emerging in computer science. CPSs are found in several vital domains such as healthcare, agriculture, energy, manufacturing, defense, transportation, and aerospace, among others. The main characteristic of these systems, unlike the rest, is that CPSs interact with the environment through sensors and actuators, which introduces a new type of spatio-temporal constraints and dependencies. Hence, it is necessary to understand, identify and model the intrinsic spatio-temporal behaviors of these systems. In this paper, we present a study that identifies, characterizes and classifies spatio-temporal phenomena that exist in nature. We claim that such spatio-temporal behaviors also exist in CPSs and that we can take advantage of them to design new and better solutions that are more efficient and more effective. Finally, an emergency message system is presented as a case study to illustrate spatio-temporal behaviors in vehicular networks.

Wednesday, July 12 11:00 - 13:00 (Africa/Tunis)

S27: Security in Computers and Communications (online)

Room: Webex 4
Chair: Christos Douligeris (University of Piraeus, Greece)
S27.1 Multi-Client Searchable Symmetric Encryption in Redactable Blockchain for Conjunctive Queries
Ruizhong Du (HeBei University, China); Na Liu, Mingyue Li and Caixia Ma (Hebei University, China)

Sharing and searching encrypted data securely in outsourced environments poses a challenge due to possible cooperation between compromised users and untrusted servers. This paper studies the problem of multi-client dynamic searchable symmetric encryption, where a data owner stores encrypted documents on an untrusted remote server and selectively allows multiple users to access them through keyword search queries. The paper proposes a practical multi-client conjunctive searchable symmetric encryption scheme on a redactable blockchain to address this challenge. The scheme achieves multi-client sublinear conjunctive keyword search, and the data owner can authorize clients to access the documents. It combines encryption primitives with novel access control techniques and constructs ζ-oblivious group cross-tags on a redactable blockchain for sublinear search. The system's security is proven in a simulation-based security model. A prototype implementation using a blockchain-based approach is developed and evaluated on a real-world database containing millions of documents to demonstrate its practicality.

S27.2 A Comprehensive Evaluation of the Impact on Tor Network Anonymity Caused by ShadowRelay
Qingfeng Zhang (Chinese Academy of Sciences, China); Jiawei Zhu (National Computer Network Emergency Response Technical Team Coordination Center, China); Muqian Chen (National Internet Emergency Center, China); Xuebin Wang (Institute of Information Engineering, Chinese Academy of Sciences, China); Qingyun Liu (Chinese Academy of Sciences, China); Jinqiao Shi (Beijing University of Posts and Telecommunications, China)

As a distributed anonymous network run by volunteers, Tor relays are often manipulated by operators to achieve their goals. Our work reveals that some relays, named ShadowRelay, are bound to hidden nodes and actively forward user traffic to the next-hop relay or target without the user's knowledge. To detect ShadowRelays, we developed HiddenSniffer based on client and Tor relay collusion, and found 162 hidden nodes distributed across 22 countries, along with 85 ShadowRelays which account for 2.08% of the total relay bandwidth. Additionally, there exists a family relationship among the ShadowRelays, with the largest family containing 24 members. The experimental results indicate that ShadowRelays have increased the number of ASes capable of sniffing user traffic by 27.6%, and improved the ability of 14.7% of attackers to launch traffic confirmation attacks. Furthermore, ShadowRelays adversely impact the Tor network's availability by introducing increased transmission delay within the circuits.

S27.3 Balancing Robustness and Covertness in NLP Model Watermarking: A Multi-Task Learning Approach
Long Dai, Jiarong Mao, Liaoran Xu, Xuefeng Fan and Xiaoyi Zhou (Hainan University, China)

The popularity of ChatGPT demonstrates the immense commercial value of natural language processing (NLP) technology. However, NLP models are vulnerable to piracy and redistribution, which harms the economic interests of model owners. Existing NLP model watermarking schemes struggle to balance robustness and covertness: robust watermarks require embedding more information, which compromises their covertness; conversely, covert watermarks can embed only limited information, which affects their robustness. This paper proposes an NLP model watermarking framework that uses multi-task learning to resolve this conflict. Specifically, a covert trigger set is established to enable remote verification of the watermarked model, and a covert auxiliary network is designed to enhance the watermarked model's robustness. The proposed watermarking framework is evaluated on two benchmark datasets and three mainstream NLP models. The experiments validate the framework's excellent covertness, robustness, and low false positive rate.
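A multi-task watermarking objective and trigger-set verification of the general kind described can be sketched as follows; the loss weight and verification threshold are hypothetical parameters, not values from the paper:

```python
def multitask_loss(task_loss, wm_loss, lam=0.1):
    """Combined objective L = L_task + lam * L_wm; the weight lam
    (hypothetical) trades primary-task fidelity against how strongly
    the watermark trigger set is memorized."""
    return task_loss + lam * wm_loss

def watermark_verified(trigger_predictions, target_label, threshold=0.9):
    """Remote verification: the watermark is claimed present when the
    model labels at least `threshold` of the covert trigger set with
    the owner-chosen target label (threshold is an assumed parameter)."""
    hits = sum(1 for p in trigger_predictions if p == target_label)
    return hits / len(trigger_predictions) >= threshold

combined = multitask_loss(0.42, 1.5, lam=0.2)
```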

S27.4 DAMUS: Adaptively Updating Hardware Performance Counter Based Malware Detector Under System Resource Competition
Yanfei Hu (Chinese Academy of Sciences & University of Chinese Academy of Sciences, China); Boyang Zhang (Institute of Information Engineering Chinese Academy of Sciences, China); Shuai Li (Institute of Information Engineering, China)

Hardware performance counter based malware detection (HMD) models that learn HPC-level behavior have been widely researched in various application scenarios. However, a program's HPC-level behavior is easily affected by system resource competition, which leaves HMD models out-of-date. In this paper, we propose DAMUS, a distribution-aware model updating strategy that adaptively updates an HMD model. Specifically, we first design an autoencoder with contrastive learning to map existing samples into a low-dimensional space. Second, distribution characteristics are calculated to judge the drift of testing samples. Finally, based on the total determined drift of the testing samples and a threshold, a decision is made on whether the malware detection model needs to be updated. We evaluate DAMUS by testing an HMD model on two datasets collected under different resource types and pressure levels. The experimental results show the advantages of DAMUS over existing updating strategies in promoting model updating.
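A distribution-aware drift decision of the kind described can be sketched as a centroid-distance test in the embedded space; the radius and threshold below are assumed tuning parameters, not the paper's statistics:

```python
import math

def centroid(points):
    """Mean vector of a set of embedded training samples."""
    dim = len(points[0])
    return [sum(p[i] for p in points) / len(points) for i in range(dim)]

def drift_fraction(train_points, test_points, radius):
    """Fraction of test embeddings farther than `radius` from the
    training centroid (radius is an assumed tuning parameter)."""
    c = centroid(train_points)
    def dist(p):
        return math.sqrt(sum((p[i] - c[i]) ** 2 for i in range(len(c))))
    drifted = sum(1 for p in test_points if dist(p) > radius)
    return drifted / len(test_points)

def needs_update(train_points, test_points, radius=1.0, threshold=0.5):
    """Trigger retraining when most test samples have drifted."""
    return drift_fraction(train_points, test_points, radius) > threshold

train = [(0.0, 0.0), (0.2, 0.0), (-0.2, 0.0), (0.0, 0.2)]
update_needed = needs_update(train, [(5.0, 5.0), (6.0, 6.0)])
```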

S27.5 Toward Unknown/Known Cyberattack Detection with a Causal Transformer
ZengRi Zeng (Hunan University of Humanities, Science and Technology & National University of Defense Technology, China); Wei Peng and Baokang Zhao (National University of Defense Technology, China)

Existing detection methods can either only classify known types of cyberattacks or only distinguish network anomalies to determine whether unknown cyberattacks are present; they are unable to distinguish both known and unknown cyberattack types. To solve these problems, a causal transformer-based cyberattack detection method is proposed. The method eliminates the false associations caused by noise features through causal attention to obtain an intelligent, interpretable detection method that classifies both known attacks and unknown attack types. Validation is performed on two broad and representative datasets. The results show that the proposed method not only correctly classifies known attacks but also achieves a 100% success rate in identifying cyberattacks on some datasets. In addition, more than 99% of unknown attack types can be effectively identified and classified, providing timely and effective guidance for cybersecurity defense.

S27.6 Toward Intelligent Attack Detection with a Causal Explainable Method for Encrypting Traffic
ZengRi Zeng (Hunan University of Humanities, Science and Technology & National University of Defense Technology, China); Wei Peng and Baokang Zhao (National University of Defense Technology, China)

To address the cybersecurity problems caused by encrypted traffic, current attack detection systems (ADSs) focus on non-decryption-based detection. However, non-decryption systems face problems including large training sample imbalances, which seriously affect ADS performance. To address these problems, we propose a structural causal model (SCM) for cyberattacks and define detection tasks that eliminate noise features and provide interpretability. First, the influence of causal features on the results is enhanced by weighting the causal features, and low-weight noise features are removed to improve the causal interpretability of the detection results. Furthermore, samples are generated through a Wasserstein generative adversarial network (WGAN) that learns the causal feature distribution to balance the training samples. Finally, the detection performance is evaluated through the F1 score and interpretability indices. Our method achieves F1-score improvements on all datasets, and the causal relationships between cyberattacks and feature anomalies are explained through causal effects.

Wednesday, July 12 11:00 - 13:00 (Africa/Tunis)

S28: Artificial Intelligence (AI) in Computers and Communications (online)

Room: TULIPE 2 / webex 2
Chair: Zakaria Charouh (Majal Berkane, Morocco)
S28.1 A Novel Malware Classification Method Based on Memory Image Representation
Wenjie Liu and Liming Wang (Chinese Academy of Sciences, China)

Malware classification methods based on memory image representation have received increasing attention. However, previous works do not adequately consider the characteristics of the memory management mechanism or the efficiency of the classification model, which prevents the classifier from extracting high-quality features and consequently results in poor performance. Motivated by this, we propose a novel malware classification method. First, we add an Efficient Convolutional Block Attention Module (E-CBAM) to select important features with fewer parameters and less computational cost. Then, we integrate our attention module into a pre-trained EfficientNet-B0 to extract features efficiently. Moreover, data augmentation and label smoothing are adopted to mitigate model overfitting. Finally, extensive experiments on a realistic dataset demonstrate the effectiveness and superiority of our method in both known and unknown malware classification.

S28.2 Color-Coded Attribute Graph: Visual Exploration of Distinctive Traits of IoT-Malware Families
Jiaxing Zhou (National Institute of Information and Communications Technology, Japan & Tokyo Denki University, Japan); Tao Ban (National Institute of Information & Communications Technology, Japan); Tomohiro Morikawa (University of Hyogo, Japan); Takeshi Takahashi and Daisuke Inoue (National Institute of Information and Communications Technology, Japan)

This study investigates the use of explainable artificial intelligence (XAI) to identify the unique features distinguishing malware families and subspecies. The proposed method, called the color-coded attribute graph (CAG), employs XAI and visualization techniques to create a visual representation of malware samples. The CAG utilizes the feature importance scores (ISs) obtained from a pre-trained classifier model and a scale function that normalizes the scores for visualization. The approach assigns each family a representative color, and the features are color-coded according to their relevance to the malware family. This work evaluates the proposed method on a dataset of 13,823 Internet of Things malware samples and compares two approaches for feature IS extraction, using a linear support vector machine and Local Interpretable Model-Agnostic Explanations. The experimental results demonstrate the effectiveness of the CAG in interpreting machine learning-based methods for malware detection and classification, leading to more accurate analyses.
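The score-scaling and color-coding idea can be illustrated with a minimal sketch (all names and values are hypothetical; the paper's actual scale function and color assignments may differ):

```python
import colorsys

def color_code(importance, family_hue):
    """Scale feature importance scores to [0, 1] and shade each feature
    in the family's representative hue: more important -> more saturated."""
    lo, hi = min(importance), max(importance)
    scale = lambda s: (s - lo) / (hi - lo) if hi > lo else 0.0
    return [colorsys.hsv_to_rgb(family_hue, scale(s), 1.0) for s in importance]

scores = [0.2, 1.4, 0.8]                      # hypothetical per-feature ISs
colors = color_code(scores, family_hue=0.0)   # red as the family color
```

The least important feature fades to white, while the most important one takes the family's full hue, making the family's distinctive traits visually salient.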

S28.3 MFG-R: Chinese Text Matching with Multi-Information Fusion Graph Embedding and Residual Connections
Liu Gang, Wang Tongli and Yichao Dong (Harbin Engineering University, China); Kai Zhan (PricewaterhouseCoopers, Australia); Yang Wenli (Harbin Engineering University, China)

Chinese text matching is an important task in natural language processing, but current techniques have problems in text feature extraction, such as insufficient extraction of word information and a lack of deep information in graph convolutional networks. In this paper, we propose MFG-R, a model for Chinese text matching with multi-information fusion graph embedding and residual connections. The model fuses the word embedding representation of the text obtained by a graph convolutional network with character-level information and word weight information to extract text features. To perform deep interaction matching, we construct a word-level similarity interaction matrix between text pairs and, on this basis, build a text interaction and feature extraction model based on a residual network. Experiments show that MFG-R performs excellently on two common Chinese datasets, the Ant Financial Question Matching Corpus (AFQMC) and the Large-scale Chinese Question Matching Corpus (LCQMC).
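A word-level similarity interaction matrix of the kind the model builds can be sketched with cosine similarity (a toy example with made-up embeddings, not the paper's implementation):

```python
import numpy as np

def interaction_matrix(A, B):
    """Cosine-similarity interaction matrix between the word embeddings of
    two texts: entry (i, j) compares word i of text A with word j of text B."""
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return A @ B.T

# two tiny hypothetical texts with 2-dimensional word embeddings
a = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.array([[1.0, 0.0], [1.0, 1.0]])
M = interaction_matrix(a, b)       # shape (2, 2)
```

A residual network can then consume this matrix to extract deep interaction features without losing the shallow similarity signal.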

S28.4 Deep Knowledge Tracking Based on Double Attention Mechanism and Cognitive Difficulty
Junhong Guo and Ding Yonggang (Hubei University, China); Ying Li (Wuhan Marine Communication Institute & Department of Software radio, China); Lingling Zheng and Xinyue Ma (Hubei University, China); Lingyun Xiao (China)

Knowledge tracking is a key technique of artificial-intelligence-assisted education that predicts learners' future answers from their historical learning interaction data. Current knowledge-tracking models are well established but often have limited accuracy in tracking learners' knowledge acquisition. This paper proposes a knowledge tracking model based on a double attention mechanism and cognitive difficulty, to better capture the relationships among knowledge concepts, their influence on learners' knowledge acquisition, and the impact of the absolute and relative difficulty of the exercises given learners' cognitive levels. We design a mechanism that explores the correlations between exercises and knowledge concepts and among knowledge concepts themselves, and that dynamically analyzes the cognitive difficulty of exercises. The model also introduces a multi-head self-attention mechanism to capture the long-term dependence of learners' knowledge acquisition. The results suggest that the model can effectively track learners' knowledge acquisition.
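The multi-head self-attention used to capture long-term dependence can be illustrated by a single head of scaled dot-product attention over a learner's interaction sequence (a generic sketch, not the paper's model):

```python
import numpy as np

def scaled_dot_attention(Q, K, V):
    """One attention head: each position attends to all past interactions,
    weighting them by similarity, so distant interactions still contribute."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ V

# three past interactions, embedding dimension 2 (toy values)
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
out = scaled_dot_attention(X, X, X)
```

A multi-head version runs several such heads with different learned projections and concatenates their outputs.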

S28.5 Cryptanalysis of a Lightweight Privacy Enhancing Authentication Scheme for Internet of Vehicles
Arijit Karati and Li-Chun Chang (National Sun Yat-sen University, Taiwan)

Recent attention has been focused on using authentication and key agreement (AKA) protocols to allow secure vehicle-to-everything (V2X) communication. Typically, such security solutions are effective against passive eavesdroppers who tap the lines and attempt to decipher the message. It has been noted, however, that an improperly designed protocol could be susceptible to an active saboteur, who could impersonate another vehicle or alter the broadcast message. In this paper, we conduct cryptanalytic attacks on a recently proposed energy-efficient authentication protocol. To do this, we employ hybrid techniques to solve the algebraic systems that occur naturally while mounting multiple forgeries. Based on cryptanalysis, we demonstrate that the algorithms and characterizations utilized in this protocol are susceptible to several security flaws, such as inappropriate anonymity, session-independent key agreement, exposing multiple keys, replay, and vehicle impersonation. In addition, this paper provides precepts for constructing a safe V2X authentication system.

S28.6 Multi-Agent Deep Reinforcement Learning Based Computation Offloading Approach for LEO Satellite Broadband Networks
Junyu Lai (University of Electronic Science and Technology of China & Aircraft Swarm Intelligent Sensing and Cooperative Control Key Laboratory Sichuan Province, China); Huashuo Liu, Yusong Sun, Junhong Zhu, Wanyi Ma and Lianqianng Gan (University of Electronic Science and Technology of China, China)

Conventional computation offloading approaches were originally designed for ground networks and are not effective for low earth orbit (LEO) satellite networks. This paper proposes a multi-agent deep reinforcement learning (MADRL) algorithm for making multi-level offloading decisions in LEO satellite networks. Offloading is formulated as a multi-agent decision problem based on a partially observable Markov decision process. Each satellite, acting as an agent, either processes a received task, forwards it to neighbors, or sends it to ground clouds according to its own policy. The agents act independently, but the deep neural networks with which they make offloading decisions share identical parameter values and are trained using the same replay buffer. A centralized-training, distributed-execution mechanism is adopted to ensure that agents make globally optimized offloading decisions. Comparative experiments demonstrate that the proposed MADRL algorithm outperforms five baselines in terms of task processing delay and bandwidth consumption, with acceptable computational complexity.
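The parameter-sharing and common-replay-buffer arrangement can be sketched in a few lines (a conceptual toy; `SharedPolicy` and its placeholder decision rule are illustrative assumptions, not the paper's network):

```python
import random

class SharedPolicy:
    """All satellite agents share one set of decision parameters and one
    replay buffer (centralized training, distributed execution)."""
    def __init__(self):
        self.params = {"w": 0.0}
        self.replay = []

    def decide(self, observation):
        # placeholder policy: 0 = process locally, 1 = forward, 2 = offload
        return (observation + int(self.params["w"])) % 3

    def store(self, transition):
        self.replay.append(transition)

    def train(self, batch_size=2):
        batch = random.sample(self.replay, min(batch_size, len(self.replay)))
        # a real agent would update self.params from the sampled batch
        return len(batch)

policy = SharedPolicy()
agents = [policy] * 4              # four satellites, identical parameters
actions = [a.decide(obs) for a, obs in zip(agents, [0, 1, 2, 3])]
```

Because every agent points at the same parameters, a single training step improves all satellites at once while each still acts on its own local observation.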

Wednesday, July 12 11:00 - 13:00 (Africa/Tunis)

S29: AI in Computers and Communications (online)

Room: TULIPE 3 / webex 3
Chair: Sirine Marrakchi (University of Sfax, Tunisia)
S29.1 DRSDetector: Detecting Gambling Websites by Multi-Level Feature Fusion
Yuxin Zhang (University of Chinese Academy of Sciences, China); Xingyu Fu (Institute of Information Engineering, China); Rong Yang (Chinese Academy of Sciences, China); Yangxi Li (National Computer Network Emergency Response Technical Team/Coordination Center, China)

With the development of the Internet, online gambling has gradually replaced traditional gambling and become a popular way of making money for illegal organizations. Online gambling is prohibited by law in many countries, yet gambling sites can still attract a variety of victims through their covert promotion channels. In this paper, we propose DRSDetector, a gambling-website detection method that combines domain features, resource features, and semantic features, and uses the idea of ensemble learning to fuse the different modules. The experimental results show that DRSDetector outperforms traditional website detection methods. In addition, we investigate the promotion channels of gambling websites and, taking China as an example, reveal the 10 major entertainment companies behind these sites. These findings can help governments combat online gambling activities more accurately and effectively.
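One simple way to fuse the three modules' scores, in the spirit of the ensemble idea above, is weighted soft voting (a hypothetical sketch; the paper's fusion scheme and weights may differ):

```python
def fuse_scores(domain_s, resource_s, semantic_s, weights=(0.3, 0.3, 0.4)):
    """Combine the three module scores (each in [0, 1]) into one
    gambling-site probability by weighted soft voting."""
    scores = (domain_s, resource_s, semantic_s)
    return sum(w * s for w, s in zip(weights, scores))

# hypothetical per-module scores for one candidate website
p = fuse_scores(0.9, 0.8, 0.95)
is_gambling = p >= 0.5
```

Weighting lets a strong signal from one view (e.g. page semantics) compensate for a weak or missing signal from another (e.g. a freshly registered domain).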

S29.2 Hunting for Hidden RDP-MITM: Analyzing and Detecting RDP MITM Tools Based on Network Features
Hao Miao (University of Chinese Academy of Sciences & Institute of Information Engineering, Chinese Academy of Sciences, China); Zhou Zhou (Institute of Information Engineering, Chinese Academy of Sciences, China); Renjie Li (University of Chinese Academy of Sciences & Institute of Information Engineering, China); Fengyuan Shi (University of Chinese Academy of Sciences, China & Institute of Information Engineering, Chinese Academy of Sciences, China); Wei Yang and Shu Li (Chinese Academy of Sciences, China); Qingyun Liu (Institute of Information Engineering, Chinese Academy of Sciences, China)

Remote Desktop Protocol (RDP) is commonly used for remote access to Windows computers. The latest threat to RDP security is RDP man-in-the-middle (MITM) tools, which insert themselves into an RDP connection through protocol downgrades and forged certificates. RDP MITM tools can automate the MITM attack process, significantly lowering the difficulty of network attacks. At the same time, RDP MITM tools can be used to build high-interaction RDP honeypots.

To mitigate this risk, we present the first in-depth study of RDP MITM tools. Through analysis and experiments, we identify network features that can be used to detect RDP MITM tools effectively. Based on packet latency and the TLS handshake, we propose a machine learning classifier that detects RDP MITM tools to secure RDP connections. Finally, we analyze the deployment of RDP MITM tools in the wild and measure them using our proposed detection approach.

S29.3 FedCrowdSensing: Incentive Mechanism for Crowdsensing Based on Reputation and Federated Learning
Jianquan Ouyang and Wenke Wang (Xiangtan University, China)

In recent years, crowdsensing has become a hot topic in contemporary research. However, the traditional crowdsensing model suffers from issues such as low-quality data uploaded by users, privacy and security problems, and a lack of incentives for user participation. To address these challenges, we propose a crowdsensing framework that combines blockchain and federated learning to build a decentralized security framework. Our framework enables each participant to upload model gradient data to the crowdsensing platform for aggregation while ensuring user privacy and security, and we propose a model aggregation method based on reputation values. In addition, we design a reverse auction algorithm based on historical reputation to filter the set of candidates who want to participate in a task, yielding a higher-quality set of participants. Security analysis and experimental results show that this model guarantees data quality and data privacy and enhances users' motivation to participate.
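The reputation-based aggregation idea can be sketched as a reputation-weighted average of participant gradients (an illustrative assumption, not the authors' exact method):

```python
import numpy as np

def aggregate(gradients, reputations):
    """Reputation-weighted average of participant gradients: contributions
    from higher-reputation users count more toward the global model."""
    w = np.asarray(reputations, dtype=float)
    w /= w.sum()
    return sum(wi * g for wi, g in zip(w, gradients))

# two participants, one with three times the reputation of the other
grads = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
global_grad = aggregate(grads, reputations=[1.0, 3.0])
```

Down-weighting low-reputation participants limits the influence of low-quality or malicious updates on the aggregated model.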

S29.4 GoGDDoS: A Multi-Classifier for DDoS Attacks Using Graph Neural Networks
Yuzhen Li (University of Chinese Academy of Sciences & Institute of Information Engineering, China); Zhou Zhou (Institute of Information Engineering, Chinese Academy of Sciences, China); Renjie Li (University of Chinese Academy of Sciences & Institute of Information Engineering, China); Fengyuan Shi (University of Chinese Academy of Sciences, China & Institute of Information Engineering, Chinese Academy of Sciences, China); Jiang Guo (Chinese Academy of Sciences, China); Qingyun Liu (Institute of Information Engineering, Chinese Academy of Sciences, China)

Distributed Denial of Service (DDoS) attacks are rising in number, evolving, and growing in sophistication. Multi-vector attacks, which combine more than one method, have recently become prevalent. To cope with multi-vector DDoS attacks, it is necessary to classify them so that robust countermeasures can be taken. However, existing ML-based approaches to DDoS traffic multi-classification barely exploit the relationships between packets and flows, crucial information that can significantly improve multi-classification performance. This paper proposes GoGDDoS, a multi-classifier for DDoS attacks. Concretely, we construct a graph-of-graphs (GoG) traffic graph that compactly encodes the relationships between packets and flows by merging the packet-level and flow-level relationship graphs. We then build a two-level graph neural network model to mine potential attack patterns from the GoG traffic graph. Experiments on well-known datasets show that GoGDDoS outperforms its counterparts.

S29.5 Analysis of One-Bit DAC for RIS-Assisted MU Massive MIMO Systems with Efficient Autoencoder Based Deep Learning
Ahlem Arfaoui (Innov'Com@SUP'COM, Tunisia); Maha Cherif (Innov'Com Lab, Tunisia); Ridha Bouallegue (Ecole Supérieure des Communications de Tunis, Tunisia)

This paper proposes an autoencoder-based deep learning approach for multiuser massive multiple-input multiple-output (mMIMO) downlink systems assisted by a reconfigurable intelligent surface (RIS), in which the base station is equipped with an antenna array with 1-bit digital-to-analog converters (DACs) to serve multiple user terminals. The RIS is today one of the most revolutionary techniques for improving the spectrum and energy efficiency of 6G wireless networks. First, we present an analytical study of the effects of 1-bit DACs on the considered system over a Rician fading channel. Then, the transmission system assisted by the proposed RIS design, which allows network operators to control the signal propagation environment, is presented. To further improve the system, we apply deep learning to compensate for the signal degradation caused by the 1-bit DACs. Numerical simulations demonstrate that the considered compensation technique, in the presence of the RIS, achieves competitive performance compared to the existing literature.

S29.6 Differentially Private Functional Mechanism for Broad Learning System
Jingwen Li and Yingpeng Sang (Sun Yat-Sen University, China); Hui Tian (Griffith University, Australia)

To avoid the complex structure of deep learning models and the significant training costs associated with them, the Broad Learning System (BLS), based on the Random Vector Functional Link Neural Network, was developed. BLS has only one input layer and one output layer and uses ridge regression theory to approximate the pseudoinverse of the input, greatly simplifying the network and reducing training expense. Despite these benefits, an attacker may use membership inference attacks to determine whether a sample belongs to the model's training data once the model parameters are revealed, leading to information leakage. To date, there is no related work on protecting BLS against membership inference attacks. To overcome this privacy issue, we present a Privacy-Preserving Broad Learning System (PPBLS) that perturbs the objective function using the Functional Mechanism (FM) of differential privacy. We theoretically prove that PPBLS satisfies ϵ-differential privacy and demonstrate its effectiveness on both regression and classification tasks.
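The Functional Mechanism idea, perturbing the coefficients of the objective rather than the trained weights, can be sketched for ridge regression as follows (a simplified illustration: the noise calibration here is schematic, whereas the real FM derives per-coefficient sensitivity bounds):

```python
import numpy as np

def perturbed_ridge(X, y, lam, epsilon, sensitivity, rng=None):
    """Functional-mechanism sketch: add Laplace noise to the coefficients
    of the quadratic objective (X^T X and X^T y) before solving the ridge
    regression, instead of perturbing the output weights directly."""
    rng = rng or np.random.default_rng(0)
    d = X.shape[1]
    A = X.T @ X + rng.laplace(0.0, sensitivity / epsilon, size=(d, d))
    A = (A + A.T) / 2                  # keep the quadratic term symmetric
    b = X.T @ y + rng.laplace(0.0, sensitivity / epsilon, size=d)
    return np.linalg.solve(A + lam * np.eye(d), b)

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
w = perturbed_ridge(X, y, lam=0.1, epsilon=10.0, sensitivity=0.1)
```

Because the noise enters the objective, every model trained from the perturbed objective inherits the privacy guarantee, which is what makes the approach attractive for the closed-form BLS solution.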

Wednesday, July 12 13:00 - 14:00 (Africa/Tunis)

Wednesday, July 12 14:00 - 15:00 (Africa/Tunis)

Keynote (Online): Some Methods to Improve IoT Performance and Cybersecurity, by Erol Gelenbe (UK)

Chair: Michael Kounavis (Meta Platforms Inc., USA)

The relative simplicity and lightweight nature of many IoT devices, and their widespread connectivity via the Internet and other wired and wireless networks, raise issues regarding both their performance and their vulnerability. Their connectivity patterns, based on the need to frequently forward and receive data, have given rise to the "Massive Access Problem (MAP) of the IoT", a form of congestion caused by the IoT's synchronized and repetitive data transmission patterns. On the other hand, the opportunity that IoT devices present to malicious third parties for generating highly contagious distributed denial of service (DDoS) and botnet attacks is also a widely studied concern. This presentation will discuss our recent results and research directions addressing both of these issues. Regarding the MAP, we will outline the Quasi-Deterministic Transmission Policy (QDTP) and its main theoretical result, and present trace-driven measurements showing that QDTP can effectively mitigate the MAP. We will also show how a machine learning approach using novel auto-associative dense random neural networks can detect DDoS attacks with a high degree of accuracy, and discuss the potential of "low-cost" online learning to protect IoT gateways and devices against cyberattacks. The speaker gratefully acknowledges research funding from the EC as part of the H2020 GHOST, SerIoT and IoTAC projects.
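The QDTP idea of spacing out synchronized transmissions quasi-deterministically can be illustrated with a toy scheduler (`qdtp_schedule` and the fixed gap are illustrative assumptions, not the speaker's actual policy):

```python
def qdtp_schedule(arrival_times, gap):
    """QDTP sketch: each packet is forwarded no earlier than `gap` seconds
    after the previous one, smoothing the synchronized IoT bursts that
    cause the Massive Access Problem."""
    out, next_free = [], 0.0
    for t in arrival_times:
        send = max(t, next_free)   # wait until the link's next free slot
        out.append(send)
        next_free = send + gap
    return out

# a synchronized burst of four packets all arriving at t = 0
sends = qdtp_schedule([0.0, 0.0, 0.0, 0.0], gap=0.5)
```

The burst that would have hit the gateway simultaneously is spread over regular intervals, trading a bounded per-packet delay for the elimination of congestion at the access point.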

Wednesday, July 12 15:00 - 15:30 (Africa/Tunis)