Program for Fourth International Conference on Computing and Network Communications (CoCoNet'20) & International Conference on Applied Soft Computing and Communication Networks (ACN'20)

Wednesday, October 14

Wednesday, October 14 9:30 - 10:30 (Asia/Calcutta)

Keynote: Secure and Trustworthy Machine Learning and Artificial Intelligence for Emerging Systems and Applications: The Triumph and Tribulation

Speaker: Dr. Danda B. Rawat, Howard University, Washington, DC, USA

Title of Talk: Secure and Trustworthy Machine Learning and Artificial Intelligence for Emerging Systems and Applications: The Triumph and Tribulation

This keynote focuses on both AI for cybersecurity and cybersecurity for AI in emerging systems and applications. Lately, ML algorithms and AI systems have been shown to achieve machine cognition comparable to, or even better than, human cognition for some applications. Machine learning algorithms are now regarded as very useful cybersecurity solutions for different emerging applications. However, because ML algorithms and AI systems can be controlled, evaded, biased, and misled through flawed learning models and input data, they need robust security features and trustworthy AI. It is very important to design and evaluate/test ML algorithms and AI systems that produce reliable, robust, trustworthy, explainable, and fair/unbiased outcomes to make them acceptable to diverse users. The keynote covers applications and use cases of secure and trustworthy ML/AI along with their successes and pitfalls.

Wednesday, October 14 11:00 - 12:30 (Asia/Calcutta)

Conference Inauguration

Chief Guest: Dr G. Satheesh Reddy, Secretary, Department of Defence R&D and Chairman, DRDO

Guest of Honour: Dr. Shailja Vaidya Gupta, Senior Adviser/Scientist 'H', Office of the Principal Scientific Adviser to the Government of India

Wednesday, October 14 13:30 - 14:20 (Asia/Calcutta)

Keynote: Big Data Graph Dimensionality Challenges and Solutions

Speaker: Dr. Jemal H. Abawajy, Deakin University, Australia

Abstract: Graphs, with more expressive power and rich analytic abilities, have increasingly become prevalent in a variety of emerging applications in business, science, and engineering domains. Although big graph data provides tremendous flexibility in representing highly interrelated data as well as easily connecting diverse types of related information, it poses a number of serious challenges ranging from efficient processing to security and privacy issues. In this presentation, we will look at some of the application domains that use graphs. We will also discuss various challenges and some approaches we have developed to address them.

Wednesday, October 14 14:25 - 15:15 (Asia/Calcutta)

Keynote: Artificial Intelligence in Massive IoT Networks

Speaker: Dr. Arumugam Nallanathan, Queen Mary University of London, United Kingdom

Wednesday, October 14 15:20 - 18:15 (Asia/Calcutta)

ACN-01: International Conference on Applied Soft Computing and Communication Networks (ACN'20) - Regular Papers

ACN-01.1 15:20 A Novel Approach to Preserve DRM for Content Distribution over P2P Networks
Yogita Borse (K J Somaiya College of Engineering, India); Purnima Ahirao (K. J. Somaiya College of Engineering, India); Meet Mangukiya and Shreeya Patel (K J Somaiya College of Engineering, India)
The digital revolution has made the sharing of information very easy. Peer-to-Peer (P2P) technology plays an important and significant role, with a large number of files being exchanged by millions of users concurrently. However, due to the significant growth of P2P file sharing, even copyrighted files are actively exchanged among users, leading to illegal sharing of copyrighted content and violation of copyright law. Copyright infringement has therefore become a serious issue. In this paper, a DRM-protected content distribution system is proposed that uses a web-browser-based P2P system for data transfer. The system provides hassle-free transfer of digital media content such as audio, video, and movies through the Web Real-Time Communications (WebRTC) API. It aims to provide the best user experience by selecting peers so as to minimize latency in digital content sharing, while imposing secure and restricted distribution to enforce Digital Rights Management (DRM).
ACN-01.2 15:35 Profile Verification Using Blockchain
Rishab Jain and Sarasvathi V (PES University, India)
Blockchain technology opens up opportunities to deliver new business models, including secure and immutable documentation. Across the globe, a huge number of students graduate every year from universities. Their work, both curricular and extracurricular, needs secure storage and verification so that it remains protected and the students can highlight it anywhere without further rounds of verification. The paper presents a way to ease this verification process using blockchain. A verification protocol is devised using the concept of mentors, and a blockchain-based verification system is set up to demonstrate the working of the protocol.
ACN-01.3 15:50 Secure Multimodal Biometric Recognition in Principal Component Subspace
Ragendhu S P (Indian Institute of Information Technology and Management Kerala, India); Tony Thomas (Indian Institute of Information Technology and Mangement - Kerala, India)
In most biometric recognition systems, the type of features extracted from the biometric template depends on the biometric modality. A generalized feature representation is convenient and efficient in a multimodal biometric system, as the complexity involved in fusing or protecting the heterogeneous features corresponding to different biometric modalities can be avoided. The dimensionality of the features can also be reduced effectively if we can extract generalized feature vectors without compromising the matching performance. This work proposes two schemes: (i) a generalized yet efficient matching technique based on the cosine similarity of the principal component subspaces, and (ii) a protection scheme that fuses the feature vectors of multiple modalities in the principal component domain and matches in that protected domain. The key feature of the proposed protection scheme over existing schemes is that it does not require a user-specific key for ensuring irreversibility; the user-specific key is used only for ensuring revocability of the protected template. Results show that the proposed method achieves a matching performance of 98.57% for a multimodal system with face, fingerprint and finger vein templates in the fused domain.
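A minimal sketch of subspace-based matching of the kind described above (illustrative only, not the authors' exact scheme): a principal component basis is learned from enrollment templates via SVD, and a probe is scored against a gallery vector by cosine similarity in that subspace. All data, dimensions and names below are made up for illustration.

```python
import numpy as np

def pca_basis(templates, n_components):
    """Top principal directions of the enrollment templates (rows = samples)."""
    X = templates - templates.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)   # rows of Vt are principal directions
    return Vt[:n_components]

def cosine_score(basis, probe, gallery):
    """Cosine similarity between probe and gallery projected into the PCA subspace."""
    p, g = basis @ probe, basis @ gallery
    return float(p @ g / (np.linalg.norm(p) * np.linalg.norm(g) + 1e-12))

# Toy usage: random vectors stand in for fused face/fingerprint/finger-vein features
rng = np.random.default_rng(0)
enroll = rng.normal(size=(50, 128))
B = pca_basis(enroll, n_components=16)
print(cosine_score(B, enroll[0], enroll[0] + 0.05 * rng.normal(size=128)))
```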
ACN-01.4 16:05 Intrusion Detection Using Deep Neural Network with AntiRectifier Layer
Ritika Lohiya and Ankit Thakkar (Institute of Technology, Nirma University, India)
Data security is regarded as one of the crucial challenges in today's fast-growing internet world. Data generated over the internet is exposed to various types of vulnerabilities and exploits. Security mechanisms such as Intrusion Detection Systems (IDS) are designed to detect various types of vulnerabilities and attacks. Various Machine Learning (ML) and Deep Learning (DL) techniques are used for building IDS. In this paper, we aim to build a Deep Neural Network (DNN)-based IDS for attack detection and classification. The DNN technique has certain challenges such as complex network structure, co-adaptation of feature vectors, and over-fitting, to name a few. We aim to address these challenges by using an AntiRectifier layer and variants of dropout, namely standard dropout, Gaussian dropout, and Gaussian noise. We evaluate the DNN-based IDS using the NSL-KDD, UNSW_NB-15, and CIC-IDS-2017 datasets. The experimental results show that the DNN-based IDS with an AntiRectifier layer performs best in terms of accuracy, precision, recall, F-score, and false positive rate.
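The antirectifier idea referred to above can be sketched as a small custom Keras layer: instead of discarding negative activations as ReLU does, it mean-centres and L2-normalises the input, then returns both its positive and negative parts, doubling the feature dimension. The layer sizes and the surrounding network below are illustrative assumptions, not the authors' configuration.

```python
import tensorflow as tf

class AntiRectifier(tf.keras.layers.Layer):
    def call(self, inputs):
        x = inputs - tf.reduce_mean(inputs, axis=1, keepdims=True)   # centre each sample
        x = tf.math.l2_normalize(x, axis=1)                          # L2-normalise
        return tf.concat([tf.nn.relu(x), tf.nn.relu(-x)], axis=1)    # keep both signs

def build_ids_model(num_features, num_classes):
    # Illustrative DNN-based IDS; widths and depth are placeholders.
    return tf.keras.Sequential([
        tf.keras.Input(shape=(num_features,)),
        tf.keras.layers.Dense(256),
        AntiRectifier(),
        tf.keras.layers.Dense(128),
        AntiRectifier(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

model = build_ids_model(num_features=41, num_classes=5)   # e.g. NSL-KDD-style input
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```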
ACN-01.5 16:20 Distributed Denial of Service (DDoS) Attacks Detection: A Machine Learning Approach
Premson Singh Samom (North Eastern Regional Institute of Science and Technology); Amar Taggu (North Eastern Regional Institute of Science and Technology, India)
Providing a secure communication environment is one of the greatest problems on the Internet. Distributed Denial of Service (DDoS) attacks are rapidly becoming a challenge to the global Internet. DDoS limits network capacity, which leads to loss of bandwidth. The research discussed in this paper uses machine learning approaches on valid network traffic data to build a detection and classification model for four types of DDoS attacks. The most recent dataset containing current reflective DDoS attacks, CICDDoS-2019, is used, and the UNSW-NB15 dataset is used for comparison. The proposed model is evaluated using accuracy, precision, recall, F1 score, and prediction time. Experimental results demonstrate 99.92667% accuracy with 0.22 s prediction time on CICDDoS2019 and 96.2% accuracy with 0.12 s prediction time on UNSW-NB15 for the Random Forest classifier, which is higher than the other classification algorithms.
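A minimal sketch of this kind of evaluation, assuming a pre-extracted flow-feature CSV (the path and the "Label" column are placeholders, not the actual CICDDoS2019 schema): a Random Forest is trained and then scored with accuracy, precision, recall, F1 and prediction time.

```python
import time
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from sklearn.model_selection import train_test_split

df = pd.read_csv("ddos_flows.csv")                    # placeholder path
X, y = df.drop(columns=["Label"]), df["Label"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=42)

clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=42).fit(X_tr, y_tr)

start = time.time()
pred = clf.predict(X_te)
print("prediction time (s):", round(time.time() - start, 3))
print("accuracy:", accuracy_score(y_te, pred))
print("precision/recall/F1:", precision_recall_fscore_support(y_te, pred, average="weighted")[:3])
```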
ACN-01.6 16:35 An Asynchronous Leader Based Neighbor Discovery Protocol in Static Wireless Ad Hoc Networks
Jose Vicente Sorribes (Universitat Politecnica de Valencia, Spain); Lourdes Peñalver (Valencia Politecnic University, Spain); Jaime Lloret (Universitat Politecnica de Valencia, Spain)
Wireless ad hoc networks are characterized by the lack of a communications infrastructure after their deployment, and the nodes have limited-range radio transceivers to carry out communications. Therefore, neighbor discovery techniques are necessary so that the nodes get to know their one-hop neighbors, that is, the nodes within transmission range. In this article, we present a new randomized leader-based neighbor discovery protocol for static one-hop networks, which discovers all the neighbors with probability 1 and knows when to terminate the process, under more realistic assumptions. To evaluate the performance of the protocol, we rely on the Castalia 3.2 simulator, and we compare the proposal with the Hello protocol chosen from the literature and an existing deterministic leader-based protocol. We found that the proposal gives better results than the Hello protocol on four metrics (neighbor discovery time, number of discovered neighbors, energy consumption and throughput). In addition, our proposal gives reasonable results in comparison to the deterministic leader-based protocol regarding time, energy and throughput, and it can also be used in an asynchronous way.
ACN-01.7 16:50 EEG Based Emotion Recognition and Its Interface with Augmented Reality
Gali Amoolya, Ashna KK, Gadde Sai Venkata Swetha, Ganga Das, Jalada Geeta and Sudhish N George (National Institute of Technology, Calicut, India)
Automatic detection of emotions helps determine a person's mental state and makes them easier to understand. The paper proposes a reliable system for emotion detection using brain signals and for interfacing the data with the virtual world, which gives a secured view of the information. The most notable feature of this work is the creation of a readily accessible and easy-to-use augmented reality app; uploading the information to the app enhances portability and lets the user perceive the virtual information added to the real world. Happiness and sadness are detected in practice, and the DEAP dataset is used for other emotions. A depressed individual can get out of a sad mental state with the aid of this system. Anger, excitement, pleasantness, and sadness are classified with the aid of a Support Vector Machine, and a soothing video for each emotion is played in the virtual world.
ACN-01.8 17:05 Stability Certification of Dynamical Systems: Lyapunov Logic Learning Machine
Maurizio Mongelli and Vanessa Orani (National Research Council of Italy, Italy)
The stability of a dynamical system is associated with the concept of Region of Attraction (ROA), whose accurate estimation opens the door to multidisciplinary approaches involving control theory and machine learning. Lyapunov theory provides sufficient conditions for stability and can be applied to derive the ROA. However, finding appropriate Lyapunov functions for accurate ROA estimation is often a major issue. The resulting region may be overly tight or, in the case of multi-dimensional dynamical systems, difficult to understand because of the inherent mathematical complexity (e.g., polynomials of high degree). The use of explainable machine learning overcomes this issue by exploiting the model's intelligibility to describe the ROA in terms of states. In this perspective, explainable machine learning and Lyapunov stability theory are jointly studied to make the ROA intelligible and to simplify the optimization procedure for constructing positively invariant estimates of the ROA. Results on the Van der Pol oscillator show how this may lead to larger ROAs than via traditional methods.
ACN-01.9 17:20 Two Dimensional Angle of Arrival Estimation Using L-Shaped Array
Santhosh Thota (Sierra Wireless, Canada); Pradip Sircar (Indian Institute of Technology Kanpur, India)
We present a novel technique to estimate the two-dimensional angles of arrival (2D-AOAs), namely, azimuth and incidence (complementary to elevation) angles of multiple narrowband sources using an L-shaped array. The approach lies in forming a polynomial from the received data matrices. From the roots of the polynomial, the AOAs can be calculated. The proposed method uses the singular value decomposition (SVD) to reduce the effect of noise. We derive an expression for the Cramer-Rao bound (CRB) of the 2D-AOA estimation problem. A performance analysis of the proposed method through simulation shows that it performs better than an existing method. We show how the proposed method compares with the CRB.
ACN-01.10 17:35 Modelling Video Frames for Object Extraction Using Spatial Correlation
Vinayak Ray (AMD India Pvt. Ltd. & Indian Institute of Technology Kanpur, India); Pradip Sircar (Indian Institute of Technology Kanpur, India)
Object extraction forms a critical part of object-based video processing. However, most of the available techniques concentrate only on surveillance and tracking. Normal video sequences do not have a steady background, and hence these techniques cannot be applied to them. In our work, we propose an elegant method to model the background and foreground based on histogram data. We use the 2D continuous wavelet transform to spatially localize the object and create an object mask to approximate its silhouette. With the histograms available for object pixels and background pixels, we obtain a probability density function by normalizing the area under the histogram. To retain smoothness in the density function, we use curve-fitting techniques to approximate the probability density function.
ACN-01.11 17:50 Algebraic Modelling of a Generic Fog Scenario for Moving IoT Devices
Pedro Juan Roig (Miguel Hernández University, Spain); Salvador Alcaraz and Katja Gilly (Miguel Hernandez University, Spain); Carlos Juiz (Universitat de les Illes Balears, Spain)
Moving IoT devices may change their positions at any time, although they may always need to have their remote computing resources as close as possible due to their intrinsic restrictions. This condition makes Fog computing an ideal solution, where hosts may be distributed over the Fog domain, and additionally, the Cloud domain may be used as support. In this context, a generic scenario is presented and the most common actions regarding the management of virtual machines associated with such moving IoT devices are being modelled and verified by algebraic means, focusing on the message exchange among all concerned actors for each action.

Wednesday, October 14 15:20 - 18:30 (Asia/Calcutta)

CoCoNet-S0: Selected Papers

CoCoNet-S0.1 15:20 Automatic Detection of Parkinson Speech under Noisy Environment
Lalitha Sreeram (Amrita Vishwa Vidyapeetham & Amrita School of Engineering, India); Janani Jayashree, Sneha Ganesh and Sanjana Karanth (Amrita Vishwa Vidyapeetham, India)
This work primarily aims to automatically detect patients who are suffering from Parkinson's disease (PD), in comparison to healthy individuals, through voice samples under clean and various noisy environmental conditions. The dataset was subjected to colored noises, electronic noises, and natural noise. A feature vector comprising seven mean spectral features and two mean temporal features has been extracted. The performance of the PD detection model, configured with different classifiers, K-nearest neighbor (KNN), Extreme Gradient Boost, and Classification and Regression Trees (CART), has been analyzed under varying noisy environments. The proposed model for PD detection offers 97.01% accuracy on the noise-free dataset with the KNN classifier and also performs well in the presence of varying noises. All colored noise samples gave the best classification accuracy with the KNN classifier, and all electronic and natural noises gave the best accuracy with the Extreme Gradient Boost classifier.
CoCoNet-S0.2 15:35 Fake News Detection using Passive-Aggressive Classifier and other Machine Learning Algorithms
Nagashri K and Sangeetha J (M S Ramaiah Institute of Technology, India)
Fake news means false facts generated to deceive readers. The generation of fake news has become very easy, which can mislead people and cause panic. Therefore, fake news detection is gaining prominence as a research field. As a solution, this paper aims at finding the best possible algorithms to detect fake news. Term Frequency-Inverse Document Frequency (TF-IDF) and Count Vector techniques are used separately for text preprocessing. Six machine learning algorithms, namely Passive-Aggressive Classifier (PAC), Naive Bayes (NB), Random Forest (RF), Logistic Regression (LR), Support Vector Machine (SVM) and Stochastic Gradient Descent (SGD), are compared using evaluation metrics such as accuracy, precision, recall and F1 score. The results show that TF-IDF is the better text preprocessing technique, and the PAC and SVM algorithms show the best performance for the considered dataset.
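A minimal sketch of the TF-IDF plus Passive-Aggressive pipeline compared in the paper, assuming a CSV with "text" and "label" columns (placeholder names, not the actual dataset schema):

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("news.csv")                          # placeholder dataset
X_tr, X_te, y_tr, y_te = train_test_split(df["text"], df["label"],
                                          test_size=0.2, random_state=7)

vec = TfidfVectorizer(stop_words="english", max_df=0.7)   # TF-IDF text preprocessing
clf = PassiveAggressiveClassifier(max_iter=1000)
clf.fit(vec.fit_transform(X_tr), y_tr)

pred = clf.predict(vec.transform(X_te))
print("accuracy:", accuracy_score(y_te, pred),
      "F1:", f1_score(y_te, pred, average="weighted"))
```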
CoCoNet-S0.3 15:50 Modelling a Plain N-Hypercube Topology for migration in Fog Computing
Pedro Juan Roig (Miguel Hernández University, Spain); Salvador Alcaraz and Katja Gilly (Miguel Hernandez University, Spain); Carlos Juiz (Universitat de les Illes Balears, Spain)
Fog Computing deployments need consolidated Data Center infrastructures in order to achieve optimal performance in those special environments. One of the key points in attaining this may be the implementation of Data Center topologies with enhanced features for a relatively small number of users, yet ready to deal with occasional traffic peaks. In this paper, a plain N-Hypercube switching infrastructure is modelled in different ways, using arithmetic, logical and algebraic approaches, focusing on its capability to manage VM migrations among hosts within such a topology.
CoCoNet-S0.4 16:05 Providing Software Asset Management Compliance in Green Deployment Algorithm
Noëlle Baillon (Orange, France); Eddy Caron (ENS-Lyon, France); Arthur Chevalier (Univ Lyon, Inria, LIP & Orange S.A., France); Anne-Lucie Vion (Université Grenoble-Alpes & Orange, France)
Today, the use of software is generally regulated by licenses, whether free or paid and with or without access to the sources. The world of licenses is very vast and little known; often only the public view is known (a software purchase corresponds to a license). For enterprises, the reality is much more complex, especially with the major software publishers. Very few, if any, deployment algorithms take Software Asset Management (SAM) considerations into account when placing software on a Cloud architecture. This can have a huge financial impact on the company using this software. In this article, we present the SAM problem in more depth; then, after expressing the problem mathematically, we present GreenSAM, our multi-parametric heuristic handling performance and energy parameters as well as SAM considerations. We then show the use of this heuristic in two realistic situations, first with an Oracle Database deployment and second with a larger scenario of managing a small OpenStack platform deployment. In both cases, we compare GreenSAM with other heuristics to show how it handles the performance/energy criteria and SAM compliance.
CoCoNet-S0.5 16:20 Using AUDIT Scores to Identify Synbiotic Supplement Effect in High Risk Alcoholics
Vachrintr Sirisapsombat (Mae Fah Luang University, Thailand); Chaiyavat Chaiyasut (Chiang Mai University, Thailand); Phuttharaksa Phumcharoen and Parama Pratummas (Mae Fah Luang University, Thailand); Sasithorn Sirilun (Chiang Mai University, Thailand); Thamthiwat Nararatwanchai and Phakkharawat Sittiprapaporn (Mae Fah Luang University, Thailand)
Chronic alcohol drinking results in increased intestinal permeability, leading to translocation of gut-derived bacterial products. Elevated levels of these products in plasma can induce neuroinflammation, probably linked to alcohol's effects on brain function. Prior literature suggests that administration of synbiotics may provide intestinal microbial balance and improve gut health, and may show the capacity to ameliorate brain functions in chronic alcohol drinkers. Twenty-one male patients with an Alcohol Use Disorders Identification Test (AUDIT) score of 8 or above were administered a synbiotic preparation containing seven probiotic species and three prebiotics once a day before bedtime for eight weeks. Total AUDIT scores improved significantly (p = 0.001), and the data showed significant decreases in the scores for drinking frequency and blackout problems from alcohol drinking (p = 0.011 and 0.014, respectively). No other differences were observed between trials (p > 0.05). These findings suggest that synbiotic consumption could reduce alcohol consumption and addiction levels, and the synbiotic may help to prevent and treat alcoholic illnesses. Further investigation of the synbiotic supplement's effect on the gut-brain axis in lessening the degree of alcohol-induced neuroinflammation in high-risk alcoholics should be carried out.
CoCoNet-S0.6 16:35 Generative Adversarial Network based Language Identification for Closely Related Same Language Family
Ashish Kar and P G Sunitha Hiremath (KLE Technological University, India); Shankar Gangisetty (KLE Technological University, Hubballi, India)
The discrimination between similar languages is one of the main challenges in automatic language identification. In this paper, we address this problem by proposing a generative adversarial network based language identification method for identifying sentences from closely related languages of the same language family. The proposed method works on dual-reward feedback learning comprising a generator to generate nearly close language sentences, a discriminator for determining how similar the generated sentences are to the training data, and a classifier for optimal prediction of the correct label. We evaluate the proposed model for pairs of languages and on the overall testing data of the Indo-Aryan languages dataset [12]. The effectiveness of our method is demonstrated in comparison to other existing state-of-the-art methods.
CoCoNet-S0.7 16:50 Wearable PIFA for Off-Body Communication: Miniaturization Design and Human Exposure Assessment
Sandra Costanzo, Adil Masoud Qureshi and Vincenzo Cioffi (University of Calabria, Italy)
A miniaturized Planar Inverted-F Antenna (PIFA) design tailored for wearable devices is presented in this work. The proposed antenna operates in the ISM band (from 2.4 to 2.5 GHz) used by common wireless communication standards. A felt textile substrate is used to allow easy integration into everyday clothing. A side-fed coaxial cable is also adopted to give a low profile. To assess human exposure, SAR analysis is conducted on the designed antenna and simulated results are presented. The SAR level of the antenna is successfully limited to comply with international guidelines, by introducing significant modifications on the antenna parameters.
CoCoNet-S0.8 17:05 A Data Driven Approach for Peer Recommendation to Reduce Dropouts in MOOC
Manika Garg (University of Delhi, India); Anita Goel (Dyal Singh College, University of Delhi, India)
Massive open online courses (MOOCs) are an online mode of learning aimed at unlimited participation. A characteristic feature of MOOCs is the reduced availability of social interaction, which is often responsible for learners feeling isolated. Although MOOCs have functionalities like discussion forums, group assignments and peer grading to facilitate interaction, to use these functionalities the learner has to search extensively for the right person to interact with from a large pool of learners. The isolation among learners is one of the significant factors contributing to the high learner dropout rate, a major concern for MOOCs. In this paper, we present an approach to reduce the dropout rate of MOOCs by addressing the problem of isolation. A potential solution is to encourage peer learning by supporting learners in finding other learners to interact with. We propose a user-similarity based peer recommendation approach that makes use of learners' scores and their demographic attributes to provide recommendations on potential learning peers. To date, the main focus of traditional approaches for peer recommendation has been on providing recommendations to all learners, including those who were not feeling isolated, and without considering their actual cause of isolation. To overcome these limitations, we use adaptive interventions to first identify the isolated learners, and then recommend peer learners based on their cause of isolation. The proposed approach is evaluated on the basis of scalability and coverage. The publicly available MIT-Harvard dataset has been used for experimental purposes.
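A hedged sketch of the user-similarity step described above: learners are represented by score and encoded demographic attributes, and the most similar learners are recommended as peers. The feature columns below are illustrative assumptions, not the paper's actual attributes.

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# rows: learners; columns (illustrative): [quiz score, assignment score, age group, region code]
profiles = np.array([
    [0.90, 0.80, 2, 1],
    [0.40, 0.50, 2, 1],
    [0.85, 0.75, 1, 3],
    [0.30, 0.20, 2, 1],
])

def recommend_peers(profiles, learner_idx, k=2):
    """Return the indices of the k learners most similar to learner_idx."""
    sims = cosine_similarity(profiles[learner_idx:learner_idx + 1], profiles)[0]
    sims[learner_idx] = -np.inf            # exclude the learner themselves
    return np.argsort(sims)[::-1][:k]

print(recommend_peers(profiles, learner_idx=1))   # indices of the 2 closest peers
```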
CoCoNet-S0.9 17:20 Haze Removal Using Generative Adversarial Network
Amrita Sanjay and Jyothisha J Nair (Amrita Vishwa Vidyapeetham, India); Gopakumar G (Amrita Institute, India)
The problem of haze removal has been addressed in much computer vision research. Haze removal is the process of eliminating the degradation present in hazy images and recovering a clearer counterpart. The presence of haze distorts the image and, as a result, it is difficult to apply various image processing techniques to such images. The challenging aspect of haze removal arises from the lack of depth information in images degraded by haze. Earlier methods for haze removal include various hand-designed priors, use of the atmospheric scattering model, or estimation of the transmission map of the image. The limitation of these models is that they depend heavily on the assumption of a good prior. In recent years, various models have been proposed which effectively remove the degradation caused by haze using Convolutional Neural Network architectures. This paper reviews a model which performs haze removal on a single image using Generative Adversarial Networks (GAN). The main advantage of this method is that it does not require the transmission map of the image to be explicitly calculated. The model was evaluated using the NYU Depth dataset and the O-HAZE dataset, and was able to significantly enhance the quality of the images by generating the corresponding haze-free counterparts. The model was evaluated using the Peak Signal-to-Noise Ratio and the Structural Similarity Index.
CoCoNet-S0.10 17:35 CRAWL: Cloud based Real-time interconnections of Agricultural Water sources using LoRa
P Sree Harshitha (IIIT Sricity, India); Raja VaraPrasad (Indian Institute of Information Technology (IIIT) Sricity, India); Hrishikesh Venkataraman (Indian Institute of Information Technology (IIIT) & Center for Smart Cities, India)
Water wells are traditional sources of water for agricultural needs. Due to the ever-increasing demand from the exponentially growing population, there is a need to balance water demand and conserve water resources. The systems currently adopted are very rudimentary and cannot be scaled. Hence, a major challenge is to investigate equitable allocation of water from excess sources to deficit ones. In this regard, this work proposes an IoT (Internet of Things) based technique to effectively manage and utilize water resources by connecting wells, ponds, lakes, etc., with a smart network of pipelines. The interconnected wells are configured with a sensor-actuation mechanism and communication devices that sense water scarcity among the wells in a network and then redistribute water accordingly. A low-cost and low-power IoT technology is used for data acquisition from sensors to auto-control the actuators. Long-range wireless communication between water sources is achieved by deploying LoRa (Long Range) modules; a prototype is developed and a cloud-based app is deployed. The CRAWL (Cloud based Real-time interconnections of Agricultural Water sources using LoRa) system is scalable and hence capable of being developed into a rugged and robust system that can solve problems of floods, bursts of rain and water shortages not only at the panchayat/taluka level, but also scaled up to the district and state levels in the country.
CoCoNet-S0.11 17:50 Voice Conversion using Spectral Mapping and TD-PSOLA
Srinivasan K, Pooja Raju and Sai Madhav (PESIT Bangalore South Campus, India); Shikha Tripathi (PES University, India)
In this paper, we propose a novel approach for a voice conversion system that makes effective use of spectral characteristics and excitation information to optimally morph voice. This work addresses some key issues that are not adequately addressed in the reported literature and achieves a more holistic voice conversion system. This is achieved using a strategic combination of Line Spectral Frequencies (LSFs) to minimize the effects of over-smoothing, a neural network for performing non-linear spectral mapping, and Time Domain Pitch Synchronous Overlap Add to account for the interaction of the excitation signal with the vocal tract. Within the proposed system, two different methods of pitch modification are suggested, and their performance is compared with existing models of comparable complexity. The proposed methods have average LSF Performance Indices of 0.4082 and 0.4008 respectively, which is higher than existing similar work reported.

Wednesday, October 14 15:20 - 18:15 (Asia/Calcutta)

CoCoNet-S1: Main Track - Image and Signal Processing/Machine Learning/Pattern Recognition - Regular Papers

CoCoNet-S1.1 15:20 Iris Recognition using Integer Wavelet Transform and Log Energy Entropy
Jincy J Fernandez (VIT University, India); Nithyanandam Pandian (VIT University, Chennai Campus, India)
As technology reaches the next level day by day, concerns over information and data security are also growing. Biometric systems have been widely used in many real-world applications in order to provide more security to data. The iris recognition system has become a widely used system for human identification over the last few decades. In this paper, an efficient iris recognition system is proposed in which iris localization is carried out by first finding the pupil-iris boundary using a connected component analysis approach and then, taking the pupil center as the reference point, traversing a virtual outer boundary to detect the iris-sclera boundary. After normalization, the iris region is partitioned into non-overlapping blocks. Further, a combination of the Integer Wavelet Transform (IWT) and Log Energy Entropy (LEE) is applied to each block to extract a unique iris code as the feature vector. The experiments have been conducted using the multimodal biometric database SDUMLA-HMT. The proposed system achieves a low false acceptance rate and a very low false rejection rate. The uniqueness of the iris patterns is also evaluated in terms of degrees of freedom and is found to be promising.
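A rough sketch of the per-block feature described above: a wavelet decomposition of each normalized-iris block followed by the log energy entropy of the coefficients. PyWavelets' standard DWT is used here as a stand-in for the integer wavelet transform of the paper, and the block size is arbitrary.

```python
import numpy as np
import pywt

def log_energy_entropy(coeffs, eps=1e-12):
    """Log energy entropy of a coefficient array: sum of log(c^2)."""
    c = np.asarray(coeffs, dtype=float).ravel()
    return float(np.sum(np.log(c ** 2 + eps)))

def block_feature(block):
    """Per-subband log-energy entropy of one normalized-iris block."""
    cA, (cH, cV, cD) = pywt.dwt2(block, "haar")
    return [log_energy_entropy(s) for s in (cA, cH, cV, cD)]

block = np.random.rand(16, 16)          # stand-in for a normalized iris block
print(block_feature(block))
```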
CoCoNet-S1.2 15:35 Generalized symbolic dynamics approach for characterization of time series
Various non-linear methods have been developed to analyze the underlying dynamics of a non-linear time series. Dynamic characterization using the symbolic dynamics approach has been found to be a good alternative for the analysis of chaotic time series. In this method, the given time series is first transformed into a single-bit binary series. The single-bit encoding limits its ability to capture the dynamics faithfully. This paper provides a generalization of the symbolic dynamics method to better capture dynamical characteristics, such as Lyapunov exponents, of a time series. The effectiveness of the generalized method is demonstrated on a logistic map. The results of the analysis indicate that higher-order encoding can capture the bifurcation diagram more effectively than the original single-bit encoding used in symbolic dynamics.
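The generalization can be illustrated with a short sketch: a logistic-map series is encoded into 2^k symbols (k = 1 recovers the usual single-bit symbolic dynamics), here using quantile thresholds, which is one plausible choice rather than the paper's exact encoding.

```python
import numpy as np

def logistic_series(r, n=2000, x0=0.4, burn=500):
    """Iterate x -> r*x*(1-x), discarding an initial transient."""
    x, out = x0, []
    for i in range(n + burn):
        x = r * x * (1.0 - x)
        if i >= burn:
            out.append(x)
    return np.array(out)

def symbolize(series, bits=2):
    """Map each sample to one of 2**bits symbols using quantile thresholds."""
    edges = np.quantile(series, np.linspace(0, 1, 2 ** bits + 1)[1:-1])
    return np.digitize(series, edges)

for r in (3.2, 3.5, 3.9):
    s = symbolize(logistic_series(r), bits=2)
    counts = np.bincount(s, minlength=4) / len(s)
    print(f"r={r}: symbol distribution {np.round(counts, 3)}")
```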
CoCoNet-S1.3 15:50 Early Detection of Covid-19 on CT Scans Using Deep Learning Techniques
Limna Das P and Akondi Sai Manoj (National Institute of Technology Calicut, India); Jayaraj P B (National Institute Of Technology Calicut, India); Sachin Sharma (Institute of Advanced Research, Gandhinagar, India)
The novel coronavirus disease 2019 (COVID-19), which originated in China, spread all over the world and was declared a pandemic by the WHO. It has disrupted daily life and the world economy to a large extent. In the absence of any specific vaccine for the present COVID-19 pandemic, it is necessary to detect the disease at an early stage and isolate infected patients to prevent further spread. The current diagnosis of COVID-19 is done using the Polymerase Chain Reaction (PCR) test, but there are cases of false interpretation, and rapid antibody tests can also give faulty results. There has been a global shortage of testing labs and testing kits for COVID-19, so there is an urgent need for fast and reliable tools that can assist physicians in diagnosing COVID-19. A computer-based COVID detection tool would be very useful, as it can screen positive cases from a mass collection. Radiological imaging such as Computed Tomography (CT) scans can be used for early diagnosis, and learning algorithms can be applied for early detection of COVID-19. Since 2016, deep learning, a deep neural network based learning technique, has been widely applied to biomedical problems. In this paper, we propose a fast and accurate diagnostic tool using deep learning algorithms for detecting this disease. We have built two models for this purpose: one with an EfficientNet architecture and the other built using ResNet via Custom Vision AI of Microsoft Azure. The loss function used for EfficientNet is focal loss, and a Grad-CAM heatmap is used to test its reliability. Data was collected from different sources, and the highly scaled EfficientNet architecture outperformed the ResNet architecture of MS Azure for classifying the COVID CT scans by an increase in accuracy of 10%. We plan to deploy this software in the form of a chatbot. Our model also continuously learns from data and should attain better accuracy in future.
CoCoNet-S1.4 16:05 Towards Protein Tertiary Structure Prediction using LSTM/BLSTM
Jisna Antony, Akhil Penikalapati and J Vinod Kumar Reddy (National Institute of Technology, Calicut, India); Pournami P N (NIT Calicut, India); Jayaraj P B (National Institute Of Technology Calicut, India)
Determining the native structure of a protein, given its primary sequence, is one of the most demanding tasks in computational biology. Traditional protein structure prediction methods are laborious and involve a vast conformational search space. Deep learning, on the other hand, is a rapidly evolving field with outstanding performance on problems where there are complicated relationships between input features and desired outputs. Various deep neural network architectures, such as recurrent neural networks, convolutional neural networks, and deep feed-forward neural networks, are becoming popular for solving problems in protein science. This work concentrates on the prediction of the three-dimensional structure of proteins from their primary sequences using deep learning techniques. Long Short Term Memory (LSTM) and Bidirectional LSTM (BLSTM) neural network architectures are used for predicting protein tertiary structures from primary sequences. The results show that single-layer BLSTM networks fed with the primary sequence and position-specific scoring matrix data give better accuracy than LSTM and two-layer BLSTM models. This study may benefit computational biologists working in the area of protein structure prediction.
CoCoNet-S1.5 16:20 Semantic Retrieval of Microbiome Information Based on Deep Learning
Joshy Alphonse (Amrita School of Biotechnology, Amrita Vishwa Vidyapeetham, Amrita University, India); Anokha N Binosh, Sneha Raj, Sanjay Pal and Nidheesh Melethadathil (Amrita School of Biotechnology, Amrita Vishwa Vidyapeetham, India)
About 71% of the Earth's surface is water, of which only 2.5% is fresh water. It is therefore imperative to recycle or repurpose the existing volume. Scientists are constantly in search of new methods of wastewater treatment that are sustainable and easier to administer, as world population and industrial development keep increasing. Inside a microbiome, in our case a sewage microbiome, there may be a dominant or keystone species that interacts with the other microbes in the microbiome. If we can knock out this species, the other related microbes may also die, significantly reducing the bacterial population. To learn more about this microbiome and the interactions within it, researchers need to find the most relevant academic and scientific papers and articles. However, in the biomedical field alone there are more than 6 million papers in the PubMed database, and finding what is relevant is always a challenging task. Hence, in our study, we intend to validate an improved semantic information retrieval tool on a microbiome dataset based on deep learning methods using BioNLP and named entity recognition.
CoCoNet-S1.6 16:35 Computational Reconstructions of Extracellular Action Potentials and Local Field Potentials of a Rat Cerebellum using Point Neurons
Arathi Rajendran (Amrita School of Biotechnology, Amrita Vishwa Vidyapeetham, India); Naveen Kumar Sargurunathan, Varadha Sasi Menon, Sneha Variyath, Satram Dayamai Sai and Shyam Diwakar (Amrita Vishwa Vidyapeetham, India)
One of the main challenges in computational modeling of neurons is to reconstruct the realistic behavior of the neurons of the brain under different functional conditions. At the same time, simulation of large networks is time-consuming and requires huge computational power. The use of spiking neuron models could reduce the computational cost and time. In this study, extracellular potentials were reconstructed from a single point neuron model of the cerebellum granule neuron, and local field potentials (LFP) were modeled. Realistic reconstruction of cerebellum Crus II evoked post-synaptic local field potentials using simple models of granule neurons helps to explore emergent behavior attributing patterns of information flow in the granule layer of the cerebellum. The modeling suggests that the evoked extracellular action potential (EAP) arises from the transmembrane currents, correlating spiking activity and the conductive properties of the extracellular medium to the LFP. The computational study reproduces experimentally observed in vitro N2a and N2b evoked LFP waves and can be used to test the scaling of models developed from a bottom-up approach.
CoCoNet-S1.7 16:50 Deep Learning Based Approach for Skin Burn Detection with Multi-level Classification
Jagannatha Karthik (Visveshwaraiah Technological University, India); Gowrishankar S Nath (Dr Ambedkar Institute of Technology, India); Veena A (Visveswaraiah Technological University, India)
In recent years, the Convolutional Neural Network (CNN) model has become the state of the art for successful image analysis. In this work, we use CNN models for the classification of skin burns based on visual inspection. The aim of this paper is to develop a computerized mechanism for classifying burns based on severity and to compare the accuracies of various CNN algorithms for this task. Rapid development in deep learning enables automated learning of semantics and deep features, which addresses the problems of existing traditional image processing. The proposed method uses Deep Neural Network, Recurrent Neural Network and CNN models. The training is performed using a dataset of 104 images classified into degree 1, degree 2 and degree 3 depending on the severity of the burn. Experimental analysis is also provided to compare the accuracies of the different methods and identify the best model. The proposed computerized model can aid medical experts in diagnosing the wound and suggesting appropriate treatment depending on the severity of the skin burn, and could encourage telemedicine practice by helping to remotely diagnose patients, especially in rural areas where there may be a shortage of physicians.
CoCoNet-S1.8 17:05 Learning based Macro Nutrient Detection Through Plant Leaf
Amit Singh (KLE Technological University, India); Suneeta Budihal (Vishweshrayya Technological University, India)
The paper proposes a deep learning framework using two deep learning libraries, Keras and PyTorch, to analyse the three macronutrients present in plants, Nitrogen (N), Phosphorus (P) and Potassium (K), i.e., NPK, using a Convolutional Neural Network (CNN). Agriculture is the backbone of a country's economy, especially in developing nations. Demand for food increases with an increase in population. To meet the increasing need for food, farmers need to maximize productivity and balance the economy to reduce losses. Plants require various minerals and nutrients for healthy growth and fruit development, and plant nutrients should be in proper proportion to keep plants healthy and less susceptible to pests. Nutrient analysis can be done by invasive or non-invasive techniques, each with its own advantages and disadvantages. Invasive or traditional methods are time-consuming and costly, whereas non-invasive methods have proved their significance in recent years. The proposed methods are cost-effective and consume less time compared to conventional methods. The proposed framework provides an accuracy of 91% using Keras and 95% using PyTorch.
CoCoNet-S1.9 17:20 Applications of RSSI Preprocessing in multi-domain wireless networks: A Survey
Tapesh Sarsodia (Institute of Engineering & Technology, Devi Ahilya University, Indore, India); Uma Rathore Bhatt (Institute of Engineering & Technology, Devi Ahilya University, Indore, India); Raksha Upadhyay (IET DAVV Indore, India)
Today's age of communication demands technologies and techniques that support high data rate applications with the required quality of service. Advanced communication network architectures like the Internet of Things (IoT), 5th Generation (5G) and Long Term Evolution (LTE), with supporting high-end transmission and reception processes, have evolved to meet present requirements. It is also observed that incorporating Received Signal Strength Indicator (RSSI) / Channel State Information (CSI) based preprocessing techniques has a substantial impact on further enhancing network performance. Physical layer key generation in wireless networks, localization of nodes in wireless networks, signal identification, human activity recognition, etc., are a few such applications that use RSSI/CSI preprocessing to improve their performance in multi-domain wireless networks. Hence this paper describes the above-mentioned applications using different RSSI preprocessing techniques, which have not been comprehensively investigated in the literature so far. The purpose of this paper is to reveal the impact of RSSI preprocessing techniques on system performance enhancement as per the needs of the application. As an outcome, we identify the possibility of applying other preprocessing techniques to existing and upcoming applications in the future to achieve the desired system performance.
CoCoNet-S1.10 17:35 Characteristics of Karawitan Musicians' Brain: sLORETA Investigation
Indra Wardani, Djohan Djohan and Fortunata Tyasrinestu (Indonesian Institute of the Arts Yogyakarta, Indonesia); Phakkharawat Sittiprapaporn (Mae Fah Luang University, Thailand)
The fast development of music research prompts numerous interdisciplinary issues. In the neuroscience field, music is being examined in relation to its impact on cognitive processes and the psychological processes behind it. These investigations inspire music's integration into numerous subjects, for example, neuroscience and neuropsychology. A few previous studies showed distinctions between musicians and non-musicians regarding brain structure and brain activity. Instead of differentiating brain activity between musicians and non-musicians, the present study demonstrated different brain activity while musicians listened to music, depending on their musical experience. Applying electroencephalography (EEG) recording and source localization in an exploratory methodology with Karawitan musicians (N=20), the outcomes demonstrated higher brain activity when listening to familiar music, Gendhing Lancaran, Javanese traditional music. In addition, the dominant brain activity occurred in the temporal lobe while the Karawitan musicians listened to Gendhing Lancaran.

ISTA-01: Intelligent Image Processing/Artificial Vision/Speech Processing (Regular Papers)

ISTA-01.1 15:20 Decoding of Graphically Encoded Numerical Digits Using Deep Learning and Edge Detection Techniques
Pvsms Kartik and Konjeti B V N S Sumanth (Amrita School of Engineering, Coimbatore, India); Vnv Sri Ram (Amrita School of Engineering, India); Gurusamy Jeyakumar (Amrita School of Engineering, India)
The encoding of a message is the creation of the message; the decoding of a message is the manner in which people comprehend and decipher it, a procedure of understanding and interpreting coded data into a comprehensible form. In this paper, a self-created, explicitly defined function for encoding numerical digits into a graphical representation is proposed. The proposed system integrates deep learning methods to obtain the probabilities of digit occurrence, and edge detection techniques for decoding the graphically encoded numerical digits back to numerical digits as text. The system also employs relevant pre-processing techniques to convert RGB images to text and images to Canny edge images. Techniques such as multi-label classification of images and segmentation are used for estimating the probability of occurrence. A dataset of 1000 images was created by the authors, with training and testing data in the proportion 9:1. The proposed system was trained on 900 images, and testing was performed on 100 images ordered into 10 classes. The model achieved a precision of 89% for probability prediction.
ISTA-01.2 15:35 A Robust and High Capacity Data Hiding Method for JPEG Compressed Images with SVD-Based Block Selection and Advanced Error Correcting Techniques
Kusan Biswas (Jawaharlal Nehru University & National Institute of Public Finance and Policy, India)
In this paper, we propose a frequency domain data hiding method for JPEG compressed images. The proposed method embeds data in the DCT coefficients of selected 8x8 blocks. According to theories of the Human Visual System (HVS), human vision is less sensitive to perturbation of pixel values in the uneven areas of the image. We therefore propose a Singular Value Decomposition based image roughness measure (SVD-IRM), using which we select the coarse 8x8 blocks as data embedding destinations. Moreover, to make the embedded data more robust against re-compression attacks and errors due to transmission over noisy channels, we employ Turbo error correcting codes. The actual data embedding is done using a proposed variant of matrix encoding that is capable of embedding three bits by modifying only one bit in a block of seven carrier features. We have carried out experiments to validate the performance, and the proposed method achieves better payload capacity and visual quality and is more robust than some of the recent state-of-the-art methods proposed in the literature.
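The matrix-encoding step mentioned above can be illustrated with the textbook (1, 7, 3) Hamming-code construction, which embeds three message bits into seven carrier bits (e.g., LSBs of selected DCT coefficients) by flipping at most one of them; the paper's variant may differ in detail.

```python
def extract(carrier):
    """Syndrome of the 7 carrier bits = the 3 embedded message bits (as int 0..7)."""
    s = 0
    for i, bit in enumerate(carrier, start=1):
        if bit:
            s ^= i
    return s

def embed(carrier, message):
    """Embed 3 message bits (int 0..7) by flipping at most one of 7 carrier bits."""
    carrier = list(carrier)
    flip = extract(carrier) ^ message
    if flip:                        # flip == 0 means the message already matches
        carrier[flip - 1] ^= 1
    return carrier

cover = [1, 0, 1, 1, 0, 0, 1]       # e.g. LSBs of seven selected DCT coefficients
stego = embed(cover, message=0b101)
assert extract(stego) == 0b101
print(cover, "->", stego)
```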
ISTA-01.3 15:50 Multi-Modal Medical Image Fusion Using LMF-GAN - A Maximum Parameter Infusion Technique
Rekha R Nair (Amrita School of Engineering, Bengaluru, Amrita Vishwa Vidyapeetham, India); Tripty Singh (Amrita Vishwa Vidyapeetham & Amrita School of Engineering, India); Rashmi Sankar and Klement Gunndu (Amrita School of Engineering, Bengaluru, Amrita Vishwa Vidyapeetham, India)
The multi-sensor, multi-modal, composite design of medical images merged into a single image contributes to identifying features that are relevant to medical diagnoses and treatments. Although current image fusion technologies, including conventional and deep learning algorithms, can produce superior fused images, they require huge volumes of images of various modalities. This may not be viable in situations where time efficiency is expected or the equipment is inadequate. This paper presents a modified end-to-end Generative Adversarial Network (GAN), termed the Loss Minimized Fusion Generative Adversarial Network (LMF-GAN), a triple ConvNet deep learning architecture for the fusion of medical images with a limited sampling rate. In contrast to conventional convolutional networks, the encoding network is combined with a convolutional neural network layer and a dense block in the GAN. The loss is minimized by training the GAN's discriminator with all the source images, learning more parameters to generate more features in the fused image. The LMF-GAN can produce fused images with clear textures through adversarial training of the generator and discriminator. The proposed fusion method achieves state-of-the-art quality in objective and subjective evaluation in comparison with current fusion methods, and the model has been evaluated on standard datasets.
ISTA-01.4 16:05 StimulEye: An Intelligent Tool for Feature Extraction and Event Detection from Raw Eye Gaze Data
Amrutha Krishnamoorthy (Amrita School of Engineering, Bengaluru, Amrita Vishwa Vidyapeetham, India); Sindhura Vijayasimha Reddy (Amrita School of Engineering, Bangalore, India); Gowtham Devarakonda (Amrita School of Engineering, Bengaluru, Amrita Vishwa Vidyapeetham, India); Jyotsna C and Amudha J (Amrita Vishwa Vidyapeetham, India)
Extraction of eye gaze events is highly dependent on powerful automated software that is exorbitantly priced. The proposed open-source intelligent tool StimulEye helps to detect and classify eye gaze events and analyse various metrics related to these events. The algorithms for eye event detection in use today depend heavily on hand-crafted signal features and thresholding, which are computed from the stream of raw gaze data. These algorithms leave most of their parametric decisions to the end user, which can result in ambiguity and inaccuracy. StimulEye uses deep learning techniques to automate eye gaze event detection, requiring neither manual decision making nor parametric definitions. StimulEye provides an end-to-end solution which takes raw streams of data from an eye tracker in text form and analyses them to classify the inputs into events, namely saccades, fixations, and blinks. It provides the user with insights such as scanpath, fixation duration, radii, etc.
ISTA-01.5 16:20 Towards Voice Based Prediction and Analysis of Emotions in ASD Children
Poornima Sukumaran (Anna University, India); Kousalya Govardhanan (Anna University & Coimbatore Institute of Technology, India)
Emotion analysis is one of the most swiftly rising domains in research. Voice processing has proven to be a prominent way of recognizing people's emotions. The objective of this research is to identify the presence of Autism Spectrum Disorder (ASD) and to analyze the emotions of autistic children through their voice. This research presents an automated voice-based analysis to detect and classify the seven basic emotions expressed by children, based on source parameters associated with their voices. The proposed research extracts the prime features of voice and trains speaker characteristic models using the identified features. The system deploys Mel-frequency Cepstral Coefficients (MFCCs), which result from a form of cepstral representation of the children's audio clips, for training the cepstral features. The proposed system deploys the Multiple Linear Prediction (MLP) classifier to achieve better classification of the emotions, which enhances the behavioral analysis of autistic children and normal toddlers. Moreover, the system also identifies the next possible emotion exhibited by the children, thereby assessing their behavioral state. Hence, this work aids the analysis of emotions in autistic children, helping to impart appropriate training and treatment to improve their lifestyle.
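A minimal sketch of an MFCC-based pipeline of the kind described: mean MFCC vectors are extracted with librosa and fed to a classifier. The file names and labels are placeholders, and a scikit-learn multi-layer perceptron stands in here for the classifier used in the paper.

```python
import librosa
import numpy as np
from sklearn.neural_network import MLPClassifier

def mfcc_features(path, n_mfcc=13):
    """Mean MFCC vector of an audio clip."""
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# Placeholder file list: (wav path, emotion label)
clips = [("clip_happy_01.wav", "happy"), ("clip_sad_01.wav", "sad")]
X = np.array([mfcc_features(p) for p, _ in clips])
y = [label for _, label in clips]

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, y)
print(clf.predict(X))
```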
ISTA-01.6 16:35 A Comparative Analysis of Stroke Diagnosis from Retinal Images Using Hand-Crafted Features and CNN
Jeena R S (CET); Shiny G (College of Engineering Trivandrum, India); Sukesh Kumar A (CET); Mahadevan K (SGMC & RF)
Stroke is a major cause of disability and mortality in most developing nations. Early detection of stroke is highly significant in biomedical research. Research illustrates that signs of stroke are reflected in the eye and can be analyzed from fundus images. A custom dataset of fundus images has been compiled for formulating an automated stroke detection algorithm. A comparative study of hand-crafted texture features and a convolutional neural network (CNN) is presented in this work for stroke diagnosis. The custom CNN model has also been compared with five pre-trained models trained on ImageNet. Experimental results reveal that the recommended custom CNN model gives the best performance, with an accuracy of 95.8%.
ISTA-01.7 16:50 Portrait Photography Splicing Detection Using Ensemble of Convolutional Neural Networks
Remya Revi K, Wilscy M and Rahul Antony (SAINTGITS College of Engineering, India)
Forged portraits of people are widely used for creating deceitful propaganda about individuals or events on social media, and even for cooking up fake pieces of evidence in court proceedings. Hence, it is very important to establish the authenticity of images, and image forgery detection is now a significant research area. This work proposes an ensemble learning technique that combines the predictions of different Convolutional Neural Networks (CNNs) for detecting forged portrait photographs. In the proposed method, seven different pretrained CNN architectures, namely AlexNet, VGG-16, GoogLeNet, ResNet-18, ResNet-101, Inception-v3, and Inception-ResNet-v2, are utilized. As an initial step, we fine-tune the seven pretrained networks for portrait forgery detection with illuminant maps of images as input, and then use a majority voting ensemble scheme to combine predictions from the fine-tuned networks. Ensemble methods have been found to improve the generalization capability of classification models. Experimental analysis is conducted using two publicly available portrait splicing datasets (DSO-1 and DSI-1). The results show that the proposed method outperforms state-of-the-art methods that use traditional machine learning techniques as well as methods that use single CNN classification models.
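The majority-voting step can be sketched in a few lines: given per-image class predictions from the seven fine-tuned networks, the most common label wins. Model loading and illuminant-map preprocessing are omitted, and the prediction matrix below is made up.

```python
from collections import Counter
import numpy as np

def majority_vote(predictions):
    """predictions: array of shape (n_models, n_images) holding class labels."""
    votes = []
    for column in np.asarray(predictions).T:        # one column per image
        votes.append(Counter(column).most_common(1)[0][0])
    return np.array(votes)

# e.g. 7 networks, 4 test images, labels 0 = authentic, 1 = spliced
preds = [[0, 1, 1, 0],
         [0, 1, 0, 0],
         [1, 1, 1, 0],
         [0, 1, 1, 0],
         [0, 0, 1, 0],
         [0, 1, 1, 1],
         [0, 1, 1, 0]]
print(majority_vote(preds))   # -> [0 1 1 0]
```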
ISTA-01.8 17:05 Automated Segmentation of Key Structures of the Eye Using a Light-Weight Two-Step Classifier
Adish Rao (PES University, India); Aniruddha Mysore (PESIT Bangalore South Campus, India); Siddhanth Ajri and Abhishek Guragol (PES University, India); Poulami Sarkar (PESIT Bangalore South Campus, India); Gowri Srinivasa (PESIT Bangalore South Campus & Center for Pattern Recognition, India)
We present an automated approach to segment key structures of the eye, viz., the iris, pupil and sclera, in images obtained using an Augmented Reality (AR)/Virtual Reality (VR) application. This is done using a two-step classifier: in the first step, we use an encoder-decoder network to obtain a pixel-wise classification of the regions that comprise the iris, the sclera and the background (image pixels outside the region of the eye). In the second step, we perform a pixel-wise classification of the iris region to delineate the pupil. The images used are from the OpenEDS challenge, which evaluated not only the accuracy of the segmentation but also the computational cost involved. Our approach achieved a score of 0.93 on the leaderboard, outperforming the baseline model by achieving a higher accuracy while using a smaller number of parameters. These results demonstrate the promise pipelined models hold, along with the benefit of using domain-specific processing and feature engineering in conjunction with deep-learning based approaches for segmentation tasks.
ISTA-01.9 17:20 Probability of Loan Default - Applying Data Analytics to Financial Credit Risk Prediction
Marcin Paprzycki (IBSPAN & WSM, Poland); Maria Ganzha (Warsaw University of Technology & System Research Institute, Polish Academy of Sciences, Poland); Aleksandra Łuczak (Warsaw University of Technology, Poland)
In the banking industry, one of the important issues is how to establish the creditworthiness of potential clients. With the possibility of collecting digital records of the results of past credit applications (of all clients), it can be stipulated that machine learning techniques can be used in "credit decision support" systems. There exists a substantial body of literature devoted to this subject. Moreover, benchmark datasets have been proposed to establish the effectiveness of proposed credit risk assessment approaches. The aim of this work is to compare the performance of seven different classifiers applied to two different benchmark datasets. Moreover, the capabilities of recently introduced methods for combining the results from multiple classifiers into a meta-classifier are evaluated.
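A rough sketch of the comparison-plus-combination idea, assuming scikit-learn and a synthetic dataset as a stand-in for the benchmark credit data (the paper's own classifiers and combination method may differ):

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    base = [
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
    ]
    meta = VotingClassifier(estimators=base, voting="soft")    # simple meta-classifier
    for name, model in base + [("meta", meta)]:
        print(name, cross_val_score(model, X, y, cv=5).mean())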
ISTA-01.10 17:35 A Novel Deep Learning Approach for the Automated Diagnosis of Normal Pressure Hydrocephalus
Rudhra B (IIITM-K, India); Malu G (IIITMK, India); Sherly Elizabeth (IIITM-K, Technopark, Trivandrum, India); Robert Mathew (Anugraha Neurocare, India)
Normal Pressure Hydrocephalus (NPH), an atypical Parkinsonian syndrome, is a neurological disorder that mainly affects elderly people. It shows symptoms of Parkinson's disease (PD) such as walking problems, dementia, impaired bladder control, and mental impairment. Magnetic Resonance Imaging (MRI) is the aptest modality for detecting the abnormal build-up of cerebrospinal fluid in the brain's cavities or ventricles, which is the major cause of NPH. This work aims to develop an automated biomarker for NPH segmentation and classification (NPH-SC) that detects hydrocephalus efficiently using a deep learning-based approach. Removal of non-cerebral tissues (skull, scalp, and dura) and noise from brain images by skull stripping, unsharp-mask-based edge sharpening, segmentation by a marker-based watershed algorithm, and labeling are performed to improve the accuracy of the CNN-based classification system. The brain ventricles are extracted using the external and internal markers and then fed into a Convolutional Neural Network (CNN) for classification. The automated NPH-SC model achieved a sensitivity of 96%, a specificity of 100%, and a validation accuracy of 97%. The prediction system built on the CNN classifier was used to calculate the test accuracy of the system and obtained 98% accuracy, which is a promising result.

ISTA-02: ISTA-02: Intelligent Techniques for Networked Systems/Security/Recommender Systems (Regular Papers)

ISTA-02.1 15:20 A Psychologically-Inspired Fuzzy-Based Approach for User Personality Prediction in Rumor Propagation Across Social Networks
Indu V (Indian Institute of Information Technology and Management-Kerala, India); Sabu M Thampi (Indian Institute of Information Technology and Management - Kerala, India)
Social networks have emerged as a fertile ground for the spread of rumors and misinformation in recent times. The increased rate of social networking owes to the popularity of social networks among the general public, and user personality has been considered a principal component in predicting individuals' social media usage patterns. Several studies have examined the psychological factors influencing people's social network usage, but only a few works have explored the relationship between users' personality and their propensity to spread rumors. This research aims to investigate the effect of personality on rumor spread on social networks. In this work, we propose a psychologically-inspired fuzzy-based approach grounded on the Five-Factor Model of behavioral theory to analyze the behavior of people who are highly involved in rumor diffusion and to categorize users into susceptible and resistant groups based on their inclination towards rumor sharing. We conducted our experiments on approximately 825 individuals who shared rumor tweets on Twitter related to five different events. Our study confirms that the personality traits of individuals play a significant role in rumor dissemination, and the experimental results show that users exhibiting a high degree of the agreeableness trait are more engaged in rumor-sharing activities, while users high in the extraversion and openness traits restrain themselves from rumor propagation.
ISTA-02.2 15:35 Exploring Fake News Identification Using Word and Sentence Embeddings
Priyanga Vt (Amrita Vishwa Vidyapeetham, India); Sanjanasri Jp (Amrita Vishwa Vidyapeetham, India); Vijay Krishna Menon, Gopalakrishnan E A and Soman K P (Amrita Vishwa Vidyapeetham, India)
In recent years, the widespread use of the internet and social media networks like Facebook, Twitter and WhatsApp has changed the way news is created and published; accessing news has become easy and inexpensive. However, the internet and social media have also become a breeding ground for the circulation of fake news. Fake news is deliberately created either to increase readership or to disrupt the peace in society for political and commercial benefit. Given its prevalence on social media, it is necessary to identify and filter out fake news. Most existing methods for detecting fake news involve supervised learning, which is time-consuming and often inaccurate. In this paper, we select features carefully by analyzing standard datasets; we use the LIAR and ISOT datasets for data analysis. News items that are highly correlated are churned from the entire dataset using metrics such as cosine similarity and are also separated into different domains based on the nature of the news considered. We use autoencoders to detect fake news and centrality measures to differentiate between true and fake news.
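A minimal sketch of the cosine-similarity filtering mentioned above (illustrative only; the texts and the 0.6 threshold are assumptions):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    articles = [
        "government announces new vaccination drive",
        "new vaccination drive announced by the government",
        "local team wins the championship final",
    ]
    tfidf = TfidfVectorizer().fit_transform(articles)
    sim = cosine_similarity(tfidf)                    # pairwise similarity matrix
    correlated = [(i, j) for i in range(len(articles))
                  for j in range(i + 1, len(articles)) if sim[i, j] > 0.6]
    print(correlated)                                 # highly correlated article pairs, e.g. [(0, 1)]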
ISTA-02.3 15:50 Secure and Efficient WBANs Algorithm with Authentication Mechanism
Vinay Pathak (JNU, New Delhi, India); Karan Singh (JNU, New Delhi)
Nowadays, due to rapid growth in sensor and embedded technology, Wireless Body Area Networks (WBANs) play a vital role in monitoring the human body and its surrounding environment. They support many healthcare applications and are also very helpful in pandemic scenarios. In the field of healthcare, WBANs have become a highly innovative area that intrigues many researchers because of their huge future potential. Data collected by the different wireless sensors or nodes are very personal, critical and important because human life is involved. WBANs can minimize human-to-human contact, which helps to stop the spread of severe communicable diseases. The biggest concern, the maintenance of privacy and accuracy of data, remains a hot area of research, because the nature of attacks changes and grows day by day and better performance is always sought. A suitable security mechanism is the way to ensure all of the above, so it is expedient to investigate and propose a mechanism for data security. It is important to update the patient's data regularly, and WBANs help to deliver truthful reports related to the patient's health regularly and individually. This paper proposes a WSLE algorithm which shows better results than existing algorithms in previous works, and a mechanism which needs comparatively fewer resources. Only authentic entities are able to interact with the authentication server, which has become obligatory for both sides; this helps to keep data safe. A number of authentication schemes have been proposed and discussed by different researchers. In this paper, we propose a Secure and Efficient WBANs Algorithm with Authentication Mechanism (SEAM), a security framework which takes care of authentication as well as the security of the transmitted data.
ISTA-02.4 16:05 User-Centric Framework to Facilitate Trustworthy Cloud Service Provider Selection Based on Fuzzy Inference System
M Sujatha and K Geetha (Sastra Deemed to be University, India); Balakrishnan P (SCOPE, VIT University Vellore Campus, India)
The widespread adoption of cloud computing by companies of different sizes across diverse verticals has led to an exponential growth of Cloud Service Providers (CSPs). Multiple CSPs offer homogeneous services with a vast array of options and different pricing policies, making the selection of a suitable service complex. Our proposed model simplifies the IaaS selection process so that it can be used by all users, including clients from non-IT backgrounds. The framework allows novice users to express their requirements in natural-language format, consolidated through well-designed queries with options. These options are mapped to the relevant service types offered by multiple competing service providers. The framework explores all options exhaustively and lists them with the best possible ranking along with the cost incurred. Hence, an amateur client can select the best possible provider and an appealing cloud instance. The framework is validated by applying it to a multi-player online gaming application use case, where it outperformed existing online tools.
ISTA-02.5 16:20 Cost-Enabled QoS Aware Task Scheduling in the Cloud Management System
Rajkumar Rajavel (Galgotias University, India); Sathish Kumar Ravichandran (Christ University, India); Partheeban Nagappan (Galgotias University, India); Sivakumar Venu (Dayananda Sagar Academy of Technology and Management, India)
Maintaining Quality of Service (QoS) related parameters has been recognized as a burning issue in cloud management systems. The lack of such QoS guarantees discourages cloud users from engaging the services of cloud providers. Existing task scheduling algorithms take into account QoS parameters like latency, makespan, and load balancing to cater to user requirements, but these parameters are not enough to guarantee the desired user experience or that a task will be completed within a predetermined time. Therefore, the proposed research work introduces a Cost-enabled QoS Aware Task (Job) Scheduling algorithm to enhance user satisfaction and maximize the profit of cloud providers in the market. The proposed scheduling algorithm dynamically estimates the cost-enabled QoS metrics of the virtual resources available from the unified resource layer. Moreover, the virtual machine manager frequently updates the scheduler with current information about resources so that it can make appropriate decisions. Hence, the proposed work guarantees profit for cloud providers in addition to QoS parameters like makespan, cloud utilization and cloud utility, which is evident from comparison with existing time- and cost-based task scheduling algorithms.
ISTA-02.6 16:35 Task Aware Optimized Energy Cost and Carbon Emission Based Virtual Machine Placement in Sustainable Data Centers
T Renugadevi and K Geetha (Sastra Deemed to be University, India)
Management of IT services is rapidly adapting to the cloud computing environment due to optimized service delivery models. Geo-distributed cloud data centers act as the backbone providing the fundamental infrastructure for cloud service delivery. However, their rapidly growing energy consumption is the major problem to be addressed. Cloud providers are eager to identify solutions to tackle energy management and carbon emission. In this work, a multi-cloud environment is modeled as geographically distributed data centers with solar power generation, electricity price, carbon emission, and carbon tax varying with location. The energy management of the workload allocation algorithm is strongly dependent on the nature of the application considered; task deadline and brownout information are used to introduce variation in task types. A renewable energy-aware workload allocation algorithm adaptive to task nature is proposed, with a migration policy, to explore its impact on carbon emission, total energy cost, and brown and renewable power consumption.
ISTA-02.7 16:50 Analytical Study on Algorithms for Content Based Mobile Phone Recommendation System
Pvsms Kartik, B Abhilash and Durga Naga Sai Sravan Nekkanti (Amrita School of Engineering, Coimbatore, India); Gurusamy Jeyakumar (Amrita School of Engineering, India)
Recommendation systems help to filter out unwanted and irrelevant products that add no value to the task at hand. Mobile phone recommendation systems could be a probable revolution in the near future, as the telecommunication and mobile phone production industries are growing exponentially. Such systems extract descriptions of similar mobile phones that are most correlated with the mobile phone of interest to the user. The proposed system carries out this task on a dataset of mobile phones and their features extracted from the e-commerce website 'Flipkart'. The dataset comprises 6917 records in total, each corresponding to a unique mobile phone on the website. The proposed model finds the similarities and makes predictions based on the input features in the dataset.
ISTA-02.8 17:05 Real-Time Day Ahead Energy Management for Smart Home Using Machine Learning Algorithm
Nisha Vasudevan, Vasudevan Venkatraman and Ramkumar A (Kalasalingam Academy of Research and Education, India); Androse Joseph Sheela (Kongu Engineering College, India)
The smart grid is a sophisticated and intelligent electrical power transmission and distribution network that uses advanced information, communication and control technologies to improve the economy, effectiveness, efficiency and security of the grid. The accuracy of day-to-day power consumption forecasting models has an important impact on several decisions, such as fuel purchase scheduling, system security assessment, economic capacity generation scheduling and energy transaction planning. The techniques used for improving load forecasting accuracy differ in the mathematical formulation as well as the features used in each formulation. Power utilization of the housing sector is an essential component of overall electricity demand, so an accurate forecast of energy consumption in the housing sector is quite relevant. The recent adoption of smart meters makes it easier to access electricity readings at very fine resolutions; this source of data can therefore be used to build predictive models. In this study, the authors propose the Prophet Forecasting Model (PFM) for forecasting day-ahead power consumption using a real-time power consumption time series dataset of a single house connected to the smart grid near Paris, France. PFM is a special type of Generalized Additive Model. In this method, the time series power consumption dataset has three components: trend, seasonality and holidays. The trend component was modelled by a saturating growth model and a piecewise linear model, while multiple seasonal periods and holidays were modelled with Fourier series. Power consumption forecasting was performed with the Autoregressive Integrated Moving Average (ARIMA) model, a Long Short-Term Memory (LSTM) network and PFM. In the comparison, the RMSE, MSE, MAE and RMSLE values of PFM were 0.2395, 0.0574, 0.1848 and 0.2395 respectively. The results indicate that PFM outperforms the other two models in prediction, with LSTM in second position with less error.
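For orientation, a minimal Prophet sketch of the kind of day-ahead forecast described above; the CSV file and column names are hypothetical placeholders, not the paper's data.

    import pandas as pd
    from prophet import Prophet

    df = pd.read_csv("household_power.csv")                  # hypothetical smart-meter export
    df = df.rename(columns={"timestamp": "ds", "kwh": "y"})  # Prophet expects ds/y columns

    m = Prophet(daily_seasonality=True, weekly_seasonality=True)
    m.fit(df)
    future = m.make_future_dataframe(periods=24, freq="H")   # forecast the next 24 hours
    forecast = m.predict(future)
    print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())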
ISTA-02.9 17:20 New Results in Biclique Cryptanalysis of Full Round GIFT
Jithendra K B (College of Engineering Thalassery, India); Shahana T. Kassim (Cochin University of Science and Technology, India)
The security of GIFT, a recently proposed bitwise block cipher, is evaluated in this paper. To mount full-round attacks on the cipher, the biclique cryptanalysis method is applied; both variants of the block cipher are attacked using the independent-biclique approach. For recovering the secret keys of GIFT-64, the proposed attack requires 2^127.45 full GIFT-64 encryptions and 2^8 chosen plaintexts. For recovering the secret keys of GIFT-128, the proposed attack requires 2^127.82 full GIFT-128 encryptions and 2^18 chosen plaintexts. The attack complexity is compared with that of previously proposed attacks, and the security level of GIFT is also compared with that of its parent block cipher PRESENT, based on the analysis.
ISTA-02.10 17:35 BAT Algorithm Based Feature Selection: Application in Credit Scoring
Diwakar Tripathi (Madanapalle Institute of Technology Madanapalle, India); B. Ramachandra Reddy (SRM University-AP, India); Y c a Padmanabha Reddy (Madanapalle Institute of Technology and Science Madanapalle, A.P.); Alok Shukla (G L Bajaj Institute of Technology and Management, India); Ravi Kant Kumar and Neeraj Sharma (SRM University AP-Andhra Pradesh-522502, India)
Credit scoring is a vital step for financial institutions to estimate the risk associated with a credit applicant based on their credentials, and it directly affects the viability of those institutions. However, there may be a large number of irrelevant features in a credit scoring dataset. Due to irrelevant features, credit scoring models may suffer from poorer classification performance and higher complexity, so removing redundant and irrelevant features may overcome the problems caused by a large number of features. In this work, we emphasize the role of feature selection in enhancing the predictive performance of credit scoring models. For feature selection, the Binary BAT optimization technique is utilized with a novel fitness function. Further, the proposed approach is combined with the Radial Basis Function Neural Network (RBFN), Support Vector Machine (SVM) and Random Forest (RF) for classification. The proposed approach is validated on four benchmark credit scoring datasets obtained from the UCI repository. A comprehensive analysis of the experimental results is presented to compare the performance of the classification tasks with features selected by various approaches and with other state-of-the-art approaches for credit scoring.
ISTA-02.11 17:50 Legal Document Recommendation System: A Cluster Based Pairwise Similarity Computation
Jenish Dhanani (Sardar Vallabhbhai National Institute of Technology, Surat & Surat, India); Rupa Mehta (Sardar Vallabhbhai National Institute of Technology, Surat, India); Dipti Rana (Sardar Vallabhbhai National Institute of Technology, India)
Legal practitioners analyze relevant previous judgments to prepare favorable and advantageous arguments for an ongoing case. In the legal domain, recommender systems (RS) effectively identify and recommend referentially and/or semantically relevant judgments. Due to the availability of enormous numbers of judgments, an RS needs to compute pairwise similarity scores for all unique judgment pairs in advance, aiming to minimize the recommendation response time. This practice introduces a scalability issue, as the number of pairs to be computed increases quadratically with the number of judgments, i.e., O(n^2). However, only a limited number of pairs exhibit strong relevance among the judgments, so it is wasteful to compute similarities for pairs with only trivial relevance. To address the scalability issue, this research proposes a graph clustering based Legal Document Recommendation System (LDRS) that forms clusters of referentially similar judgments and, within those clusters, finds semantically relevant judgments. Pairwise similarity scores are then computed within each cluster, restricting the search space to the cluster instead of the entire corpus. Thus, the proposed LDRS greatly reduces the number of similarity computations, enabling large numbers of judgments to be handled. It exploits the highly scalable Louvain approach to cluster the judgment citation network, and Doc2Vec to capture the semantic relevance among judgments within a cluster. The efficacy and efficiency of the proposed LDRS are evaluated and analyzed using a large set of real-life judgments of the Supreme Court of India. The experimental results demonstrate encouraging performance in terms of accuracy, F1-score, MCC score, and computational complexity, which validates its applicability to scalable recommender systems.
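The two-stage idea (cluster the citation graph, then compute semantic similarity only within a cluster) can be sketched as follows; this is an illustrative toy, not the authors' pipeline, and the judgment IDs, citations and texts are made up.

    import networkx as nx
    from gensim.models.doc2vec import Doc2Vec, TaggedDocument

    citations = [("J1", "J2"), ("J2", "J3"), ("J4", "J5")]       # judgment citation edges
    G = nx.Graph(citations)
    clusters = nx.community.louvain_communities(G, seed=42)      # referentially similar groups

    texts = {"J1": "bail granted under section ...",
             "J2": "bail conditions revised ...",
             "J3": "bail application rejected ...",
             "J4": "land acquisition compensation ...",
             "J5": "compensation for acquired land ..."}
    docs = [TaggedDocument(t.split(), [tag]) for tag, t in texts.items()]
    model = Doc2Vec(docs, vector_size=50, min_count=1, epochs=40)

    cluster = next(c for c in clusters if "J1" in c)             # search only J1's cluster
    print([(tag, model.dv.similarity("J1", tag)) for tag in cluster if tag != "J1"])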
ISTA-02.12 18:05 Trust and Fuzzy Inference Based Cross Domain Serendipitous Item Recommendations (TFCDSRS)
Richa Singh (VIT, Chennai); Punam Bedi (University of Delhi, India)
A Recommender System (RS) is an information filtering approach that helps the information-overburdened user in the decision-making process by suggesting items that might interest them. While presenting recommendations to the user, the accuracy of the presented list has always been a concern for researchers. In recent years, however, the focus has shifted to including unexpected and novel items in the list along with accurate recommendations. To increase user acceptance, it is important to provide potentially interesting items that are not obvious and that differ from the items the end user has already rated. In this work, we propose a model that generates serendipitous item recommendations while also addressing accuracy and sparsity issues. The literature suggests that various components help to achieve the objective of serendipitous recommendation. In this paper, a fuzzy inference based approach is used for the serendipity computation because the definitions of these components overlap. Moreover, to address the accuracy and sparsity issues in the recommendation process, cross-domain and trust-based approaches are incorporated. A prototype of the system is developed for the tourism domain, and its performance is measured using mean absolute error (MAE), root mean square error (RMSE), unexpectedness, precision, recall and F-measure.
ISTA-02.13 18:20 A Metaphor-Less Based AI Technique for Optimal Deployment of DG and DSTATCOM Considering Reconfiguration in the RDS for Techno-Economic Benefits
Nagaballi Srinivas (VNIT Nagpur, India)
The advent of distributed energy resources is undoubtedly transforming the nature of the electric power system. The crisis of conventional energy sources and their environmental effects has resulted in the integration of Distributed Generators (DGs) into the distribution system. The simultaneous application of optimal network reconfiguration and placement of DG and Distribution Static Compensator (DSTATCOM) units in Radial Distribution Systems (RDS) brings a raft of technical, economic, and environmental benefits, including improved power quality, reliability and stability, mitigation of power losses, and voltage profile improvement. In this paper, the combined problem of optimally deploying DG and DSTATCOM units in the RDS with suitable network reconfiguration is analyzed. A recent metaphor-less Artificial Intelligence (AI) technique, the Rao-1 method, is employed to solve this combinatorial nonlinear optimization problem. The objectives are to mitigate power losses and to enhance the voltage profile and voltage stability index of the RDS while considering the net economic cost-benefit to the distribution utility. The simulation study of this pragmatic approach is carried out on the IEEE 33-bus RDS, and the results obtained by the Rao-1 method are compared with other existing meta-heuristic optimization methods to show its efficacy.

SIRS-01: SIRS-01: Selected Papers (Sixth International Symposium on Signal Processing and Intelligent Recognition Systems -SIRS'20)

SIRS-01.1 15:20 A Leak Detection in Water Pipelines Using Discrete Wavelet Decomposition and Artificial Neural Network
Prawit Chumchu (Kasetsart University Siracha Campus, Thailand)
This paper proposes an ANN (Artificial Neural Network) model to find the location of leakage in water-distribution systems. To detect water leakage, a tethered robot with an acoustic sensor is used, and the leakage signal is captured acoustically. The captured acoustic signal is pre-processed using the DWT (Discrete Wavelet Transform), and the pre-processed signal is then modelled by the ANN to detect the leak. The performance of the proposed algorithm was evaluated on a test bed which consists of a robot, a ground station and a simulated water distribution system. A robot with a hydrophone is used to capture acoustic signals in the water pipes, and the ground station is designed to control the robot and report the status of leakage. In addition, the proposed framework discovers the location of the leakage. The water-distribution system was implemented to simulate the leakage. The experimental results show that the proposed model is effective for detecting water leakage in the system, with the evaluation indicating an accuracy of 100%.
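A simplified sketch of the DWT front end described above, with synthetic signals standing in for hydrophone recordings (the wavelet, decomposition level and classifier are assumptions):

    import numpy as np
    import pywt
    from sklearn.neural_network import MLPClassifier

    def dwt_energy_features(frame, wavelet="db4", level=4):
        coeffs = pywt.wavedec(frame, wavelet, level=level)     # approximation + detail sub-bands
        return np.array([np.sum(c ** 2) for c in coeffs])      # energy per sub-band

    rng = np.random.default_rng(0)
    leak = [dwt_energy_features(rng.normal(0, 2.0, 1024)) for _ in range(50)]
    quiet = [dwt_energy_features(rng.normal(0, 0.5, 1024)) for _ in range(50)]
    X = np.vstack(leak + quiet)
    y = [1] * 50 + [0] * 50                                     # 1 = leak, 0 = no leak
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000).fit(X, y)
    print(clf.score(X, y))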
SIRS-01.2 15:35 Running to Get Recognised
Gerard Carty (University of Limerick & Avaya, Ireland); Muhammad Raja and Conor Ryan (University of Limerick, Ireland)
This research investigates the use of Convolutional Neural Networks (CNNs), and specifically You Only Look Once ver. 4 (YOLOv4), to detect Racing Bib Numbers (RBNs) in images from running races and then to recognise the actual numbers using Optical Character Recognition (OCR) techniques. Pre-processing and Tesseract OCR were employed to achieve this. Using a self-acquired private dataset, we achieve a recall of 0.91, precision of 0.88, an F1-measure of 0.89, and mean average precision (mAP) of 0.935 for detection. Full number recognition of 71% is then achieved on the successfully detected RBNs. Additionally, the proposed approach attains a very low average inference time of 23.5 ms compared to a previous best recorded time of 750 ms. This is achieved with a relatively small training set of 1374 images, where previous research used 498,385 labelled images.
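The OCR stage can be sketched as follows (illustrative only; the image path and the detection box are hypothetical, and the YOLOv4 detector itself is not shown):

    import cv2
    import pytesseract

    img = cv2.imread("runner.jpg")
    x, y, w, h = 120, 340, 180, 90                     # hypothetical RBN box from the detector
    crop = cv2.cvtColor(img[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    _, crop = cv2.threshold(crop, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    digits = pytesseract.image_to_string(
        crop, config="--psm 7 -c tessedit_char_whitelist=0123456789")
    print(digits.strip())                              # recognised bib number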
SIRS-01.3 15:50 COVID-19 Detection from Radiograph Images of the Lungs Using Convolutional Neural Network
Hritam Basak and Rohit Kundu (Jadavpur University, India)
The global pandemic of the novel coronavirus that started in Wuhan, China, has affected more than 29 million people worldwide and caused more than 924 thousand tragic deaths. To date, the COVID-19 virus is still spreading and affecting thousands. The main problem in detecting COVID-19 is the scarcity of test kits for performing real-time Polymerase Chain Reaction (rt-PCR) tests on the large population of affected or suspected individuals. This leads to the need for an automatic detection system based on Artificial Intelligence (AI). Deep learning is more powerful than classical machine learning and is hence an important AI tool. This paper proposes a Convolutional Neural Network (CNN) based deep learning model to detect COVID-19-positive patients from chest radiology images. According to previous studies, COVID-19-positive patients show distinct features in their lung X-rays, so this can be a reliable detection method, since performing an X-ray test is easier than rt-PCR on suspected patients. Our model has been trained with 820 chest radiographic images (excluding data augmentation) collected from three databases and achieves a classification accuracy of 99.61%, with a sensitivity of 99.21% and a specificity of 99.29%, proving the model to be a valid COVID-19 detector.
SIRS-01.4 16:05 Comprehending the Dynamics of EEG Generated Under Various Odorant Stimulation on the Brain
Suma Sri Sravya Chandu (Amrita Vishwa Vidyapeetham, India); Sunitha R (Amrita Vishwa Vidyapeetham, India)
The brain, the most powerful organ of the human body, manages the central nervous system and interprets and processes the information received from the five sense organs. Electroencephalography (EEG) is an electrophysiological method to monitor and record the electrical activity of the brain, especially from the scalp. Olfaction is a chemoreception that forms an impression of smell through the sensory olfactory system. This paper aims at analyzing the behavior of the human olfactory system towards natural and artificial flavors through a sequence of preprocessing steps and estimation of the power spectral density (PSD) by Welch's method. Power plot analysis was performed to understand the activation of various frequency bands under the influence of odorants. Multiscale entropy and fractal dimension approaches were applied to the preprocessed data to study and compare the nonlinear dynamics of the EEG stimulated by various odorants.
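As a minimal sketch of the Welch PSD step (not the authors' code), a synthetic signal stands in for a preprocessed EEG channel and a 250 Hz sampling rate is assumed:

    import numpy as np
    from scipy.signal import welch

    fs = 250.0                                          # assumed sampling rate
    t = np.arange(0, 10, 1 / fs)
    eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # 10 Hz alpha-like component

    f, psd = welch(eeg, fs=fs, nperseg=512)             # Welch power spectral density estimate
    alpha = (f >= 8) & (f <= 13)
    print("alpha-band power:", np.trapz(psd[alpha], f[alpha]))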
SIRS-01.5 16:20 Detection of Breast Cancer from Mammogram Images Using Deep Transfer Learning
Akalpita Das (GIMT Guwahati, India); Himanish Shekhar Das (Cotton University, India); Utpal Barman (GIMT Guwahati, India); Arijeet Choudhury, Sourav Mazumdar and Anupal Neog (Jorhat Engineering College, India)
Among all types of cancer found in women, breast cancer has the second highest mortality rate and is considered a prime cause of death. Mammographic images are often investigated by experienced and trained radiologists to recognize breast abnormalities like masses and micro-calcifications. This paper focuses on computer-aided diagnosis to help radiologists detect breast cancer with better accuracy. Our aim is to process mammogram images of breast cancer patients using deep learning architectures, specifically variants of convolutional neural networks. Two approaches are considered: in the first, mammogram images are fed into various CNN architectures trained from scratch; in the second, exactly the same architectures are used, except that they are fine-tuned using transfer learning. Experiments were performed on two different datasets, and for both datasets the results of the fine-tuned networks outperform all other approaches.
SIRS-01.6 16:35 Explainable NLP: A Novel Methodology to Generate Human-Interpretable Explanation for Semantic Text Similarity
Tanusree De (Accenture, India); Debapriya Mukherjee (Accenture India, India)
Text similarity has significant application in many real-world problems. Text similarity estimation using NLP techniques can be leveraged to automate a variety of tasks that are relevant in business and social contexts. The outcomes given by AI-powered automated systems provide guidance for humans making decisions. However, since an AI-powered system is a "black box", for a human to trust its outcome and take the right decision or action based on it, there needs to be an interface between the human and the machine that can explain the reason for the outcome; that interface is what we call "Explainable AI". In this paper, we make a twofold attempt: first, to build a state-of-the-art text similarity scoring system that matches two texts based on semantic similarity, and second, to build an explanation generation methodology that produces human-interpretable explanations for the text similarity match score.
SIRS-01.7 16:50 Dataset Building for Handwritten Kannada Vowels Using Unsupervised and Supervised Learning Methods
Chandravva Hebbi, Omkar Metri and Manjunath Bhadrannavar (PES University, India); Mamatha H R (P E S University & Visvesvaraya Technological University, India)
In the era of automation, recognition of handwritten Kannada characters is an inevitable task, as it has widespread applications in the digitization of documents in government offices, the public sector, and other domains like banking and post offices. Hence, the need for Optical Character Recognizers (OCR) for Indian languages like Kannada is vital. This paper presents the recognition and labeling of offline handwritten Kannada vowels using feature extraction techniques such as Local Binary Pattern (LBP), Run Length Count (RLC), Chain Code (CC), and Histogram of Oriented Gradients (HOG), feeding the pure and hybrid features to supervised and unsupervised machine learning algorithms. A comparative study of supervised learning algorithms is presented on data collected from 500 people. A further objective of this work is to label the unlabeled data automatically, without manual labor. This has been achieved by first feeding the features to an unsupervised learning algorithm, the K-Means clustering algorithm. The correctly classified and misclassified vowels then become the train and test sets, respectively, for the supervised learning algorithms, and a combined recognition rate is presented.
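A toy sketch of the label-by-clustering idea (HOG features and random images stand in for the actual vowel dataset; the cluster count and parameters are assumptions):

    import numpy as np
    from skimage.feature import hog
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)
    images = rng.random((100, 32, 32))                  # stand-ins for unlabelled vowel images
    X = np.array([hog(im, pixels_per_cell=(8, 8), cells_per_block=(2, 2)) for im in images])

    kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
    print(np.bincount(kmeans.labels_))                  # provisional cluster sizes used for labelling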
SIRS-01.8 17:05 Performance Improvements in Quantization Aware Training and Appreciation of Low Precision Computation in Deep Learning
Uday Kulkarni, Meena M, Praison Joshua, K Kruthika, Gavin Platel, Kunal Jadhav and Ayush Singh (KLE Technological University, India)
The world today is exploding with enormous amounts of multimedia data every second, and technologies are being developed to understand it and make use of it in a profound way. Deep learning is the best performing branch of artificial intelligence and is widely used to solve complex problems. As the demand for deep learning grows across domains, and moves from the cloud to edge devices, optimizing its implementation is a highly targeted area of research, with compression of deep learning models being one such direction. In this paper we propose performance improvements through Quantization Aware Training (QAT) of Deep Neural Networks (DNNs). Among the compression techniques, this paper proposes quantization aware training in an 8-bit low-precision setting. Further, we introduce our implementation of fake quantization during the training and inference of a deep neural network in the 8-bit setting and its performance improvements over contemporary quantization techniques. We demonstrate reduced quantization loss, inference time and memory footprint for LeNet on the MNIST and CIFAR-10 datasets and for the MobileNet architecture on the ImageNet dataset. With this method we obtain an inference-time improvement of up to 11%, an accuracy improvement of up to 44.75%, and a memory footprint reduction of up to 0.5% compared to the post-training quantization (calibration) technique. This paper also attempts to give a roadmap for designing better deep learning systems, by reducing memory footprint and latency while deploying DNNs on resource-constrained (edge) devices using QAT.
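A generic sketch of symmetric 8-bit fake quantization (quantize, then dequantize) of the kind used during quantization-aware training; this is a textbook formulation, not the authors' implementation:

    import numpy as np

    def fake_quantize(x, num_bits=8):
        qmax = 2 ** (num_bits - 1) - 1                  # 127 for int8
        scale = float(np.max(np.abs(x))) / qmax         # per-tensor scale (assumed symmetric)
        if scale == 0.0:
            scale = 1.0
        q = np.clip(np.round(x / scale), -qmax - 1, qmax)   # simulate integer rounding
        return q * scale                                # dequantize back to float

    w = np.random.randn(4, 4).astype(np.float32)
    w_q = fake_quantize(w)
    print("max quantization error:", np.abs(w - w_q).max())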
SIRS-01.9 17:20 GRAD-CAM-Based Classification of Chest X-Ray Images of Pneumonia Patients
Pranav Kumar Seerala and Sridhar Krishnan (Ryerson University, Canada)
Publicly available datasets of chest X-ray images have become accessible over the last few years, and medical image segmentation and classification have been prominent areas of research since their release. In this work, an attempt has been made to design a neural network with a limited number of parameters, with the goal of classifying chest X-rays of pneumonia patients versus healthy patients. The intended deployment targets are edge devices such as cellular phones, Raspberry Pis, and other computing devices that could be used in developing countries that may lack the hardware to deploy and update the model. The dataset used consists primarily of pediatric patients, and this paper demonstrates the use of image segmentation, image de-noising and training data selection to train on the images with the most meaningful information rather than the entire dataset. The segmentation model coupled with image post-processing has demonstrated robustness in classifying chest X-rays from external datasets, and could be used as a standalone tool for other image analysis projects. The results of the hyper-parameter tuned classification model show a dramatic improvement in overall test-set accuracy when compared to other Kaggle kernels that have worked on the same data. Class activation maps (Grad-CAM) from the final convolution layer depict the regions of the image that contributed most towards determining the class.
SIRS-01.10 17:35 Improved LSTM-UNet for Image Segmentation
Heide Oller (FHNW School of Life Sciences, Switzerland); Rolf Dornberger and Thomas Hanne (University of Applied Sciences Northwestern Switzerland, Switzerland)
Automating image segmentation with convolutional neural networks (CNNs) is a major goal in the field of microscopy imaging. Different networks have been proposed, with the U-Net currently the most popular in the field of medical image segmentation. This paper investigates different optimizers for a published long short-term memory (LSTM)-UNet to find the most efficient optimizer for this network when applied to a dataset of simulated nuclei of HL30 cells stained with Hoechst.
SIRS-01.11 17:50 Learning 3DMM Deformation Coefficients for Action Unit Detection
Stefano Berretti (University of Florence, Italy)
Facial Action Units (AUs) correspond to the deformation or contraction of individual facial muscles or combinations of them. As such, each AU affects just a small portion of the face, with deformations that are asymmetric in many cases. Analysing AUs in 3D is particularly relevant for the potential applications it can enable. In this paper, we propose a solution for AU detection that builds on a newly defined 3D Morphable Model (3DMM) of the face. Unlike most 3DMMs in the literature, which mainly model global variations of the face and show limitations in adapting to local and asymmetric deformations, the proposed solution is specifically devised to cope with such difficult morphings. During a learning phase, deformation coefficients are learned that enable the 3DMM to deform to 3D target scans showing the neutral and expressive faces of the same individual, thus decoupling expression from identity deformations. These deformation coefficients are then used to train an AU classifier. Experiments demonstrate that the deformation coefficients can be learned from a small set of training data using an SVM, resulting in effective AU detection.

SoMMA-01: SoMMA-01: Symposium on Machine Learning and Metaheuristics Algorithms, and Applications - SoMMA'20 (Regular Papers)

SoMMA-01.1 15:20 Deep Neural Networks with Multi-Class SVM for Recognition of Cross-Spectral Iris Images
Mulagala Sandhya, Ujas Rudani and Dilip Kumar Vallabhadas (National Institute of Technology Warangal, India); Mulagala Dileep (Vishnu Institute of Technology, India); Sriramulu Bojjagani (SRM University-AP, India); Sravya Pallantla (Gayatri Vidya Parishad College of Engineering, India); Pdss Lakshmi Kumari (SRKR Engineering College, India)
Iris recognition technologies are applied to produce comprehensive and correct biometric identification of people in numerous large-scale human databases. Additionally, the iris is stable over time, i.e., iris biometric data offers reliable links between biometric characteristics and people. E-business and e-governance require more machine-driven iris recognition, which relies on millions of iris images acquired under near-infrared illumination and used to establish people's identity. A variety of surveillance and e-business applications will also include iris images acquired under visible illumination. Self-learned iris features produced by a convolutional neural network (CNN) give higher accuracy than hand-crafted-feature iris recognition. In this paper, a modified iris recognition system is introduced using deep learning techniques along with a multi-class SVM for matching. We use the PolyU database, which contains images from 209 subjects. The CNN with softmax cross-entropy loss gives the most accurate matching of testing images, and the method gives better results in terms of EER. We also analysed the proposed architecture on other publicly available databases through various experiments.
SoMMA-01.2 15:35 Big Data: Does BIG Matter for Your Business?
Wei Zhou (ESCP Europe, France); Selwyn Piramuthu (University of Florida, USA)
With the emergence of big data and business analytics, many new data-driven concepts and business models have been introduced in recent years. Does "big" really matter? We attempt to explain when and why big does not matter in many business cases. For those occasions where "big" does matter, we outline a data strategy framework that differentiates the degree of big data requirements. In conclusion, we offer practical advice on the strategic use of big data for best-practice management and sound decision-making.
SoMMA-01.3 15:50 Concept Drift Detection in Phishing Using Autoencoders
Aditya Gopal Menon (Amrita Vishwa Vidhyapeetham, India); Gilad Gressel (Georgia Institute of Technology, USA)
When machine learning models are built with non-stationary data, their performance naturally decreases over time due to concept drift, i.e., shifts in the underlying distribution of the data. A common solution is to retrain the machine learning model, which can be expensive both in obtaining new labeled data and in compute time. Traditionally, many approaches to concept drift detection operate on streaming data. However, drift is also prevalent in semi-stationary data such as web data, social media, and any dataset generated from human behavior. Changing web technology causes concept drift in the website data used by phishing detection models. In this work, we create "Autoencoder Drift Detection" (ADD), an unsupervised drift detection mechanism suitable for semi-stationary data, which uses the reconstruction error of an autoencoder as a proxy for concept drift. We use ADD to detect drift in a phishing detection dataset which contains drift, as it was collected over one year. We also show that ADD is competitive (within +/-24%) with popular streaming drift detection algorithms on benchmark drift datasets. The average accuracy on the phishing dataset is 0.473 without drift detection and increases to 0.648 with ADD.
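The reconstruction-error idea can be sketched as follows; a small scikit-learn MLP trained to reproduce its input stands in for the autoencoder, and the data and the 95th-percentile threshold are assumptions:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X_ref = rng.normal(0.0, 1.0, (500, 20))             # features from the reference period
    X_new = rng.normal(0.8, 1.0, (200, 20))             # later period with a shifted distribution

    ae = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000).fit(X_ref, X_ref)

    def recon_error(model, X):
        return np.mean((model.predict(X) - X) ** 2, axis=1)

    threshold = np.percentile(recon_error(ae, X_ref), 95)
    drift_rate = np.mean(recon_error(ae, X_new) > threshold)
    print("fraction of new samples above threshold:", drift_rate)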
SoMMA-01.4 16:05 Gaze Fusion-Deep Neural Network Model for Glaucoma Detection
Sajitha Krishnan (Amrita Vishwa Vidyapeetham & Amrita School of Engineering, India); Amudha J (Amrita Vishwa Vidyapeetham, India); Sushma Tejwani (Narayana Nethralaya-2, India)
The proposed system, the Gaze Fusion - Deep Neural Network Model (GFDM), utilizes a transfer learning approach to discriminate subjects' eye tracking data, in the form of fusion maps, into two classes: glaucoma and normal. We fed the fusion maps of different participants to a Deep Neural Network (DNN) model pretrained with ImageNet weights. The experimental results of GFDM show that fusion maps, although dissimilar to the pretrained model's dataset, can give a better understanding of glaucoma. The model also shows the parts of the screen that participants have difficulty viewing. GFDM has been compared with traditional machine learning models such as the Support Vector Classifier, Decision Tree classifier and an ensemble classifier, and the proposed model outperforms the other classifiers, with an Area Under the ROC Curve (AUC) score of 0.75. The average sensitivity for correctly identifying glaucoma patients is 100%, with a specificity of 83%.
SoMMA-01.5 16:20 Stock Price Prediction Using Machine Learning and LSTM-Based Deep Learning Models
Sidra Mehtab (Praxis Business School, Kolkata, India); Jaydip Sen (Praxis Business School, India); Abhishek Dutta (Praxis Business School, Kolkata, India)
Prediction of stock prices has been an important area of research for a long time. While supporters of the efficient market hypothesis believe that it is impossible to predict stock prices accurately, there are formal propositions demonstrating that accurate modeling and the design of appropriate variables may lead to models with which stock prices and stock price movement patterns can be predicted very accurately. Researchers have also worked on technical analysis of stocks with the goal of identifying patterns in stock price movements using advanced data mining techniques. In this work, we propose a hybrid modeling approach for stock price prediction, building different machine learning and deep learning-based models. For the purpose of our study, we have used NIFTY 50 index values of the National Stock Exchange (NSE) of India during the period December 29, 2014 to July 31, 2020. We built eight regression models using training data consisting of NIFTY 50 index records from December 29, 2014 to December 28, 2018. Using these regression models, we predicted the open values of NIFTY 50 for the period December 31, 2018 to July 31, 2020. We then augment the predictive power of our forecasting framework by building four deep learning-based regression models using long short-term memory (LSTM) networks with a novel approach of walk-forward validation. Using grid search, the hyperparameters of the LSTM models are optimized to ensure that validation losses stabilize with an increasing number of epochs and that the validation accuracy converges. We exploit the power of LSTM regression models in forecasting future NIFTY 50 open values using four different models that differ in their architecture and in the structure of their input data. Extensive results are presented on various metrics for all the regression models. The results clearly indicate that the LSTM-based univariate model that uses one week of prior data to predict the next week's open value of the NIFTY 50 time series is the most accurate model.
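The walk-forward scheme itself can be sketched independently of the LSTM models: the model is repeatedly refit on all data seen so far and evaluated on the next week-sized block. A linear regressor and a synthetic series are used here purely for illustration; they are not the paper's setup.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    series = np.cumsum(np.random.randn(300)) + 100      # stand-in for daily open values
    window, horizon = 5, 5                              # one week of lags, one week ahead

    n = len(series) - window - horizon
    X = np.array([series[i:i + window] for i in range(n)])
    y = np.array([series[i + window + horizon - 1] for i in range(n)])

    errors = []
    for split in range(100, len(X), horizon):           # walk forward one block at a time
        model = LinearRegression().fit(X[:split], y[:split])
        pred = model.predict(X[split:split + horizon])
        errors.append(np.mean(np.abs(pred - y[split:split + horizon])))
    print("mean walk-forward MAE:", np.mean(errors))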
SoMMA-01.6 16:35 An Improved Salp Swarm Algorithm Based on Adaptive β-Hill Climbing for Stock Market Prediction
Abhishek Kumar, Rishesh Garg, Arnab Anand and Ram Sarkar (Jadavpur University, India)
Stock market prediction is a tool to maximize an investor's returns and minimize the associated risk. This paper proposes a new machine learning based model to predict stock market prices. The proposed model is an improved version of the existing Salp Swarm Optimizer (SSO) integrated with the Least Squares Support Vector Machine (LSSVM). The improved version is a hybrid meta-heuristic that combines SSO with the Adaptive β-Hill Climbing (Aβ-HC) algorithm. The proposed model selects the best hyperparameters for the LSSVM to avoid overfitting and local minima, which in turn increases the model's accuracy. It is evaluated on four standard, publicly available stock market datasets, and its results are compared with several popular meta-heuristic algorithms. The results show that our proposed model performs better than existing models in most cases. The source code for our proposed algorithm is available at https://github.com/singhaviseq/SSA-ABHC.
SoMMA-01.7 16:50 Deep Learning Based Stable and Unstable Candle Flame Detection
Amir Khan, Mr. (Aligarh Muslim University & ZHCET, India); Mohammad Samar Ansari (Athlone Institute of Technology, Ireland & Aligarh Muslim University, India)
This paper presents a deep learning based solution for identifying normal and abnormal (controlled and uncontrolled) candle flames; candle flames can be affected by external factors like wind and improper combustion of fuel. The proposed CNN-based deep neural network can successfully classify stable and unstable candle flames, with an accuracy of 67% on the generated test set and an accuracy of 83% on random images taken from open sources on the internet.
SoMMA-01.8 17:05 Modelling Energy Consumption of Domestic Households via Supervised and Unsupervised Learning: A Case Study
Shahid Mehraj Shah (National Institute of Technology, Srinagar, J&K, India)
Electricity billing systems are prevalent in most parts of the world, and digitization of electricity bills has been successfully implemented in various underdeveloped countries as well. As a result, a vast amount of data is available regarding the energy consumption of consumers. In this paper we consider a case study of one city for which we have electricity consumption data for several years. We first classify consumers based on their average energy usage via clustering algorithms. We also have survey data for several houses, containing building information, family information and appliance information. We use various regression techniques to disaggregate the energy usage corresponding to the various appliances.

SSCC-01: SSCC-01: Selected Papers (Eighth International Symposium on Security in Computing and Communications -SSCC'20)

SSCC-01.1 15:20 A Fast Authentication Scheme for Cross-Network-Slicing Based on Multiple Operators in 5G Environments
Jheng-Jia Huang (National Taiwan University of Science and Technology, Taiwan); Chun-I Fan, Yu-Chen Hsu and Arijit Karati (National Sun Yat-sen University, Taiwan)
5G environments apply the functionalities of Network Function Virtualization and Software-Defined Networking to support multiple services and introduce a new concept called Network Slicing. Using that concept, 5G telecommunication operators can achieve the goal of supporting users with a variety of different services and can also create slices with certain unique characteristics, for example eMBB slices and URLLC slices. However, the traditional authentication mechanism does not provide any concrete strategy for network slicing handover in 5G, so the computation must be carried out by the core network. Hence, we propose a network slicing handover authentication scheme that not only satisfies the standards defined by the 3rd Generation Partnership Project but also achieves low latency by delegating the computation overhead to edge clouds. In addition, we incorporate the concepts of proxy re-signatures and certificateless signatures in our scheme. As a result, when users need to use network slicing services across telecommunications operators, they can still meet the requirement of reduced latency in the authentication flow.
SSCC-01.2 15:35 Analysis of Orthogonal Time Frequency Space Transceiver in Baseband Environment
Vangara Saiprudhvi and Ramanathan R (Amrita Vishwa Vidyapeetham, India)
In this letter, we investigate Orthogonal Time Frequency Space (OTFS) modulation, a newly proposed modulation scheme for emerging wireless communication applications in time-frequency selective channels, from symbol detection perspective. The studies identify the advantages of OTFS performance over OFDM in many aspects, such as data rate increase in high mobility. Another advantage is the sparsity of the channel produced by OTFS that allows using low-complexity algorithms for the detection of the data. We provide the analysis on the baseband OTFS system and then analyze the Message Passing algorithm for OTFS symbol detection. We analyze the effects of damping factor and channel taps on the performance of the system. Simulation results show the error performance of the OTFS system under various channel conditions.
SSCC-01.3 15:50 Anomaly Detection in CAN-BUS Using Pattern Matching Algorithm
Ilia Odeski (BGU University, Israel); Michael Segal (Ben-Gurion University of the Negev, Israel)
With recent advances in the automotive industry, advanced systems have been integrated into in-vehicle communication. However, with the shift in perception towards data sharing instead of standalone systems, susceptibility to systemic vulnerabilities increases. Automotive intra-vehicle communication is based on the CAN (Controller Area Network) protocol. Many studies have analyzed the protocol's vulnerability to various types of cyber-attacks and its implications for vehicle systems, with emphasis on safety systems, and have found that the communication system is not immune to attacks that provide access to crucial functions of the vehicle. This paper explores the design and implementation of an intrusion detection method for intra-vehicle communication, which aims to identify malicious CAN messages. Based on the historical traffic rate, the algorithm uses KMP-based approximate string matching. Through theoretical analysis and experiments carried out on a real CAN dataset with different attack scenarios, we obtained very high performance for high- and medium-intensity attacks. To the best of our knowledge, this work is the first study that applies KMP approximate pattern matching to intrusion detection for in-vehicle network security.
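For reference, a plain exact-matching KMP core over a sequence of CAN message IDs (the ID stream and the "known-good" pattern are hypothetical; the paper builds an approximate variant on top of this kind of matching):

    def kmp_search(pattern, text):
        # failure function: longest proper prefix of pattern[:i+1] that is also a suffix
        fail = [0] * len(pattern)
        k = 0
        for i in range(1, len(pattern)):
            while k and pattern[i] != pattern[k]:
                k = fail[k - 1]
            if pattern[i] == pattern[k]:
                k += 1
            fail[i] = k
        matches, k = [], 0
        for i, ch in enumerate(text):
            while k and ch != pattern[k]:
                k = fail[k - 1]
            if ch == pattern[k]:
                k += 1
            if k == len(pattern):
                matches.append(i - k + 1)
                k = fail[k - 1]
        return matches

    normal_pattern = ["0x1A0", "0x2B4", "0x3C1"]         # expected periodic ID sequence
    traffic = ["0x1A0", "0x2B4", "0x3C1", "0x7FF", "0x1A0", "0x2B4", "0x3C1"]
    print(kmp_search(normal_pattern, traffic))           # [0, 4]; the gap flags an unexpected ID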
SSCC-01.4 16:05 New Security Architecture of Access Control in 5G MEC
Zbigniew Kotulski, Wojciech Niewolski, Tomasz Nowak and Mariusz Sepczuk (Warsaw University of Technology, Poland)
The currently developed 5G networks using MEC technology (5G MEC) allow for the harmonious cooperation of many areas of the economy (the so-called vertical industries) within an integrated information network. Providing the necessary security in such a complex configuration of business partners requires the design of a consistent and effective security architecture. In this paper, we present a new concept of an access control architecture for the 5G MEC network, in line with the 5G network model and MEC architecture proposed by international standardization organizations. We give an overview of the high-level security architecture of 5G MEC networks, which provides security solutions for the network's components and establishes secure access for all cooperating entities. Next, we introduce the MEC Enabler, a new network module which manages the security credentials required to access the resources of MEC-hosted services. We consider a series of use cases with increasing demands on network data resources and computing power. Finally, we present a sample protocol diagram for gaining access to resources (authentication in a service using MEC technology) in our access control architecture.
SSCC-01.5 16:20 Conjunctive Keyword Forward Secure Ranked Dynamic Searchable Encryption over Outsourced Encrypted Data
MD Asrar Ahmed (University College of Engineering & Osmania University, India); S Ramachandram (Osmania University, India); Khaleel Ur Rahman Khan (ACE Engineering College, Rangareddy)
Cloud computing provides individuals and organizations with extensive computing capabilities along with scalable storage services. However, the security of outsourced data raises privacy concerns when confidential data is outsourced to third-party service providers. Searchable Symmetric Encryption (SSE) is a solution that enables data owners to securely outsource data and later retrieve the matching documents based on encrypted queries. The majority of existing SSE schemes lack support for relevance-based retrieval of documents matching the queried keywords, expressive queries, and privacy of query terms and documents. In this paper we present a novel dynamic SSE scheme which retrieves documents ranked by their relevance to the queries and satisfies the forward and backward privacy definitions given by Bost et al. Our scheme is lightweight, as it allows the server to compute document-query relevance in a secure manner without affecting the entire outsourced index, especially during update operations, which is normally the case in the existing state of the art. We employ an indexing mechanism that is an alternative to the most frequently used inverted index structure and overcomes the drawbacks of such an index. Experimental analysis of our scheme on the RFC dataset demonstrated sub-linear search time and efficient update operations. The proposed scheme also ensures forward and backward security during search and update operations.
SSCC-01.6 16:35 Thermal Management in Large Data Centres: Security Threats and Mitigation
Betty Saridou (Democritus University of Thrace, Greece); Gueltoum Bendiab (Beckley Point - Student Accommodation, United Kingdom (Great Britain) & Freres Mentouri, Constantine, Algeria); Stavros Shiaeles (University of Portsmouth, United Kingdom (Great Britain)); Basil Papadopoulos (Democritus University of Thrace, Greece)
Data centres are experiencing significant growth in their scale, especially with the ever-increasing demand for cloud and IoT services. However, this rapid growth has raised numerous security issues and vulnerabilities; new types of strategic cyber-attacks are aimed at specific physical components of data centres that keep them operating. Attacks against the temperature monitoring and cooling systems of data centres, also known as thermal attacks, can cause a complete meltdown and are generally considered difficult to address. In this paper, we focus on this issue by analysing the potential security threats to these systems and their impact on overall data centre safety and performance. We also present current thermal anomaly detection methods and their limitations. Finally, we propose a hybrid method to prevent thermal attacks that uses multivariate anomaly detection and a fuzzy-based health factor to enhance data centre thermal awareness and security.
SSCC-01.7 16:50 A Communication-Induced Checkpointing Algorithm for Consistent-Transaction in Distributed Database Systems
Houssem Mansouri (University of Setif1, Algeria); Al-Sakib Khan Pathan (Independent University, Bangladesh)
For better protection of distributed systems, two well-known techniques are checkpointing and rollback recovery. While failure protection is often considered a separate issue, it is crucial for building more secure distributed systems. This article proposes a new checkpointing algorithm for saving a consistent-transaction state in distributed databases, ensuring that database management systems are able, after a failure, to recover the state of the database. The proposed communication-induced algorithm does not hamper normal transaction processing and saves a global consistent-transaction state that records only fully completed transactions. Analysis and experimental results of our proposal show that the proposed scheme saves a minimum number of forced checkpoints and has some performance gains compared to alternative approaches.
SSCC-01.8 17:05 Deep Hierarchical APP Recommendation with Dynamic Behaviors
Taiguo Qu, Wenjun Jiang and Dong Liu (Hunan University, China); Guojun Wang (Guangzhou University, China)
Many app recommendation models have been proposed to provide mobile users with the apps that meet their individual needs. However, three main drawbacks limit their performance: (1) they neglect the dynamic change of user preferences over short time spans; (2) they singly use either machine learning or deep learning, and so cannot learn discrete features and continuous features equally well; (3) they directly deal with all apps without considering their hierarchical features. To overcome these drawbacks, this paper proposes a Dynamic behavior-based age Hierarchy Model (DHM for short). Specifically, we integrate Boosting Trees and Neural Networks, combine static data as the basis and dynamic behaviors as refinement, and update dynamic behaviors in time to improve the accuracy of personalized app recommendation. Then, this paper proposes a User Hierarchy based personalized App recommendation Model (UHAM for short), which exploits a user attribute layering method to make hierarchical recommendations for users in different age groups, further enhancing efficiency. We conduct extensive experiments with a real app dataset, and the results validate the effectiveness of our model.
SSCC-01.9 17:20 A Survey of Security Attacks on Silicon Based Weak PUF Architectures
Chintala Yehoshuva, R. Raja Adhithan and N. Nalla Anandakumar (Society for Electronic Transactions and Security (SETS), India)
Physically Unclonable Functions (PUFs) are popular hardware-based security primitives that can derive chip signatures from the inherent characteristics of ICs. Due to their assumed security and cost advantages, one important category of PUFs, the so-called weak PUFs, is used in numerous security applications such as device ID generation, IP protection and secure key storage. Nevertheless, a number of recent works have reported several attacks on weak PUF architectures. This paper presents a brief survey of existing attacks on silicon-based weak PUF architectures with a detailed comparison and associated countermeasures.
SSCC-01.10 17:35 On the Feasibility of DoS Attack on Smart Door Lock IoT Network
Belal Asad (Bournemouth University, United Kingdom (Great Britain)); Neetesh Saxena (Cardiff University, United Kingdom (Great Britain))
The Internet of Things (IoT) is one of the most extensive technological evolutions of the computing network. This technology can transform the physical world into a virtual world for testing and emulation to evaluate the key issues present in physical devices. This work aims to explore the security of IoT devices and demonstrates the security gaps in the behavior of a smart door lock. In this paper, we conducted two surveys to gather consumers' requirements for IoT devices and to determine whether they understand the security risks involved with these devices. Further, we carried out a denial-of-service attack on a smart lock device to demonstrate that such devices are not secure. This work also highlights the security weaknesses and suggests guidelines to improve the overall system using cloud and edge computing and authentication and access control-based solutions.
SSCC-01.11 17:50 Evading Static and Dynamic Android Malware Detection Mechanisms
Teenu S John (IIITM-K, India); Tony Thomas (Indian Institute of Information Technology and Management - Kerala, India)
With the widespread usage of Android mobile devices, malware developers are increasingly targeting Android applications for carrying out their malicious activities. Despite employing powerful malware detection mechanisms, an adversary can evade the threat detection model by launching intelligent malware with fine-grained feature perturbations. Since machine learning is widely adopted in malware detection owing to its automatic threat prediction and detection capabilities, attackers are nowadays targeting the vulnerability of machine learning models for malicious activities. In this work, we demonstrate how an adversary can evade various machine learning based static and dynamic Android malware detection mechanisms. To the best of our knowledge, this is the first work that discusses adversarial evasion in both static and dynamic machine learning based malware detection mechanisms.
SSCC-01.12 18:05 Pandora: A Cyber Range Environment for the Safe Testing and Deployment of Autonomous Cyber Attack Tools
Hetong Jiang (University of Queensland, Australia); Taejun Choi and Ryan Ko (The University of Queensland, Australia)
Cybersecurity tools are increasingly automated with artificial intelligence (AI) capabilities to match the exponential scale of attacks, compensate for the relatively slower rate of training new cybersecurity talent, and improve the accuracy and performance of both tools and users. However, the safe and appropriate usage of autonomous cyber attack tools - especially at the development stage of these tools - is still largely an unaddressed gap. Our survey of current literature and tools showed that most existing cyber range designs rely on manual tools and have not considered augmenting automated tools or the potential security issues caused by such tools. In other words, there is still room for a novel cyber range design which allows security researchers to safely deploy autonomous tools and perform automated tool testing if needed. In this paper, we introduce Pandora, a safe testing environment which allows security researchers and cyber range users to perform experiments on automated cyber attack tools that may have strong potential for usage and, at the same time, strong potential for risk. Unlike existing testbeds and cyber ranges which have direct compatibility with enterprise computer systems and the potential for risk propagation across the enterprise network, our test system is intentionally designed to be incompatible with real-world enterprise computing systems to reduce the risk of attack propagation into actual infrastructure. Our design also provides a tool to convert in-development automated cyber attack tools into executable test binaries for validation and usage in realistic enterprise system environments if required. Our experiments tested automated attack tools on the proposed system to validate the usability of the proposed environment. Our experiments also demonstrated the safety of our environment through compatibility testing using simple malicious code.

Wednesday, October 14 15:30 - 18:00 (Asia/Calcutta)

Lightning TalksDetails

October 14, 2020, 3.30 PM - 6.00 PM

Cyber Security for the Smart Electricity Grid - Trends and Directions (Zubair Baig , Deakin University, Australia)

Why Trust Breaks Down (Arthur Carmazzi, DCI (Asia), India)

IoT Wearables - Building Made Easier (Shriram K Vasudevan, Amrita School of Engineering, India)

Modelling of Asynchronous Waveforms (Abhay Shriramwar, Cyient Ltd, India)

MicroServices Pattern: Leveraging to its advantage (Monika Arora, BITS Pilani, India)

Turning to Online Education: A Massive Technology Transfer Challenge (Victoria E. Erosa, International Graduate Center, City University of Applied Sciences, Bremen, Germany)

HRIDAI: A Tale of Two Categories of ECGs (Priya Ranjan, SRM AP University, India)

Automating Data Science (Sumana Maradithaya, M S Ramaiah Institute of Technology, India)

Key management in future satellite networks for military communication - a survey and challenges (T R Usha Kumari, CAIR DRDO, India)

Combination of Educational Technology & Artificial Intelligence Used For Enhancement of Teaching-Learning Process & Improvement in Students Performance (Sudhanshu Suhas Gonge, Vishwakarma Institute of Technology, Pune, India)

AR/VR - The Game Changer in Education and Real Estate (Shriram K Vasudevan, Amrita School of Engineering, India)

FaaS Orchestration in Production Serverless Platform (Urmil Bharti, SRCASW, University of Delhi, India)

Migration Strategies for Cloud Applications (Deepali Bajaj, University of Delhi, India)

Opportunities and Challenges of 3D Integration - A Security Perspective (Jaya Dofe, California State University, Fullerton, USA)

Thursday, October 15

Thursday, October 15 9:30 - 10:30 (Asia/Calcutta)

Keynote: Connected Autonomous VehiclesDetails

Speaker: Dr. Mohammed Atiquzzaman, Edith J. Kinney Gaylord Presidential Professor, University of Oklahoma, USA

Abstract: Modern vehicles are equipped with many sensors for measuring vehicle operating conditions and the surroundings, including weather conditions, and can be viewed as a web of sensors on wheels. They can sense a range of information about the vehicle, such as location, speed, braking intensity, road traction, etc., some of which can represent road weather conditions. Many crashes happen due to the driver being unaware of the surrounding road weather conditions, such as icy patches and frozen pavement. By enabling vehicles within an area to exchange information among themselves in real time, drivers can be instantly alerted about road hazards and possibly avoid potential crashes. The talk will discuss ways to increase the safety of drivers and thus reduce crashes resulting from adverse road weather conditions. This was achieved by disseminating, in real time, the information collected by a vehicle to its surrounding vehicles using state-of-the-art wireless communications between vehicles. The information was also communicated to roadside infrastructure to increase driver safety; for example, the duration of the traffic signals at a junction can be changed dynamically in response to current road weather conditions transmitted by vehicles in the surrounding area.

Thursday, October 15 10:30 - 11:20 (Asia/Calcutta)

Keynote: Signal Analysis for Remote Health Monitoring

Speaker: Prof. Sri Krishnan, Ryerson University, Toronto, Ontario, Canada

Abstract: In this talk, the contextual topic of remote health monitoring using wearable devices, information and communication technology, signal processing and machine learning will be covered. Home-based remote monitoring of vital health signals will not only benefit the current pandemic situation but also long term healthcare needs such as telemedicine, digital health and rehabilitation. The talk will also cover the current research done in the area of connected healthcare and wearable computing in the Signal Analysis Research Group at Ryerson University, Canada.

Thursday, October 15 11:20 - 12:10 (Asia/Calcutta)

Keynote: 3D Face Modeling

Speaker: Dr. Stefano Berretti, University of Florence, Italy

Abstract: Though 3D Morphable Models (3DMM) of the face date back to the late '90s, in the past few years they have been rediscovered in the context of deep learning and are now incorporated into many state-of-the-art solutions for face analysis. In this talk, I will discuss recent approaches to the problem of 3DMM construction and fitting, focusing on the challenges in building and applying these models, and the main insights that helped us address them. I will additionally mention interesting open problems, highlighting the broad range of current and future applications, and some stepping stones towards unexplored directions.

Thursday, October 15 12:10 - 13:10 (Asia/Calcutta)

Keynote: Towards edge-fog-cloud continuum

Speaker: Dr. Marcin Paprzycki, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland

Over time, two trends have been observed in the "world of computing". One of them was a push from centralized towards decentralized solutions. The second was the move in the opposite direction. These seem to be similar to the thesis and antithesis in Hegel's philosophy. Interestingly, similarly to Hegel's synthesis, we are approaching a unified model of edge-fog-cloud continuum. My talk will reflect on the journey and outline the proposed way forward.

Thursday, October 15 14:00 - 14:50 (Asia/Calcutta)

Keynote: Efficient Deployment of DeepTech AI Models in Engineering Solutions

Speaker: Dr. Juan Manuel Corchado, Director - European IoT Digital Innovation Hub, Director- BISITE Research Group, University of Salamanca & President of the Air Institute, Spain

Abstract: Artificial Intelligence has revived in the last decade. The need for progress, the growing processing capacity, and the low cost of the Cloud have facilitated the development of new, powerful algorithms. The efficiency of these algorithms in Big Data processing, Deep Learning, and Convolutional Networks is transforming the way we work and is opening new horizons. Thanks to them, we can now analyze data and obtain previously unimaginable solutions to today's problems. Nevertheless, our success is not entirely based on algorithms; it also comes from our ability to follow our "gut" when choosing the best combination of algorithms for an intelligent artifact. It is about approaching engineering with a lot of knowledge and tact. This involves the use of both connectionist and symbolic systems, and having a full understanding of the algorithms used. Moreover, to address today's problems we must work with both historical and real-time data. We must fully comprehend the problem, its time evolution, as well as the relevance and implications of each piece of data, etc. It is also important to consider development time, costs, and the ability to create systems that will interact with their environment, connect with the objects that surround them, and manage the data they obtain in a reliable manner. In this keynote, the evolution of intelligent computer systems will be examined. The need for human capital will be emphasized, as well as the need to follow one's "gut instinct" in problem-solving. We will look at the benefits of combining information and knowledge to solve complex problems and will examine how knowledge engineering facilitates the integration of different algorithms. Furthermore, we will analyze the importance of complementary technologies such as IoT and Blockchain in the development of intelligent systems. It will be shown how tools like "Deep Intelligence" make it possible to create computer systems efficiently and effectively. "Smart" infrastructures need to incorporate all added-value resources so they can offer useful services to society, while reducing costs, ensuring reliability, and improving the quality of life of citizens. The combination of AI with IoT and blockchain offers a world of possibilities and opportunities. The use of edge platforms or fog computing helps increase efficiency, reduce network latency, improve security and bring intelligence to the edge of the network, close to the sensors, users, and the medium used.

This keynote will present success stories regarding biotechnology, smart cities, Industry 4.0, the economy, and others. All these fields require the development of interactive, reliable, and secure systems that we are capable of building thanks to current advances. Several use cases of intelligent systems will be presented, and it will be shown how the different processes have been optimized by means of tools that facilitate decision-making.

Thursday, October 15 14:50 - 17:20 (Asia/Calcutta)

CoCoNet-S3: CoCoNet-S3: Seventh International Symposium on Computer Vision and the Internet (VisionNet'20) - Regular and Short Papers

CoCoNet-S3.1 14:50 Automated Detection of Liver Tumor using Deep Learning
Abhijith V, Mable Biju, Sachin Gopakumar and Sharon Andrea Gomez (Mar Baselios College of Engineering and Technology, Trivandrum, India); Tessy Mathew (Mar Baselios College of Engineering and Technology, Trivandrum, India)
Cancer has been recognized by the World Health Organization as the second leading cause of death around the world. With the rise in population, Hepatocellular Carcinoma (HCC) cases have increased due to a lack of early diagnosis and treatment. Conventionally, CT or MRI scans of affected livers undergo manual examination by trained professionals, which usually takes substantial time and effort. With the rising number of cases, this process needs to be sped up. Using deep learning models for medical image segmentation has proven to be an effective method. The proposed deep learning approach uses a 2D U-Net architecture constructed on a Fully Convolutional Network (FCN). The U-Net architecture consists of three parts: the contracting/downsampling path, the expanding/upsampling path, and the bottleneck layer which acts as a bridge between the other two. The dataset consists of Computed Tomography images for training and testing, where each scan is a 3D image in the NIfTI (.nii) format and of variable size. Our proposed model is enveloped in application software, where the front end provides a minimalist and intuitive user experience. Using this approach, we obtained a score of 0.71 on the Dice similarity metric. The main benefit of the application-software approach is the ease of adoption in places where such a solution is required to save valuable time and effort.
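As a quick reference for the reported 0.71 score, the Dice similarity coefficient is a simple overlap measure between the predicted and ground-truth masks; a minimal NumPy sketch (with toy binary masks standing in for real segmentations) is shown below.

    import numpy as np

    def dice_coefficient(pred, target, eps=1e-7):
        """Dice similarity between two binary masks: 2*|A & B| / (|A| + |B|)."""
        pred = pred.astype(bool)
        target = target.astype(bool)
        intersection = np.logical_and(pred, target).sum()
        return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

    # Toy 2D masks standing in for a liver-tumor segmentation and its ground truth.
    pred = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]])
    gt   = np.array([[1, 0, 0], [0, 1, 1], [0, 0, 0]])
    print(round(dice_coefficient(pred, gt), 3))  # -> 0.667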
CoCoNet-S3.2 15:05 Deep Visual Attention Based Transfer Clustering
Akshaykumar Gunari (H. No:137, Akshay Colony, 1st phase, Hubli & KLE Technological University, India); Shashidhar Veerappa Kudari (51 Sagar Colony, near jk School, Shakti Colony, India); Sukanya Sanjay Nadagadalli (KLE Technological University, Hubballi, India); Keerthi Goudnaik (KLE Technological University, India); Ramesh Ashok Tabib (KLE Technological University, Hubballi, India); Uma Mudenagudi (KLE Technological University, India); Adarsh Jamadandi (India & B. V. Bhoomaraddi College of Engineering and Technology, India)
In this paper, we propose a methodology to improve the technique of Deep Transfer Clustering (DTC) when applied to data distributions with low variance. Clustering can be considered the most important unsupervised learning problem. A simple definition of clustering can be stated as "the process of organizing objects into groups whose members are similar in some way". Image clustering is a crucial but challenging task in the domains of machine learning and computer vision. We discuss the clustering of data collections where the data is less variant, and the improvement obtained by using attention-based classifiers rather than regular classifiers as the initial feature extractors in Deep Transfer Clustering. We force the model to learn only the required regions of interest in the images in order to obtain differentiable and robust features that do not take the background into account. This paper is an improvement of the existing Deep Transfer Clustering for less variant data distributions.
CoCoNet-S3.3 15:20 Breast Mass Classification Using Classic Neural Network Architecture and Support Vector Machine
Sreelekshmi V and Jyothisha J Nair (Amrita Vishwa Vidyapeetham, India); Gopakumar G (Amrita Institute, India); Priya R (Amrita Vishwa Vidyapeetham, India)
According to the WHO, the most dangerous disease prevailing among women is breast cancer. It is one of the diseases that is untraceable in the beginning. About 1 in 8 women suffers from breast cancer, which can even result in the removal of the breast. In this domain, a novel experiment to classify breast cancer using a convolutional neural network and a fuzzy system is introduced. A combination of a convolutional neural network and a fuzzy system has been devised for grouping similar benign and malignant masses in a mammography database based on the mass area in the breast. The mammography images undergo image enhancement and image segmentation for identifying the mass area, and the classic neural network architecture (AlexNet) performs the feature extraction. This is followed by a fuzzy system for finding how dense the malignant or benign mass is. The well-known classic neural network architecture AlexNet is employed and fine-tuned to group similar classes. The fully connected (fc) layer is replaced with a support vector machine (SVM) to improve classification effectiveness. The results are derived using the following publicly available datasets: (1) Digital Database for Screening Mammography (DDSM), (2) Curated Breast Imaging Subset of DDSM (CBIS-DDSM) and (3) Mammographic Image Analysis Society (MIAS). Data augmentation is also performed to increase the training samples and achieve better accuracy.
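The key architectural choice, extracting CNN features and handing them to an SVM instead of a softmax head, can be illustrated with a short sketch. This is not the authors' code: it uses torchvision's AlexNet (here with weights=None so the snippet stays self-contained; pretrained weights would be loaded in practice), truncates the classifier before its last fully connected layer, and trains scikit-learn's SVC on the resulting 4096-dimensional features. The image tensors and labels are random placeholders.

    import torch
    import torch.nn as nn
    from torchvision import models
    from sklearn.svm import SVC

    # AlexNet as a fixed feature extractor (torchvision >= 0.13; older versions use pretrained=...).
    alexnet = models.alexnet(weights=None)  # in practice, load pretrained weights here
    # Drop the last fully connected layer so the network outputs 4096-d features.
    alexnet.classifier = nn.Sequential(*list(alexnet.classifier.children())[:-1])
    alexnet.eval()

    # Placeholder batch of mammogram patches (N, 3, 224, 224) and benign/malignant labels.
    images = torch.randn(8, 3, 224, 224)
    labels = [0, 1, 0, 1, 0, 1, 0, 1]

    with torch.no_grad():
        features = alexnet(images).numpy()  # 4096-dim feature vector per image

    svm = SVC(kernel="rbf")                 # the SVM that stands in for the fc/softmax head
    svm.fit(features, labels)
    print(svm.predict(features[:2]))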
CoCoNet-S3.4 15:35 2D-Image Super-Resolution on Heritage site
Sheetal Pyatigoudar, Meena M, Sunil V Gurlahosur and Uday Kulkarni (KLE Technological University, India)
Single image super-resolution is one proposed method for image enhancement. For this task, many convolutional neural network based models have been designed. These models perform better than other approaches on quality measures such as structural similarity and peak signal-to-noise ratio (PSNR). The quality of the resulting super-resolved image depends on the choice of loss function. Existing work is to a great extent based on optimizing the mean squared reconstruction error. However, PSNR and structural similarity values cannot capture fine details in an image and can report high values for results of unsatisfying quality. Hence, generative adversarial network models have been introduced for this problem in recent years. In this paper, image super-resolution (SR) is done with a generative adversarial network (GAN). It is the first method used for 4x upscaling factors. The proposed approach calculates a loss function which is a combination of two losses: a content loss and an adversarial loss.
CoCoNet-S3.5 15:50 Thermal Facial Expression Recognition Using Modified ResNet152
Aiswarya K Prabhakaran (Amrita Vishwa Vidyapeetham); Jyothisha J Nair and Sarath S (Amrita Vishwa Vidyapeetham, India)
Facial expression recognition for emotion detection has gained wide popularity with visible images using machine learning techniques and convolutional neural networks. However, emotion recognition from visible images is not entirely reliable, as they are sensitive to lighting conditions and people can easily fake expressions. In this paper, we propose a method for facial expression recognition with thermal images using ResNet152. Residual networks are easier to optimize and can gain accuracy from considerably increased depth. The objective of this paper is to use a pre-trained, modified ResNet152 to train on thermal facial images in order to predict different emotions. We use the NVIE (Natural Visible and Infrared facial Expression) dataset for emotion classification.
CoCoNet-S3.6 16:05 Dynamic Search Paths for Visual Object Tracking
Srivatsav Gunisetty, Vamshi Krishna Bommerla, Mokshanvitha Dasari, Vennela Chava and Gopakumar G (Amrita Vishwa Vidyapeetham, Amritapuri, India)
The long-term sub-track of the Visual Object Tracking challenge comprises some of the most challenging scenarios, such as occlusion and target disappearance and reappearance. To this end, many deep learning solutions with multiple levels of detection have been proposed. Most of these solutions tend to re-identify a wrong target during occlusion or disappearance, as they start looking for the target in the entire frame. Instead, through this work, we intend to show that predicting a probable search region for the target by understanding its trajectory, and searching for the target within it, helps reduce misidentifications and also increases IoU. For this, we utilize the trajectory modeling capabilities of the Kalman filter. With this proof-of-concept work, we achieved an average improvement of 37.37% in IoU on the sequences where we outperformed MBMD.
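The trajectory-based search-region idea can be sketched with a constant-velocity Kalman filter over the target centre; the predicted state and its covariance define a reduced window to search instead of the whole frame. This is a generic sketch (the noise matrices and the 3-sigma window are assumptions), not the paper's exact formulation.

    import numpy as np

    dt = 1.0
    F = np.array([[1, 0, dt, 0],    # state: [x, y, vx, vy], constant-velocity motion
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)
    Q = np.eye(4) * 1e-2            # process noise (assumed)
    R = np.eye(2) * 1.0             # measurement noise (assumed)

    x = np.array([100.0, 80.0, 0.0, 0.0])   # initial centre with zero velocity
    P = np.eye(4) * 10.0

    def kf_step(x, P, z):
        # Predict the next centre from the motion model.
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        # Update with the detected centre z = (cx, cy).
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(4) - K @ H) @ P_pred
        return x_new, P_new, x_pred[:2]

    for z in [np.array([103.0, 82.0]), np.array([106.0, 84.0]), np.array([109.0, 86.0])]:
        x, P, predicted_centre = kf_step(x, P, z)

    # A 3-sigma window around the estimated centre gives the reduced search region.
    search_half_width = 3 * np.sqrt(np.diag(P)[:2])
    print(x[:2], search_half_width)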
CoCoNet-S3.7 16:20 Real Time Retail Smart Space Optimization and Personalized Store Assortment with Two Stage Object Detection Using Faster Regional Convolutional Neural Network
Nitin Dantu (Amrita School of Engineering, India); Shriram K Vasudevan (Amrita University, India)
In the present-day retail environment, customers tend to do prolonged shopping. In their constant efforts to purchase products of their choice, they roam around the entire store. They choose a product and continue exploring for more. During this further exploration, they might encounter a better product that satisfies their needs. So, they may pick up the new product, compare it with the product already in hand, and leave the latter behind if they find the new product better suited to their use, causing the initial product to be misplaced. Empty spaces may also be created between products, which can look sparse and lower the stock display if not properly monitored. The primary goal of this research is smart space management and personalized store assortment using computer vision. That is, we constantly monitor wherever empty spaces are created, and whenever any product is misplaced, we send an automated notification to the corresponding staff. We use state-of-the-art computer vision technology to address this issue. All the processing is done in real time, and the system is found to be functionally very stable and works under all ideal conditions.
CoCoNet-S3.8 16:35 Video Retrieval using Residual Networks
Tejaswi Nayak U, Sujatha C, Tanmayi V Kamat and Padmashri Desai (KLE Technological University, India)
With the growing size of data in various forms in today's world, a lot of meaningful information needs to be extracted from huge amounts of data. In particular, the multimedia content on the web is increasing rapidly, so the demand for searching and retrieving the required multimedia data is also increasing. Hence, there is a need for faster retrieval of the required data for different query types such as image, video, audio and text. In this paper, we propose a video retrieval framework using residual networks (ResNet-34) that takes a query image or video clip and retrieves the relevant or similar videos from the video dataset. ResNet-34 combined with the Locality Sensitive Hashing algorithm provides faster retrieval of relevant or similar videos from the dataset; the retrieval efficiency is improved from the quadratic to the logarithmic efficiency class. We demonstrate the proposed method on nine different categories of YouTube videos and obtain an overall precision rate of 84%, which is comparable with the state of the art.
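The abstract does not describe the hashing scheme in detail; one common choice, random-hyperplane Locality Sensitive Hashing over clip-level descriptors, is sketched below with random vectors standing in for ResNet-34 features. Near-identical descriptors tend to fall into the same bucket, so candidate lookup is far cheaper than an exhaustive scan.

    import numpy as np

    rng = np.random.default_rng(0)

    class RandomHyperplaneLSH:
        """Hash feature vectors (e.g. ResNet-34 descriptors) into binary buckets."""
        def __init__(self, dim, n_bits=16):
            self.planes = rng.normal(size=(n_bits, dim))
            self.table = {}

        def _hash(self, v):
            return tuple((self.planes @ v > 0).astype(int))

        def index(self, key, v):
            self.table.setdefault(self._hash(v), []).append(key)

        def query(self, v):
            return self.table.get(self._hash(v), [])

    # Toy database of 512-d descriptors standing in for videos in the dataset.
    lsh = RandomHyperplaneLSH(dim=512)
    database = {f"video_{i}": rng.normal(size=512) for i in range(100)}
    for name, vec in database.items():
        lsh.index(name, vec)

    query = database["video_7"] + 0.01 * rng.normal(size=512)   # a near-duplicate query clip
    print(lsh.query(query))   # candidate videos sharing the query's bucket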

CoCoNet-S4: CoCoNet-S4: Symposium on Natural Language Processing (NLP'20) - Regular & Short Papers

CoCoNet-S4.1 14:50 Semantic Sensitive TF-IDF to Determine Word Relevance in Documents
Amir Jalilifard (Federal University of Minas Gerais, Brazil); Vinicius Fernandes Caridá (Itaú Unibanco S/A & FIAP University, Brazil); Alex Fernandes Mansano (Brazil); Rogers S. Cristo (Universidade de São Paulo, Brazil); Felipe Penhorate Carvalho da Fonseca (Itau Unibanco, Brazil)
Keyword extraction has received increasing attention as an important research topic which can lead to advancements in diverse applications such as document context categorization, text indexing and document classification. In this paper we propose STF-IDF, a novel semantic method based on TF-IDF, for scoring word importance of informal documents in a corpus. A set of nearly four million documents from health-care social media was collected and used to train a semantic model and obtain the word embeddings. Then, the features of the semantic space were utilized to rearrange the original TF-IDF scores through an iterative solution so as to improve the moderate performance of this algorithm on informal texts. After testing the proposed method on 160 randomly chosen documents, our method managed to decrease the TF-IDF mean error rate by a factor of 50%, reaching a mean error of 13.7%, as opposed to 27.2% for the original TF-IDF.
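As an illustration of the general idea (not the authors' exact STF-IDF formulation), the sketch below computes plain TF-IDF with scikit-learn and then applies one reweighting pass that blends each word's score with the scores of semantically similar words. The embeddings here are random placeholders, whereas the paper trains them on the corpus and iterates the rearrangement.

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = [
        "patient reports mild fever and headache",
        "fever headache and fatigue reported by patient",
        "insurance claim form submitted online",
    ]

    vec = TfidfVectorizer()
    tfidf = vec.fit_transform(docs).toarray()
    vocab = vec.get_feature_names_out()

    # Hypothetical word embeddings (in practice, vectors trained on the corpus itself).
    rng = np.random.default_rng(0)
    emb = {w: rng.normal(size=50) for w in vocab}

    def semantic_reweight(scores, vocab, emb, alpha=0.7):
        """One illustrative pass: blend each word's TF-IDF with scores of similar words."""
        E = np.stack([emb[w] / np.linalg.norm(emb[w]) for w in vocab])
        sim = np.clip(E @ E.T, 0, None)                 # cosine similarity, negatives dropped
        sim = sim / sim.sum(axis=1, keepdims=True)
        return alpha * scores + (1 - alpha) * scores @ sim.T

    print(semantic_reweight(tfidf[0], vocab, emb).round(3))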
CoCoNet-S4.2 15:05 Fake Review Detection using Hybrid Ensemble Learning
Sindhu Hegde (IIIT Hyderabad, India); Raghu Raj Rai and P G Sunitha Hiremath (KLE Technological University, India); Shankar Gangisetty (KLE Technological University, Hubballi, India)
Opinion spam on online restaurant review sites is a major problem, as the reviews influence users' choice of whether or not to visit a restaurant. In this paper, we address the problem of detecting genuine and fake reviews in online restaurant reviews. We propose a fake review detection technique comprising data preprocessing, detection and ensemble learning that learns the reviews and their features to filter out the fake reviews. Initially, we preprocess the data to obtain the refined reviews and employ two independent classifiers, using deep learning and feature-based machine learning techniques, for detection. These classifiers tackle the problem in two aspects: the deep learning model learns the word distributions, and the feature-based machine learning model extracts the relevant features from the reviews. Finally, a hybrid ensemble model built from the two classifiers is used to detect genuine and fake reviews. The experimental analysis of the proposed approach on Yelp datasets outperforms the existing state-of-the-art methods.
CoCoNet-S4.3 15:20 Utilizing Corpus Statistics for Assamese Word Sense Disambiguation
Nomi Baruah and Arjun Gogoi (Dibrugarh University, India); Shikhar Kr. Sarma (Gauhati University, India); Randeep Borah (Dibrugarh University, India)
Classification or categorization of a word based on its meaning with respect to a context is one of the major problems in Natural Language Processing (NLP). This problem is termed Word Sense Disambiguation (WSD), and it is prevalent in all languages across the globe. However, in Indian languages WSD poses greater challenges due to the limitation of digital resources and the lack of Unicode support. In this paper, we attempt to highlight the efforts put in by researchers to address WSD. For this purpose, two WSD algorithms for the Assamese language are contrasted while asserting corpus statistics in the approach. Of the two algorithms, the first applies the simpler Lesk algorithm, while the second determines the probability of words and phrases that co-occur with each meaning of an ambiguous word during disambiguation. Both algorithms delivered affirmative results on a trained corpus. However, the Lesk algorithm yielded better results than the word and phrase co-occurrence approach in terms of the overall efficiency of the developed system.
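For readers unfamiliar with the simpler of the two approaches, the simplified Lesk algorithm picks the sense whose dictionary gloss overlaps most with the ambiguous word's context; a toy sketch (with English stand-ins for the Assamese senses) is given below.

    def simplified_lesk(context_words, senses):
        """Pick the sense whose gloss shares the most words with the context (simplified Lesk)."""
        context = set(context_words)
        best_sense, best_overlap = None, -1
        for sense, gloss in senses.items():
            overlap = len(context & set(gloss.split()))
            if overlap > best_overlap:
                best_sense, best_overlap = sense, overlap
        return best_sense

    # Toy example (English stand-in for an ambiguous Assamese word).
    senses = {
        "bank_river": "sloping land beside a body of water",
        "bank_finance": "institution that accepts deposits and lends money",
    }
    context = "the boat drifted toward the land beside the water".split()
    print(simplified_lesk(context, senses))   # -> bank_river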
CoCoNet-S4.4 15:35 Part of Speech tagging using Bi-LSTM-CRF and performance evaluation based on tagging accuracy
Shilpa R Kamath, Chaitra Shivanagoudar and Karibasappa K G (KLE Technological University, India)
Part of Speech (POS) tagging refers to the computational task of identifying the relevant part of speech for specific words in text documents. A research challenge is to use various techniques to identify and utilize these tags to improve several Natural Language Processing applications. In this paper, a Bidirectional Long Short-Term Memory with Conditional Random Field (Bi-LSTM-CRF) model has been proposed for POS tagging. This model, trained on the Named Entity Recognition dataset, is compared with other Recurrent Neural Network models such as bidirectional Long Short-Term Memory networks and Long Short-Term Memory with Conditional Random Field. The Bi-LSTM-CRF model is applied to the annotated NER dataset, and its tagging accuracy and F1-score are compared with those of the other pre-existing models. Experimental results show that Bi-LSTM-CRF provides better results for POS tagging. The Bi-LSTM-CRF model is competitive on the annotated NER dataset for English, producing greater accuracy and F1-score and outperforming the rest of the models.
CoCoNet-S4.5 15:50 Clustering Research Papers: A Qualitative Study of Concatenated Power Means Sentence Embeddings over Centroid Sentence Embeddings
Devashish Sameer Gaikwad, Venkatesh Vishwas Yelnoorkar and Atharva Abhijeet Jadhav (College of Engineering, Pune, India); Yashodhara Vikrant Haribhakta (College of Engineering Pune, India)
The mathematical average of word embeddings is a common baseline for sentence embedding techniques, which typically falls short of the performance of more complex models such as BERT and InferSent. There has been significant improvement in the field of sentence embeddings, especially towards the development of universal sentence encoders that can be used for transfer learning in a wide variety of downstream tasks. Academic paper retrieval systems are widely used in academic institutions to store and categorise scientific papers and find connections between them using citation links, but these methods do not account for the content of the papers. For unsupervised clustering of these papers, a new approach to sentence embeddings is proposed using concatenated power means sentence embeddings and centroid sentence embeddings. The sentence embeddings so created are clustered using the K-Means clustering algorithm. The results show a clear increase of 47.94% in the cosine distance of nearest papers using concatenated power means sentence embeddings with respect to baseline centroid embeddings for the highest performing GloVe models, proving that the computationally inexpensive power means sentence embeddings can be used for unsupervised clustering of scientific research papers using their abstracts.
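A minimal sketch of the contrast between the two representations, assuming the concatenation uses the power means p = 1, -inf and +inf (arithmetic mean, element-wise minimum and maximum); the exact set of powers and the word vectors used in the paper may differ.

    import numpy as np

    def power_mean_embedding(word_vectors):
        """Concatenate arithmetic mean, element-wise min and max (power means p = 1, -inf, +inf)."""
        W = np.asarray(word_vectors)                  # shape (n_words, dim)
        return np.concatenate([W.mean(axis=0), W.min(axis=0), W.max(axis=0)])

    def centroid_embedding(word_vectors):
        return np.asarray(word_vectors).mean(axis=0)  # the baseline: plain averaging

    # Toy 5-dimensional word vectors for one abstract's tokens.
    rng = np.random.default_rng(1)
    tokens = rng.normal(size=(12, 5))
    print(centroid_embedding(tokens).shape)     # (5,)
    print(power_mean_embedding(tokens).shape)   # (15,) - three concatenated power means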
CoCoNet-S4.6 16:05 Web-based Interactive Neuro-Psychometric Profiling to Identify Human Brain Communication and Miscommunication Processing
Arthur Carmazzi (Directive Communication International, Indonesia); Phakkharawat Sittiprapaporn (Mae Fah Luang University, Thailand)
This study investigated the effect of individual brain communication processes on interpersonal communication and potential miscommunication by using a web-based interactive neuro-psychometric profiling tool. The brain communication process shapes an individual's patterns of communication and cultivates the individual's potential for their future career or work. The methodology used in this study was quantitative, based on surveys and observational studies. The aim of this study was to explore brain clarity processing and to distinguish miscommunication as assessed by web-based interactive neuro-psychometric profiling instruments. Two hundred respondents were involved in this study, equally divided between genders: one hundred males (50%) and one hundred females (50%). This study has practical implications for the respondents' communication behavior towards individual development. The impact on science was to establish the correlation between the brain communication process and gender, creativity, and communication behavior by way of the potential miscommunication process. The output of this study was a type of brain communication process that shows the tendency of nature, attitudes, and individual potential.
CoCoNet-S4.7 16:20 Statistical and Neural Machine Translation for Manipuri-English on Intelligence Domain
Laishram Rahul (Visvesvaraya Technological University & Tata Power SED, India); Loitongbam Sanayai Meetei (National Institute of Technology Silchar, India); Jayanna HS (Indian Institute of Technology Guwahati, India)
This paper describes the development and results of a Manipuri-English machine translation system built for an intelligence domain. Manipuri is an under-resourced Tibeto-Burman language that is spoken mainly in the North-Eastern states of India. A total of 56,678 Manipuri-English parallel sentences from the intelligence domain, based on Open Source Intelligence (OSINT) keywords and phrases, were collected for the experiment. An evaluation of Statistical Machine Translation (SMT) and Neural Machine Translation (NMT) is carried out in terms of BLEU score. A BLEU score of 23.91 is achieved with the SMT-based approach, which is outperformed by the NMT-based system with a BLEU score of 40.67. Further, a language-specific morphological analysis based on suffixes is investigated. The findings on the incorporation of morphological analysis report a BLEU score of 25.03 with SMT and a BLEU score of 44 with NMT, both of which are significant improvements.
CoCoNet-S4.8 16:35 A Novel Approach to Text Summarisation using Topic Modelling and Noun Phrase Extraction
Nikhil M Lal, Krishnanunni S, Vishnu Vijayakumar, Vaishnavi N and S Siji Rani (Amrita Vishwa Vidyapeetham, India); Deepa K (Amrita Viswa Vidyapeetham, Amritapuri, India)
Over the past few years, one of the remarkable developments on the web is the rapid growth of textual data. This substantial increase, however, complicates the retrieval of vital information from the digitized collection of data. The conventional technique used to tackle this problem is Automatic Text Summarisation. This technique extracts the essential words or sentences from the data and summarises it without affecting the semantics. Automatic text summarisation is classified into two types: extractive and abstractive. The extractive method summarises a document by selecting the important words or sentences from it, based on some attributes, while the abstractive method attempts to generate a summary from the semantics of the data. In this paper, we propose a novel approach to extractive text summarisation using a new sentence scoring parameter. The experimental results show that the proposed sentence scoring parameter improves the performance of the extractive text summariser when compared with other summarisation models. To validate our proposed model, we compared it with four commonly used summarisation models on the basis of ROUGE-1 score and F1 score.

CoCoNet-S5: CoCoNet-S5: Main Track - Communication and Networked Systems (Regular & Short Papers)

CoCoNet-S5.1 14:50 Demand-based Dynamic Slot Allocation for Effective Superframe Utilization in Wireless Body Area Network
A Justin Gopinath and B Nithya (National Institute of Technology, India)
Wireless Body Area Network (WBAN) is an emerging technology for regularly and remotely monitoring critically affected patients, making it a useful platform during a medical pandemic like COVID-19. The IEEE 802.15.6 Medium Access Control (MAC) defines the communication standard that supports the quality requirements of the sensor nodes. Most existing works focus on optimizing the conventional MAC by adopting dynamic scheduled access and efficient contention schemes to utilize the superframe structure. However, utilizing all slots based on demand from sensor nodes of different priorities is a challenging task. To address this issue, an efficient time-slot allocation method, namely the Demand-based Dynamic Slot Allocation (DDSA) algorithm, is proposed. DDSA computes sensor node priority based on run-time parameters such as the critical index, remaining energy, and delivery demand. The slot assignment is proportional to the priority order, and the critical index factor resolves slot conflicts. This guarantees data priority preservation with fair allocation for critical and non-critical medical data. The simulation is carried out using the Castalia-OMNeT++ simulator, and the results show that the proposed DDSA algorithm outperforms priority-based MAC and the conventional method in terms of packet reception rate, energy efficiency, and latency.
CoCoNet-S5.2 15:05 Convex Combination of Maximum Versoria Criterion based Adaptive Filtering Algorithm for Impulsive Environment
S Radhika (Sathyabama Institute of Science and Technology, India); A Chandrasekar (St. Joseph's College of Engineering); K Ishwarya Rajalakshmi (St. Joseph's College of Engineering, India)
This paper elaborates a convex combination approach of two maximum Versoria criterion based adaptive filters for impulsive environments. The maximum Versoria criterion based adaptive filter performs better than the minimum mean square error and maximum correntropy criteria under impulsive environments. The main drawback of the current approach is the tradeoff between speed of convergence and steady-state mean square error. In order to overcome this tradeoff, a convex combination method is adopted in this paper. A new update rule is also proposed to make the algorithm more robust. Experiments were conducted for echo cancellation and system identification applications to validate the performance improvement of the proposed approach.
CoCoNet-S5.3 15:20 Link prediction analysis on directed complex network
Salam Jayachitra Devi (Jawaharlal Nehru University, New Delhi, India); Buddha Singh (JNU, New Delhi, India)
Link prediction helps in the analysis of complex networks and predicts future possible links. Researchers have developed various link prediction methods using network topological information. The topological information depends on the type of network, such as undirected, directed, or weighted networks. So, designing a link prediction method based on the type of complex network is a challenging task. Methods which are suitable for undirected networks cannot be applied directly to a directed network. Hence, for every method associated with undirected networks, a corresponding method for directed networks can be developed by considering the topological information associated with the directed network. In this paper, we design a link prediction method known as Modified Resource Allocation (MRA) for directed complex networks. The existing directed resource allocation (DRA) method considers only the immediate neighbors on paths of length two. Here, this resource allocation index is extended by considering neighbors on paths of length at most three. The proposed MRA method is primarily designed to predict the probability of link formation between disconnected nodes in a directed network by considering the longer path length. The area under the receiver operating characteristic curve (AUC) metric is used for evaluating the performance. Based on the parameter σ, the AUC value is calculated and the most suitable solution is selected. A comparative analysis with existing link prediction methods shows that the proposed MRA method provides better results than the existing link prediction models.
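For context, the directed resource-allocation index on which MRA builds can be sketched as follows: each common intermediary z on a u-to-z-to-v path contributes resources inversely proportional to its out-degree. This sketch covers only the length-two case with a simple 1/out-degree weighting; the paper's MRA extends the idea to intermediaries on paths of length up to three with its own weighting.

    import networkx as nx

    def directed_resource_allocation(G, u, v):
        """Directed RA index: intermediaries z on u->z->v paths, weighted by 1/out-degree(z)."""
        common = set(G.successors(u)) & set(G.predecessors(v))
        return sum(1.0 / G.out_degree(z) for z in common if G.out_degree(z) > 0)

    # Toy directed network.
    G = nx.DiGraph([(1, 2), (1, 3), (2, 4), (3, 4), (3, 5), (5, 4)])
    print(directed_resource_allocation(G, 1, 4))   # resources funnelled from node 1 towards node 4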
CoCoNet-S5.4 15:35 Cost Effective Device for Autonomous Monitoring of the Vitals for COVID-19 Asymptomatic Patients in Home Isolation Treatment
As the number of COVID-19 cases keeps growing exponentially around the world, the combination of wearable technology and IoT technology becomes increasingly relevant. An IoT-enabled healthcare device is useful for proper monitoring of COVID-19 patients to increase safety and reduce spreading. The healthcare device is connected to a large cloud network to obtain desirable solutions for predicting diseases at an early stage. This paper presents the design of a healthcare system that makes use of these technologies in a cost-effective and intuitive way, highlighting their application in the battle against the pandemic. The wearable can give real-time analysis reports of body vitals so that necessary precautions can be taken in case of infection. The wearable is designed in such a way that it can be used as a precautionary measure for people who are not infected with the virus and as a monitoring device for affected patients during the course of their treatment. This low-cost design can not only be used to prevent community spread of the virus but also for early prediction of the disease.
CoCoNet-S5.5 15:50 An HTTP DDoS detection model using machine learning techniques for the cloud environment
Navarikuth Muraleedharan (Centre for Development of Advanced Computing (C-DAC), Bangalore, India); Janet B (National Institute of Technology, India)
The cloud computing platform has evolved into an important computing paradigm for today's world. As the cloud environment mainly focuses on the service model, ensuring the availability of these services to the intended user is an essential requirement. In this paper, an HTTP DDoS detection model for the cloud environment is presented. The proposed system uses machine learning based classifiers on network flow data. The model is trained and evaluated using the CIDDS-001 dataset. The results obtained show the proposed model can achieve an accuracy of 99.99% using the Random Forest classifier. A comparison of the obtained results with recent works available in the literature shows the proposed model outperforms them in classification accuracy.
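A minimal sketch of the classification step: a Random Forest trained on flow-level features. The feature matrix and labels below are synthetic placeholders; in the paper the features come from CIDDS-001 flow records, and the reported accuracy is 99.99%.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # Placeholder flow features (duration, packets, bytes, flags, ...) standing in for
    # CIDDS-001 records; label 1 marks an HTTP DDoS flow, 0 a benign flow.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 10))
    y = (X[:, 0] + X[:, 3] > 1).astype(int)          # synthetic rule, for illustration only

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_tr, y_tr)
    print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))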
CoCoNet-S5.6 16:05 A LoRa-based Data Acquisition System for Wildfire Early Detection
Stefan Rizanov (Technical University-Sofia, Bulgaria); Anna V. Stoynova (Technical University of Sofia, Bulgaria); Dimitar Todorov (Technical University-Sofia, Bulgaria)
A new LoRa-based data acquisition system for wildfire detection is developed and presented. The emphasis of the paper is on hardware design concepts, physical system architecture and implementation, as well as embedded firmware structure and algorithms. The main purpose of the proposed design is to improve upon existing WSN fire-hazard detection systems by reducing end-device power consumption. These techniques and the data analysis steps are described in detail, and an evaluation of the overall system performance improvement, based on testing, is presented.
CoCoNet-S5.7 16:20 IoT Device Authentication & Access Control Through Hyperledger Fabric
Bibin Kurian and Narayanan Subramanian (Amrita Vishwa Vidyapeetham, India)
The Internet of Things (IoT) is one of the hottest technologies, connecting everything to everyone, everywhere. Security and privacy, with confidentiality, integrity and availability of data, are among the most pressing challenges faced by IoT as well as the internet. As networks are getting more expanded and becoming more open, security practices have to be uplifted to ensure the protection of this rapidly growing internet, its users and data. In this paper we propose a new authentication and access control mechanism for IoT devices through a blazing blockchain technology - Hyperledger Fabric, an open-source distributed ledger platform for developing enterprise-grade permissioned blockchains. A blockchain is typically a hash-chain of blocks consisting of a number of (ordered) transactions. Fabric provides a secure and scalable permissioned platform with plug-in components that supports data privacy and smart contracts, rather than a permissionless system where anybody can access and transact data. The authentication and access control of the IoT devices is achieved by making use of newly introduced features for managing channels, chaincodes, policies, Certificate Authorities (CA) and others in Hyperledger Fabric version 2.0. Our architecture has the potential to act upon different layers of the IoT in authentication and access control, safeguarding the confidentiality, integrity and availability of data.
CoCoNet-S5.8 16:35 Exploring IoT enabled Multi Hazard Warning System for Disaster Prone Areas
Natural disasters in India have become a great challenge in recent years. Each year the rates have been escalating, affecting both the social and economic progress of the country. India's topographic/climatic and socio-economic features make the country highly vulnerable to the devastating effects of such calamities. Hence, it is the need of the hour to come up with a system capable of long-term as well as quick prediction of disasters. This can be useful for early preparedness and for developing a well-planned mitigation/relief system, which can reduce the effects of such disasters and can also be useful in channelizing funding in the right way during calamities. The proposed system consists of modules for the prediction of weather pattern, flood, earthquake, landslide, fire and gas leakage. The sensor nodes deployed at various disaster-prone areas transmit sensor data to a local aggregator, which pre-processes the data and relays it to remote monitoring servers. The remote monitoring platform has algorithms for the prediction of disasters as well as suggestions for quick response. Hence, the possibility of disaster can be predicted prior to the onset of these calamities.
CoCoNet-S5.9 16:50 A Survey on Congestion Control Algorithms of Wireless Body Area Network
Mekathoti Vamsi Kiran (National Institute of Technology Trichy, India); B Nithya (NIT Trichy, India)
Nowadays, research on Wireless Body Area Networks (WBANs) is at its peak, as the need for them is growing with the present lifestyle of the world. The exclusive demand for WBANs is mainly due to their special properties such as mobility, tiny size, and network topology. A WBAN is a specialised technology designed to monitor a remote patient (or subject, as WBAN is not limited to human beings), and it grabs attention from researchers as it is emergency-aware. Due to the nature of the WBAN, collisions among data packets are inevitable, which in turn increases congestion in the network by triggering more re-transmissions. To eradicate these issues, several Congestion Control (CC) algorithms have been proposed in the literature. This paper surveys some of the recent CC algorithms and presents a detailed comparative study of these algorithms. This survey reveals the strengths and weaknesses of these algorithms and the future research directions in this research field.

ISTA-03: ISTA-03: Intelligent Image Processing /Artificial Vision/Speech Processing(Regular Papers)

ISTA-03.1 14:50 Lung Nodule Classification Using Combination of CNN, Second and Higher Order Texture Features
Amrita Naik (M. E, Goa University); Damodar Reddy Edla (National Institute of Technology Goa, India)
Lung cancer is the most common cancer throughout the world, and identification of malignant tumors at an early stage is needed for the diagnosis and treatment of patients, thus avoiding progression to later stages. In recent times, deep learning architectures like CNNs have shown promising results in effectively identifying malignant tumors in CT scans. In this paper, we combine CNN features with texture features such as Haralick and gray-level run length matrix features to gather the benefits of both high-level features and spatial features extracted from the lung nodules, in order to improve classification accuracy. These features are further classified using an SVM classifier instead of a softmax classifier so as to reduce the overfitting problem. Our model was validated on the LUNA dataset, on which it achieved an accuracy of 93.53%, sensitivity of 86.62%, specificity of 96.55% and positive predictive value of 94.02%.
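The second-order (Haralick-style) texture part of the feature set can be sketched with scikit-image's grey-level co-occurrence matrix utilities; the patch below is a random placeholder for a CT nodule ROI, and the chosen distances, angles and properties are assumptions rather than the paper's exact configuration.

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops   # named greycomatrix/greycoprops in scikit-image < 0.19

    # Toy 8-bit nodule patch standing in for a CT region of interest.
    rng = np.random.default_rng(0)
    patch = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

    # Grey-level co-occurrence matrix at distance 1 and four orientations.
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                        levels=256, symmetric=True, normed=True)

    # A few Haralick-style second-order statistics; these would be concatenated with CNN features.
    texture = {prop: graycoprops(glcm, prop).mean()
               for prop in ("contrast", "homogeneity", "energy", "correlation")}
    print(texture)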
ISTA-03.2 15:05 Colon Cancer Prediction on Histological Images Using Deep Learning Features and Bayesian Optimized SVM
Tina Babu (Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Bengaluru, India); Tripty Singh (Amrita Vishwa Vidyapeetham & Amrita School of Engineering, India); Deepa Gupta (Amrita Vishwa Vidyapeetham, India); Shahin Hameed (MVR Cancer Center and Research Centre, Poolacode, Kerala, India)
Colon cancer has one of the highest mortality rates among cancer diagnoses worldwide. However, histopathological analysis is a demanding and time-consuming job requiring the expertise of pathologists. Automated diagnosis of colon cancer from biopsy examination plays an important role for patients and prognosis. As conventional handcrafted feature extraction requires specialized experience to pick realistic features, deep learning processes have been chosen, since abstract high-level features can be extracted automatically. This paper presents a colon cancer detection system using transfer learning architectures to automatically extract high-level features from colon biopsy images for automated diagnosis and prognosis. In this study, the image features are extracted from a pre-trained convolutional neural network (CNN) and used to train a Bayesian-optimized Support Vector Machine classifier. Moreover, the AlexNet, VGG-16, and Inception-V3 pre-trained neural networks were evaluated to determine the best network for colon cancer detection. Furthermore, the proposed framework is evaluated using four datasets: two collected from Indian hospitals (with different magnifications of 4X, 10X, 20X, and 40X) and two public colon image datasets. Compared with the existing classifiers and methods on the public datasets, the test results show that the Inception-V3 network, with accuracy in the range of 96.5%-99%, is best suited for the proposed framework.
ISTA-03.3 15:20 ExypnoSteganos - A Smarter Approach to Steganography
Gaurav Sarraf (B. M. S. Institute Of Technology & Management & VTU, India); Anirudh Ramesh Srivatsa (BMS Institute of Technology & Mgmt, India); Swetha MS (B. M. S. Institute Of Technology, VTU, India)
With the ever-rising threat to security, multiple industries are always in search of safer communication techniques, both at rest and in transit. Multiple security institutions agree that any system's security can be modeled around three major concepts: confidentiality, availability, and integrity. We try to reduce the holes in these concepts by developing a deep learning based steganography technique. In our study, we have seen that data compression has to be at the heart of any sound steganography system. In this paper, we show that it is possible to compress and encode data efficiently to solve critical problems of steganography. The deep learning technique, which comprises an autoencoder with a Convolutional Neural Network as its building block, not only compresses the secret file but also learns how to hide the compressed data in the cover file efficiently. The proposed technique can encode secret files of the same size as the cover, or in some sporadic cases, even larger files. We have also shown that the same model architecture can theoretically be applied to any file type. Finally, we show that our proposed technique surreptitiously evades all popular steganalysis techniques.
ISTA-03.4 15:35 Face Recognition and Tracking for Security Surveillance
Lalitha Sreeram (Amrita Vishwa Vidyapeetham & Amrita School of Engineering, India); Sreelu Nair, Abhinav Reddy and Prithvi Krishna (Amrita Vishwa Vidyapeetham, India)
According to the National Crime Records Bureau, 63,407 children have gone missing in the year 2016, which makes almost 174 children go missing in India every day, out of which only 50% are ever found again. This brings up a need for an efficient solution for the mentioned problem. With the amount of data growing, there is a need for machine assistance during these search activities. Since faces are part of the inherent identities of people, how well face recognition technologies can be used becomes essential for development of applications which use CCTV footage across a camera network to identify the person lost in the crowd. We have proposed a technique using One-Shot learning for face recognition to identify stranded people in places such as mass gatherings. The same technology can be used for the identification of criminals across the city. The paper also talks about the tracking of people across a network of multiple non-overlapping cameras, with a feature of tracking the target's vehicle. The experimentation is performed using mobile cameras. Thus, it helps in monitoring the actions of criminals and finding their hideout.
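The one-shot matching step can be illustrated with a short sketch. It is not the authors' model: it assumes face embeddings are produced by some pre-trained network (the embedding function is left out), each known person has a single reference embedding, and a query face is assigned to the nearest gallery identity if the distance falls below a threshold.

    # Sketch of one-shot matching over face embeddings. The embeddings are
    # assumed to come from a pre-trained face network (not shown here).
    import numpy as np

    def match_identity(query_vec, gallery, threshold=0.8):
        """gallery: dict mapping name -> 1-D reference embedding (L2-normalised)."""
        best_name, best_dist = None, np.inf
        for name, ref_vec in gallery.items():
            dist = np.linalg.norm(query_vec - ref_vec)   # Euclidean distance
            if dist < best_dist:
                best_name, best_dist = name, dist
        # Reject the match if even the closest identity is too far away.
        return (best_name, best_dist) if best_dist < threshold else (None, best_dist)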
ISTA-03.5 15:50 An Intelligent System to Identify Coal Maceral Groups Using Markov-Fuzzy Clustering Approach
Alpana Alpana (Jawaharlal Nehru University, India); Satish Chand (Jawaharlal Nehru University, New Delhi, India); Subrajeet Mohapatra (Birla Institute of Technology Mesra Ranchi, India); Vivek Mishra (Hebei University of Engineering, India)
Coal is a mixture of organic matter, called macerals, and inorganic matter. Macerals are categorized into three major groups, i.e., vitrinite, inertinite, and liptinite. Maceral group identification serves an important role in coking and non-coking coal processes used mainly in the steel and iron industries. Hence, it becomes important to efficiently characterize these maceral groups. Currently, industries use the optical polarized microscope to distinguish the maceral groups. However, microscopical analysis is a manual method that is time-consuming and provides subjective outcomes due to human interference. Therefore, an automated approach that can identify the maceral groups accurately in less processing time is strongly needed in industry. Computer-based image analysis methods are revolutionizing these industries because of their accuracy and efficacy. In this study, an intelligent maceral group identification system is proposed using a Markov-fuzzy clustering approach. This approach is an integration of fuzzy sets and the Markov random field, which is employed for maceral group identification in a clustering framework. The proposed model shows better results when compared with standard cluster-based segmentation techniques. The results from the suggested model have also been validated against the outcome of manual methods, and the feasibility is tested using performance metrics.
ISTA-03.6 16:05 A Novel Statistical Approach to Predict Road Accidents in the State of Haryana Using Fuzzy-Analytical Hierarchy Process
Navdeep Mor (Guru Jambheshwar University of Science and Technology, Hisar); Hemant Sood (NITTTR, Chandigarh, India); Tripta Goyal (PEC, Chandigarh, India); Naveen Kumar (Guru Jambheshwar University of Science and Technology, Hisar, India)
Road accidents, in terms of the social and financial burden they impose on any nation or society, have become one of the most dominant problems that need to be resolved. Although multiple road safety agencies and stakeholders are working hard to reduce this issue through proper planning, design, analysis, and management, road accidents are not decreasing despite all possible efforts. Moving one step forward in this direction, this paper introduces a multi-dimensional novel approach, the Fuzzy-Analytical Hierarchy Process (F-AHP), to assign priorities to the factors responsible for the occurrence of road crashes in the state of Haryana, for the first time in India. The primary advantage of the Fuzzy-AHP technique is that it includes a predetermined goal, criteria for making decisions (groups of main and sub-main criteria), and different alternatives as a complete structure, leading to more accurate and reliable results. Data related to road accidents in the state were collected through RTI requests and government reports of the state of Haryana. Implementation of statistical analysis and the frequency of effective factors in this Multi-Criteria Decision Making (MCDM) approach improved the accuracy and safety performance of the model at a micro level. The outcomes of this model indicate that the methodology can be used and implemented in the planning and development phase of roads in any state with a similar accident scenario.
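For context, the way criterion priorities are derived in an AHP-style analysis can be sketched as follows. The paper uses a fuzzy extension; the crisp version below simply takes the principal eigenvector of a pairwise-comparison matrix and checks the consistency ratio, with a hypothetical three-factor example.

    # Sketch of crisp AHP priority computation (the paper's F-AHP adds fuzzy
    # numbers on top of this idea). A is a reciprocal pairwise-comparison matrix.
    import numpy as np

    RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41}

    def ahp_weights(A):
        vals, vecs = np.linalg.eig(A)
        k = np.argmax(vals.real)
        w = np.abs(vecs[:, k].real)
        w /= w.sum()                          # priority vector
        n = A.shape[0]
        ci = (vals[k].real - n) / (n - 1)     # consistency index
        cr = ci / RI[n] if RI.get(n, 0) else 0.0
        return w, cr                          # CR below 0.1 is usually acceptable

    # Hypothetical example: three crash factors compared pairwise.
    A = np.array([[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]])
    weights, cr = ahp_weights(A)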
ISTA-03.7 16:20 Effective Splicing Localization Based on Noise Level Inconsistencies
N R L Chandra Sekhar P (GITAM, Visaakhapatnam, Andhra Pradesh, India); Shankar Tn (Koneru Lakshmaiah Education Foundation, India)
The widespread use of smartphones and social networking sites has made exchanging photos and videos commonplace. On the other hand, the simplicity of image editing tools leaves the validity of such pictures and videos questionable. In the field of image forensics, intensive work has been underway over the past decade to establish their trustworthiness. This paper proposes an efficient way of identifying manipulated areas based on noise inconsistencies in the image. Unlike existing methods, the proposed approach extracts local characteristics from individual objects and their surroundings as background instead of using the entire image. A pair-wise dissimilarity between foreground and background is obtained after extracting the local characteristics of each object, and the manipulated region is then located as the one with the highest variance among the objects. The experimental results reveal the proposed method's superiority over other current methods.
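A common baseline for exposing noise-level inconsistencies, sketched below for context, estimates local noise per block with the wavelet-based median estimator and flags regions whose estimate deviates from the rest of the image. This is a generic block-wise illustration, not the authors' object-level method.

    # Sketch: block-wise noise-level map using the robust median estimator
    # sigma = median(|HH|) / 0.6745 on the finest wavelet detail band.
    # Generic baseline; the paper works per object rather than per fixed block.
    import numpy as np
    import pywt

    def noise_map(gray, block=64):
        h, w = gray.shape
        sigmas = np.zeros((h // block, w // block))
        for i in range(h // block):
            for j in range(w // block):
                patch = gray[i*block:(i+1)*block, j*block:(j+1)*block]
                _, (_, _, hh) = pywt.dwt2(patch.astype(float), "db1")
                sigmas[i, j] = np.median(np.abs(hh)) / 0.6745
        return sigmas  # cells with unusually high or low values hint at splicing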
ISTA-03.8 16:35 A Cost-Effective Computer-Vision Based Breast Cancer Diagnosis
Prabira Kumar Sethy (Sambalpur University, India); Chanki Pandey (Government Engineering College, Jagdalpur, CG, India); Mohammad Rafique Khan (Government Engineering College Jagdalpur, CG, India); Santi Behera (Veer Surendra Sai University of Technology, India); K Vijayakumar (St Joseph Institute of Technology, India); Sibarama Panigrahi (SUIIT, India)
According to World Health Organization (WHO) reports over the last decade, about 2.1 million women are affected by breast cancer every year, and it is the second leading cause of cancer death in women. Early detection and diagnosis of cancer appreciably increase the chance of saving lives and cut treatment costs. In this paper, we survey almost all the techniques utilized in breast cancer detection and diagnosis across image processing, machine learning (ML), and deep learning (DL). We also propose a novel computer-vision-based, cost-effective method for breast cancer detection and diagnosis. Along with detection and diagnosis, our proposed method is capable of finding the exact position of the abnormality in the breast, which helps in breast-conserving surgery or partial mastectomy. The proposed method is simple and cost-effective and has produced highly accurate and useful outcomes when contrasted with existing approaches.
ISTA-03.9 16:50 Comparative Classification Techniques for Identification of Brain States Using TQWT Decomposition
Rahul Agrawal (G H Raisoni College of Engineering Nagpur, India); Preeti Bajaj (G H Raisoni College of Engineering, Nagpur, India)
A Brain Computer Interface provides and simplifies a communication channel for physically disabled persons who suffer from severe brain injury related to stroke and have lost the ability to speak, helping them connect with the outside world. In the proposed work, electroencephalogram (EEG) signals from the Bonn University database are used as the input source; the data are divided into three classes of 247 samples each. The signals are processed with the Tunable Q-Wavelet Transform (TQWT) decomposition technique, in which the signal is decomposed into various sub-bands depending on the Q-factor, the redundancy factor, and the number of sub-bands. A novel custom configuration uses a Q-factor of 3, a redundancy value of 3, and 12 sub-bands for high-pass filtering, as well as a Q-factor of 1, a redundancy value of 3, and 7 sub-bands for low-pass filtering, combined with nine statistical measures for feature extraction. Classification is done using a multi-class Support Vector Machine, which gives an accuracy of 99.59%; this accuracy outperforms previously reported work. In addition, a comparative study is carried out on the same dataset using a deep neural network alongside the Support Vector Machine, which gives an accuracy of 100%. Apart from accuracy, other evaluation parameters such as precision, sensitivity, specificity, and F1 score are also calculated. The classified data are mapped to three communication messages, which helps address the speech impairment of disabled persons.
ISTA-03.10 17:05 ShExMap and IPSM-AF - Comparison of RDF Transformation Technologies
Paweł Szmeja (Systems Research Institute, Polish Academy of Sciences, Poland); Eric Prud'hommeaux (Massachusetts Institute of Technology, USA)
RDF is growing in popularity for enterprise data harmonization software (e.g., Virtuoso or Corporate Memory). Interoperability techniques, e.g., query rewriting and language transformation, rely on the expressivity and flexibility of RDF. As a meta-model capable of expressing information with a variety of vocabularies or ontologies, RDF enables a range of solutions to the problem of data interoperability. Exploiting RDF for interoperability surfaces the need for transformations between homologous RDF structures. This paper examines two independently developed technologies that enable mapping of RDF graphs: the Shape Expressions Mapping Language (ShExMap) and IPSM (Inter Platform Semantic Mediator). We investigate the formal languages used by both, examining their theoretical foundations, peculiarities, and design. We consider two practical examples of transformations and compare effectively equivalent constructs that can be expressed in either approach. We also describe both technologies in the broader context of practical applicability.
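For readers unfamiliar with the problem, the kind of vocabulary-to-vocabulary RDF transformation that ShExMap and IPSM-AF address can also be expressed, in its simplest form, as a SPARQL CONSTRUCT query. The sketch below uses rdflib with hypothetical source and target vocabularies and is only a baseline illustration, not either of the compared technologies.

    # Minimal RDF-to-RDF transformation with a SPARQL CONSTRUCT query in rdflib.
    # The vocabularies are hypothetical; ShExMap and IPSM-AF use their own
    # dedicated mapping languages rather than plain SPARQL.
    from rdflib import Graph

    src = Graph()
    src.parse(data="""
        @prefix a: <http://example.org/srcVocab#> .
        <http://example.org/dev1> a:hasTemperature "21.5" .
    """, format="turtle")

    q = """
        PREFIX a: <http://example.org/srcVocab#>
        PREFIX b: <http://example.org/dstVocab#>
        CONSTRUCT { ?s b:temperatureValue ?v . }
        WHERE     { ?s a:hasTemperature  ?v . }
    """
    dst = Graph()
    for triple in src.query(q):   # CONSTRUCT results iterate as triples
        dst.add(triple)
    print(dst.serialize(format="turtle"))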
ISTA-03.11 17:20 Ontology Based Multiobject Segmentation and Classification in Sports Videos
Akila K, S Indra Priyadharshini, Pradheeba Ulaganathan, Prem Priya P and Yuvasri B (R M K College of Engineering and Technology, India); Suriya Praba T (SASTRA Deemed University, India); Veeramuthu Venkatesh (SASTRA University, India)
The main goal is to identify and segment multiple, partly occluded objects in an image. Our approach proceeds through the following stages, starting with frame conversion. Next, in the preprocessing stage, a Gaussian filter is employed for image smoothing. Then, from the preprocessed image, multiple objects are segmented by means of modified ontology-based segmentation, and edges are detected from the segmented images. After that, the area is extracted from the edge-detected frames, which results in object-detected frames. In the feature extraction stage, attributes such as area, contrast, correlation, energy, homogeneity, color, perimeter, and circularity are extracted from the detected objects. The objects are categorized as humans or other objects (bat/ball) by means of a feed-forward back-propagation neural network classifier (FFBNN) based on the extracted attributes.

ISTA-04: ISTA-04: Intelligent Tools and Techniques and Applications (Regular Papers)

ISTA-04.1 14:50 Cloud Service Negotiation Framework for Real-Time E-Commerce Application Using Game Theory Decision System
Rajkumar Rajavel (Galgotias University, India); Sathish Kumar Ravichandran (Christ University, India); Partheeban Nagappan (Galgotias University, India); Kanagachidambaresan Ramasubramanian Gobichettipalayam (Veltech Rangarajan Dr Sagunthala R&D Institute of Science and Technology, India)
A major and demanding issue is developing a Service Level Agreement (SLA) based negotiation framework in the cloud. To provide personalized service access to consumers, a novel Automated Dynamic SLA Negotiation Framework (ADSLANF) is proposed, using a dynamic SLA concept to negotiate on service terms and conditions. Existing frameworks exploit a direct negotiation mechanism in which the provider and consumer talk to each other directly, which may not be applicable in the future due to increasing demand for broker-based models. The proposed ADSLANF takes much less total negotiation time by handling the complicated negotiation mechanism through a third-party broker agent. Also, a novel game theory decision system suggests an optimal solution to the negotiating agent at the time of generating a proposal or counter-proposal. This optimal suggestion makes the negotiating party aware of the optimal acceptance range of the proposal and avoids negotiation break-off by quickly reaching an agreement.
ISTA-04.2 15:05 AI Approaches for IoT Security Analysis
Mohamed Abou Messaad (University of Applied Sciences Offenburg, Germany); Chadlia Jerad (University of Manouba & University of Carthage, Tunisia); Axel Sikora (University of Applied Sciences Offenburg, Germany)
IoT networks are increasingly used as entry points for cyberattacks, as often they offer low-security levels, as they may allow the control of physical systems, and as they potentially also open the access to other IT networks and infrastructures. Existing Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS) mostly concentrate on legacy IT networks. Nowadays, they come with a high degree of complexity and adaptivity, including the use of Artificial Intelligence (AI) and Machine Learning (ML). It is only recently, that these techniques are also applied to IoT networks. The keynote gives an overview of the state of the art of IoT network security and about AI-based approaches for the IoT security analysis.
ISTA-04.3 15:20 Investigation of Automatic Mixed-Lingual Affective State Recognition System for Diverse Indian Languages
Lalitha Sreeram (Amrita Vishwa Vidyapeetham & Amrita School of Engineering, India); Deepa Gupta (Amrita Vishwa Vidyapeetham, India)
Automatic recognition of human affective state using speech has been a focus of the research world for more than two decades. In the present day, in multi-lingual countries such as India and across Europe, populations communicate in various languages. However, the majority of existing works have put forth strategies to recognize affect from databases that each comprise recordings in a single language. There is great demand for affective systems that serve a mixed-language context. Hence, this work focuses on an effective methodology to recognize human affective state using speech samples from a mixed-language framework. Unique cepstral and bi-spectral speech features derived from the speech samples are classified using a random forest (RF). This work is the first of its kind, with the proposed approach validated and found to be effective on a self-recorded database of speech samples from eleven diverse Indian languages. Six different affective states, namely angry, fear, sad, neutral, surprise, and happy, are considered. Three affective models are investigated in the work, and the experimental results demonstrate that the proposed feature combination, in addition to data augmentation, yields enhanced affect recognition.
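The cepstral-feature-plus-random-forest part of such a pipeline can be sketched briefly. The snippet below shows only MFCC statistics extracted with librosa and a random forest evaluated by cross-validation; the paper's bi-spectral features and data augmentation are not reproduced, and the file list and labels are assumed inputs.

    # Sketch: utterance-level cepstral features classified with a random forest.
    import numpy as np
    import librosa
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    def utterance_features(path, sr=16000, n_mfcc=13):
        y, sr = librosa.load(path, sr=sr)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
        # Summarise the frame-level MFCCs into one fixed-length vector.
        return np.hstack([mfcc.mean(axis=1), mfcc.std(axis=1)])

    def evaluate(files, labels):
        X = np.vstack([utterance_features(f) for f in files])
        clf = RandomForestClassifier(n_estimators=300, random_state=0)
        return cross_val_score(clf, X, labels, cv=5).mean()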
ISTA-04.4 15:35 Data Reconciliation Using MA-PCA and EWMA-PCA for Large Dimensional Data
R Jeyanthi (Amrita Vishwa Vidyapeetham, India); Madugula Sahithi and N. V. Lakshmi Sireesha (Amrita School of Engineering, India); Mangala Sneha Srinivasan (Amrita School of Engineering, Bengaluru, India); Sriram Devanathan (Amrita Vishwa Vidyapeetham, Amrita University & Center of Excellence in Advanced Materials and Green Technologies, India)
In process industries, measurements usually contain errors due to improper instrument variation, physical leakages in process streams and nodes, and inaccurate recording/reporting. Thus, these measurements violate the laws of conservation and do not conform to process constraints. Data reconciliation (DR) is used to resolve the difference between measurements and constraints. DR is also used to reduce the effect of random errors and to estimate the true values more accurately. A multivariate technique used to obtain estimates of true values while preserving the most significant inherent variation is Principal Component Analysis (PCA), which reduces the dimensionality of the data with minimum information loss. In this paper, two new DR techniques are proposed, moving-average PCA (MA-PCA) and exponentially weighted moving-average PCA (EWMA-PCA), to improve the performance of DR and obtain more accurate and consistent data. These DR techniques are compared based on RMSE. Further, the techniques are analyzed for different values of sample size, weighting factor, and variances.
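The MA-PCA idea can be illustrated with a short sketch: smooth the measurements with a moving average, fit PCA, and reconstruct estimates from the leading components. The window size and component count below are illustrative, and EWMA-PCA would simply replace the moving-average filter with an exponentially weighted one; this is not the authors' exact formulation.

    # Sketch of MA-PCA-style reconciliation: moving-average smoothing followed
    # by projection onto the leading principal components.
    import numpy as np
    from sklearn.decomposition import PCA

    def moving_average(X, window=5):
        kernel = np.ones(window) / window
        return np.column_stack(
            [np.convolve(X[:, j], kernel, mode="same") for j in range(X.shape[1])])

    def ma_pca_reconcile(X, window=5, n_components=2):
        Xs = moving_average(X, window)
        pca = PCA(n_components=n_components).fit(Xs)
        X_hat = pca.inverse_transform(pca.transform(Xs))   # reconciled estimates
        rmse = np.sqrt(np.mean((X - X_hat) ** 2))
        return X_hat, rmse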
ISTA-04.5 15:50 A Rule Based Approach for Aspect Extraction in Hindi Reviews
Chinmayee Ojha and Manju Venugopalan (Amrita School of Engineering, Bengaluru, Amrita Vishwa Vidyapeetham, India); Deepa Gupta (Amrita Vishwa Vidyapeetham, India)
The fast growth of technology and the tremendous growth in population have made millions of people active participants in social network forums. The experiences shared by participants on different websites are highly useful, not only to customers making decisions but also to companies seeking to maintain sustainable businesses. Sentiment analysis is an automated process to analyze public opinion on certain topics. Identifying the targets of users' opinions in text is referred to as the aspect extraction task, which is the most crucial part of sentiment analysis. The proposed system is a rule-based approach to extract aspect terms from reviews. A sequence of patterns is created based on the dependency relations between a target and its nearby words. The rule set is evaluated on a benchmark Hindi dataset shared by Akhtar et al., 2016. The evaluation results show that the proposed approach achieves a significant improvement in extracting aspects over the baseline approach reported on the same dataset.
ISTA-04.6 16:05 Hybrid Online Model Based Multi Seasonal Decompose for Short-Term Electricity Load Forecasting Using ARIMA and Online RNN
Nguyen Quang Dat (Hanoi University of Science and Technology, Vietnam); Ngoc Anh Nguyen Thi (Hanoi University of Science and Technology & CIST, CMC Corporation, Vietnam); Nguyen Nhat Anh (CIST - CMC, Vietnam); Vijender Kumar Solanki (CMR Institute of Technology, Hyderabad, TS & IEEE Senior Member, India)
Short-term electricity load forecasting (STLF) plays a key role in operating the power system of a nation. A challenging problem in STLF is to deal with real-time data. This paper aims to address the problem using a hybrid online model. Online learning methods are becoming essential in STLF because load data often show complex seasonality (daily, weekly, annual) and changing patterns. Online models such as Online AutoRegressive Integrated Moving Average (Online ARIMA) and Online Recurrent neural network (Online RNN) can modify their parameters on the fly to adapt to the changes of real-time data. However, Online RNN alone cannot handle seasonality directly and ARIMA can only handle a single seasonal pattern (Seasonal ARIMA). In this study, we propose a hybrid online model that combines Online ARIMA, Online RNN, and Multi-seasonal decomposition to forecast real-time time series with multiple seasonal patterns. First, we decompose the original time series into three components: trend, seasonality, and residual. The seasonal patterns are modeled using Fourier series. This approach is flexible, allowing us to incorporate multiple periods. For trend and residual components, we employ Online ARIMA and Online RNN respectively to obtain the predictions. We use hourly load data of Vietnam and daily load data of Australia as case studies to verify our proposed model. The experimental results show that our model has better performance than single online models. The proposed model is robust and can be applied in many other fields with real-time time series.
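The multi-seasonal decomposition step described above can be sketched as a regression on Fourier terms for each period (e.g., 24 and 168 hours for hourly load), yielding trend, seasonal, and residual components. The snippet below is only an illustration of that decomposition; the Online ARIMA and Online RNN models that the paper then fits to the trend and residual are not shown.

    # Sketch of multi-seasonal decomposition with Fourier terms.
    import numpy as np

    def fourier_terms(t, period, order):
        cols = []
        for k in range(1, order + 1):
            cols += [np.sin(2*np.pi*k*t/period), np.cos(2*np.pi*k*t/period)]
        return np.column_stack(cols)

    def decompose(y, periods=(24, 168), order=3):
        t = np.arange(len(y), dtype=float)
        trend_basis = np.column_stack([np.ones_like(t), t])        # linear trend
        season_basis = np.hstack([fourier_terms(t, p, order) for p in periods])
        X = np.hstack([trend_basis, season_basis])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        trend = trend_basis @ beta[:2]
        seasonal = season_basis @ beta[2:]
        residual = y - trend - seasonal
        return trend, seasonal, residual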
ISTA-04.7 16:20 Weight Modulation in Top-Down Computational Model for Target Search
Aarthi R (Amrita VishwaViydapeetham, India); Amudha J (Amrita Vishwa Vidyapeetham, India)
The aim of computer vision research is to build models that act like the human visual system. Recent developments in visual information processing are effectively used to derive computational models for different applications. Biological models help to identify the salient objects in an image, but identification of non-salient objects in a heterogeneous environment is a challenging task that requires a better understanding of the visual system. In this work, a weight-modulation-based top-down model is proposed that integrates visual features according to their importance for the target search application. The model is designed to learn optimal weights that bias the features of the target relative to the other surrounding regions. Experimental analysis is performed using different scenes from a standard dataset with a selected object in each scene. Metrics such as area under the curve, average hit number, and correlation reveal that the method is well suited to identifying the target by suppressing the other dominant objects.
ISTA-04.8 16:35 Neuro-Fuzzy Based Estimation of Rotor Flux for Electric Vehicle Operating Under Partial Loading
Manish Kumar (Netaji Subhas Institute of Technology, University of Delhi, New Delhi, India); Bhavnesh Kumar and Asha Rani (Netaji Subhas University of Technology (Formerly NSIT), New Delhi, India)
The primary objective of this work is to optimize the induction motor rotor flux so that maximum efficiency is attained in the face of parameter and load variations. Conventional approaches based on loss models are sensitive to modelling accuracy and parameter variations. The problem is further aggravated by nonlinear motor parameters in different speed regions. Therefore, this work introduces an adaptive neuro-fuzzy inference system based rotor flux estimator for electric vehicles. The proposed estimator is an amalgamation of a fuzzy inference system and an artificial neural network, in which the fuzzy inference system is designed using the artificial neural network. The training data for the neuro-fuzzy estimator are generated offline by acquiring the rotor flux for different values of torque. Conventional fuzzy logic and differential calculation methods are also developed for comparative analysis. The efficacy of the developed system is established by analysing it under varying load conditions. The results reveal that the suggested methodology provides improved efficiency, i.e., 94.51% in comparison to 82.68% for constant-flux operation.
ISTA-04.9 16:50 Impact of Cultural-Shift on Multimodal Sentiment Analysis
Tulika Banerjee, Niraj Yagnik and Anusha Hegde (Manipal Institute of Technology, India)
Human communication is not limited to verbal speech but is infinitely more complex, involving many non-verbal cues such as facial emotions and body language. This paper aims to quantitatively show the impact of non-verbal cues, with primary focus on facial emotions, on the results of multi-modal sentiment analysis. The paper works with a dataset of Spanish video reviews. The audio is available as Spanish text and is translated to English while visual features are extracted from the videos. Multiple classification models are made to analyze the sentiments at each modal stage i.e. for the Spanish and English textual datasets as well as the datasets obtained upon coalescing the English and Spanish textual data with the corresponding visual cues. The results show that the analysis of Spanish textual features combined with the visual features outperforms its English counterpart with the highest accuracy difference, thereby indicating an inherent correlation between the Spanish visual cues and Spanish text which is lost upon translation to English text.
ISTA-04.10 17:05 QALO-MOR: Improved Antlion Optimizer Based on Quantum Information Theory for Model Order Reduction
Rosy Pradhan (Veer Surendra Sai University of Technology, Burla, India); Mohammad Rafique Khan (Government Engineering College Jagdalpur, CG, India); Prabira Kumar Sethy (Sambalpur University, India); Santosh Kumar Majhi (Veer Surendra Sai University of Technology, India)
The field of optimization science is proliferating and has made complex real-world problems easier to solve. Metaheuristic algorithms, inspired by nature or physical phenomena, have made their way into providing near-optimal solutions to several complex real-world problems. Recently, Antlion Optimization (ALO) has emerged, inspired by the hunting behavior of antlions searching for food. Even with a unique idea, it has some limitations, such as a slower rate of convergence and occasional confinement to local optima. Therefore, to enhance its performance, quantum information theory is applied to the classical ALO, and the hybridized algorithm is named QALO, or quantum theory based ALO. It can escape the limitations of basic ALO and also balances exploration and exploitation. The CEC2017 benchmark set is adopted to estimate the performance of QALO compared with state-of-the-art algorithms. Experimental and statistical results demonstrate that the proposed method is superior to the original ALO. The proposed QALO is further extended to solve the model order reduction (MOR) problem, where the QALO-based MOR method performs preferably better than the other compared techniques. The results from the simulation study illustrate that the proposed method can be effectively utilized for global optimization and model order reduction.

SIRS-02: SIRS-02: Regular Papers - Sixth International Symposium on Signal Processing and Intelligent Recognition Systems (SIRS'20)

SIRS-02.1 14:50 Robust and Imperceptible Digital Image Watermarking Based on DWT-DCT-Schur
R. Sripradha and Kaliyaperumal Deepa (Amrita School of Engineering, India)
The advent of the internet, and hence multimedia communication, has made it possible to transmit data quickly in all fields, including the medical field. In this scenario, security breaches become a major concern. Watermarking is one of the schemes used for authentication purposes, copyright protection, etc. Watermarking involves two images, a watermark and a cover image. The watermark is embedded into the cover image imperceptibly and has to be extracted later for authentication. Both the cover image and the watermark undergo various transforms. A non-blind, robust, imperceptible watermarking scheme in the frequency domain is presented here. Two different frequency-domain transforms are applied to the cover image successively, followed by a mathematical decomposition to improve robustness: DWT (Discrete Wavelet Transform), DCT (Discrete Cosine Transform), and Schur decomposition. A two-level DWT is applied to the encrypted watermark before embedding it into the cover image. This method is highly effective on X-ray images and color images alike and can be used on medical images for authentication purposes. Robustness and imperceptibility of the image under various attacks have been simulated in MATLAB and examined.
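The embedding direction of such a DWT-DCT-Schur chain can be sketched in a few lines. The snippet below is only an illustration for grayscale images with an illustrative scaling factor: the paper additionally applies a two-level DWT to an encrypted watermark and defines a matching extraction procedure, neither of which is reproduced here.

    # Sketch of DWT-DCT-Schur watermark embedding (embedding side only).
    import numpy as np
    import pywt
    from scipy.fftpack import dct, idct
    from scipy.linalg import schur

    def dct2(a):  return dct(dct(a, axis=0, norm="ortho"), axis=1, norm="ortho")
    def idct2(a): return idct(idct(a, axis=0, norm="ortho"), axis=1, norm="ortho")

    def embed(cover, watermark, alpha=0.05):
        LL, (LH, HL, HH) = pywt.dwt2(cover.astype(float), "haar")
        C = dct2(LL)
        T, Z = schur(C)                        # C = Z @ T @ Z.T
        wm = np.resize(watermark.astype(float), T.shape)
        T_marked = T + alpha * wm              # additive embedding in the Schur factor
        LL_marked = idct2(Z @ T_marked @ Z.T)
        return pywt.idwt2((LL_marked, (LH, HL, HH)), "haar")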
SIRS-02.2 15:05 HRIDAI: A Tale of Two Categories of ECGs
Priya Ranjan (SRM University, Amaravathi, India); Kumar Dron Shrivastav (Amity University, Noida, India); Satya Vadlamani (Laboratory of Disease Dynamics & Molecular Epidemiology, India); Rajiv Janardhanan (Amity University, Noida, India)
This work presents a geometric study of the computational disease-tagging problem for ECGs. Using ideas such as the Earth mover's distance (EMD) and the Euclidean distance, it groups category 1 and category -1 ECGs into two clusters, computes their averages, and then predicts the category of 100 test ECGs, i.e., whether they belong to category 1 or category -1. We report an 80 percent success rate using the Euclidean distance, at the cost of an intense computational investment, and 69 percent success using EMD. We suggest further ways to augment and enhance this automated classification scheme using biomarkers such as Troponin isoforms, CK-MB, and BNP. Future directions include the study of larger sets of ECGs from diverse populations, collected from a heterogeneous mix of patients with different CVD conditions. Further, we advocate the robustness of this programmatic approach as compared to deep-learning-type schemes, which are susceptible to dynamic instabilities. This work is part of our ongoing framework, the Heart Regulated Intelligent Decision Assisted Information (HRIDAI) system.
SIRS-02.3 15:20 Extraction of Parcel Boundary from UAV Images Using Deep Learning Techniques
Ganesh Khadanga and Kamal Jain (Indian Institute of Technology, Roorkee, India)
Fast-growing UAV techniques have enabled field functionaries and government agencies to capture data of agricultural fields. However, processing the UAV data to generate parcel boundary information requires lengthy image processing tasks, creating a demand for generating parcel boundaries quickly and in an automated way. This paper discusses techniques to extract field boundaries using deep learning for high-precision feature extraction with a model similar to U-Net. The processes are automated with little human intervention and produce boundaries with the desired accuracy. The procedure for generating input data for the model and an operating procedure for generating parcel boundaries are described. The generated boundaries are quite satisfactory and will assist in automated recording of boundaries, which will also reduce labor-intensive field tasks.
SIRS-02.4 15:35 Robust Beamforming Against DOA Mismatch with Null Widening for Moving Interferences
Diksha Thakur (Jaypee University of Information Technology & Waknaghat, India); Vikas Baghel and Salman Raju Talluri (Jaypee University of Information Technology, India)
The performance of the traditional adaptive beamformer deteriorates severely in certain cases of direction of arrival (DOA) mismatch and rapidly moving interferences. Due to the presence of a DOA mismatch, the desired signal can be lost, and because of fast-moving interference sources, the interference may move out of the null. To solve these problems, a robust adaptive beamformer with null widening and robustness against DOA mismatch is proposed. To deal with DOA mismatch, magnitude constraints are applied on the region of interest (ROI). Along with this, the interference steering matrix is reconstructed with a taper matrix in order to widen the nulls, which can suppress moving interferences. The simulation outcomes demonstrate that the performance of the proposed method, in terms of output signal-to-interference-plus-noise ratio (SINR), is not only superior to existing methods but also comparable to the optimal case.
SIRS-02.5 15:50 Analysis of Unintelligible Speech for MLLR and MAP-Based Speaker Adaptation
Balaji V (Christ University, India); Gurupadappa Sadashivappa (RVCE, India)
Speech Recognition is the process of translating human voice into textual form, which in turn drives many applications including HCI (Human Computer Interaction). A recognizer uses the acoustic model to define rules for mapping sound signals to phonemes. This article brings out a combined method of applying Maximum Likelihood Linear Regression (MLLR) and Maximum A Posteriori (MAP) techniques to the acoustic model of a generic speech recognizer, so that it can accept data of people with speech impairments and transcribe the same. In the first phase, MLLR technique was applied to alter the acoustic model of a generic speech recognizer, with the feature vectors generated from the training data set. In the second phase, parameters of the updated model were used as informative priors to MAP adaptation. This combined algorithm produced better results than a Speaker Independent (SI) recognizer and was less effortful for training compared to a Speaker Dependent (SD) recognizer. Testing of the system was conducted with the UA-Speech Database and the combined algorithm produced improvements in recognition accuracy from 43% to 90% for medium to highly impaired speakers revealing its applicability for speakers with higher degrees of speech disorders.
SIRS-02.6 16:05 Diagnosis of Parkinson's Disease by Deep Learning Techniques Using Handwriting Dataset
Atiga Belaed Al-Wahishi (College of Computing and Information Technology- AASTMT, Alexandria, Egypt); Nahla Belal (Arab Academy for Science, Technology, and Maritime Transport & College of Computing and Information Technology, Egypt); Nagia M Ghanem (Faculty of Engineering, Alexandria University, Egypt)
Diagnosis and evaluation of Parkinson's disease (PD) by clinicians normally depend on several established clinical criteria. Measuring the severity level according to these criteria also depends heavily on the doctor's expertise, which is subjective and inefficient. The current work therefore aims to provide a quantitative and comprehensive evaluation to improve diagnosis and assess the severity of PD using deep learning approaches. In this study, a machine-learning-based method is proposed to automatically rate/evaluate PD severity from handwriting exams by extracting information from static and dynamic tests. A hybrid model is suggested, fusing a convolutional neural network (CNN) and long short-term memory (LSTM) for the estimation. The accuracies of the suggested hybrid model (CNN+LSTM) were 85.5% and 99.3% using the ParkinsonHW and HandPD datasets, respectively. We conclude that the quantitative evaluation provided by our model may be considered a helpful tool in the clinical detection and severity prediction of Parkinson's disease.
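A minimal Keras sketch of a CNN+LSTM classifier over handwriting time series is given below for orientation. The sequence length, channel count (e.g., x, y, pressure samples from a drawing test), and layer sizes are illustrative assumptions, not the authors' exact architecture.

    # Minimal sketch of a CNN+LSTM classifier over handwriting time series.
    from tensorflow.keras import layers, models

    def build_cnn_lstm(timesteps=512, channels=3):
        model = models.Sequential([
            layers.Input(shape=(timesteps, channels)),
            layers.Conv1D(32, kernel_size=5, activation="relu"),
            layers.MaxPooling1D(pool_size=2),
            layers.Conv1D(64, kernel_size=5, activation="relu"),
            layers.MaxPooling1D(pool_size=2),
            layers.LSTM(64),                        # temporal summary of CNN features
            layers.Dense(64, activation="relu"),
            layers.Dense(1, activation="sigmoid"),  # PD vs. healthy control
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=["accuracy"])
        return model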
SIRS-02.7 16:20 Supervised Feature Learning for Music Recommendation
Ajit Rao, Aditya R and Adarsh B (PES University, India); Shikha Tripathi (PES University, Bangalore, India)
The advent of music streaming services coupled with the drastic increase in the rate of music consumption has made automatic music recommendation a relevant and active area of research. Most modern recommendation systems utilize a combination of one or more techniques. Collaborative filtering and content-based recommendation are popularly used techniques. The former works well on large datasets but suffers from the well documented cold start problem. The latter tackles this issue, but generally requires intensive deep neural networks and large datasets to achieve high recommendation accuracy. In this paper, we propose a content based recommendation system that attempts to address the cold start problem using a relatively shallow neural network. We train a convolutional neural network to indirectly learn feature vectors for users and songs through a classification task. We show that our model generates sensible, diverse and personalized recommendations and is effective even on small datasets. We compare our results quantitatively against that of the popular latent factor models for music recommendation and show that our song to vector model outperforms traditional recommendation methods. The system has been validated qualitatively as well and performs satisfactorily.
SIRS-02.8 16:35 Identification of Indian English by Speakers of Multiple Native Languages
Radha Krishna Guntur (VNRVJIET, India); Ramakrishnan Krishnan (IIST, India); Vinay Kumar Mittal (KL University & KLEF, India)
Many aspects of speech can provide information about a particular speaker's characteristics. This paper presents a novel method for the automatic classification of speakers based upon their regional language accent. The present study uses English spoken as a second language by speakers of four South Indian languages. A new dataset is developed by following a specific data collection protocol. Acoustic deviations are studied by analysing the distribution of Mel-frequency cepstral coefficients (MFCCs) for the non-native English speech. Using state-of-the-art methods of language and accent identification, the Indian native languages are differentiated, and good recognition rates are observed. The results obtained show that accent classification in the Indian setting is more effective in the text-independent mode than in the text-dependent mode, and is also gender dependent. Native language identification is achieved with an accuracy of 94% from a short utterance of 60 seconds of non-native English speech. It is found that identification of native Malayalam and Kannada speakers is easier than that of native Tamil and Telugu speakers.
SIRS-02.9 16:50 Offline Signature Verification Based on Spatial Pyramid Image Representation with Taylor Series
Bharathi Pilar (Mangalore University & University College Mangalore, India)
This paper presents a spatial pyramid image representation based technique with a partial sum of the second-order Taylor Series Expansion (TSE) as the feature extraction method for offline signature verification. In this approach, the given signature image is partitioned into sub-blocks recursively, and the local features in each sub-block are computed and represented in the form of a histogram. The histograms of the image and sub-blocks at various levels are concatenated to form a single histogram. Thus, the resulting spatial pyramid feature vector is a simple and computationally efficient extension of an orderless bag-of-features, consisting of a collection of histograms concatenated into a single histogram. The partial sum of the second-order TSE, computed with a finite number of terms within a small neighbourhood, gives an approximation of the function and hence provides a powerful mechanism to extract the localised structural features of a signature. We propose kernel structures that extend the Sobel operators to compute the higher-order derivatives of the TSE. Our approach captures both local and global features from the signature image. We use weighted histograms, wherein the weight associated with a level is inversely proportional to the cell width at that level. A Support Vector Machine (SVM) is used for classification. Experimental results on standard datasets reveal the performance of the proposed approach, and a comparative analysis with some well-known approaches demonstrates its classification accuracy.
SIRS-02.10 17:05 Channel-Aware Decision Fusion with Rao Test for Multisensor Fusion
Domenico Ciuonzo (University of Naples Federico II, IT, Italy); Pierluigi Salvo Rossi (Norwegian University of Science and Technology, Norway)
This paper tackles unknown signal detection in a distributed fashion via a Wireless Sensor Network (WSN) made of tiny and low-cost sensor devices. The sensors are assumed to measure an unknown deterministic parameter within unimodal and symmetric noise. To model energy-constrained operations usually encountered in an Internet of Things (IoT) scenario, local one-bit quantization of the raw measurement is performed at each sensor. A Fusion Center (FC) receives noisy quantized sensor observations through reporting parallel-access Rayleigh channels and makes a global decision. We propose the Rao test as a simpler alternative to the Generalized Likelihood Ratio Test (GLRT) for multisensor fusion. The intent of our work is performing fusion directly from the received signals, following a decode-and-fuse approach. Then, we study the design of the (channel-aware) quantizer of each sensor with the intent of maximizing the asymptotic detection probability. Finally, we compare the performance of the Rao test with that of the GLRT by simulations (related to a practical WSN scenario).

SSCC-02: SSCC-02: Eighth International Symposium on Security in Computing and Communications - Regular & Short Papers

SSCC-02.1 14:50 Trust-Based Adversarial Resiliency in Vehicular Cyber Physical Systems Using Reinforcement Learning
Felix O Olowononi (Howard University & Data Science and Cybersecurity Center, USA); Danda B. Rawat and Chunmei Liu (Howard University, USA)
Vehicular cyber physical systems (VCPS), by leveraging advancements in sensing, wireless technologies, and vehicular ad hoc networks (VANET), have improved the driving experience, given birth to systems such as cooperative adaptive cruise control, and formed the foundation for autonomous and platooned vehicles. However, their deployment is adversely affected by security concerns, even as attackers continue to improve their attack methods. In recent years, machine learning has become an invaluable research tool, and investigation into its application in CPS security is increasingly active. Although supervised learning techniques were initially used for most tasks, reinforcement learning (RL) has become popular due to the excellent results obtained in settings such as the environment in which vehicles operate. Trust management systems are also very useful in identifying adversaries in a vehicular network. In this paper, a data-oriented trust-based method for improving the resiliency of vehicles to adversarial attacks is investigated using RL. Improving on other works that combine direct and indirect trust and assume that vehicles interact for a long time, this method is suited to the dynamic environment vehicles operate in and the high mobility they experience. Specifically, the Q-learning algorithm is used to learn and adapt the weight used to estimate the trust value, thereby reflecting the real environment. The simulation results obtained show that the proposed methodology is efficient and establish the contribution of applying RL to CPS security.
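The idea of using Q-learning to adapt the weight that combines direct and indirect trust can be illustrated with a generic tabular sketch. The states, actions (candidate weights), and reward signal below are placeholders for the paper's vehicular formulation, not its actual model.

    # Generic tabular Q-learning sketch for adapting a trust-combination weight.
    import numpy as np

    WEIGHTS = np.linspace(0.0, 1.0, 11)           # candidate weights (actions)

    class TrustWeightLearner:
        def __init__(self, n_states=10, alpha=0.1, gamma=0.9, eps=0.1):
            self.Q = np.zeros((n_states, len(WEIGHTS)))
            self.alpha, self.gamma, self.eps = alpha, gamma, eps

        def choose(self, s):
            if np.random.rand() < self.eps:        # exploration
                return np.random.randint(len(WEIGHTS))
            return int(np.argmax(self.Q[s]))       # exploitation

        def update(self, s, a, reward, s_next):
            target = reward + self.gamma * self.Q[s_next].max()
            self.Q[s, a] += self.alpha * (target - self.Q[s, a])

    def fused_trust(direct, indirect, action):
        w = WEIGHTS[action]
        return w * direct + (1 - w) * indirect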
SSCC-02.2 15:05 SaaS - Microservices-Based Scalable Smart Contract Architecture
Eranga Bandara (Old Dominion University, USA); Xueping Liang (University of North Carolina Greensboro, USA); Sachin Shetty and Peter Foytik (Old Dominion University, USA); Nalin Ranasinghe and Kasun De Zoysa (University of Colombo School of Computing, Sri Lanka); Wee-Keong Ng (Nanyang Technological University, Singapore)
Novel blockchain platforms introduce a programming interface called "smart contracts" to interact with the blockchain ledger. The blockchain application's business logic is encoded in smart contracts, and clients submit transactions to these smart contracts to perform operations. In existing blockchain systems, all the smart contracts run on a single monolithic service. Even though there are multiple contracts with fully independent business logic, they run in a single monolithic container, which can be a performance bottleneck when processing a large number of transactions. To address this challenge, we adopt a microservice-based architecture for blockchain smart contracts by introducing a novel architecture that runs independent smart contracts on separate microservices. The new smart contract architecture is built on top of Mystiko, a highly scalable blockchain storage targeted at big data. Mystiko comes with Aplos, a functional-programming, actor-based concurrent smart contract platform, identified as a Smart Actor platform. Based on the microservices philosophy, we redesign the Aplos smart actor platform on the Mystiko blockchain. This architecture is introduced as "SaaS - Smart actors as a service". With SaaS, we can deploy different Aplos smart actors in the blockchain as separate independent services (e.g., Docker containers) instead of a single monolithic service. This ensures that different smart actors can execute transactions independently. Finally, the architecture increases scalability, supports concurrent transaction execution, and produces high transaction throughput on the blockchain.
SSCC-02.3 15:20 The Concerns of Personal Data Privacy, on Calling and Messaging, Networking Applications
Angeliki Kalapodi and Nicolas Sklavos (University of Patras, Greece)
The General Data Protection Regulation (GDPR) became applicable in 2018 with the main objective of establishing the protection of personal data as a right of European citizens. However, information, being the most valuable asset of our time, is a necessary element of the professional activity of most websites and applications. Secure design is fundamental for ensuring and maintaining trust between devices and users. IoT devices are the future of technology and communication. Given multi-device interface technology, we must consider the ability to protect the user from leaks to other devices, applications, and websites. Encryption is an important tool that can help ensure users' trust in the devices they use. Privacy of personal data must therefore be ensured, especially since it is now a protected right of users. In this work, we examine the most commonly used calling, messaging, and networking applications of everyday life: Ayoba, Facebook Messenger, Line, Signal, Skype, Slack, Telegram, WhatsApp, and Viber. In particular, the permissions they request from the user and the protection and guarantees they provide under the General Data Protection Regulation are explored. The additional permissions requested, and the data shared, are identified. The results of our research prove the leakage of personal data and information from users' smart handheld devices to third-party websites. Finally, the intersection of these third-party websites and, consequently, the sharing of users' information with other parties without their immediate permission are reported.
SSCC-02.4 15:35 Performance Study of Multi-Target Tracking Using Kalman Filter and Hungarian Algorithm
We present a method for multi-target tracking using a combination of the Kalman filter and the Hungarian algorithm, and test the efficiency of this method on two different datasets. In Dataset I, no targets leave or enter the frame, and in Dataset II, targets leave and enter the frame at regular intervals. This tracking method deals with the data association problem that arises with multiple targets in a single frame and also with the dimensionality problem that arises due to repeated changes in the size of the state space associated with multiple targets. We use two important methods to achieve this. The first is the Kalman filter, an extension of the Bayesian filter, which uses a probabilistic approach to estimate the target states. The second is the Hungarian algorithm, used to overcome the data association problem, which comes into the picture only when there are multiple targets.
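One tracking step of this kind of pipeline can be sketched concisely: predict each track with a constant-velocity Kalman filter, then associate detections with predictions using scipy's Hungarian solver, and finally update the assigned tracks. The sketch below is a generic illustration with illustrative noise settings; gating and track creation/deletion are omitted.

    # One step of Kalman prediction + Hungarian assignment for 2-D point targets.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    dt = 1.0
    F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]])
    H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]])
    Q = 0.01 * np.eye(4)      # process noise (illustrative)
    R = 1.0 * np.eye(2)       # measurement noise (illustrative)

    def step(tracks, detections):
        """tracks: list of (x, P); detections: (m, 2) array of measurements."""
        # 1. Predict every track with the constant-velocity model.
        preds = []
        for x, P in tracks:
            preds.append((F @ x, F @ P @ F.T + Q))
        # 2. Hungarian assignment on Euclidean distance (data association).
        cost = np.array([[np.linalg.norm((H @ x) - d) for d in detections]
                         for x, _ in preds])
        rows, cols = linear_sum_assignment(cost)
        # 3. Kalman update of each assigned track.
        updated = list(preds)
        for r, c in zip(rows, cols):
            x, P = preds[r]
            y = detections[c] - H @ x                  # innovation
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
            updated[r] = (x + K @ y, (np.eye(4) - K @ H) @ P)
        return updated, list(zip(rows, cols))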
SSCC-02.5 15:50 A Forensic Analysis on the Availability of MQTT Network Traffic
Naga Venkata Hrushikesh Chunduri (Amrita Viswa Vidyapeetham, Coimbatore, India); Ashok Kumar Mohan (Amrita Vishwa Vidyapeetham Amrita University, India)
IoT is a diversified technology with large scalability, integrating hardware and software components. IoT comprises lightweight protocols, sensors attached to field components, and software that integrates all of the above. These lightweight protocols are prone to many security issues; one of them is the MQTT protocol, which operates with a client/server architecture. Our work focuses on showcasing the weak security posture of the protocol by attacking MQTT brokers, which act as servers. We performed intrusion and denial-of-service attacks on publicly available MQTT test brokers to obtain sensitive information and validate the security implications. We also present our observations on a machine-learning-based random forest algorithm used to detect the attack logs, and the reasons to shift to a forensic approach.
SSCC-02.6 16:05 Benchmarking Machine Learning Bioinspired Algorithms for Behaviour-Based Network Intrusion Detection
Paulo Ferreira (School of Technology and Management - Polytechnic of Leiria, Portugal); Mario Antunes (School of Technology and Management - Polytechnic Leiria & CIIC & CRACS, INESC-TEC, Portugal)
Network security encompasses distinct technologies and protocols, with behaviour-based network Intrusion Detection Systems (IDS) being a promising application for detecting and identifying zero-day attacks and vulnerability exploits. In order to overcome the weaknesses of signature-based IDS, behaviour-based IDS applies a wide set of machine learning technologies to learn the normal behaviour of the network, making it possible to detect malicious and not-yet-seen activities. The machine learning techniques that can be applied to IDS are vast, as are the methods to generate the datasets used for testing. This paper aims to evaluate the CSE-CIC-IDS2018 dataset and benchmark a set of supervised bio-inspired machine learning algorithms, namely the CLONALG artificial immune system, Learning Vector Quantization (LVQ), and the back-propagation Multi-Layer Perceptron (MLP). The results obtained were also compared with an ensemble strategy based on a majority-voting algorithm. The results show the appropriateness of using the dataset to test behaviour-based network intrusion detection algorithms and the efficiency of the MLP algorithm in detecting zero-day attacks, when compared with CLONALG and LVQ.
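The MLP baseline and the majority-voting ensemble can be sketched with scikit-learn. CLONALG and LVQ require specialised implementations and are not shown; X and y below are assumed to be pre-processed flow features and attack/benign labels from the dataset, and the ensemble members are illustrative stand-ins rather than the paper's exact combination.

    # Sketch: MLP benchmark plus a hard majority-voting ensemble in scikit-learn.
    from sklearn.neural_network import MLPClassifier
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    def benchmark(X, y):
        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, stratify=y)
        mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300)
        ensemble = VotingClassifier(
            estimators=[("mlp", mlp),
                        ("rf", RandomForestClassifier(n_estimators=200)),
                        ("lr", LogisticRegression(max_iter=1000))],
            voting="hard")                      # majority vote
        for name, clf in [("MLP", mlp), ("Voting", ensemble)]:
            clf.fit(Xtr, ytr)
            print(name, "\n", classification_report(yte, clf.predict(Xte)))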
SSCC-02.7 16:20 Multilevel Secure Container Deployment Framework in Edge Computing
Seema Nambiar, Nanda Krishna, Chirag Tubakad and Adithya Kiran (PES University, Bengaluru, India); Subramaniam Kalambur (PES University, India)
Large-scale distributed IoT applications such as smart cities and smart buildings are becoming a reality. The microservice architectural pattern is now common for its ease of development and is also used in edge systems. To secure containers, the gVisor container framework is emerging as an alternative to the standard Docker container, but it has increased performance overheads in the network stack and file system processing. In this paper, we first characterize the performance of gVisor containers running real programs and demonstrate the loss in performance. Next, we propose a multi-level container deployment framework that chooses the right container framework, trading off performance and security based on the container's use in a microservice application. We demonstrate that, using our framework, it is possible to ensure security with a relatively lower impact on performance.
SSCC-02.8 16:35 Audio Steganography Using Multi LSB and IGS Techniques
Chinmay Kuchinad (PES University, Bangalore, India); Chiranjeevi N and Kartik Hegde (PES University, India); Shikha Tripathi (PES University, Bangalore, India)
With advances in communication technologies and the flow of enormous amounts of data, providing a secure pathway for that data has become a necessity. In this paper, different methodologies are explored to find an efficient steganographic method for embedding one form of data into another, with the aim of enhancing data security. The proposed approach involves embedding an image into an audio signal. Image encryption methods are also explored with the intention of adding an extra layer of security to the system. The proposed system shows improvement in capacity and SNR. Various attacks are carried out on the stego signal to check the robustness of the resulting steganographic system.
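The basic LSB idea behind such schemes can be shown with a short sketch: hide a byte payload (for example, a flattened image) one bit per 16-bit audio sample. This single-LSB illustration does not reproduce the paper's multi-LSB and IGS variants or the image encryption layer.

    # Basic single-LSB embedding of a byte payload into 16-bit PCM samples.
    import numpy as np

    def embed_lsb(samples, payload_bytes):
        """samples: int16 array; payload_bytes: bytes to hide (1 bit per sample)."""
        bits = np.unpackbits(np.frombuffer(payload_bytes, dtype=np.uint8))
        if bits.size > samples.size:
            raise ValueError("payload too large for cover audio")
        stego = samples.copy()
        # Clear each sample's least significant bit and write the payload bit.
        stego[:bits.size] = (stego[:bits.size] & ~1) | bits.astype(np.int16)
        return stego

    def extract_lsb(stego, n_bytes):
        bits = (stego[:n_bytes * 8] & 1).astype(np.uint8)
        return np.packbits(bits).tobytes()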
SSCC-02.9 16:50 GIDS: Anomaly Detection Using Generative Adversarial Networks
Rinoy Macwan (DA-IICT, India); Sankha Das (BITS Pilani, Rajasthan, India); Manik Lal Das (DAIICT, India)
Cybersecurity in the modern digital age has become a major challenge for individuals and organizations seeking to protect assets from malicious entities. Machine learning techniques have been used in advanced intrusion detection systems (IDS), which detect new attacks by analyzing existing attacks' metrics with the help of rich collections of datasets. Generative Adversarial Networks (GAN) have attracted a lot of attention in recent times, in particular for forgery detection in image data. GANs also show potential for text-based traffic inspection to check whether traffic contains any suspicious strings. In this paper, we present an intrusion detection system using GAN, termed GIDS, that detects anomalies in input strings with reasonable accuracy. GIDS minimizes the mapping error without using an external encoder. The analysis and experimental results show that GIDS detects anomalies with an accuracy of 83.66 percent, while keeping the false positive rate low.
SSCC-02.10 17:05 Lightweight Cryptographic Primitives to Provide Data Confidentiality and Data Privacy at IoT End Nodes: A Comparative Analysis
Heera Wali (KLE Technological University, India); Nalini C Iyer (B.V.Bhoomaraddi college of Engg and Technology, India); Vishwanath G Garagad (KLE Technological University & B. V. Bhoomaraddi College of Engineering and Technology, India)
IoT devices work under a small power envelope, which makes it difficult for end nodes to deploy computationally complex cryptographic primitives for data security and data confidentiality. Conventional cryptographic primitives such as AES (encryption), SHA (hashing), and RSA (signing) do not scale well on memory- and power-constrained IoT end-node architectures. Thus, lightweight cryptographic primitives are being proposed to replace conventional cryptographic primitives. This paper outlines many of these techniques and presents a comparative analysis of lightweight cryptographic primitives such as PRESENT, CLEFIA, and SIMON that are proposed as replacements for conventional cryptography for data security/confidentiality at the end nodes.

Thursday, October 15 17:30 - 18:20 (Asia/Calcutta)

Keynote: title will be announced shortly

Speaker: Dr. Ian T. Foster, University of Chicago and Argonne National Laboratory, USA

Biography: Dr. Ian Foster is the Director of Argonne's Data Science and Learning Division, Argonne Senior Scientist and Distinguished Fellow and the Arthur Holly Compton Distinguished Service Professor of Computer Science at the University of Chicago. He was the Director of Argonne's Computation Institute from 2006 to 2016. Foster's research contributions span high-performance computing, distributed systems, and data-driven discovery. He is widely recognized for co-inventing grid computing, which laid the groundwork for the cloud computing systems that are used today. He has published hundreds of scientific papers and eight books on these and other topics. Methods and software developed under his leadership underpin many large national and international cyberinfrastructures. Foster received a BSc (Hons I) degree from the University of Canterbury, New Zealand, and a PhD from Imperial College, United Kingdom, both in computer science. Foster's honors include the Gordon Bell Prize for high-performance computing (2001), the Lovelace Medal of the British Computer Society (2002), the IEEE Tsutomu Kanai Award (2011), and the IEEE Charles Babbage Award (2019). He was elected Fellow of the British Computer Society in 2001, Fellow of the American Association for the Advancement of Science in 2003, and in 2009, a Fellow of the Association for Computing Machinery, who named him the inaugural recipient of the high-performance parallel and distributed computing (HPDC) achievement award in 2012. In 2017, he was recognized with the Euro-Par Achievement Award and in 2019 he was a recipient of the IEEE Computer Society Charles Babbage Award.

Friday, October 16

Friday, October 16 9:30 - 10:20 (Asia/Calcutta)

Keynote: Detecting network anomalies and intrusions

Speaker: Prof. Ljiljana Trajkovic, Simon Fraser University, Burnaby, British Columbia, Canada

Abstract: The Internet, social networks, power grids, gene regulatory networks, neuronal systems, food webs, social systems, and networks emanating from augmented and virtual reality platforms are all examples of complex networks. Collection and analysis of data from these networks is essential for their understanding. Traffic traces collected from various deployed communication networks and the Internet have been used to characterize and model network traffic, analyze network topologies, and classify network anomalies. Data mining and statistical analysis of network data have been employed to determine traffic loads, analyze patterns of users' behavior, and predict future network traffic while spectral graph theory has been applied to analyze network topologies and capture historical trends in their development. Machine learning techniques have proved valuable for predicting anomalous traffic behavior and for classifying anomalies and intrusions in communication networks. Applications of these tools help understand the underlying mechanisms that affect behavior, performance, and security of computer networks.

Friday, October 16 10:20 - 11:10 (Asia/Calcutta)

Keynote: In Hardware We Trust: Electronic Design Automation

Speaker: Dr. Nicolas Sklavos, University of Patras, Hellas

Abstract: Modern handheld devices and systems are being developed day by day to satisfy the complexity of users' needs and applications. Integrated circuits (ICs) now play a sensitive role in device operation, since they are the main cores for almost every type of processing and data transaction. The demands for high performance, minimized area, and low power grow stronger each time, and electronic design automation (EDA) is a crucial factor in meeting these targets. Beyond the traditional circuit and system design approaches, however, the ever-growing hardware threats make secure hardware design and trusted devices a priority. Traditional design and test approaches are being questioned, since most parts of the process need considerations, assumptions, and specifications for both trustworthiness and security across all metrics, including modeling and evaluation. This keynote talk gives a detailed overview of hardware security and EDA approaches, including security threats in integrated circuits throughout the design cycle. It also deals with the countermeasures and the motivation behind the prior art. Examples of modern applications are introduced in the sense of trusted hardware and security by design. Solutions and alternative approaches are outlined, and a detailed overview of expectations for the future is discussed, for both users' applications and devices.

Friday, October 16 11:10 - 12:00 (Asia/Calcutta)

Keynote: Learning from Class-Imbalanced Data: Challenges, Methods and Applications

Speaker: Dr. El-Sayed El-Alfy, King Fahd University of Petroleum and Minerals, Saudi Arabia

Abstract: Nowadays, machine learning and intelligent systems are gaining increasing importance in this era of digital transformation. As more data is generated, advances in this field present new opportunities in a wide spectrum of domains such as healthcare, finance, social media, cybersecurity, industrial systems, and sensor networks. However, in many real-world applications some events or classes are rare and not equally represented in the data. This imposes several challenges for standard machine learning classification algorithms. Though several approaches have been proposed over the past decades, there are open issues that need further investigation. In this talk, we review major research challenges and state-of-the-art solutions, with examples, for handling imbalanced datasets in order to build more effective models.

Friday, October 16 12:00 - 13:00 (Asia/Calcutta)

Keynote: Device Edge Computing: Next Frontiers for IoT and Robotics

Speaker: Dr. Arpan Pal, Chief Scientist and Research Area Head, Embedded Systems and Robotics, TCS Research, India

Abstract: Edge computing is the next frontier for IoT where analytics and AI need to be performed in the Edge device itself without sending the sensor data to the cloud. In this talk, we will discuss the main drivers for device edge computing, outline the application use cases, and provide a glimpse of the required technology - current and future.

Friday, October 16 13:30 - 14:30 (Asia/Calcutta)

Keynote: AI Approaches for IoT Security Analysis

Speaker: Dr. Axel Sikora, University of Applied Sciences Offenburg, Germany

Abstract: IoT networks are increasingly used as entry points for cyber attacks, as they often offer low security levels, may allow the control of physical systems, and can potentially open access to other IT networks and infrastructures. Existing Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS) mostly concentrate on legacy IT networks. Nowadays, they come with a high degree of complexity and adaptivity, including the use of Artificial Intelligence (AI) and Machine Learning (ML). Only recently have these techniques also been applied to IoT networks. The keynote gives an overview of the state of the art of IoT network security and of AI-based approaches to IoT security analysis.

Friday, October 16 14:45 - 18:00 (Asia/Calcutta)

ACN-02: ACN-02: International Conference on Applied Soft Computing and Communication Networks (ACN'20) - Short Papers

ACN-02.1 14:45 A Comprehensive Survey on Big Data Technology Based Cybersecurity Analytics Systems
S Saravanan (Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Amrita University, India); Gopalakrishnan Prakash (Amrita School of Engineering, India)
Internet traffic data is enormous, as most of the world's population now uses the Internet. Due to improvements in the speed of modern communication links, a huge volume of traffic data can be generated in a small amount of time. These two aspects characterize Internet traffic data as Big Data. Proper analysis of traffic data helps network administrators manage their networks and helps cybersecurity professionals identify security events in them. Recently, many cybersecurity systems that leverage big data technologies for examining security events have been developed in both academia and industry. These systems need to process a huge number of packet captures (PCAP) to detect and mitigate security attacks. In this work, we present a comprehensive survey of Big Data Technology based Cybersecurity Analytics (BDTCA) systems, which utilize big data technologies for analyzing security events. The survey identifies the various big data technologies, datasets and feature selection algorithms used in BDTCA systems. The paper also reports the different methods used to read packet trace files from the Hadoop Distributed File System (HDFS). Finally, observations, recommendations and future directions in the area of BDTCA systems are presented.
ACN-02.2 15:00 P2P Bot Detection Based on Host Behavior and Big Data Technology
Pullepu Datta Veera Sai Teja, Pasupuleti Hema sirija and Pendekanti Roshini (India); S Saravanan (Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Amrita University, India)
A botnet is a herd of malware-compromised devices, known as bots, connected through the Internet to perform malicious activities. Botnets can be of two types based on architecture, namely Client-Server architecture (centralized botnets) and Peer-to-Peer architecture (P2P botnets). Over the past few years, P2P botnets have emerged as the biggest threat to networks. With the evolution of P2P botnets, detection has become more challenging, since P2P bot traffic easily blends with benign network traffic, making it hard to detect P2P bots in the presence of benign P2P applications. A modern P2P botnet detection system needs to process huge packet capture (PCAP) files, as the amount of traffic data generated in the network is enormous. This paper proposes a Hadoop-based P2P botnet detection system that detects P2P bots in a Local Area Network (LAN) containing both P2P bot and benign P2P traffic; it reads PCAP files directly from the Hadoop Distributed File System (HDFS) and avoids converting PCAP files to text. Detection is based on various characteristics of P2P bots, such as the count of unique destination hosts contacted, the total amount of data transferred from the source host, the average TTL value of packets transferred from the source host, and the count of unique destination ports contacted. Experiments and evaluations are conducted on a publicly available real network dataset.
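The sketch below shows one way the per-host features listed above could be computed from a flow table; the column names and the use of pandas are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch: per-host behavioural features from a flow table.
import pandas as pd

def host_features(flows: pd.DataFrame) -> pd.DataFrame:
    """flows is assumed to have columns: src, dst, dst_port, bytes, ttl."""
    grouped = flows.groupby("src")
    return pd.DataFrame({
        "unique_dst_hosts": grouped["dst"].nunique(),      # unique destination hosts contacted
        "total_bytes_sent": grouped["bytes"].sum(),        # total data transferred from the host
        "avg_ttl": grouped["ttl"].mean(),                  # average TTL of transmitted packets
        "unique_dst_ports": grouped["dst_port"].nunique(), # unique destination ports contacted
    })
```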
ACN-02.3 15:15 VHF OSTBC MIMO System Used to Overcome the Effects of Irregular Terrain on Radio Signals
Xolani Maxama (CUT, South Africa); Elisha Markus (Central University of Technology & Free State South Africa, South Africa)
Owing to the irregular terrain of the Northern Cape Province of South Africa, radio signal reception is a challenge, and all forms of electronic communication are unavailable in certain areas of the province. This setback has impacted the power utility Eskom negatively, as it has substations and other high-voltage apparatus to monitor using the SCADA system in some of these areas. This paper addresses the challenge by making use of a Very High Frequency Orthogonal Space-Time Block Code, Multiple-Input Multiple-Output (VHF OSTBC MIMO) system. The simulation results are generated using the Pathloss 4 and MATLAB software packages. The results provide coverage predictions, bit error rates (BER) and received signal levels of two OSTBC MIMO systems operating at two different VHF frequencies, and reveal that employing a low-frequency VHF OSTBC MIMO transceiver system in rough terrain environments can greatly improve radio signal reception.
ACN-02.4 15:30 iCREST: International Cross-Reference to Exchange-Based Stock Trend Prediction Using Long Short-Term Memory
Kinjal Chaudhari and Ankit Thakkar (Institute of Technology, Nirma University, India)
Stock market investments are primarily aimed at gaining higher profits; a large number of companies get listed on various stock exchanges to initiate trading in the stock market. For potential market expansion, several companies may choose to be listed on multiple exchanges, which may be domestic and/or international. In this article, we propose an international cross-reference to exchange-based stock trend (iCREST) prediction approach to study how the historical stock market data of a company listed on internationally located stock exchanges can be integrated. We consider timezone and currency variations in order to unify the data; we also incorporate data integration-based pre-processing to eliminate loss of useful stock price information. We calculate the difference between the exchange prices of a company and adopt long short-term memory (LSTM) models to predict the one-day-ahead stock trend on the respective exchanges. Our work can be considered one of the novel approaches that integrate international stock exchanges to predict the stock trend of the corresponding markets. For the experiments, we take datasets of five companies listed on the National Stock Exchange (NSE), the Bombay Stock Exchange (BSE), and the New York Stock Exchange (NYSE); the prediction performance is evaluated using directional accuracy (DA), precision, recall, and F-measure metrics. The results indicate performance improvement with international exchanges and hence the potential adaptability of the proposed approach.
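A minimal, hypothetical sketch of an LSTM-based one-day-ahead trend classifier in the spirit of the abstract; the window length, feature count (e.g. inter-exchange price differences) and training call are assumptions rather than the authors' configuration.

```python
# Hypothetical one-day-ahead trend classifier: sliding windows of features -> up/down label.
import tensorflow as tf

WINDOW, N_FEATURES = 30, 4  # assumed look-back window and feature count

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, N_FEATURES)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of an "up" trend tomorrow
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# X: (samples, WINDOW, N_FEATURES) sliding windows, y: next-day trend labels (0/1)
# model.fit(X, y, epochs=20, batch_size=32, validation_split=0.2)
```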
ACN-02.5 15:45 Speech Based Selective Labeling of Objects in an Inventory Setting
A. Alice Nithya, Mohak Narang and Akash Kumar (SRM Institute of Science and Technology, India)
Object detection has been extensively used and is considered one of the prerequisites for various vision-based applications such as object recognition, instance segmentation, and pose estimation. The field has attracted much research attention in recent years due to its close relationship with scene analysis and image understanding. Traditional methods extract features and use shallow machine learning architectures to detect objects; these methods face many difficulties in combining the extracted low-level features with the high-level context of the detector and classifier. Though recent developments in deep learning architectures have helped visual scene analysis and detection methods perform remarkably, much more focus is required on voice-driven scene analysis and object detection. In this paper, a voice-based scene analysis and object detection method is proposed that aims at detecting a specific object through voice input in inventory storage. An automatic speech recognizer based on a deep learning architecture performs speech-to-text conversion, and the converted text identifies which object is to be located in the scene. A dictionary-based search algorithm is used to reduce the search time for the class of interest. Object detection is performed using the Faster R-CNN architecture. Experimental results on the retail product checkout dataset are used to evaluate system performance and show that this approach makes it possible for the model to segment only the specified object while ignoring all other classes.
ACN-02.6 16:00 Classification and Evaluation of Goal Oriented Requirements Analysis Methods
Farhana Mariyam (Jamia Millia Islamia, India); Shabana Mehfuz (Jamia Millia Islamia, New Delhi, India); Mohd Sadiq (Jamia Millia Islamia, India)
Goal oriented requirements analysis (GORA) is a sub-process of goal oriented requirements engineering, used for the identification and analysis of the high-level objectives of an organization. Different types of GORA methods, such as AGORA, PRFGOREP, FAGOSRA and Tropos, have been developed to deal with different issues in GORA, such as reasoning with goals, selection and prioritization of goals and requirements, stakeholder analysis, and detection of conflicts among goals. The objective of this paper is to classify and evaluate GORA methods based on the goal concepts, goal links and soft computing techniques they use to deal with imprecision and vagueness during the decision-making process. Based on the evaluation, we also discuss the future scope of the field of GORA methods.
ACN-02.7 16:15 Serverless Deployment of a Voice-Bot for Visually Impaired
Deepali Bajaj (Shaheed Rajguru College & University of Delhi, India); Urmil Bharti (University of Delhi & Shaheed Rajguru College of Applied Sciences for Women, India); Hunar Batra (Shaheed Rajguru College of Applied Sciences for Women, University of Delhi, India); Anita Goel (Dyal Singh College, University of Delhi, India); Suresh Gupta (IIT Delhi, India)
Serverless Computing is an emerging paradigm for the development and deployment of cloud applications. Rather than directly provisioning cloud infrastructure, developers write short, ephemeral functions to be executed on a serverless platform. This simplified programming model relieves developers from the burden of server management tasks such as allocation, scaling and deallocation of resources. Resources are typically allocated and charged in proportion to the number of times a function is called. The monetary benefits of serverless architectures depend greatly on function execution behavior and application workload volumes. Among the relevant case studies, certain applications, such as voice bots and chatbots, suit serverless frameworks particularly well. Chatbots are becoming popular in all varieties of business applications, as they can handle multiple users' requests at a time and reduce the cost per service. In this paper, we investigate a case study of "Feed-O-Back - A Teachers Feedback Voice Bot for visually impaired students" and its implementation on serverless frameworks. The Feed-O-Back Bot is a web application powered by conversational AI and enabled with natural language processing to collect feedback from students. Since this bot is used intermittently and applies only to a few students with special needs, reserving a dedicated cloud virtual machine instance is not economically justifiable. We opted for the Google Cloud Platform (GCP) and compared the pricing of dedicated cloud instances (the Infrastructure-as-a-Service model, IaaS) with serverless architectures. We found that Google Cloud Functions (the FaaS model) is more cost-efficient and economical than Google Compute Engine (the IaaS model). We describe the details of the voice bot implementation using the Dialogflow conversation service deployed on Cloud Functions for Firebase.
ACN-02.8 16:30 Smart Cities and Spectrum Vulnerabilities in Long Range Unlicensed Communication Bands: A Review
Elisha Markus (Central University of Technology & Free State South Africa, South Africa); Johnson Fadeyi (CUT, South Africa)
This paper presents a survey of key analyses and observations of spectrum vulnerabilities for long-range communications in unlicensed bands. As smart cities are on the rise across the globe, the unlicensed spectrum is expected to play a critical role in achieving the cities of the future. However, the use of this band is fast becoming saturated due to massive deployment of different technologies. Many issues need to be addressed in order to achieve these smart cities; key among them are the security vulnerabilities of LPWAN/IoT technologies, which adversely affect the confidentiality and integrity of data supplied by sensors. In this study, various possible attacks on smart cities are analyzed, their effects are examined, and possible solutions are proposed to help minimize or eliminate the effect of these attacks. Furthermore, this paper highlights major challenges surrounding spectrum utilization and the advantages and disadvantages of various radio spectrum management techniques. A conclusion is reached as to what the gaps are in terms of the security of the network architecture.
ACN-02.9 16:45 Localization in Wireless Sensor Networks Using a Mobile Anchor and Subordinate Nodes
Abhinesh Kaushik (Jawaharlal Nehru University, India); Daya Krishan Lobiyal (Jawaharlal Nehru University, New Delhi, India)
Localization is one of the significant fields of research in wireless sensor networks (WSNs). Localization broadly refers to estimating the location of unidentified nodes in a WSN using anchor nodes (nodes whose location is known). Localization, along with the mobility of nodes in the network, makes the field more potent for practical purposes. In this paper, we propose an algorithm for localization in WSNs using a mobile anchor and subordinate nodes (LWMS). In the proposed algorithm, a mobile anchor node is used to localize all the other unidentified nodes in the network. The mobile anchor node localizes the nodes along its path using the RSSI technique, and these localized nodes are called static anchor nodes. Using these static anchor nodes, the algorithm localizes the other nodes around them; nodes localized using static anchor nodes are called subordinate anchor nodes. Due to holes or a small percentage of anchor nodes, some nodes may still remain unlocalized, so the subordinate nodes are then used to localize all the remaining nodes in the network. We also use a method of solving the system of distance equations that reduces the propagation of error. The simulation results corroborate that LWMS performs better than the other two algorithms used for comparison.
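For illustration, the sketch below solves a system of distance equations by the standard linearisation (subtracting one circle equation from the others) and least squares; this is a generic technique, not necessarily the error-reduction method used in the paper.

```python
# Illustrative multilateration: estimate a node's position from anchor positions and
# RSSI-derived distance estimates via linearised least squares.
import numpy as np

def multilaterate(anchors, dists):
    """anchors: (n, 2) known positions; dists: (n,) estimated distances, n >= 3."""
    anchors, dists = np.asarray(anchors, float), np.asarray(dists, float)
    x1, y1, d1 = anchors[0, 0], anchors[0, 1], dists[0]
    # Subtracting the first circle equation from the others yields a linear system A p = b.
    A = 2 * (anchors[1:] - anchors[0])
    b = (d1**2 - dists[1:]**2
         + anchors[1:, 0]**2 - x1**2
         + anchors[1:, 1]**2 - y1**2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos  # estimated (x, y)

# Example: three anchors and noiseless distances to the point (2, 3)
print(multilaterate([(0, 0), (10, 0), (0, 10)],
                    [np.hypot(2, 3), np.hypot(8, 3), np.hypot(2, 7)]))
```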
ACN-02.10 17:00 CloudSim Exploration: A Knowledge Framework for Cloud Computing Researchers
Lakshmi Sankaran (Christ Deemed to be University, India); Janardhanan subramanian Saleema (Christ University, India)
This paper aims to help early researchers answer the questions they face when setting up experiments in their development environment. While identifying the steps required for experimentation, the authors narrowed their study down to a simulation toolkit for Cloud Computing. Because of such simulators, a cloud computing environment is easily available from one's desktop resources, instead of having to visit an actual physical data center to collect trace and log files as datasets for real workloads. The paper shares this experience with new researchers who are interested in how to start setting up cloud computing experiments. A new framework called the Cloud Computing Simulation Environment (CCSE) is presented, inspired by the Procure, Apply, Consider and Transform (PACT) model, to ease the learning process. The literature survey in this paper traces the path taken by researchers to understand the architecture, technology, and tools required to set up a resilient test environment; this path also shapes the proposed CCSE framework. The parameters identified from the experiments were Virtual Machines (VMs), Cloudlets, Hosts, and Cores. An appropriate combination of the values of these parameters supports horizontal scaling of VMs: increasing the number of VMs does not influence the average execution time beyond a specific limit on the number of VMs allocated. In vertical scaling, however, appropriate combinations of cores and hosts yield better execution times; maintaining the optimal number of hosts therefore ultimately saves resources in VM allocation.
ACN-02.11 17:15 Interference Management Technique for LTE, Wi-Fi Coexistence Based on CSI at the Transmitter
Diana Josephine (Coimbatore Institute of Technology); A Rajeswari (Aerodrome Post, India & Coimbatore Institute of Technology)
The coexistence of wireless devices is a better option when devices operate in the 2.4/5 GHz unlicensed radio spectrum. In recent times, LTE and Wi-Fi have been the most promising wireless technologies to coexist. Joint operation of LTE and Wi-Fi in the same license-exempt bands is a real possibility, but it requires complex intelligent techniques: when devices coexist, they suffer from significant interference, which leads to performance degradation. In this paper, a novel Channel Condition-based Equipment (CCBE) has been developed and implemented to quantify the impact of interference on the performance metrics. CCBE involves estimation of the CSI by the receiver and uses those channel coefficients at the transmitter to study the behavior of the channel for effective transmissions. Results show that, for a fixed SNR, there is a reasonable reduction in BER when transmissions are allowed only after estimating the behavior of the channel. The coexistence environment is deployed in MATLAB with the aid of the WLAN and LTE toolboxes.

Friday, October 16 14:45 - 20:15 (Asia/Calcutta)

CoCoNet-S6: CoCoNet-S6: Symposium on Emerging Topics in Computing and Communications (SETCAC'20)- Regular & Short Papers

CoCoNet-S6.1 14:45 Internet Performance Profiling of Countries
Nikolay Todorov (Ward Solutions, Ireland); Ivan Ganchev (University of Plovdiv Paisii Hilendarski, Bulgaria & University of Limerick, Ireland); Máirtín O'Droma (University of Limerick & Director, Telecommunications Research Centre, University of Limerick, Ireland)
The paper presents research into data capture and analysis techniques for creating spatial and temporal Internet Quality of Service (QoS) performance profiles of countries. Active Internet probing and traffic monitoring techniques are used, facilitated through the European RIPE Atlas project, to capture raw QoS measurements. The research goal is to contribute towards developing large-scale network QoS performance profiling and testing methods for local, long-range and global-range Internet QoS performance analysis. The range of stakeholders interested in such profiles is wide, from network owners and Internet service providers (ISPs) through Internet service provisioning consultants to corporate, business and individual users. The applications are also wide, from detection and location of temporal traffic bottlenecks and faults, and bottleneck incidence behavior as a function of geographic and temporal service demands, to Internet service level agreements (SLAs) and their policing. Twenty-six European countries are examined and profiled on the basis of a bi-directional north-south and east-west 'compass profiling' methodology over a one-month period, and the worst-case scenarios detected are presented as an example. The results may serve as an initial benchmark for mapping evolving performance profiles over longer or continuous periods of time, employing more geographically spread testing probes and a mix of profiling methodologies, especially for verification purposes. A similar approach may be taken to regional, international, and intercontinental Internet QoS profiling.
CoCoNet-S6.2 15:00 Modelling a Folded N-Hypercube Topology for migration in Fog Computing
Pedro Juan Roig (Miguel Hernández University, Spain); Salvador Alcaraz and Katja Gilly (Miguel Hernandez University, Spain); Carlos Juiz (Universitat de les Illes Balears, Spain)
Moving IoT devices need to have their associated VMs as close as possible so as to minimize latency, jitter, bandwidth use and even power consumption. This implies that Data Centers in Fog Computing environments must be ready to move VMs among their different hosts in a discretionary manner in order to cope with such movements. In this paper, a Folded N-Hypercube switching infrastructure is modelled from different viewpoints, such as arithmetic, logical and algebraic, paying special attention to how VM migrations are managed within the Fog domain.
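For readers unfamiliar with the topology, the sketch below enumerates the neighbours of a node in a folded n-hypercube: the n single-bit links of the ordinary hypercube plus one complement link. It is a generic illustration of the structure, not the paper's model.

```python
# Neighbourhood of a node in a folded n-hypercube.
def folded_hypercube_neighbours(node: int, n: int) -> list[int]:
    mask = (1 << n) - 1
    neighbours = [node ^ (1 << k) for k in range(n)]  # ordinary hypercube links
    neighbours.append(node ^ mask)                    # the extra "folded" complement link
    return neighbours

# Example: node 0 in a folded 3-cube is adjacent to 1, 2, 4 and its complement 7.
print(folded_hypercube_neighbours(0, 3))  # [1, 2, 4, 7]
```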
CoCoNet-S6.3 15:15 Android Malware Classification Based on Static Features of an Application
Ashwini S D, Manisha Pai and Sangeetha J (M S Ramaiah Institute of Technology, India)
Android is the most sought-after mobile platform and has changed what mobile devices can do. As a result, a continuous increase in Android malware applications has been seen, posing a significant hazard to users. The detection of malware applications in the Android environment has thus become a trending research field for cybersecurity researchers. Android malware detection depends on characterizing an Android application's functionality. Over the years, malware has evolved and become more sophisticated; it cannot be detected using a single static feature alone, as this might result in a high number of false negatives. We propose a detection model in this paper that accurately classifies samples as malware or benign with fewer false positives and false negatives. We use string features that include suspicious API calls, used permissions, requested permissions, filtered intents, hardware components, and restricted API calls. We then employ four machine learning algorithms, namely Ridge Classifier, XGBoost Classifier, Random Forest, and Support Vector Classifier, to evaluate the effectiveness of the binary feature vector formed by the combination of these string features. Random Forest achieved the highest accuracy, precision, recall, area under the curve, and F1 score.
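A hedged sketch of the kind of evaluation described above: training a Random Forest on feature vectors and reporting the listed metrics. The synthetic data stands in for the real permission/API feature matrix, which is not reproduced here.

```python
# Illustrative evaluation pipeline; synthetic data replaces the real binary feature matrix.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=60, n_informative=20, random_state=0)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("accuracy :", accuracy_score(y_te, pred))
print("precision:", precision_score(y_te, pred))
print("recall   :", recall_score(y_te, pred))
print("F1       :", f1_score(y_te, pred))
print("AUC      :", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```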
CoCoNet-S6.4 15:30 Brain Electric Microstate of Karawitan Musicians' Brain in Traditional Music Perception
Indra Wardani, Djohan Djohan and Fortunata Tyasrinestu (Indonesian Institute of the Arts Yogyakarta, Indonesia); Phakkharawat Sittiprapaporn (Mae Fah Luang University, Thailand)
The rapid development of music research has led to many interdisciplinary topics. In the field of neuroscience, music is studied in relation to either its effect on cognitive processes or the cognitive processes behind it; generally, neuroscience focuses on how music is integrated in the brain. Several previous studies have shown differences in brain structure and activity between non-musicians and musicians. Instead of differentiating brain activity between musicians and non-musicians, the present study examined the brain activity of musicians listening to music in relation to their musical experience. Applying electroencephalography recording in an experimental approach with Karawitan musicians, the results showed higher brain activity when listening to familiar music, Gendhing Lancaran, the traditional music of Java. In addition, the dominant brain activity occurred in the temporal lobe while the Karawitan musicians listened to Gendhing Lancaran.
CoCoNet-S6.5 15:45 An Analysis of Rain streak Modelling as a Noise Parameter using Deep Learning Techniques
Akaash B (Amrita School of Engineering, Amrita Vishwa Vidyapeetham, India); Aarthi R (Amrita Vishwa Vidyapeetham, India)
Outdoor Vision Systems (OVS) play a vital role in the surveillance of the environment. However, the images and videos captured by these systems can be severely corrupted by the sharp intensity changes brought about by adverse weather and climatic conditions. In this work, synthetically prepared rain images are modelled to visualize the randomly distributed rain-streak patterns as noise. The analysis has been performed using various deep learning networks, such as autoencoders with and without skip connections and Denoising Convolutional Neural Networks (DnCNN). The best model for this process is suggested based on the Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) obtained by comparing the original and reconstructed images.
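The quality metrics named above can be computed as in the sketch below, a minimal illustration using scikit-image; it assumes images scaled to [0, 1] and scikit-image >= 0.19 for the channel_axis argument.

```python
# Image-quality metrics for comparing an original image with its de-rained reconstruction.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def compare(original: np.ndarray, restored: np.ndarray) -> dict:
    """Both images are assumed to be float arrays in [0, 1] with identical shapes."""
    mse = float(np.mean((original - restored) ** 2))
    psnr = peak_signal_noise_ratio(original, restored, data_range=1.0)
    ssim = structural_similarity(original, restored, data_range=1.0, channel_axis=-1)
    return {"MSE": mse, "PSNR": psnr, "SSIM": ssim}
```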
CoCoNet-S6.6 16:00 Text Sentiment Analysis using Artificial Intelligence Techniques
Sanskriti Srivastava, Vergin Sarobin, Jani Anbarasi and Namrata Sankaran (Vellore Institute of Technology, Chennai, India)
In today's world, data is generated with such high velocity and variety that analyzing such large volumes to extract meaningful results is a taxing job and manually impossible. Developing methods to analyze large volumes of data in order to find hidden patterns and reach meaningful interpretations is necessary for many organizations to make informed decisions. Sentiment analysis is one such method, used to interpret the emotions represented by text data, and it has a broad range of applications. Public opinion is an important business insight: businesses are interested in analyzing consumer behavior and understanding customers' needs, likes, dislikes and buying patterns. As more and more people become vocal about their preferences, the data these companies need is becoming readily available on blogs and social media platforms, but analyzing such a large amount of raw data to derive useful conclusions is a hectic task if performed manually. In such cases, sentiment analysis can be used to analyze the raw data. Sentiment analysis serves a variety of needs, from understanding public opinion of a government policy to assigning movie ratings from viewer reviews, and it is an especially important factor in social media monitoring for gauging wider public opinion about a topic. The NLP tools available today can efficiently analyze raw text and classify it as positive, negative, or neutral. Different machine learning algorithms, such as Random Forest, Logistic Regression, and Support Vector Machine, can be used to train a model on the features extracted by the NLP techniques; the trained model is then used to predict the polarity of raw input data. ROC and PR curves have been plotted to check the accuracy of the algorithms.
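As a toy illustration of the flow described above (not the authors' implementation), the sketch below extracts TF-IDF features and trains a logistic-regression polarity classifier on two made-up sentences.

```python
# Toy text-polarity pipeline: TF-IDF features feeding a classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["great product, loved it", "terrible service, never again"]  # toy training data
labels = [1, 0]                                                       # 1 = positive, 0 = negative

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["the product was great"]))  # likely predicts [1]
```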
CoCoNet-S6.7 16:15 Diverting Tantrum Behavior using Percussion Instrument on Autistic Spectrum Disorders
Zefanya Lintang (Indonesia Institute of the Arts, Indonesia); Djohan Djohan and Fortunata Tyasrinestu (Indonesian Institute of the Arts Yogyakarta, Indonesia); Phakkharawat Sittiprapaporn (Mae Fah Luang University, Thailand)
The aim of this study was to divert the tantrums of a patient with Autistic Spectrum Disorder (ASD). Children having tantrums disturb others and even themselves, as well as the activities they are doing. Given this, we attempted to divert the tantrums of a child with ASD by using percussion instruments. Using musical instruments is an effective method for redirecting the behavior to be changed, since musical instruments are interesting and pleasing objects for children with ASD; music and its instruments therefore have a chance to be utilized in diverting the behavior of children with ASD. This experiment used a single-subject ABA design with one subject with ASD, aged 7 years. Five percussion instruments were applied: tambourine, glockenspiel, tifa, maracas, and bell. The results showed that the subject's tantrums were hard to manage in the first baseline; the mean baseline duration was 7.3 minutes, whereas the average during treatment was 2.3 minutes. This reveals that the time span of the subject's responses during treatment was shorter than in the baseline and that, in the treatment phase, the subject's tantrum behavior was successfully diverted. This occurred because the subject seemed interested and enthusiastic while playing the percussion instruments. The treatment also improved the cognitive aspect of the subject, focusing on ongoing activities: the subject could play the bell following the therapist's instruction and keep at it for up to 15 minutes. The percussion instruments were thus able to improve the subject's focus on playing the bell and divert the tantrums.
CoCoNet-S6.8 16:30 MIMO Based 5G Data Communication Systems
Sudhanshu S. Gonge (Vishwakarma Institute of Technology Pune & Savitribai Phule Pune University, India); Shubham Mathesul, Ayush Rambhad and Parth Shrivastav (Vishwakarma Institute of Technology Pune, India)
Many of the main targets and expectations that need to be met in the immediate term, i.e. in the 5G era and beyond, are expanded efficiency, higher data rates, reduced average latency and enhanced coverage quality. Energy use by the networks is a big concern. To satisfy these requirements, there need to be dramatic changes in the design of the telecommunications network. This paper discusses the findings of a comprehensive study of the fifth-generation (5G) telecommunications network, including some of the main new innovations that further develop the infrastructure. The main subjects of this study are the 5G wireless network, Massive MIMO infrastructure, and Device-to-Device (D2D) communication. A comprehensive survey is provided in Section IV, giving a detailed account of deployments and predictions.
CoCoNet-S6.9 16:45 Performance Evaluation of Cross-layer Routing Metrics for Multi-radio Wireless Mesh Network
Narayan D. G. (BVB College of Engineering and Technology, Hubli. Karnataka, India); Mouna Naravani (KLE Technological University, India)
Wireless Mesh Networks (WMNs) are emerging as future-generation technologies for back-haul connectivity of different types of networks. They are multi-hop networks consisting of mesh nodes, mesh clients and mesh routers. To achieve high performance, these networks use the Multi-Channel Multi-Radio (MCMR) capabilities of mesh routers. However, QoS degrades due to the inter-flow and intra-flow interference introduced by these nodes. Thus, there is a need to design routing protocols that consider interference in the network. Furthermore, as traditional approaches to protocol design fall short in improving QoS, cross-layer metrics are designed using information from the physical, MAC and network layers. In this paper, we investigate and analyse the performance of multi-radio cross-layer routing metrics. We performed extensive quantitative analysis in NS-2 using the OLSR protocol. The performance differentials and trade-off analysis are carried out using five QoS parameters.

Friday, October 16 14:45 - 18:15 (Asia/Calcutta)

CoCoNet-S6A: CoCoNet-S6A: Applied Computing (Regular & Short Papers)

CoCoNet-S6A.1 14:45 Random Permutation based Linear Discriminant Analysis for Cancelable Biometric Recognition
P Punithavathi and S Geetha (VIT University Chennai Campus, India)
The increased use of biometrics in the present scenario has led to concerns over the security and privacy of enrolled users. This is because biometric traits such as the face, iris and ear are not cancelable or revocable: if the templates are compromised, impostors may gain illegitimate access. To resolve such issues, a simple yet powerful technique called "Random Permutation-based Linear Discriminant Analysis" for cancelable biometric recognition is proposed in this paper. The proposed technique is built on the notion of a cancelable biometric system, through which biometric templates can be revoked and renewed. The technique accepts a cancelable biometric template and a key (called a PIN) issued to the user; the user's identity is recognized only when both the cancelable biometric template and the PIN are valid, otherwise the user is rejected. The performance of the proposed technique is demonstrated on the freely available face (ORL), iris (UBIRIS) and ear (IITD) datasets against state-of-the-art methods. The key benefits of the proposed technique are that (i) classification accuracy remains unaffected by the random permutation, and (ii) it is robust across different biometric traits.
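The sketch below illustrates the core idea in a hedged, generic form: a user-specific PIN seeds a random permutation of the biometric feature vector before Linear Discriminant Analysis, so revoking the PIN re-permutes the template. Feature extraction itself is out of scope, and this is not the authors' exact scheme.

```python
# PIN-seeded permutation of biometric feature vectors prior to LDA.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def permute_features(features: np.ndarray, pin: int) -> np.ndarray:
    """Apply the same PIN-derived permutation to every row of the feature matrix."""
    rng = np.random.default_rng(pin)
    perm = rng.permutation(features.shape[1])
    return features[:, perm]

# X: (n_samples, n_features) raw biometric features, y: subject labels, pin: user key
# lda = LinearDiscriminantAnalysis().fit(permute_features(X, pin), y)
```

Because a permutation only reorders columns, a classifier such as LDA sees the same class geometry, which is why accuracy can remain unaffected while the stored template becomes revocable.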
CoCoNet-S6A.2 15:00 Digital Image Transmission Using Combination Of DWT-DCT Watermarking And AES Technique
Sudhanshu S. Gonge (Vishwakarma Institute of Technology Pune & Savitribai Phule Pune University, India)
Internet technology brought about a big revolution in the 21st century. It facilitates communication between man and man, man and machine, and machine and machine. Many applications, e.g. (i) vehicle-to-vehicle, (ii) vehicle-to-infrastructure and (iii) drone communication, transmit and receive data in various formats, such as image, audio, video and text data. The banking system uses data communication techniques for transferring money through debit cards, net banking, credit cards, demand drafts, cheques, RTGS, NEFT, etc. Bank cheques are cleared through the CTS system: to clear a cheque, it is scanned and the image is transferred to the cheque clearing house. During cheque image transmission there is a need for security, confidentiality, integrity, authorization, copyright protection, and indexing services. Many techniques and algorithms provide these facilities. In this paper, a combination of DWT-DCT watermarking and the AES technique is used to address this issue, and its performance against various attacks is also analysed and explained.
CoCoNet-S6A.3 15:15 CATS: Cluster-Aided Two-Step Approach for Anomaly Detection in Smart Manufacturing
Dattaprasad Shetve (Indian Institute of Information Technology (IIIT), Sricity, India); Raja VaraPrasad (Indian Institute of Information Technology (IIIT) Sricity, India); Ramona Trestian and Huan X Nguyen (Middlesex University, United Kingdom (Great Britain)); Hrishikesh Venkataraman (Indian Institute of Information Technology (IIIT) & Center for Smart Cities, India)
In the age of smart manufacturing, there is typically a multitude of sensors connected to each assembly line. The amount of data generated could be used to create a digital twin model of the complete process, wherein virtual replicas of the device and the process can be created before and during the process. An important aspect is automatic anomaly detection in the manufacturing process. Anomaly (outlier) detection identifies data points, events, and/or observations that deviate from the dataset's normal behavior. A major problem in predicting anomalies from datasets is the limited accuracy that can be achieved. Several state-of-the-art techniques provide very high accuracy (>95%); however, they incur a considerable increase in the required time, limiting their use to non-real-time applications. This paper proposes a Cluster-Aided Two-Step (CATS) approach for anomaly detection, wherein two unsupervised detection techniques are employed in series. The technique used in the first step is Density-Based Spatial Clustering of Applications with Noise (DBSCAN), while the second technique is the Local Outlier Factor (LOF). The output of the first step is fed to the second technique, thereby utilising the knowledge generated in the first step. An extensive simulation analysis indicates that the proposed CATS algorithm achieves >95% accuracy when the outlier population is above 15%, with a prediction time of less than 85 seconds.
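A sketch of a two-step outlier detector in the spirit of CATS is shown below; the parameter values and the exact way the two stages are chained are assumptions, not the paper's configuration.

```python
# Two-step anomaly detection: DBSCAN flags obvious noise, LOF re-scores the remainder.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.neighbors import LocalOutlierFactor

def cats_like_detect(X: np.ndarray, eps=0.5, min_samples=5, n_neighbors=20):
    step1 = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
    outliers = step1 == -1                      # DBSCAN labels noise points as -1
    inliers_idx = np.flatnonzero(~outliers)
    lof = LocalOutlierFactor(n_neighbors=n_neighbors).fit_predict(X[inliers_idx])
    outliers[inliers_idx[lof == -1]] = True     # add points LOF still considers anomalous
    return outliers                             # boolean mask of detected anomalies
```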
CoCoNet-S6A.4 15:30 An Android Based Smart Home Automation System in Native Language
Nayan Thara Prakash, Mathew Santhosh, Sneha Raj M p, Gokul G and Gemini George (APJ Abdul Kalam Technological University, India)
Speech recognition is the ability of a machine to analyze and respond to a person's speech. The proposed system mainly concentrates on people who are visually disabled, paralyzed or otherwise handicapped, so that such users can monitor home appliances from anywhere inside the home. The project develops a home automation system that is connected to an Android smartphone through an Arduino device. In this advanced world, people want to switch from conventional switches to a centralized control system, especially the elderly or handicapped people mentioned earlier, who may have trouble managing switches located in different parts of their residence. The system therefore assists them in handling everything from their smartphones. The other strong point of the work is that people can use their native language to interact with the device, and hence it can cater to larger underprivileged sections of society.
CoCoNet-S6A.5 15:45 Smart Mirror Based Personal Healthcare System
Aanandhi V B, Anshida Das, Melissa Grace Melchizedek, Nived Priyadarsan and Binu Jose A (Mar Baselios College of Engineering and Technology, India)
A Smart Mirror is an extended and enhanced version of a conventional mirror: a two-way mirror with an inbuilt display behind the glass. It allows the user to access and interact with various features found in smart devices such as smartphones and tablets. A Smart Mirror can display images, videos, the current time, weather forecasts, news feeds, upcoming appointments, and all kinds of data supported by a smart device. Existing Smart Mirror frameworks were initially developed just to show the time, date and weather; later updates added schedules, alerts and notices, and then a music player and voice acknowledgement. This work redefines the Smart Mirror by keeping in mind every aspect and the disadvantages of the existing systems. The proposed Smart Mirror offers unique features to improve the user experience and system security through biometric authentication, multimedia capabilities and customized user profiles. The device can replace a wide range of household utilities such as clocks and calendars, and external virtual assistants such as Amazon Echo and Google Home. The mirror provides personalized healthcare services for each of its users, including analyzing varying health patterns such as sleep patterns and body parameters, providing suggestions to improve their lifestyle, and displaying each user's medicine timetable. Thus, it will be beneficial for elderly people and anybody with a busy routine.
CoCoNet-S6A.6 16:00 Enhancement of VerticalThings DSL with Learnable Features
Resource-efficient ML for edge and endpoint IoT devices is a field of active research and increasing development. Libraries have long supported machine learning enthusiasts running ML algorithms in the cloud, but executing ML algorithms on motes is a challenge, as resources are highly constrained. To use these resources optimally, the developer often needs complete knowledge of the underlying architecture. VerticalThings is a domain-specific language (DSL) developed for programming ML-based embedded applications. The language offers constructs for key platform functions such as resource management, concurrency, task isolation, and security. This enables static analysis of (a) important safety and security properties, and (b) timing and power considerations. To enhance this DSL further, we developed a DSL named Fiery-Ice that provides intelligent learning of parameters based on sensor data. We intend to integrate both DSLs within the IDE developed for VerticalThings. The learnable parameters are learnt at compile time, avoiding the use of the scarce memory of embedded systems. This paper shows the capabilities of Fiery-Ice, which is designed to help embedded developers use VerticalThings and develop ML-based embedded applications with ease. The contributions of this article are: (a) the domain-specific language Fiery-Ice and its capabilities for performing machine-learning-related tasks; and (b) how Fiery-Ice helps students better their understanding of machine learning algorithms.
CoCoNet-S6A.7 16:15 VR Classroom for Interactive and Immersive Learning with Assessment of Students Comprehension
Jaya Sudha J. S. (SCT College of Engineering, India); Nandagopal Nandakumar, Sarath Raveendran and Sidharth Sandeep (APJ Abdul Kalam Technological University, India)
The Virtual Classroom environment is created using Virtual Reality and enables multiple students to enter as if in a real class, but with a better learning environment. Conventional learning is limited by the current model of textbook teaching, whereas an interactive and visual learning environment enhances the rate at which students grasp concepts. Even though many modern online teaching methods are available today, it is not possible to check whether a student is paying attention. Technology is evolving at a very fast rate, and this research is an apt integration of two modern technologies, Machine Learning and Virtual Reality, to increase the quality of education for students. A shared VR environment, optimized for learning, is created using the Unity3D software. Students wear a Head-Mounted Display and select an avatar for themselves, which is seen by other students and teachers. Students also wear an EEG scanner on their heads, and the output of this scanner is fed to the machine learning component; neural networks are used to identify whether the student is paying attention. If a student is not paying attention, the teacher is informed with a message near the student's avatar. The system has many advantages over traditional learning techniques, such as the use of multiple senses and inclusivity for differently abled students.
CoCoNet-S6A.8 16:30 Live Acoustic Monitoring of Forests to Detect Illegal Logging and Animal Activity
JC Karthikeyan (APJ Abdul Kalam Technological University, India); Sreehari S (Kerala Technological University, India); Jithin Reji Koshy (APJ Abdul Kalam Technological University, India); Kavitha K V (Sree Chitra Thirunal College of Engineering, India)
Illegal cutting of trees and poaching in forests have become serious issues for environmental conservation, and trespassing in the forest has an adverse effect on the habitat of animals. There is no effective solution for real-time detection and warning of such activity: image-based monitoring solutions are too costly and cannot cover a wide range of areas. A novel approach of audio-based monitoring using deep neural learning is proposed as a solution to this problem. A model is trained using various audio samples of tree cutting, gunshots, etc., along with outliers. There are numerous tree-felling and hunting techniques; in the case of methods known to the model, the model detects the event and warns the authorities. The audio samples in the dataset are converted from the time domain to the frequency domain using the Fast Fourier Transform (FFT), which distributes the signal across the corresponding frequencies. For better visualization of features, the spectrum is then mapped onto the mel scale, and the spectrum of this spectrum is computed using the cosine transformation to obtain the Mel Frequency Cepstral Coefficients (MFCCs). Relevant features are then extracted using these coefficients and classified using the proposed deep neural learning method. There is a significant difference between the energy concentration distributions of the sounds to be detected and those of the outliers, which enables classification of the audio samples at a higher signal-to-noise ratio. The resulting model is then used for live monitoring of forests against illegal activities. The current situation of wildlife also demands an accurate database of animal activity in a particular area, which helps both wildlife tourism and various studies; to address this, the proposed model is also trained to detect the presence of animals, and it accomplishes this without disturbing wildlife activity.
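A minimal sketch of the audio front end described above is shown below, using librosa to compute MFCCs that would then feed the deep classifier (the classifier itself is omitted, and the function and parameter choices are illustrative, not the authors').

```python
# Load an audio clip and compute MFCC features (FFT -> mel scale -> cepstrum).
import librosa

def extract_mfcc(path: str, n_mfcc: int = 13):
    y, sr = librosa.load(path, sr=None)                     # time-domain signal
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    return mfcc.T                                           # one row of coefficients per frame
```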
CoCoNet-S6A.9 16:45 Subspace Clustering using Matrix Factorization
Sandhya Harikumar and Shilpa Joseph (Amrita Vishwa Vidyapeetham, Amritapuri, India)
High dimensional data suffers from the curse of dimensionality and from sparsity problems. Since all samples seem equidistant from each other in a high dimensional space, low dimensional structures need to be found for cluster formation. This paper proposes a top-down approach to subspace clustering, called projective clustering, to identify clusters in low dimensional subspaces using the best low-rank matrix factorization strategy, Singular Value Decomposition (SVD). The advantages of this approach are two-fold: first, multiple low dimensional substructures are obtained using the best low-rank approximation, thereby reducing storage requirements; second, the obtained projective clusters are used to retrieve approximate results for a given query in a time-efficient manner. Experimentation on six real-world datasets proves the feasibility of our model for approximate information retrieval.
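The sketch below shows the basic building block named in the abstract: projecting data onto its best rank-k subspace via SVD before clustering. The full projective-clustering recursion is omitted, so this is an illustration rather than the paper's algorithm.

```python
# Project data onto the top-k right-singular subspace obtained from SVD.
import numpy as np

def svd_project(X: np.ndarray, k: int) -> np.ndarray:
    Xc = X - X.mean(axis=0)                        # centre the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                           # coordinates in the best rank-k subspace

# X_low = svd_project(X, k=5)   # cluster X_low (e.g. with k-means) within the subspace
```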
CoCoNet-S6A.10 17:00 A Comparative Analysis of Garbage Collectors and their Suitability for Big Data Workloads
Advithi Nair, Aiswarya Sriram, Alka Simon and Subramaniam Kalambur (PES University, India); Dinkar Sitaram (PESIT, India)
Big Data applications tend to be memory intensive, and many of them are written in memory-managed languages like Java and Scala. The efficiency of the Garbage Collector (GC) plays an important role in the performance of these applications. In this paper, we perform a comparative analysis of Java Garbage Collectors for three commonly used Big Data workloads to determine the most suitable Garbage Collector for each workload. The Garbage Collectors under scrutiny are Garbage First, Parallel and ConcurrentMarkSweep. We demonstrate (a) the relative difference between existing Java workloads that are used to study Garbage Collectors and Big Data workloads, and (b) the selection of the right Garbage Collector for a given workload. We find that the Garbage First collector gives a performance uplift of up to 18% in certain workloads.
CoCoNet-S6A.11 17:15 Prediction of Energy Consumption using Statistical and Machine Learning Methods and Analysing the Significance of Climate and Holidays in the Demand Prediction
Naveen Tata, Srivasthasva Srinivas Machiraju, Akshay V, Divyasree Menon and Sai Shibu N B (Amrita Vishwa Vidyapeetham, India); Arjun D (Amrita University, India)
With the increase in the deployment of smart metering in energy systems, a large amount of data is being generated. The data consists of energy generated, energy consumed and energy stored with respect to time, and can be used to improve the efficiency, reliability and stability of the power system by means of machine learning algorithms. The energy requirement of each consumer can be predicted with the available data, and renewable energy generation can also be predicted. In this paper, different statistical and machine learning models are used to analyse energy usage in smart communities. To validate the prediction models, smart meter data from our campus is used. The results show that the Long Short-Term Memory (LSTM) model is more suitable for energy demand prediction. The LSTM model is then used to predict the energy demand in students' hostels under conditions such as varying climate and holidays.
CoCoNet-S6A.12 17:30 Energy Efficient VM Management in OpenStack based Private Cloud
Narayan D. G. (BVB College of Engineering and Technology, Hubli. Karnataka, India); Somashekar Patil (KLE Technological University, India)
The increased use of cloud data centers in recent years has led to high utilization of resources. Allocation of new VMs, disabling of existing hosts, and removal of existing VMs are the reasons why the resource utilization of data centers varies over time, and the need for SLA-aware load balancing for server consolidation is increasing. In resource management, the important problem that needs to be solved is detecting whether a host is overloaded or under-loaded. Improved host load detection enhances scheduling, which results in higher utilization of the compute, network and storage resources of cloud data centers. In this work, we propose host load prediction techniques using machine learning and statistical models. Scheduling of VMs on request is performed using a Modified Best Fit algorithm that depends on the prediction results and minimizes the unnecessarily large fragments of RAM left in compute nodes when VMs are scheduled with a worst-fit algorithm. Experimental results show that the machine learning model LSTM and the statistical ARIMA model give comparatively good results on the PlanetLab CPU trace dataset. Load balancing for server consolidation is performed depending on the prediction results, and energy-efficient consolidation is performed to optimize the energy consumption of physical servers. We carry out the performance analysis of the proposed work in an OpenStack-based multi-node setup. The results show a fair distribution of load among the servers and the benefits of energy-efficient VM consolidation: there is a significant improvement in energy saving, and the proposed model optimizes the utilization of resources.
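A hedged sketch of a best-fit style placement step is shown below (not the paper's exact modified algorithm): each VM is placed on the predicted-not-overloaded host that leaves the least RAM free, which reduces large unused RAM fragments compared with worst fit. The host record layout and the overload flag are assumptions.

```python
# Best-fit VM placement over hosts annotated with a predicted-overload flag.
def best_fit_place(vm_ram, hosts):
    """hosts: [{'name': ..., 'free_ram': ..., 'predicted_overload': bool}, ...]"""
    candidates = [h for h in hosts
                  if not h["predicted_overload"] and h["free_ram"] >= vm_ram]
    if not candidates:
        return None  # no suitable host: trigger scale-out or consolidation instead
    chosen = min(candidates, key=lambda h: h["free_ram"] - vm_ram)  # tightest fit
    chosen["free_ram"] -= vm_ram
    return chosen
```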
CoCoNet-S6A.13 17:45 Performance Evaluation of WebRTC for Peer to Peer Communication
Narayan D. G. (BVB College of Engineering and Technology, Hubli. Karnataka, India); Kiran Jadhav (KLE Technological University, Hubli, India); Mohammed Moin Mulla (KLE Technological University, Hubli, Karnataka, India)
In this era of the Internet and developing technology, there are numerous ways of interacting with each other and plenty of services available to make it possible, including social media platforms, email, VoIP and messaging applications. One of the vital aspects here is Real-Time Communication (RTC), which means interacting with people all around the world as if they were face-to-face. Advancements in RTC have led to the development of an innovative technology called Web Real-Time Communication (WebRTC), which enables easy streaming of audio and video content over the web. This powerful tool, currently revolutionizing web communication, has introduced RTC capabilities into browsers as well as mobile applications. This paper studies the WebRTC technology and its implementation: the protocols and signaling techniques specified by the WebRTC standards are reviewed, and the complete WebRTC communication flow between peers is explained sequentially. WebRTC is supported on two major browsers, Google Chrome and Mozilla Firefox, and experiments are conducted on devices running these browsers with different configurations. Performance is measured in terms of peer connection establishment, peer communication and user data transfer, and video streaming parameters are also taken into account.

Friday, October 16 14:45 - 20:15 (Asia/Calcutta)

CoCoNet-S7: CoCoNet-S7: Networked Systems and Security(Short Papers)

CoCoNet-S7.1 14:45 Maximizing Lifetime of Mobile Ad hoc Networks with Optimal Cooperative Routing
Kanavath Chinna Kullayappa, Naik (JNTU University Anantapur, India); Ch. Balaswamy (Gudlavalleru Engineering College, India); Patil Ramana Reddy (JNTU University Anantapur, India)
Research in MANETs is challenging because the topology changes frequently, resulting in link breakages due to node mobility and rapid exhaustion of node energy owing to limited battery capacity. Therefore, topology, node mobility and energy are the main factors that impact the performance of a routing protocol and decrease the overall lifetime of the network. In order to enhance the lifetime of the network, a cooperative communication scheme is proposed in this paper. Cooperative communication requires a cooperative table, a relay table and a cooperative neighbor table to store the topological information and implement cooperative transmission among the nodes, thereby improving robustness against node mobility. Cooperative communication uses multi-hop transmission between the source and destination nodes in order to save energy, thus enhancing the lifetime of the network using the Minimum Energy consumption Selection Decode and Forward (MESDF) routing protocol. The proposed scheme chooses the best relays with minimum energy consumption in a cooperative and distributed manner, and considers the link-break probability and energy harvesting techniques to determine the optimal route across a cooperative network. Simulation results clearly show that the robustness of the proposed method against node mobility increases and that it saves 21% of node energy on a selected route, which in turn increases the lifetime of the network compared to existing cooperative and non-cooperative routing methods.
CoCoNet-S7.2 15:00 On-Demand Multi Mobile Charging Scheduling Scheme for Wireless Rechargeable Sensor Networks
Charan Ramteja Kodi, Debjit Das and Shasi Shekhar (IIT Dhanbad, India)
Sensor nodes in a network sense and process data, and each sensor usually has a different task burden due to environmental changes, which results in a dynamic change of the energy consumption rate at different nodes. Providing real-time, on-demand charging to these sensors is therefore a real challenge. Based on a certain threshold, each sensor node requests charging; these charging requests are collected into a matrix and processed by the MCV according to the selection rate. This paper deals with wireless charging in sensor networks and explores efficient policies to perform simultaneous multi-mobile charging power transfer through a mobile charging vehicle (MCV). The proposed solution, called the On-demand Multi Mobile Charging Scheduling Scheme (MMCSS), features a Selection Rate (SR) based on which the next charging request node is selected efficiently by considering important parameters. After selecting the node based on the SR, it is checked whether charging is possible based on Next Charging Node Possible (NNCP). Then the shortest path from the MCV to the selected node is computed using Dijkstra's algorithm. The various MCV charging conditions are discussed in the paper.
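The Dijkstra step mentioned above can be sketched as follows; the field graph, node names, and edge weights are hypothetical placeholders, not the paper's network:

```python
import heapq

def dijkstra(graph, source):
    """Shortest travel cost from `source` (e.g., the MCV position) to every node."""
    dist = {node: float("inf") for node in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                      # stale heap entry
        for v, w in graph[u].items():
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

# Hypothetical field layout: edge weights are travel costs between nodes.
field = {
    "MCV": {"s1": 4, "s2": 1},
    "s1":  {"MCV": 4, "s2": 2, "s3": 2},
    "s2":  {"MCV": 1, "s1": 2, "s3": 6},
    "s3":  {"s1": 2, "s2": 6},
}
print(dijkstra(field, "MCV"))   # e.g., cost to reach the node selected by SR
```
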
CoCoNet-S7.3 15:15 A Novel Design Approach Exploiting Data Parallelism in Serverless Infrastructure
Urmil Bharti (University of Delhi & Shaheed Rajguru College of Applied Sciences for Women, India); Deepali Bajaj (Shaheed Rajguru College & University of Delhi, India); Anita Goel (Dyal Singh College, University of Delhi, India); Suresh Gupta (IIT Delhi, India)
Serverless computing has emerged as a new application design and execution model. A serverless application is decomposed into granular logical functional units that run on small, low-cost, and short-lived compute containers. These containers are dynamically managed by FaaS service providers. Users are charged only for the compute and storage resources needed for the execution of their piece of code. Cloud functions have restrictions on memory usage and execution time-out as imposed by their service providers. Due to this limitation, compute-intensive tasks time out before their completion and hence are unable to harness the power of serverless computing. In this paper, we propose a design approach for serverless applications that exploits data parallelism in embarrassingly parallel computations. Using our approach, compute-bound tasks that are implemented in a conventional design and fail in a serverless environment can be executed successfully without worrying about the limitations imposed by serverless platforms. For this, extensive experimentation using Amazon's AWS Lambda service has been performed. Further, a serverless application designed using our approach exploits the auto-scalability feature of serverless computing to achieve faster execution.
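A hedged sketch of the fan-out idea (splitting an embarrassingly parallel job into many short Lambda invocations) is shown below; the function name "chunk_worker" and the payload layout are assumptions for this sketch, not the authors' code:

```python
# Fan-out over data chunks with AWS Lambda via boto3.
import json
from concurrent.futures import ThreadPoolExecutor

import boto3

lam = boto3.client("lambda")

def invoke_chunk(chunk):
    resp = lam.invoke(
        FunctionName="chunk_worker",               # hypothetical worker function
        InvocationType="RequestResponse",
        Payload=json.dumps({"items": chunk}),
    )
    return json.loads(resp["Payload"].read())

data = list(range(10_000))
chunks = [data[i:i + 1_000] for i in range(0, len(data), 1_000)]

# Each chunk runs in its own short-lived container, staying well under the
# per-invocation time-out that a single monolithic task would exceed.
with ThreadPoolExecutor(max_workers=len(chunks)) as pool:
    partial_results = list(pool.map(invoke_chunk, chunks))
```
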
CoCoNet-S7.4 15:30 Intelligent Transportation System: The Applicability of Reinforcement Learning Algorithms and Models
Krishnendhu S p (National Institute of Technology, Calicut, India); M. Prabu (National Institute of Technology Calicut, India)
Nowadays, many research works that involve real-time data widely use an unsupervised Artificial Intelligence (AI) technique, namely Reinforcement Learning (RL). Its fast adaptiveness to dynamic conditions draws the attention of researchers who work on real-time traffic signal control systems. The scope of RL in most research problems remains remarkable owing to its peculiar characteristics. This article reviews the basic concepts of RL, along with RL algorithms and models, with an emphasis on Traffic Signal Control (TSC). TSC is one of the trending applications of RL. Traffic congestion control with less human intervention is a challenging task of the Intelligent Transportation System (ITS). It not only helps traffic managers to get a grip on the traffic situation and analyze congestion, but also assists travelers in avoiding congestion. Considering its significance, we have chosen TSC as the basis to explain the RL algorithms and models presented in this paper. In addition to this comprehensive review, we also provide a list of open challenges which, when addressed, can take research in this area to considerable heights.
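As a minimal illustration of the RL machinery such a review covers, a tabular Q-learning update for a toy traffic-signal agent might look as follows; the states, actions, and reward signal here are placeholders, not the schemes surveyed in the paper:

```python
import random
from collections import defaultdict

# Toy tabular Q-learning for a traffic-signal agent.
actions = ["extend_green_NS", "extend_green_EW"]
Q = defaultdict(float)                      # Q[(state, action)]
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def choose(state):
    if random.random() < epsilon:           # explore occasionally
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# One illustrative step: the reward is the negative queue length after acting.
state = ("queue_NS=7", "queue_EW=2")
action = choose(state)
update(state, action, reward=-5, next_state=("queue_NS=5", "queue_EW=3"))
```
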
CoCoNet-S7.5 15:45 An Innovative and Inventive IoT based Navigation Device - An Attempt to avoid accidents and avert Confusion
Chennuru Vineeth (Amrita School of Engineering, India); Shriram K Vasudevan (Amrita University, India); Juluru Anudeep, Gudimetla Kowshik and Prashant Nair (Amrita School of Engineering, India)
With many routes being created each day, it is a very tiresome job to remember every route. This is the reason maps were created to make our job easier. Due to advances in technology and the lowering of data rates in many countries, these maps are accessible to most people during the daily commute. When we want to take a new route, we cannot remember the whole route just by looking at the route provided by the map. Therefore, we need to check the route at regular intervals. This method of commuting is well suited to pedestrians, because they can hold their mobile phone in hand and follow the route, and to some four-wheeler drivers, as they can dock the phone on the dashboard while driving. The problem arises in the case of two-wheeler riders, because they cannot hold their phone while driving or dock the phone to their bike, as it causes serious distraction from the traffic. So, to solve this situation, we have designed a device that can show the directions of the upcoming turn without using a mobile phone while driving.
CoCoNet-S7.6 16:00 IoT based smart and secure surveillance system using Video Summarization
Diana Josephine (CIT, India); Surya Priya M (Coimbatore Institute of Technology, India); Abinaya P (CIT, India)
Nowadays, security is important for every commercial property to prevent robberies and thefts and to ensure secure and safe business operations. In CCTV systems, the data is recorded non-intelligently, which produces huge volumes and makes it difficult to search for the desired content. It is found that limited work has been done in the field of secure surveillance systems using real-time videos. Therefore, there is a need for video summarization, classification (action recognition), and encryption. This paper aims to make decisions about abnormal events, such as suspicious activity detection, in surveillance applications by incorporating the above-said techniques. This smart secure surveillance system allows for reduced storage of unwanted data and helps to protect the confidential data sent to the user by cryptographic methods. Index terms: Image encryption, feature extraction, cryptography, and video summarization.
CoCoNet-S7.7 16:15 Design and analysis of a secure coded communication system using chaotic encryption and turbo product code decoder
Khavya S, Karthi Balasubramanian and Yamuna B (Amrita Vishwa Vidyapeetham, India.); Deepak Mishra (Space Application Center (SAC), ISRO, Ahmedabad, India)
Errors in a transmitted message are unavoidable since noise is inevitable in any communication channel. For reliable transmission of messages, the bit error rate has to be kept at an acceptable level by the use of proper error control coding schemes. To ensure that the transmission is also secure, data encryption is used as an integral part of the system. This paper deals with the design and analysis of a secure and reliable communication system accomplished using logistic map based chaotic encryption and turbo product codes. The system is simulated using Matlab, and it is shown that the use of encryption for secure communication does not degrade the system performance. The hardware design of the decoder is also done and verified in Verilog using the same set of vectors as obtained from the system simulation. BER performance was analysed in all the different scenarios and the correctness of the design was established.
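A minimal sketch of logistic-map chaotic encryption of the kind described (a keystream generated from x_{n+1} = r * x_n * (1 - x_n) and XOR-ed with the message) is shown below; the key values r and x0 are illustrative, and the paper's actual MATLAB/Verilog design is not reproduced here:

```python
# Logistic-map keystream XOR encryption (toy sketch).
def logistic_keystream(length, r=3.99, x0=0.4321):
    x, stream = x0, bytearray()
    for _ in range(length):
        x = r * x * (1 - x)                 # chaotic iteration
        stream.append(int(x * 256) % 256)   # quantize chaotic state to a byte
    return bytes(stream)

def chaotic_xor(data: bytes, r=3.99, x0=0.4321) -> bytes:
    ks = logistic_keystream(len(data), r, x0)
    return bytes(d ^ k for d, k in zip(data, ks))

msg = b"payload before turbo product coding"
cipher = chaotic_xor(msg)
assert chaotic_xor(cipher) == msg           # same keystream decrypts
```
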
CoCoNet-S7.8 16:30 A Deep Learning based Framework for Distributed Denial of Service Attacks Detection in Cloud Environment
Amit V Kachavimath (KLE Technological University, India); Narayan D. G. (BVB College of Engineering and Technology, Hubli. Karnataka, India); P S Hiremath (KLE Technological University, India)
Distributed Denial of Service (DDoS) is a widespread cyber-attack. Unauthorized users target a specific server or network infrastructure by flooding it with malicious internet traffic, thereby interrupting normal traffic so that the victim server is unable to respond to legitimate traffic. Recognizing DDoS attacks in real time is a challenging problem. Conventional solutions analyze the traffic and detect different types of activities from captured traffic based on attributes of statistical differences. An alternate approach for identifying DDoS attacks is the analysis of statistical features using machine learning algorithms. These detection techniques, however, have a low detection rate and a time delay. A new approach for DDoS attack detection is proposed that captures different patterns of sequences from the captured traffic and analyzes high-level features using deep learning, achieving a high detection rate. The results of the proposed methodology demonstrate the better performance of the long short-term memory (LSTM) approach, with good accuracy compared to the convolutional neural network (CNN) and multilayer perceptron (MLP).
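A hedged sketch of an LSTM sequence classifier of the general kind compared in the paper is shown below, using Keras with synthetic placeholder data rather than the authors' traffic dataset:

```python
# Minimal LSTM classifier over packet-sequence features (illustrative only).
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import LSTM, Dense

timesteps, n_features = 20, 8              # e.g., 20 packets x 8 flow features
X = np.random.rand(256, timesteps, n_features).astype("float32")
y = np.random.randint(0, 2, size=(256,))   # 0 = benign, 1 = DDoS (synthetic labels)

model = Sequential([
    LSTM(64, input_shape=(timesteps, n_features)),
    Dense(1, activation="sigmoid"),        # probability that the flow is an attack
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```
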
CoCoNet-S7.9 16:45 An efficient and innovative IoT based intelligent real-time staff assessment wearable
Juluru Anudeep (Amrita School of Engineering, India); Shriram K Vasudevan (Amrita University, India); Gudimetla Kowshik, Chennuru Vineeth and Prashant Nair (Amrita School of Engineering, India)
The traditional method for assessing the performance of workers in industrial workspaces is by measuring their inertial movements. Presently, in industrial workspaces, every 20 to 30 workers have an in-charge to monitor their work. Although workers are being assessed by these in-charges, many mishaps still happen in industrial workspaces. Workers with poor performance earn the same wages as workers with good performance. These expenditures may seem minimal, but recent statistics reveal that they affect a company's total productivity in a disastrous way. Some studies state that, on average, companies are losing $3,156 on workers due to their idleness [1]. Forbes magazine revealed that 31% of workers waste roughly 1 hour per day at work apart from their allotted leisure times [2]. Many companies in the UK with industrial workspaces claim that they lose 15.4 billion dollars annually due to worker illness and maintenance alone [3]. Unfortunately, despite having distinct Inertial Measurement Unit (IMU) systems in place, companies face many difficulties in identifying and estimating worker performance and worker health. Therefore, we have developed a system for overcoming the difficulties faced when using these IMUs, using the power of IoT and an Android application.
CoCoNet-S7.10 17:00 Automation for Furnace in Thermal Power Station using Public Key Cryptography
Data consists of facts and statistics collected together for reference or analysis, and it must be protected from corruption or unauthorized access. Data security is the practice, as well as the technology, of securing or protecting valuable and sensitive information by means of data encryption; it is also known as information security. Data can be guarded using various hardware and software technologies, common ones being antivirus software, encryption, firewalls, etc. Safeguarding sensitive data from corruption and unauthorized access protects it from malicious use. Cryptography refers to securing information using mathematical concepts and techniques. Data encryption software enhances data security with more efficiency; to an unauthorized person, the encrypted form is absolutely unreadable. The main objective of this paper is the encryption and decryption of data received by a temperature sensor and a motion sensor, which has been done using the ECC method.
CoCoNet-S7.11 17:15 Active Dictionary Attack on WPA3-SAE
Manthan Patel (Amrita Vishwa Vidyapeetham, India); Amritha PP (Amritha Vishwa Vidyapeetham, Amrita University, India); Sam Jasper R (Amrita Vishwa Vidyapeetham, India)
In wireless networks there are different protocols such as WEP, WPA, and WPA2. WPA3 is the currently used standard protocol in Wi-Fi to authenticate the client with the Access Point. A downgrade attack has already been discovered on the WPA3-Simultaneous Authentication of Equals (SAE) protocol. With the downgrade attack, it is possible to perform an offline dictionary attack on the WPA3-SAE protocol. WPA3-SAE is also known as WPA3-Personal. Dictionary attacks are classified into active dictionary attacks and passive dictionary attacks; a passive dictionary attack is also known as an offline dictionary attack. In this paper, we propose an active attack model in which software tries different passwords from a given dictionary word list until it connects with the Access Point. In this model, the computer changes its MAC address continuously so that the Access Point does not detect the attempts as an attack. To speed up the process, multiple virtual machines can be used, each working as a separate wireless client to the Access Point.
CoCoNet-S7.12 17:30 Multiple hashing Using SHA256 And MD5
MD5 (Message Digest 5) is a hashing function with numerous vulnerabilities, such as pre-image vulnerability and collision vulnerability, which restrict the usage of MD5. Therefore, by using other hashing functions such as SHA prior to hashing with MD5, we can use MD5 for various applications such as data integrity without compromising the security of the hash. MD5 is widely used in file transfer and storage applications because it produces a smaller hash value of 128 bits when compared with other hashing algorithms. Also, it is simpler to implement in hardware and as a program. We propose a technique of hashing the original message (or string) with a secure hashing algorithm such as SHA256, followed by hashing the hash value of SHA256 with MD5 to get a resultant hash that is less prone to security attacks such as collision attacks. By hashing the string twice, we make it more secure and tackle the pre-image vulnerability and collision vulnerability of MD5. This makes the hashing algorithm more secure for file transfer applications. Multiple iterations will produce more secure hash values, but our simulation uses 2 iterations, where we upload a file onto a cloud server and check if it has been tampered with or modified.
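The two-stage hash described above is straightforward to sketch with Python's hashlib (illustrative only; the paper's own simulation setup is not reproduced):

```python
# Double hashing: SHA-256 first, then MD5 over the SHA-256 digest.
import hashlib

def sha256_then_md5(data: bytes) -> str:
    inner = hashlib.sha256(data).digest()      # 32-byte SHA-256 digest
    return hashlib.md5(inner).hexdigest()      # 128-bit final hash for storage/transfer

original = b"file contents uploaded to the cloud"
stored_tag = sha256_then_md5(original)

# Integrity check on download: recompute and compare.
assert sha256_then_md5(original) == stored_tag
assert sha256_then_md5(original + b"tampered") != stored_tag
```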

Friday, October 16 14:45 - 18:15 (Asia/Calcutta)

ISTA-05: ISTA-05: Intelligent Tools and Techniques and Applications (Regular Papers)

ISTA-05.1 14:45 An Intelligent Approach for Automated Argument Based Legal Text Recognition and Summarization Using Machine Learning
Riya Sil (Adamas University, Kolkata); Alpana Alpana (Jawaharlal Nehru University, India); Abhishek Roy, Mili Dasmahapatra and Debojit Dhali (Adamas University, Kolkata, India)
Structured information is the key to successful execution of any task. For this purpose, hardcopies of documents can be converted into softcopy format using any open-source scanning application, which also helps to preserve documents electronically for future retrieval and precedence. Developing countries like India can apply data analysis to use these electronic documents for service delivery to beneficiaries using less manpower, budget, and infrastructure. The Indian judicial system can similarly benefit from data analysis to deliver prompt justice to victims. Even the recent global coronavirus pandemic has shown the necessity of technology-based operations such as online court proceedings, thereby maintaining the rules of social distancing to break the transmission chain of the virus within society. To achieve this objective, the authors have proposed a machine learning based automated model to enhance the efficiency of the legal support system, with an accuracy of 94%, to deliver prompt justice to victims.
ISTA-05.2 15:00 SLKOF: Subsampled Lucas-Kanade Optical Flow for Opto Kinetic Nystagmus Detection
Neurological disorders develop in adults due to reduced visual perception, and Opto Kinetic Nystagmus (OKN) is a clinical method to assess visual perception. For objective measurements, a computational algorithm based OKN detection is preferable to clinical practice. In this paper, a memory-efficient Subsampled Lucas-Kanade Optical Flow (SLKOF) method is proposed. The proposal deals with the computation of OKN gain for different image subsampling factors in the MATLAB platform. The experimental setup to observe OKN uses computer-based rotation control of the drum through a stepper motor. The results are compared with the well-established Lucas-Kanade (LK) method for optical flow. It is observed that the OKN gain obtained by the SLKOF method at a subsampling factor of ¼ correlates with the LK method for the majority of cases. This validation elucidates that the proposal is computationally efficient.
ISTA-05.3 15:15 Analysis of Sentiment Based Movie Reviews Using Machine Learning Techniques
Sachin Chirgaiya (SVVV, India); Deepak Sukheja (VNR VJIET, India); Niranjan Shrivastava (India & Devi Ahilya University, India); Romil Rawat (University of RGPV, India)
The growing significance of sentiment analysis coincides with the growth of web-based platforms such as movie reviews, discussion forums for movie reviews, movie review blogs, microblogs, and social networks related to movie reviews. The judgments and approaches used to form an impression of the real world are largely shaped by how others perceive and evaluate it in terms of opinion and sentiment. For this reason, people routinely seek out the opinions, conduct, and assessments of others when making decisions; this is true not only for individuals but also for associations, organizations, and society. This work is a comprehensive sentiment analysis study that applies natural language processing (NLP) to determine whether a piece of text contains subjective information and what subjective information it expresses, using movie reviews, i.e., whether the sentiment behind the text is positive (+) or negative (-). Automatically understanding the sentiments behind user-generated content and data sets is of great assistance for business and personal use, among others. The task can be conducted at various levels of text processing, classifying the polarity of words, sentences, or entire data sets. Here, the approach explores an improved methodology for movie reviews based on machine learning.
ISTA-05.4 15:30 Fuzzy Hyperlattice Ordered Delta-Group and Its Application on ABO Blood Group System
D. Preethi (Alagappa University, India); Vimala Jayakumar (Alagappa University & Karaikudi, India); S. Rajareega (Alagappa University, India); Madeleine A Al-Tahan (Lebanese International University & Lebanese University, Lebanon)
This article deals with a fuzzy hypercompositional structure called the fuzzy hyperlattice ordered delta-group (FHLO delta-G), which is an extension of the fuzzy hypercompositional structure known as the fuzzy hyperlattice ordered group (FHLOG). Through FHLO delta-G, we can involve one more non-empty set delta with FHLOG, which helps to develop new results and applications. The structural characteristics and properties of FHLO delta-G are analysed. In addition to that, an application of FHLO delta-G in the ABO blood group system is proposed.
ISTA-05.5 15:45 Analyze and Visualize the Correlation Between Heart and Cancer Diseases Using Data Mining Techniques
Nuha Varier, Sriraj Vuppala and Rohan Boggarapu (Amrita Vishwa Vidyapeetham, India); Ankita Mohapatra (Amrita School of Engineering, Bengaluru, India); Sangita Khare (Amrita Vishwa Vidyapeetham, India)
Curing a disease involves identifying the right cause so that treatment can be given based on the observed symptoms. Accurate symptoms can be obtained by conducting appropriate medical check-ups. Given the quality of present-day livelihoods, it is essential to diagnose diseases at regular intervals through routine check-ups and to understand how one disease can lead to another. There are different types of data mining techniques which can be efficiently utilized to recognize heart and cancer diseases, and the result can be used to detect the presence or recurrence of a disease. This work brings out the correlation between heart and cancer diseases by identifying the common Community Health Status Indicators (CHSI). A productive conclusion on heart and cancer diseases is given by testing various data mining algorithms. The major aim of the work is to implement data mining techniques to cluster states of the USA on the basis of deaths due to heart and cancer disease and to find out the main cause of death due to these diseases.
ISTA-05.6 16:00 A Scalable Multi Disease Modeled CDSS Based on Bayesian Network Approach for Commonly Occurring Diseases with a NLP Based GUI
Laxmi P and Deepa Gupta (Amrita Vishwa Vidyapeetham, India); Radhakrishnan Gopalapillai (Amrita Vishwa Vidyapeetham & Amrita School of Engineering, Bengaluru, India); Amudha J (Amrita Vishwa Vidyapeetham, India); Kshitij Sharma (Paralaxiom Technologies Private Limited, India)
A Clinical Decision Support System (CDSS) is a useful support system for healthcare professionals in the initial diagnosis of diseases. Symptoms described by patients are key inputs in this decision-making process. The proposed work creates a model of commonly occurring Indian diseases along with identified lab tests and medications for the diseases diagnosed. A Bayesian network is automatically generated from the inputs provided by experts in the medical domain, and it is used to predict the probabilities of possible diseases. The model also has the ability to learn from past experience. The proposed work also provides a Graphical User Interface (GUI) framework for designing a CDSS, covering an end-to-end design of the CDSS. The GUI for this model uses natural language processing techniques for text processing.
ISTA-05.7 16:15 Operations on Complex Intuitionistic Fuzzy Soft Lattice Ordered Group and CIFS-COPRAS Method for Equipment Selection Process
S. Rajareega (Alagappa University, India); Vimala Jayakumar (Alagappa University & Karaikudi, India)
The concept of the complex intuitionistic fuzzy soft lattice ordered group (CIFSL-G) was introduced by J. Vimala et al. In this paper, we introduce some new operations on the complex intuitionistic fuzzy soft lattice ordered group, such as sum, product, bounded product, bounded difference, and disjoint sum, and verify their pertinent properties. Moreover, we present the CIFS-COPRAS algorithm in the complex intuitionistic fuzzy soft set environment. We then apply this method to the equipment selection process.
ISTA-05.8 16:30 Redox Reaction on Homomorphism of Fuzzy Hyperlattice Ordered Group
D. Preethi (Alagappa University, India); Vimala Jayakumar (Alagappa University & Karaikudi, India)
In this paper, we propose the concept of homomorphism on fuzzy hyperlattice ordered groups (FHLOG). Hence, the binary and fuzzy hyperoperations of one FHLOG can be transformed to the binary and fuzzy hyperoperations of another FHLOG. We also define the notion of a fuzzy hypercongruence relation on FHLOG, and we establish that the redox reactions of copper, gold, and americium form three FHLOGs. Besides, we develop a homomorphism function and a composition function of FHLOGs using the redox reactions. Therefore, we can create a relation among three different metals' redox reactions in which the binary as well as the fuzzy hyperoperations are preserved.
ISTA-05.9 16:45 Reversible Data Hiding in an Encrypted Image Using the Homomorphic Property of Elliptic Curve Cryptography
Anushiadevi R and Rengarajan Amirtharajan (SASTRA Deemed University, India)
Recently, Reversible Data Hiding (RDH) schemes have gained much interest for protecting secret information and sensitive cover images. In cloud security applications, data embedding may be carried out by a third party (e.g., a cloud service). In such a scenario, to protect the cover image from unauthorized access, it is essential to encrypt it before embedding, which can be achieved by combining the RDH scheme with encryption. However, the key challenge in integrating RDH with encryption is that the correlation between adjacent pixels begins to disappear after encryption, so reversibility cannot be accomplished. To overcome this challenge, RDH with elliptic curve cryptography (ECC-RDH) is proposed in this paper by adopting the additive homomorphism property. In this method, decryption of the stego image gives the sum of the original image and the confidential data. The significant advantages of this method are that the cover image is transferred with high security and the embedding capacity is 0.5 bpp with a smaller location map size of 0.05 bpp. The recovered image and secrets are the same as the originals, and thus 100% reversibility is proved.
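To illustrate the additive homomorphism that ECC-RDH relies on, the toy EC-ElGamal sketch below shows that adding two ciphertexts component-wise decrypts to the sum of the messages; the tiny curve, fixed nonces, and keys are purely illustrative and are not the paper's parameters (real schemes use large standardized curves and fresh random nonces):

```python
# Toy EC-ElGamal additive homomorphism: Dec(Enc(m1) + Enc(m2)) = (m1 + m2)G.
p, a = 17, 2                        # curve y^2 = x^3 + 2x + 2 over F_17
G = (5, 1)                          # base point; the group has prime order 19

def ec_add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                 # point at infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, P):                   # double-and-add scalar multiplication
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P, k = ec_add(P, P), k >> 1
    return R

s = 7                               # receiver's private key (illustrative)
Pub = ec_mul(s, G)                  # public key

def enc(m, k):                      # message m is encoded as the point mG
    return ec_mul(k, G), ec_add(ec_mul(m, G), ec_mul(k, Pub))

def dec(c1, c2):                    # recovers the point mG
    x, y = ec_mul(s, c1)
    return ec_add(c2, (x, (-y) % p))

m1, m2 = 5, 9                       # e.g., a cover value and an embedded secret
ca, cb = enc(m1, k=3), enc(m2, k=4)
csum = (ec_add(ca[0], cb[0]), ec_add(ca[1], cb[1]))   # add ciphertexts
assert dec(*csum) == ec_mul(m1 + m2, G)               # decrypts to (m1 + m2)G
```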

ISTA-06: ISTA-06: Applications using Intelligent Techniques (Regular Papers)

ISTA-06.1 14:45 An Effective Real-Time Approach to Automatic Number Plate Recognition(ANPR) Using YOLOv3 and OCR
Harsh Sinha, Soumya G V and Sashank Undavalli (Amrita School of Engineering, India); R Jeyanthi (Amrita Vishwa Vidyapeetham, India)
This paper aims to develop an efficient real-time Automatic Number Plate Recognition (ANPR) system to mitigate theft of services at various self-serviced locations such as gas stations. A thorough assessment of the system existing at such self-serviced gas stations has been done to find out the loopholes. An extensive literature review was completed to come up with possible solutions to the problem of non-payment by customers. After analysing the limitations of existing solutions, an end-to-end ANPR system based on a Convolutional Neural Network, using a custom-trained You Only Look Once version 3 (YOLOv3) model and Tesseract Optical Character Recognition (OCR), has been developed. This system is capable of performing object detection, object localisation, image processing, and Optical Character Recognition. It can be interfaced with a physical camera and deployed as such in a real-world scenario. The system has been trained rigorously on an exhaustive dataset of vehicle images to achieve accurate results. Testing revealed that the system performs much better in terms of speed as compared to other solutions, while still maintaining good accuracy.
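The recognition tail of such a pipeline (crop a detected plate region and read it with Tesseract) can be sketched as below; the bounding box stands in for a YOLOv3 detection and "car.jpg" is a hypothetical input image:

```python
# Plate cropping and OCR with OpenCV + pytesseract (illustrative only).
import cv2
import pytesseract

image = cv2.imread("car.jpg")
x, y, w, h = 120, 340, 220, 60          # plate box as a detector would return it (assumed)
plate = image[y:y + h, x:x + w]

# Light preprocessing tends to help OCR on plates.
gray = cv2.cvtColor(plate, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

text = pytesseract.image_to_string(
    binary,
    config="--psm 7 -c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789",
)
print("plate:", text.strip())
```
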
ISTA-06.2 15:00 Generating Audio from Lip Movements Visual Input: A Survey
Krishna Suresh (Amrita Viswa Vidyapeetham, India); Gopakumar G (Amrita Institute, India); Subhasri Duttagupta (Amrita Viswa Vidyapeetham, Amritapuri Campus, India)
Generating audio from a visual scene is an extremely challenging yet useful task, as it finds application in remote surveillance, comprehending speech for hearing-impaired people, and Silent Speech Interfaces (SSI). Due to recent advancements in deep neural network techniques, there has been considerable research effort towards speech reconstruction from silent videos or visual speech. In this survey paper, we review several recent papers in this area and make a comparative study in terms of their architectural models and the accuracy achieved.
ISTA-06.3 15:15 Empirical Analysis of Performance of MT Systems and Its Metrics for English to Bengali: A Black Box Based Approach
Goutam Datta (University of Petroleum and Energy Studies, Dehradun, India); Nisheeth Joshi (Banasthali University, India); Kusum Gupta (Banasthali Vidyapeeth, India)
There are numerous use cases of machine translation (MT) systems. Therefore, it has become very important to evaluate the performance of MT, which can help researchers design robust and reliable machine translation systems. Although a number of automatic MT evaluation metrics are available nowadays, most of them fail to produce correct scores. In this paper, we describe the most recent framework in the MT industry, Neural Machine Translation, and our approach is to test the performance of the most popular translators, Google Translate and Microsoft's Bing Translator. We use the English-Bengali language pair for our detailed analysis, performing experiments that translate from English to Bengali. Bengali is a resource-poor language and one of the most widely spoken languages in the Indian subcontinent. Unlike the glass-box approach, where the performance of MT systems is enhanced by adjusting various hyperparameters, we use a black-box approach, i.e., evaluating the performance of already built systems. Besides the performance analysis of the two translators, our main focus is to evaluate the performance of one of the most popular automatic evaluators, Bilingual Evaluation Understudy (BLEU), and some other automatic evaluation metrics. This paper measures the performance of BLEU and other automatic metrics during the English to Bengali translation process by conducting a survey with the help of questionnaires distributed among twenty people having moderate to high linguistic expertise in both languages. Their responses are collected and a mean score is calculated, which we consider the human score (human judgement). BLEU and the scores generated by other automatic metrics are then compared against the human-generated score, and the correlation between the human score and each metric is measured with the Pearson correlation coefficient. Finally, some important observations are reported for this language pair.
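The black-box scoring-and-correlation step can be sketched as follows with NLTK's sentence-level BLEU and SciPy's Pearson correlation; the sentences and human scores below are made-up placeholders, not the study's data:

```python
# Sentence-level BLEU vs. human judgements, correlated with Pearson's r.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from scipy.stats import pearsonr

references = [[["the", "house", "is", "small"]],
              [["he", "reads", "a", "book", "every", "day"]],
              [["the", "weather", "is", "nice", "today"]]]
candidates = [["the", "house", "is", "very", "small"],
              ["he", "reads", "book", "every", "day"],
              ["weather", "today", "is", "nice"]]
human_scores = [4.2, 3.6, 3.0]            # mean judgements from the questionnaire

smooth = SmoothingFunction().method1      # avoids zero scores on short sentences
bleu_scores = [sentence_bleu(ref, cand, smoothing_function=smooth)
               for ref, cand in zip(references, candidates)]

r, _ = pearsonr(bleu_scores, human_scores)
print("sentence BLEU:", bleu_scores, "Pearson r vs. human:", round(r, 3))
```
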
ISTA-06.4 15:30 Lung Nodule Detection from Computed Tomography Images Using Stacked Deep Convolutional Neural Network
Mahender Gopal Nakrani (CSMSS Chh. Shahu College of Engineering, India); Ganesh S Sable (MIT BAMU Aurangabad India, India); U Shinde (Shahu Maharaj COE BAMU Aurangabad, India)
Advances in deep convolutional neural networks (DCNNs) have led to remarkable progress in image classification and object detection methods. DCNNs have also been successfully applied to medical imaging problems in the past few years. One of the major issues in medical imaging is the detection of lung cancer, and detection of lung nodules is a crucial step in lung cancer screening. To detect lung nodules from computed tomography (CT) images, we propose a novel stacked-DCNN-based Computer-Aided Detection (CAD) system in which three DCNNs are stacked together to form a single stacked DCNN. In this work, we segment raw CT images to obtain the lung region using basic morphological operations. The segmented images are fed directly into the stacked DCNN to generate probabilities of probable lung nodules. The largest publicly available archive of lung cancer screening thoracic CT scans was used for experimentation. Our proposed method achieves a sensitivity of 96.81% at 8 false positives per scan and a top-5 accuracy of 96.23%.
ISTA-06.5 15:45 Improving the Performance of Imbalanced Learning and Classification of a Juvenile Delinquency Data
Takorn Prexawanprasut and Thepparit Banditwattanawong (Kasetsart University, Thailand)
The social environment has changed over recent years. Most parents have working lives and, as a result, they might not have enough time to pay sufficient attention to their children. In addition, the tendency towards juvenile delinquency has increased over time. Although the policy of the Department of Juvenile Observation and Protection is to help children develop a more positive attitude about themselves and their world, the recidivism rates of children are still high. In order to solve this issue, the possibility of recidivism must be understood. This research aims to predict the number of delinquents who would commit crimes in the future. The experimental dataset, the Juvenile Delinquency Dataset, is considered imbalanced data. The research employs a proposed algorithm, namely Cluster and Over (CaO), which is a resampling technique for imbalanced data. The results showed that conventional prediction models without resampling techniques yielded both low accuracy and low recall. When CaO was applied with a decision tree, the G-mean values increased by 5.3 and 3.9 percent compared with SMOTE and KmeansSMOTE, respectively. When CaO was used with KNN, the G-mean values increased by 3.8 and 3.1 percent compared with SMOTE and KmeansSMOTE, respectively. Therefore, CaO yielded more efficient results than SMOTE and KmeansSMOTE.
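For context, the SMOTE-plus-G-mean baseline that CaO is compared against can be sketched as below on synthetic data (CaO itself is not reproduced here):

```python
# SMOTE resampling before a decision tree, scored with the geometric mean.
from imblearn.over_sampling import SMOTE
from imblearn.metrics import geometric_mean_score
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)   # balance classes
clf = DecisionTreeClassifier(random_state=0).fit(X_res, y_res)

print("G-mean:", geometric_mean_score(y_te, clf.predict(X_te)))
```
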
ISTA-06.6 16:00 Intelligent-Based Decision Support System for Diagnosing Glaucoma in Primary Eye Care Centers Using Eye Tracker
Sajitha Krishnan (Amrita Vishwa Vidyapeetham & Amrita School of Engineering, India); Amudha J (Amrita Vishwa Vidyapeetham, India); Sushma Tejwani (Narayana Nethralaya-2, India)
It is quite alarming that the increase in glaucoma is due to the lack of awareness of the disease and the cost of glaucoma screening. Primary eye care centers need to include comprehensive glaucoma screening and incorporate machine learning models into a decision support system. The proposed system considers state-of-the-art eye gaze features to understand cognitive processing, direction, and restriction of the visual field. There is no significant difference in the global and local ratio or the skewness of fixation duration and saccade amplitude, which suggests that there is no difference in cognitive processing. The significance values of the saccadic extent along the vertical axis, the Horizontal-Vertical ratio (HV ratio), the convex hull area, and the saccadic direction show that there is a restriction in the vertical visual field. The statistical measures (p<0.05) and the Spearman correlation coefficient with the class label validate the results. The proposed system compares the performance of seven classifiers: the Naïve Bayes classifier, linear and kernel Support Vector classifiers, a decision tree classifier, AdaBoost, random forest, and the eXtreme Gradient Boosting (XGBoost) classifier. The discrimination between eye gaze features of glaucoma and normal subjects is efficiently done by XGBoost with an accuracy of 1.0. The decision support system is cost-effective and portable.
ISTA-06.7 16:15 URLCam: Toolkit for Malicious URL Analysis and Modeling
Mohammed Ayub and El-Sayed M. El-Alfy (King Fahd University of Petroleum and Minerals, Saudi Arabia)
Web technology has become an indispensable part of human life for almost all activities. On the other hand, the trend of cyberattacks is on the rise in today's Web-driven world. Therefore, effective countermeasures for the analysis and detection of malicious websites are crucial to combat the rising threats to cyber security. In this paper, we systematically reviewed the state-of-the-art techniques and identified a total of about 230 features of malicious websites, which are classified as internal and external features. We also developed a toolkit for the analysis and modeling of malicious websites. The toolkit implements several types of feature extraction methods and machine learning algorithms, which can be used to analyze and compare different approaches to detecting malicious URLs. In addition, the toolkit incorporates several other options, such as feature selection and imbalanced learning, with the flexibility to be extended to include more functionality and generalization capabilities. Finally, some use cases are demonstrated for different datasets.

Friday, October 16 14:45 - 18:00 (Asia/Calcutta)

SIRS-03: SIRS-03: Regular & Short Papers - Sixth International Symposium on Signal Processing and Intelligent Recognition Systems (SIRS'20)

SIRS-03.1 14:45 Comparative Study of Maturation Profiles of Neural Cells in Different Species with the Help of Computer Vision and Deep Learning
Hritam Basak and Rohit Kundu (Jadavpur University, India)
Comparative analysis of neural composition in interconnected species may focus on their neuronal differences during metamorphosis and can be used to analyze the variation in the "neural underpinnings of cognition" and the vulnerability to neural indisposition. In this paper, we compared Neuroblast Progenitor Cells (NPC) of humans and chimpanzees (and bonobos) to observe whether the loss of Amyloid Precursor Protein during evolution can affect NPC migration, because distinctive NPC migration patterns between humans and non-human primates can suggest how chronic phenotypic changes of human neurons differ from those of non-human primates. By assessing the inter-species comparative study of the mean speed of each cell and the half-time of gap closure in the Wound Healing Assay, differential dendritic maturation timings between species can be obtained. Faster NPC migration may suggest different cell-autonomic features. Hence, we have analyzed in-vitro pluripotent stem cell technology to find the maturity and development of NPC. We have utilized deep learning algorithms for segmentation of the cell body from the background and tracking of cell migration from microscopic video obtained from the experimental setup. Our results show different migration patterns between species, suggesting different maturation profiles as well as different maturation rates of neuroblast cells in different species. This work lays the groundwork for further analysis in this domain.
SIRS-03.2 15:00 Deep Learning Algorithms to Detect and Localize Acute Intracranial Hemorrhages
SaiManasa Chadalavada (Amrita School of Engineering, Bengaluru, India); Bhavana V (Amrita Vishwa Vidyapeetham, India)
Intracranial hemorrhage (ICH) is a life-threatening and devastating medical condition. It is estimated that the spontaneous rate of occurrence of ICH is 24.6 per 100,000. ICH, also known as an intracranial bleed, is bleeding within the brain that may occur due to a variety of causes such as hypertension, traumatic brain injury, fracture, etc. Even though a medical professional can diagnose it through physical symptoms such as vomiting, neck stiffness, and blood pressure, only a brain scan can confirm an ICH case. The area around the bleeding point and the ruptured blood vessel is usually seen as a dense area. This work is an attempt to design a prototype for a complete diagnosis of ICH. This includes multi-label classification of the brain scan into one of its five subtypes: intraparenchymal, intraventricular, epidural, subdural, and subarachnoid. It also includes precise localization of the bleeding point within the brain scan, achieved by means of semantic segmentation of brain scans. The VGG16 model, used for multi-label classification, achieved an accuracy of 70.29% when trained; it was trained on 146 cases and tested on 37 cases. The U-Net model, used for semantic segmentation, achieved an accuracy of 99.87% when trained; it was trained on 2108 cases and tested on 318 cases.
SIRS-03.3 15:15 Acoustic Prediction of Elephants for Localization and Movement Tracking Using Sensors and Distance Metrics
M. Prabu and Rajkumar T (National Institute of Technology Calicut, India)
The characteristics of sound are useful for object localization in complex environments. The object considered for localization here is the elephant, and its movement is tracked through Direction of Arrival (DoA) and Time of Arrival (ToA). Location estimation is made using sensors by measuring the time of arrival. Methods are proposed to obtain the localization of the elephant using sensor positioning and distance metrics. The Euclidean distance measure is applied to determine the distance between the sensors positioned for object movement tracking. The results of the proposed work show that elephant localization is more accurate and that movements are tracked better compared to existing approaches.
SIRS-03.4 15:30 Community Detection Algorithms in Complex Networks: A Survey
Sanjay Kumar and Rahul Hanot (Delhi Technological University, India)
Community detection in complex networks is the process of finding optimal clusters of vertices that are similar in characteristics. Community detection plays a crucial role in studying the properties and functions of complex networks. It is generally categorized as an optimization problem and, due to its inherent properties, cannot be solved by traditional optimization methods; over the past few decades, various algorithms have been proposed to address this problem in multiple fields such as power systems, physics, biology, and sociology. In this paper, we present a critical survey of the various algorithms currently available for community detection, such as genetic algorithms, evolutionary algorithms, nature-inspired algorithms, and deep learning algorithms. This survey outlines the challenges and constraints of different state-of-the-art community detection algorithms that utilize contemporary techniques like deep neural networks, genetic algorithms, and various topological-feature-based methodologies.
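As a concrete example of the classical (non-deep) family such a survey covers, a modularity-based baseline can be run with NetworkX on a standard benchmark graph:

```python
# Greedy modularity maximization on the Zachary karate club graph.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.karate_club_graph()                     # classic benchmark graph
communities = greedy_modularity_communities(G)

for i, c in enumerate(communities):
    print(f"community {i}: {sorted(c)}")
print("modularity:", nx.algorithms.community.modularity(G, communities))
```
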
SIRS-03.5 15:45 Periocular Recognition Under Unconstrained Image Capture Distances
Vineetha Ipe (IIITMK, India); Tony Thomas (Indian Institute of Information Technology and Mangement - Kerala, India)
In recent times, the periocular region has emerged as a potential trait for unconstrained biometrics. The latest trends in biometric research focus on removing the distance constraint faced while capturing biometric images. Relaxing the distance constraints imposed on participants raises new challenges in the form of image resolution and quality. Employing advanced hardware solutions, including high-resolution image sensors and long focal lenses, is not always feasible and economical. Many image enhancement techniques, such as interpolation methods and super-resolution, have been developed to address the quality issue of acquired images arising from the relaxation of range constraints. However, the use of interpolation techniques results in a blurred image. In addition, from our investigations, we have inferred that conventional CNN-based super-resolution methods improve the resolution of all images alike, without taking the capture range into account, and hence are not quality driven. In order to improve the recognition rate irrespective of the acquisition distance, we propose to make use of transfer learning. Our approach is novel in that it is the first study to analyse the performance of a periocular recognition system based on different image acquisition ranges. The framework used in our work is able to achieve a uniform accuracy of 84.3% for all image acquisition ranges.
SIRS-03.6 16:00 Applying Neural Style Transfer to Spectrograms of Environmental Audio
Dejan Milacic and Sri Krishnan (Ryerson University, Canada)
Neural Style Transfer is a technique which uses a Convolutional Neural Network to extract features from two input images and generates an output image which has the semantic content of one of the inputs and the "style" of the other. This project applies Neural Style Transfer to visual representations of audio called spectrograms to generate new audio signals. Audio inputs to the style transfer algorithm are sampled from the Dataset for Environmental Sound Classification (ESC-50). Generated audio is compared on the basis of input spectrogram type (Short-Time Fourier Transform (STFT) vs. Constant-Q Transform (CQT)) and pooling type (max vs. average). Comparison is done using Mean Opinion Scores (MOS) calculated from ratings of perceptual quality given by human subjects. The study finds that STFT spectrogram inputs achieve high MOS when subjects are given a description of the style audio. The audio generated using CQT spectrogram inputs raises concerns about adapting visual domain techniques to generate audio.
SIRS-03.7 16:15 6G Ultra-Low Latency Communication in Future Mobile XR Applications
Zoran Bojkovic and Dragorad Milovanovic (University of Belgrade, Serbia); Tulsi Pawan Fowdur and Madhavsingh Indoonundon (University of Mauritius, Mauritius)
This work aims to provide a comprehensive overview of the most challenging aspects of future 6G mobile communication and present the latest research on promising low-latency technology enabling eXtended Reality (XR) services. XR refers to a real and virtual man-machine interaction environment generated by computer technologies, including augmented, virtual, and immersive reality. Ultra-reliable low-latency communication (uRLLC) is an important enabler of multi-modal XR applications. However, the solutions with the lowest latency and the highest reliability come at a high cost. In the first part, we outline requirements and fundamental trade-offs. In the second part, we consider technological limitations and research activities.

SoMMA-02: SoMMA-02: Symposium on Machine Learning and Metaheuristics Algorithms, and Applications - SoMMA'20 (Regular & Short Papers)

SoMMA-02.1 14:45 Smart Security and Surveillance System in Laboratories Using Machine Learning
Suneeta Budihal (Vishweshrayya Technological University, India); Sujata C (KLE Technological University, India)
The paper proposes to design and develop a smart authentication system in a laboratory as part of security and surveillance. To address unauthorized entry into the laboratory, a smart alert system is designed and developed. Authenticated entry to any laboratory will reduce student exposure to hazards and accidents, keeping risks at acceptable levels. The proposed methodology uses face detection and recognition techniques for student authentication. Based on the results, the attendance database is updated if authorized users enter the laboratory; otherwise, the details are sent to the course instructors through their registered email addresses. The authenticated student is also verified for wearing personal protective equipment when entering the laboratory. By this, we can reduce the vandalism occurring in laboratories and maintain integrity.
SoMMA-02.2 15:00 Traffic Sign Classification Using ODENet
In the family of deep neural network models, the deeper the model, the longer it takes to predict and the larger the memory space it utilizes. It is very likely that use cases have constraints to be respected, especially on embedded devices, i.e., low-powered, memory-constrained systems. Finding a suitable model under such constraints requires repeated trial and error to find an optimal trade-off. A novel technique known as Neural Ordinary Differential Equation Networks (ODENet) was proposed at NeurIPS 2018, where instead of a distinct arrangement of internal hidden layers of a Residual Neural Network (ResNet), parametrized derivatives of the internal states are used in the neural system. Any differential equation solver can be used to calculate the final output. These models have constant depth and can trade between speed and accuracy. We propose a methodology for traffic sign detection using ODENet and subsequently conclude that ODENets are more robust and perform better in comparison to ResNets. We also conclude that, though training time is high for ODENets, they can trade off between speed and accuracy when it comes to both training and testing.
SoMMA-02.3 15:15 CybSecMLC: A Comparative Analysis on Cyber Security Intrusion Detection Using Machine Learning Classifiers
Sriramulu Bojjagani and B. Ramachandra Reddy (SRM University-AP, India); Mulagala Sandhya (National Institute of Technology Warangal, India); Dinesh Reddy Vemula (SRM University-AP, India)
With the rapid growth of the Internet and the widespread usage of smartphones and wireless communication based applications, new threats, vulnerabilities, and attacks have also increased. Attackers always use communication channels to violate security features, and the rapid growth of security attacks and malicious activities causes a lot of damage to society. Network administrators and intrusion detection systems (IDS) are often unable to identify the possibility of network attacks. However, many security mechanisms and tools have evolved to detect the vulnerabilities and risks involved in wireless communication. Apart from these, machine learning classifiers (MLCs) are also practical approaches for detecting intrusion attacks. These MLCs differentiate network traffic data into two classes: abnormal and regular. Many existing systems work on in-depth analysis of specific attacks in network intrusion detection systems. In this paper, we present a comprehensive and detailed inspection of some existing MLCs for identifying intrusions in wireless network traffic. Notably, we analyze the MLCs along various dimensions, such as feature selection and ensemble techniques, for identifying intrusion detection. Finally, we evaluate the MLCs using the "NSL-KDD" dataset and summarize their effectiveness using a detailed experimental evaluation.
SoMMA-02.4 15:30 Emotion Recognition from Facial Expressions Using Siamese Network
Naga Venkata Sesha Saiteja Maddula, Lakshmi R Nair and Harshith Addepalli (Amrita School of Engineering, Bangalore, India); Suja Palaniswamy (Amrita School of Engineering, Amrita Vishwa Vidyapeetham, India)
Research on automatic emotion recognition has increased drastically because of its significant influence on various applications such as treatment of illness, educational practices, decision making, and the development of commercial applications. Using Machine Learning (ML) models, we have been trying to determine emotions accurately and precisely from facial expressions. But this requires a colossal amount of resources in terms of data as well as computational power, and training can be time-consuming. To solve these complications, meta-learning has been introduced to train a model on a variety of learning tasks, which helps the model generalize to novel learning tasks using a restricted amount of data. In this paper, we have applied one of the meta-learning techniques and propose a model called MLARE (Meta Learning Approach to Recognize Emotions) that recognizes emotions using our in-house developed dataset AED-2 (Amrita Emotion Dataset-2), which has 56 images of subjects expressing seven basic emotions, viz., disgust, sad, fear, happy, neutral, anger, and surprise. It involves the implementation of a Siamese network, which estimates the similarity between inputs. We achieve 90.6% overall average accuracy in recognizing emotions with the state-of-the-art method of one-shot learning tasks using a convolutional neural network in the Siamese network.
SoMMA-02.5 15:45 Detection of Obfuscated Mobile Malware with Machine Learning and Deep Learning Models
Dhanya K. A. (TIFAC CORE in Cyber Security, Amrita School of Engineering , Amrita University, India); Dheesha O K (SCMS School of Engineering and Technology, Cochin, India); Gireesh Kumar T (Amrita Vishwa Vidyapeetham, India); Vinod P (SCMS School of Engineerin & Technology, India)
Obfuscation techniques are used by malware authors to conceal malicious code and bypass antivirus scanning. Machine learning techniques, especially deep learning techniques, are strong enough to identify obfuscated malware samples. The performance of a deep learning model on obfuscated malware detection is compared with conventional machine learning models such as Random Forest (RF), Classification and Regression Trees (CART), and K Nearest Neighbour (KNN). Both static (hardware and permission) and dynamic (system call) features are considered for evaluating the performance. The models are evaluated using precision, recall, F1-score, and accuracy. Obfuscation transformation attribution is also addressed in this work using association rule mining. Random forest produced the best outcome, with an F1-score of 0.99 on benign samples, 0.95 on malware, and 0.94 on obfuscated malware with system calls as features. A deep learning network with a feed-forward architecture is capable of identifying benign, malware, and obfuscated malware samples with F1-scores of 0.99, 0.96, and 0.97, respectively.
SoMMA-02.6 16:00 Data Driven Methods for Finding Pattern Anomalies in Food Safety
Anantha Krishna S, Amal Soman and Manjusha Nair (Amrita Vishwa Vidyapeetham, Amritapuri, India)
Food is an indispensable part of the life of all living organisms in the world. As the world population increases, the production and consumption of food also increase. Since the population grows rapidly, food production may not be sufficient to feed all the people in the world, which gives rise to food adulteration and food fraud. Adulteration is the process of adding a foreign substance to a food material, which affects the natural quality of the food; as the amount of adulterant increases, the toxicity also increases. Machine learning techniques have previously been used to automate the prediction of food adulteration under normal scenarios. In this paper, we use different machine learning techniques for finding food adulteration from milk data sets. This paper surveys the different concepts used in automating the detection of food adulteration and discusses the experimental results obtained by applying machine learning algorithms like Naive Bayes, Support Vector Machine (SVM), K-Nearest Neighbor (KNN), Artificial Neural Networks (ANN), Linear Regression, and ensemble methods. The accuracy of the models ranged from 79% to 89%. The ensemble method outperformed the other algorithms with an accuracy of 89%, and Linear Regression showed the least accuracy of 79%. Artificial Neural Networks showed an accuracy of almost 87%, while SVM and Naïve Bayes showed accuracies of 84% and 80%, respectively.
SoMMA-02.7 16:15 Exam Seating Allocation to Prevent Malpractice Using Genetic Multi-Optimization Algorithm
Madhav M Kashyap, S Thejas, Gaurav C G and Srinivas K S (PES University, India)
Despite unceasing debate about their pros and cons, exams and standardized testing have emerged as the main mode of evaluation and comparison in our increasingly competitive world. Inevitably, some examinees attempt to illegally gain an unfair advantage over other candidates by indulging in cheating and malpractice. Even a single case of examination malpractice can destroy an examination body's credibility and even lead to costly and time-consuming legal proceedings. Our paper attempts to strategically allot examinees to specific seats and rooms so as to mitigate the overall probability of malpractice. It involves examining multiple crucial factors, such as subject similarity, distancing between examinees, and human field of vision, to find the most optimal seating arrangement. We exploit the property of evolutionary genetic algorithms to find globally optimal or close-to-optimal solutions in efficient time for this otherwise NP-complete permutation problem.
SoMMA-02.8 16:30 Analysis of UNSW-NB15 Dataset Using Machine Learning Classifiers
Anne Dickson (University of Kerala, India); Ciza Thomas (Senior Joint Director, Directorate of Technical Education, Kerala)
Benchmark datasets are an indispensable tool for scrutinizing vulnerabilities and tools in network security. Current datasets lack correlation between normal network traffic and real-time network traffic. Such datasets are the cornerstone deployed by the research community for the evaluation and establishment of attack detection. Creating our own dataset is a herculean task; hence, analyzing the existing datasets helps to provide thorough clarity on their effectiveness when deployed in real-time environments. This paper focuses on analyzing the UNSW-NB15 dataset using machine learning classifiers for intrusion detection. The feasibility, reliability, and dependability of the dataset are reviewed and discussed by considering various performance measures, such as precision, recall, F-score, and specificity, using the machine learning classifiers Naïve Bayes, Logistic Regression, SMO, J48, and Random Forest. Experimental results reveal a noticeable classification accuracy of 0.99 with the random forest classifier, with a recall of 0.998 and a specificity of 0.999. Research studies reveal that threat diagnosis using conventional datasets and sophisticated technologies covers only 25% of the threat taxonomy, hence the poor performance of existing intrusion detection systems. Thorough analysis and exploration of the dataset will pave the way for the outstanding performance of intelligent IDS.
SoMMA-02.9 16:45 Machine Learning and Soft Computing Techniques for Combustion System Diagnostics and Monitoring: A Survey
Amir Khan, Mr. (Aligarh Muslim University & ZHCET, India); Mohd. Zihaib Khan (Aligarh Muslim University, India); Mohammad Samar Ansari (Athlone Institute of Technology, Ireland & Aligarh Muslim University, India)
Combustion systems are ubiquitous in nature and are employed under varied conditions to comply with the specific demands of the applications they are used in. Combustion control and optimization techniques are essential for efficient and reliable monitoring of the combustion process. This paper presents a comprehensive review of combustion monitoring diagnostics and prognostics which have been researched thoroughly using various soft-computing techniques incorporating state-of-the-art Machine Learning (ML) and Deep Learning (DL) techniques. Regarding the combustion systems, there are three primary areas which have been investigated viz. (i) combustion state monitoring, (ii) radical emissions and their concentration measurement, and (iii) 3-D flame image reconstruction. This paper reviews these areas along with recent advancements in the flame imaging techniques.
SoMMA-02.10 17:00 Activity Modeling of Individuals in Domestic Households Using Fuzzy Logic
Sristi Dyuthi (University of California San Diego, USA); Shahid Mehraj Shah (National Institute of Technology, Srinagar, J&K, India)
A model which predicts the activities of each individual in a household is developed. This model is used to simulate the activities of 25,000 households in a town in Kerala (a state in the southern region of India). A fuzzy-logic based approach is used to estimate the probabilities of an individual being in a particular state/activity. An optimization problem is then formulated to compute the activity transitions. Further, the activity transitions of the individuals within a house are tied together in an appropriate way. These activity transitions are then used to simulate a Markov chain of activities for a sample set of households in the town.
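The sketch below illustrates only the final simulation step, sampling an activity Markov chain for one individual; the states and the transition matrix are assumptions, not the fuzzy-logic-derived values from the paper.

```python
# Minimal sketch: simulating a daily activity trajectory from an assumed transition matrix.
import numpy as np

states = ["sleeping", "cooking", "working", "leisure"]
# hypothetical hourly transition matrix, rows sum to 1
P = np.array([
    [0.80, 0.10, 0.05, 0.05],
    [0.05, 0.60, 0.20, 0.15],
    [0.05, 0.10, 0.70, 0.15],
    [0.20, 0.10, 0.20, 0.50],
])

rng = np.random.default_rng(0)
state = 0                      # start asleep
trajectory = []
for hour in range(24):
    trajectory.append(states[state])
    state = rng.choice(len(states), p=P[state])
print(trajectory)
```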

Friday, October 16 15:30 - 18:00 (Asia/Calcutta)

Lightning Talks

RF Receiver Front End Design for Communication (Shasanka Sekhar Rout, GIET University, Gunupur)

Application of Machine Learning in Legal domain (Riya Sil, Adamas University)

Mixed-signal processing challenges (Sandhya Sandeep Save, Thakur College of Engineering and Technology)

Perimeter forensic investigation can be penetrated to unmask the true origin of cybercrime (Rachana Yogesh Patil, Pimpri Chinchwad college of Engineering, Akurdi, Pune)

Optimal Placement of Relay Stations in WiMAX Networks (J Sangeetha, M S Ramaiah Institute of Technology)

Double Band Coplanar Antenna for ISM and UWB Applications (Dhivya Raj, College of Engineering, Chengannur)

Two Dimensional (2D) and Three Dimensional (3D) Network-On-Chip (NoC) Architectures (Ashok Kumar Nagarajan, Sree Vidyanikethan Engineering College, Tirupati)

The Evolving Facade of Computations (Prakash Hegade, KLE Technological University, Hubli)

Challenges and Future Prospects of LIDAR for Smart Vehicular Application (Radhika S, Sathyabama Institute of Science and Technology, Chennai)

A Real-time IoT Based Nitrate Sensing System using Interdigital Transducer (Darsana S, College of Engineering, Chengannur)

The convergence of HPC and AI (Satyadhyan Chickerur, KLE Technological University)

An Efficient Resource Management Technique for Deadlock-Free Systems (Madhavi Devi B, JNTUK)

Location Privacy Preservation through Dummy Generation: Challenges and Opportunities (Dilay Parmar, S. V. National Institute of Technology, Surat)

Image Processing for Flame Monitoring in Power Station Boilers (K. Sujatha, Dr. MGR Educational & Research Institute)

Deep Learning for Computer Vision (A. Kalaivani, Saveetha School Of Engineering, Saveetha University)

Saturday, October 17

Saturday, October 17 9:30 - 10:20 (Asia/Calcutta)

Keynote: Predictive Analytics Frameworks for Forecasting High Impact Economic Events and Insider Trading Events

Speaker: Dr. Vipin Chaudhary, Kevin J. Kranzusch Professor and Chair, Case Western Reserve University, USA

Abstract: Financial markets are driven by complex dynamics and interplay, often stemming from convoluted investor interactions and asset and inter-market complexities. Recent financial market events, such as the sub-prime mortgage crisis of 2008, have necessitated the development of strategies to deal with the acute stresses of renewed economic uncertainties, monitor systemic activities, and generate actionable intelligence. Despite several advancements, the modeling of financial market events remains elusive due to complex interactions among the market constituents. In this talk I will present various predictive models to forecast high impact economic events and mine illegal trading activities driven by material non-public information. Our results on real test data confirm the efficacy of the proposed solutions to forecast economic recessions over multiple horizon periods and to detect insider trading activities in the U.S. equity markets.

Saturday, October 17 10:20 - 11:20 (Asia/Calcutta)

Keynote: Quantum-inspired Processing and Networking

Speaker: Dr. Dilip Krishnaswamy, Vice President (New Technology R&D), Reliance Industries Ltd, India

Abstract: This talk will provide an introduction to quantum physics and will also introduce quantum computing-based processing based on matrix unitary transformations. The talk will also suggest possibilities for quantum-inspired processing, and discuss some recent work in the area of quantum blockchain networks.

Saturday, October 17 11:20 - 12:10 (Asia/Calcutta)

Keynote: Smart Sensors

Speaker: Dr. Jose Joseph, Assistant Professor, Indian Institute of Information Technology and Management-Kerala (IIITM-K), Trivandrum, India

Abstract: Sensors and sensor systems have been prevalent in commercial, industrial, healthcare, and military applications over the past several decades. The addition of 'smartness' to sensors made them 'think' and 'act' according to the situation. This talk analyzes the technological aspects of incorporating smartness into sensors. The contributions of a few process innovations, such as monolithic integration and 3D ICs, in converting sensors into smart sensors will be covered briefly. The talk will try to throw light on the topic with a slight inclination towards electronic and microfabrication aspects.

Saturday, October 17 11:30 - 13:00 (Asia/Calcutta)

Hot off the Press

Accepted Titles

A Psychologically Inspired Fuzzy Cognitive Deep Learning Framework to Predict Crowd Behavior (Elizabeth B Varghese, Indian Institute of Information Technology and Management - Kerala)

E-CRUSE: Energy-Based Throughput Analysis for Cluster-Based RF Shallow Underwater Communication (Hrishikesh Venkataraman and Pavan Ganesh, Indian Institute of Information Technology, Sri City)

Enabling Hardware Performance Counters for Microkernel-Based Virtualization on Embedded Systems (Deepa Mathew, Cochin University of Science and Technology)

Soft-error reliable architecture for future microprocessors (Shoba Gopalakrishnan, Muthoot Institute of Technology & Science)

Computer-Aided Diagnosis of Retinopathy of Prematurity in Preterm Infants (Siva Kumar R, College of Engineering Trivandrum)

A Psychology-Inspired Trust Model for Emergency Message Transmission on the Internet of Vehicles (Fabi A K, Indian Institute of Information Technology and Management - Kerala)

Dynamic Budget Total need Based Resource Reservation Technique (Madhavi Devi B, JNTUK)

An Evaluation of Feature Encoding Techniques for Non-Rigid and Rigid 3D Point Cloud Retrieval (Shankar Gangisetty, KLE Technological University)

Classification of Diabetic Retinopathy using Residual neural network (Priyadharsini C, Vellore Institute of Technology, Chennai)

Saturday, October 17 13:30 - 14:20 (Asia/Calcutta)

Keynote: Toward effective Network Traffic Classification via Deep Learning

Speaker: Dr. Domenico Ciuonzo, DIETI, University of Naples, Federico II, Italy

Abstract: In recent years, operators have experienced tremendous growth of the traffic to be managed in their networks, whose heterogeneous composition (e.g. mobile/IoT devices, anonymity tools), dynamicity, and increasing encryption are posing new challenges toward actionable network traffic analytics. In this talk, the topic of network traffic classification will be covered, owing to its applications in network management, user-tailored experience, and privacy. First, the reasoned use of the Deep Learning umbrella will be introduced and explained in this context. Then, lessons learned and common pitfalls will be highlighted. Subsequently, the adoption of sophisticated multi-modal multi-task architectures will be put forward. The talk will also cover the current research done in the area of AI-based network traffic analysis at the TRAFFIC group of the University of Naples Federico II, Italy.

Saturday, October 17 14:30 - 17:15 (Asia/Calcutta)

CoCoNet-S8: CoCoNet-S8: Communications, Control and Signal Processing (Regular & Short Papers)

CoCoNet-S8.1 14:30 Building a cloud integrated WOBAN with optimal coverage and deployment cost
Mausmi Verma (Institute of Engineering and Technology, Devi Ahilya University, Indore, India); Uma Rathore Bhatt (Institute of Engineering & Technology, Devi Ahilya University, Indore, India); Raksha Upadhyay (IET DAVV Indore, India)
Increasing demand for new services and applications by users poses a challenge to the communication network. Cloud computing serves this purpose by providing a shared pool of resources such as storage, servers, and services. Such technology uses the backbone network for every application to be served and thus results in high latency. To overcome this problem, cloudlets are used, which are deployed in a decentralized way. Cloudlets are clusters of computers which are connected to the users either directly or in at most two wireless hops without affecting the latency of the network. There is another important factor, cost efficiency, which plays a very important role in the deployment of cloudlets in a cloud integrated wireless optical broadband access network (CIW). In this paper, we propose an algorithm to find the optimum position in the network for the deployment of cloudlets by taking coverage and cost as a tradeoff.
CoCoNet-S8.2 14:45 Deploy-Web Hosting Using Docker Container
Sheen Sabu (APJ Abdul Kalam Technological University, India); Minto Sunny (AICTE(All India Council for Technical Education), India); Sen Shaji and Udith Uthaman (AICTE (All India Council for Technical Education), India); Gemini George (APJ Abdul Kalam Technological University, India)
In traditional web hosting, websites/web applications are configured on a bare metal server or a virtual private server. For hosting multiple websites, directories are created for each website and a Linux user is created corresponding to each website. This means that a single web server/app server daemon process is responsible for serving all these websites. This is called shared web hosting. This is not a suitable solution if your application handles secret or sensitive data such as credit card numbers, bank account information, etc. If an application handles secret/sensitive data, it must be deployed on dedicated servers, which is costly. This paper presents the Docker container technology, which is currently being used in many production environments to package applications in isolated environments. Further, the work elaborates on how Docker technology has overcome the previous issues, including building and deploying large applications. Docker container based deployments, on the other hand, isolate a website/web application and its dependencies into self-contained units which we can run anywhere. With Docker based deployment, we can achieve a Docker cloud where we can horizontally scale the containers up and down dynamically based on the traffic volume. Further, we can run a large monolith application or a micro-service on a Docker container.
CoCoNet-S8.3 15:00 Localization of Self-Driving Car Using Particle Filter
Nalini C Iyer (B.V.Bhoomaraddi college of Engg and Technology, India); Akash Kulkarni (KLE Technological University, India); Raghavendra Shet (B. V. B College of Engineering and Technology, India)
An autonomous system or self-driving car needs to localize itself very frequently, or sometimes continuously, to determine its proper location, which is essential to perform its navigation functionality. Probabilistic models are amongst the best methods for providing a real-time solution to the localization problem. Current techniques still face some issues connected to the type of representation used for the probability densities. In this paper, we attempt to localize the self-driving car using a particle filter with low-variance resampling. The particle filter is a recursive Bayes filter, a non-parametric approach which models the distribution by samples. A specially modified Monte Carlo localization method is used for extracting the local features as virtual poles. Simulation results demonstrate the robustness of the approach, including kidnapping of the robot. It is faster, more accurate, and less memory-intensive than earlier grid-based methods.
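The sketch below is a minimal 1-D particle filter with low-variance (systematic) resampling; the single-landmark range measurement and the noise values are toy assumptions, not the paper's pole-feature model.

```python
# Minimal sketch: 1-D particle filter with low-variance resampling.
import numpy as np

rng = np.random.default_rng(1)
N = 1000
landmark = 10.0
particles = rng.uniform(0.0, 20.0, N)       # initial belief over position
weights = np.ones(N) / N

def low_variance_resample(particles, weights):
    # systematic resampling: one random offset, N equally spaced pointers
    positions = (rng.random() + np.arange(N)) / N
    cumulative = np.cumsum(weights)
    indexes = np.searchsorted(cumulative, positions)
    return particles[indexes]

true_pos = 3.0
for step in range(20):
    true_pos += 0.5                                       # car moves forward
    particles += 0.5 + rng.normal(0.0, 0.1, N)            # motion update with noise
    z = abs(landmark - true_pos) + rng.normal(0.0, 0.2)   # noisy range measurement
    expected = np.abs(landmark - particles)
    weights = np.exp(-0.5 * ((z - expected) / 0.2) ** 2)  # Gaussian likelihood
    weights /= weights.sum()
    particles = low_variance_resample(particles, weights)
    weights = np.ones(N) / N

print("estimated position:", particles.mean(), "true position:", true_pos)
```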
CoCoNet-S8.4 15:15 Reduction in Average Distance Cost by optimizing position of ONUs in FiWi access network using Grey Wolf optimization algorithm
Nitin Chouhan (Institute of Engineering and Technology, Devi Ahilya University, Indore, India); Uma Rathore Bhatt (Institute of Engineering & Technology, Devi Ahilya University, Indore, India); Raksha Upadhyay (IET DAVV Indore, India)
Fiber-Wireless (FiWi) is the promising next-generation broadband access network. FiWi integrates the technical merits of the optical access network and the wireless access network. ONU placement is the most important issue in FiWi as it affects the network cost and network performance. The present research work considers the ONU placement issue and proposes a novel algorithm for finding the optimum position of ONUs. For this, the nature-inspired Grey Wolf Optimization (GWO) algorithm is applied to the FiWi network. To the best of our knowledge, this algorithm has not been used for the ONU placement problem in the FiWi network. GWO provides the optimum position of every ONU where the Average Distance Cost (ADC) is minimum. ADC is the average of the distances between an ONU and its associated wireless routers. To check the effectiveness of the proposed work, simulation is done for varying numbers of wireless routers. The proposed work is compared with the well-known Teaching Learning Based Optimization (TLBO) algorithm. The results show a reduction in ADC after applying the GWO algorithm compared with the initial placement and the TLBO algorithm for all the cases considered for simulation. Hence, to deploy a cost-efficient FiWi network, the proposed work may be one of the best solutions.
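A minimal sketch of GWO minimizing an average-distance objective is given below; it assumes 2-D router coordinates, a single ONU, and mean Euclidean distance as the ADC, and does not reproduce the paper's full multi-ONU FiWi model.

```python
# Minimal sketch: Grey Wolf Optimization of one ONU position against Average Distance Cost.
import numpy as np

rng = np.random.default_rng(0)
routers = rng.uniform(0, 100, size=(10, 2))      # hypothetical router positions

def adc(onu_pos):
    # Average Distance Cost: mean distance from the ONU to its routers
    return np.linalg.norm(routers - onu_pos, axis=1).mean()

wolves = rng.uniform(0, 100, size=(20, 2))       # candidate ONU positions
max_iter = 100
for t in range(max_iter):
    fitness = np.array([adc(w) for w in wolves])
    alpha, beta, delta = wolves[np.argsort(fitness)[:3]]
    a = 2 - 2 * t / max_iter                     # control parameter decreases 2 -> 0
    for i in range(len(wolves)):
        new_pos = np.zeros(2)
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(2), rng.random(2)
            A, C = 2 * a * r1 - a, 2 * r2
            D = np.abs(C * leader - wolves[i])
            new_pos += (leader - A * D) / 3.0    # average of the three leader pulls
        wolves[i] = np.clip(new_pos, 0, 100)

best = min(wolves, key=adc)
print("optimal ONU position:", best, "ADC:", adc(best))
```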
CoCoNet-S8.5 15:30 Predictive Modeling and Control of Clamp Load Loss in Bolted Joints Based on Fractional Calculus
Priteshkumar Shah (Symbiosis International University, India); Ravi Sekhar (Symbiosis Institute of Technology, Pune, India)
Safety of bolted joints in industrial machinery is of paramount importance. In this paper, fractional calculus based predictive modeling has been investigated to control clamping force losses in bolted joints under service loads. Clamp load loss occurs in bolted joints due to application and subsequent removal of an externally applied separating service load on a fastener preloaded beyond its elastic limit. In this work, five different model structures were tried for system identification based predictive modeling of joint clamp load loss. These structures were the first order integer, second order integer, first generation CRONE, fractional integral and fractional order models. These models were validated by statistical parameters such as FIT, R squared, mean squared error, mean absolute error and maximum absolute error. The fractional order model with three parameters provided most accurate estimate of the system performance. It also took minimum iterations to reach the optimum controller parameter settings. This model was controlled using PID and fractional PID controllers. Fractional PID controller was designed to minimize integral of squared error (ISE) and towards the convergence of gain/order parameters. The PID controller response exhibited better time domain characteristics as compared to the fractional PID, but suffered from a maximum overshoot as well. In a physical bolted joint, clamp load loss and external service load overshoots may lead to joint failures. Maximum overshoot was totally eliminated by fractional PID controller, proving its safe applicability to the bolted joint system. By choosing a realistic set point for clamp load loss, the maximum permissible external service loading conditions were predicted successfully.
CoCoNet-S8.6 15:45 On-off Thinning in Linear Antenna Arrays using Binary Dragonfly Algorithm
Ashish Patwari (Vellore Institute of Technology, Vellore); Medha Mani (Vellore Institute of Technology - Vellore, India); Sneha Singh and Gokul Srinivasan (Vellore Institute of Technology, Vellore, India)
The aim of this work is to study the suitability of two newly-introduced bio-inspired algorithms, namely, the Dragonfly algorithm (DA) and the Salp Swarm algorithm (SSA) for thinning a linear antenna array. In array thinning, a fully populated array is chosen as a starting point and a thinned array is obtained through careful deactivation of select sensors such that the residual active sensors enable the array to achieve a desired side-lobe performance. In this paper, we apply the binary versions of DA and SSA, namely, the Binary Dragonfly algorithm (BDA) and the Binary Salp Swarm algorithm (BSSA) to thin a symmetric linear array with uniform inter-element spacing of half wavelength. Extensive simulations were performed in MATLAB by considering arrays of different sizes. The results obtained from BDA and BSSA were compared against those obtained from the binary versions of two benchmark algorithms, namely, the Genetic algorithm (GA) and the Grey Wolf Optimizer (GWO). Relative side-lobe level (RSLL) and filling percentage were used as performance comparison metrics. It has been observed that both BDA and BSSA offer promising results in line with BGA and BGWO. More specifically, BDA was found to be faster than BSSA.
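To make the comparison metrics concrete, the sketch below evaluates the RSLL and filling percentage of one thinning pattern for an isotropic, symmetric, half-wavelength spaced array; the pattern and the crude main-lobe mask are assumptions, and no BDA/BSSA optimizer is shown.

```python
# Minimal sketch: relative side-lobe level of a symmetric thinned linear array.
import numpy as np

def relative_sll(on_off, n_angles=1801):
    """RSLL in dB for one half of a symmetric array with d = lambda/2."""
    theta = np.linspace(0.0, np.pi, n_angles)
    k_d = np.pi                                   # (2*pi/lambda) * (lambda/2)
    m = np.arange(len(on_off)) + 0.5              # element positions in the half-array
    # symmetric array factor: sum of cosines over the active elements
    af = 2.0 * np.sum(on_off[:, None] * np.cos(np.outer(m, k_d * np.cos(theta))), axis=0)
    af_db = 20 * np.log10(np.abs(af) / np.abs(af).max() + 1e-12)
    main_beam = np.abs(theta - np.pi / 2) < np.deg2rad(5)   # crude main-lobe mask
    return af_db[~main_beam].max()

on_off = np.array([1, 1, 0, 1, 1, 0, 1, 0, 1, 1])  # hypothetical thinning pattern
print("RSLL (dB):", relative_sll(on_off), "filling %:", 100 * on_off.mean())
```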
CoCoNet-S8.7 16:00 Design of high speed turbo product code decoder
Gautham Shivanna, Yamuna B and Karthi Balasubramanian (Amrita Vishwa Vidyapeetham, India); Deepak Mishra (Space Application Center (SAC), ISRO, Ahmedabad, India)
In the field of digital communication, there has always been a requirement for efficient, low-complexity, and high-speed error control encoders and decoders. Many such encoders and decoders for different error control codes have been proposed in the literature. However, developing such CODECs whose performance is suitable for the requirements of modern communication systems is still an open research problem. In this paper, one such decoder, namely the fast Chase decoder proposed in the literature, has been studied. The hardware design of the decoder has been carried out and verified against results from MATLAB simulations. An attempt has been made to improve the speed by replacing the ripple carry adder in the design with a fast adder. The hardware architecture is implemented on a Xilinx XC7A35T platform and an increase in computation speed of 5% has been achieved.
CoCoNet-S8.8 16:15 Resource Allocation for 5G RAN - A Survey
Shanmugavel G and Vasanthi M s (SRM Institute of Science and Technology, India)
Resource Allocation (RA) is a fundamental task in the design and management of wireless signal processing and communication networks. In a wireless communication system, we must wisely allocate the available radio resources, such as time slots, transmission power, frequency bands, and transmission waveforms or codes, across multiple interfering links so as to accomplish better framework execution while guaranteeing user fairness and quality of service (QoS). The Fifth Generation (5G) of wireless communication systems provides better mobile service with improved QoS everywhere. Considering the dense deployment and larger number of network nodes, RA and interference management are important research issues in heterogeneous mobile networks. In this context, the available radio resources need to be utilized efficiently, which makes RA of much importance in future wireless communication systems (5G/6G). In this survey, we consider various resource allocation methods for different Radio Access Network (RAN) architectures in which several authors have implemented techniques and algorithms to achieve better resource allocation, and, with the help of the existing literature, we explore ways to allocate the radio resources for next-generation wireless communication.
CoCoNet-S8.9 16:30 Verifying Mixed Signal ASIC-KLEEL2020 using SVM
Aishwarya HR, Saroja Siddamal, Aishwarya Shetty and Prateeksha Raikar (KLE Technological University, India)
KLEEL2020 is an in-house developed event logger. The ASIC is implemented in TSMC 0.18µm CMOS mixed-signal technology, 3.3V/1.8V. The focus of this paper is to achieve functional precision of the design before tape-out. The process of verification is a critical stage in the design flow because any bug not detected at an earlier stage will lead to overall failure of the design process. In this paper, the authors present a framework for the complete verification of KLEEL2020 using System Verilog Methodology (SVM). The proposed SV environment uses the I2C protocol as the means of communication with the DUT. Different test scenarios are developed and reused to verify the ASIC. The event logger is verified for various test cases. This verification attempt helped identify five RTL bugs in the design.
CoCoNet-S8.10 16:45 Performance Analysis of individual Partial Relay selection protocol using Decode and Forward Method for Underlay EH-CRN
Kalaimagal G (Kattankulathur & SRMIST, India); Vasanthi M S (SRM University, India)
The paper investigates the performance of an underlay cognitive radio network. We propose a relay selection protocol to enhance the throughput and to obtain reduced outage probability in the ad hoc network. The proposed work is based on relay selection aided with energy harvesting to serve communication between the secondary nodes. We have formulated a closed-form expression for the proposed work and compared the outage probability and throughput with other partial relay selection techniques. Further, decode-and-forward (DF) relaying over a Rayleigh fading channel is considered in this work to improve the end-to-end channel gain. The simulation output indicates that the proposed cooperative relay selection scheme improves marginally as the number of relay nodes increases. Keywords: cooperative relaying, partial relay selection, outage probability, underlay CRN.

CoCoNet-S9: CoCoNet-S9: Image and Signal Processing, Machine Learning and Pattern Recognition (Short Papers)

CoCoNet-S9.1 14:30 Accurate Identification of Cardiac Arrhythmia using Machine Learning Approaches
Sumana Maradithaya (M S Ramaiah Institute Of Technology & Ramaiah Institute of Technology, India); Rashmitha HR (MS Ramaiah Institute of Technology, India)
Cardiac arrhythmia is a condition brought about by irregular electrical wave movement in the heart, where the heartbeat is faster or slower than normal. According to the World Health Organization, the average count of deaths due to cardiac arrhythmia is 17.3 million. Earlier, the algorithms used gave lower accuracy due to limited dataset availability and manual dataset cleaning, which resulted in inaccurate models. In this paper, raw electrocardiography readings are collected and cleaned using mean and standard deviation methods for the missing cells and special characters. Further, salient features that assist in the construction of precise models are extracted from the cleaned datasets. The implemented machine learning algorithms are proficient in detecting 13 broad classes of arrhythmia. The paper discusses the construction of the various models and evaluates the usefulness of each of them on the cardiac arrhythmia dataset. The weighted k-nearest neighbor proves to provide the highest accuracy in comparison to the other approaches.
CoCoNet-S9.2 14:45 Evaluation of Attributed Network Embedding algorithms for patent analytics
Jinesh Jose (Government Engineering College Idukki, India); S. Mary Saira Bhanu (National Institute of Technology-Tiruchirappalli, India)
Patent analytics is a specialized branch of data analytics where patent documents are analysed to understand behavioural information. Citation Network Analysis is one of the common techniques to examine the importance of a patent by studying its citations. Typical Patent Citation Network (PCN) will have millions of attributed nodes and edges. Inferencing on such a large network necessitates the use of Attributed Network Embedding (ANE) techniques to bring down the computational requirements by reducing the dimensionality of the network data. Identifying the suitable ANE algorithm for PCN analytics is the purpose of this study. Multiple ANE algorithms are applied on the patent dataset to create low dimensional embeddings and these embeddings are used as the input for performing the innovation value prediction using Linear Regression model. Mean Square Error (MSE) is calculated between the predicted innovation values and the actual innovation values. MSE values obtained with different ANE algorithms are analysed to identify the most suitable ANE algorithm for patent analytics. Graph-SAGE with mean-based aggregator resulted in the least MSE compared to all other ANE algorithms evaluated for patent analytics.
CoCoNet-S9.3 15:00 Extraction and Analysis of Facebook Public Data and Images
Gutam Bala Gangadhara (Siddartha Educational Academy Group of Institutions, Tirupati, India); Subhash Chandra Mouli D and Sudhakar Majjari (MRR Institute of Technology and Science, Udayagiri, India)
Social networks play a vital role in human communication and in improving business applications. Facebook is one of the most popular social networking applications. However, Facebook generates huge amounts of data in the form of text, advertisements, posts, images, and videos. By analyzing Facebook data we can find the locations where our business is promising in the real world. While sharing personal data, users demand security, and trust is becoming an essential parameter in social networking. In this paper, a new technique is presented to identify duplicate logo images and profile pictures to prevent fraud in business by keeping secret information in the profile picture or logo without distortion, along with a theoretical description of Facebook.
CoCoNet-S9.4 15:15 Bag of Science: A Query Structuring and Processing Model for Recommendation Systems
Prakash Hegade, Vibha Hegde, Sourabh Jain, Rajaram M Joshi and K L Vijeth (KLE Technological University, India)
Technological advancements and the changing needs drive the process workflow, meeting the need-of-the-hour requirements, and calibrating system components. While the perception evolves, the fundamental principles stay put and wrap around generational disparities. In a changing scenario of the physical market to an e-commerce site, the recommendation systems have had substantial roles. The present systems customarily use the item or user profiles for recommendations. The existing recommendation systems rely heavily on data and learning algorithms. An improved recommendation system given by considering the query's semantics rather than using only historical data of numerous worldwide queries can create a paradigm shift in the technologies involved in computer recommendations. Bag of Science attempts to take on this challenge. The model dwells on inferring a query's meaning in all contexts to create an order in which the words relate. By constructing a word-definition graph, the methodology explores the possibility of enhancing the recommendation systems to improve the e-commerce platforms' business. The paper presents the model's architecture with its chief components, including a parser and scraper, graph generator, graph traversal, and results. The model presents the traversal results and analysis of the constructed e-commerce graphs using hops as the threshold metric. The paper also presents the model's abstract data type to make it applicable and extend to other domain contexts that involve query engines and need recommendations.
CoCoNet-S9.5 15:30 Announcer Model for Inter-organizational Systems
Prakash Hegade, Nikhil Lingadhal, Usman Khan, Tejaswini Kale and Srushti Basavaraddi (KLE Technological University, India)
From barter systems to shopping online, markets have evolved with institutional design characteristics with the objective of providing a platform for buying and selling. The technological investment portfolio has brought significant changes to market dynamics. Though there are apprehensions about migrating offline features online, along with online benefits there are also inherent challenges to be managed. In supply chain management, which spans from raw materials to customers, inventory management has a substantial role and acts as a key player affecting the entire chain directly or indirectly. Regardless of automation, inventory management is a tedious task and could use a computational helping hand. Through this paper, the announcer model proposes an alternative to computationally solve resource management between the various stages of the business cycle. It attempts to establish a strong relationship between the intermediate stages by announcing iterative status flags via tags and further utilizing them to improve work efficiency. The model eases the interaction and provides an automated channel for communication. This paper proposes the model and discusses its architecture and a sample workflow from a simulated industry transaction. The announcer space can also be integrated with live web data, making the system dynamic and self-learning to current market needs. The learning capability of the announcer contemplates modern challenges. The system attempts to achieve a natural order by balancing the workflow of the system components through the announcer model. The announcer model promises to provide an intellectual space for coordination and collaboration.
CoCoNet-S9.6 15:45 Ranking of Educational Institutions based on user priorities using Multi-Criteria Decision-Making methods
Angitha Au (Amrita Vishwa Vidyappetham, India); Supriya M (Amrita Vishwa Vidyapeetham & Amrita School of Engineering, India)
Today, education plays a major role in bringing values to the coming generations. With the right to education taken up seriously by the government, there is a vast increase in the number of students taking up various courses and degrees. To cater to these needs, several educational institutions have also come up with the intention of providing a variety of courses to students. However, as the numbers increase, the need to assess these institutions also increases, both to help students decide the institution of their choice and for the institutions to better themselves. To accomplish evaluation of such institutions, a number of ranking systems were established with a fixed set of criteria. However, a ranking system which would cater to the needs of a student individually was never developed. This paper aims to develop a ranking system which ranks educational institutions based on the needs of an individual, using the Analytical Hierarchy Process and the Multi-Criteria Decision-Making method PROMETHEE. In order to make this system automated, the data required for this process has been retrieved using web crawling. Web crawlers are automated scripts that are used to browse the world wide web in a systematic manner. This work will be useful for parents and students to find institutions based on their set of preferences.
CoCoNet-S9.7 16:00 Scalable Blockchain Framework for a Food Supply Chain
Manjula K Pawar (KLE Technological University, Hubballi, India); Prakashgoud Patil, P S Hiremath, Vaibhav S Hegde, Shyamsundar Agarwal and Naveenkumar P B (KLE Technological University, India)
Of late, in food supply chain (FSC) management, many incidents related to the mislabeling and mishandling of food items are found to occur frequently, which often leaves customers with a question of how safe and reliable the food they buy and consume is. Since the information regarding the tracking of food items is distributed widely across different locations, and the data is vulnerable to being recorded wrongly, the reliability of tracking food items through the FSC is suspect. Further, an FSC has various stakeholders that interact and transact continuously. Thus, the scalability issue also plays a vital role in making the FSC more efficient. Present-day traceability solutions lack efficiency in terms of scalability and reliability. Scalability can be categorized in terms of throughput, cost, capacity, and response time. In this paper, the proposed method for the traceability solution of a food supply chain is based on Blockchain, which plays a vital role in providing transparency and integrity along with other salient features like decentralization, immutability, and verifiability. The proposed FSC is made scalable in terms of throughput and cost using a state-channeling off-chain algorithm for the blockchain implementation.

Saturday, October 17 14:30 - 17:00 (Asia/Calcutta)

ISTA-07: ISTA-07: Intelligent Tools, Techniques and Applications (Short Papers)

ISTA-07.1 14:30 A Hybrid Deep Learning Approach for Predicting the Spread of COVID-19
Dhanya NM (Amrita Vishwa Vidyapeetham); Prakash P (Vellore Institute of Technology, Chennai, India); Manojkumar V K (Amrita Vishwa Vidyapeetham, India)
The COVID-19 pandemic is having a devastating effect on the global population in terms of health and wealth. The number of patients detected with COVID-19 is increasing day by day and reached a total case count of around 11,000,000 and a death count of about 500,000 by July 2020. Along with other nations, India is also witnessing exponential growth in the number of COVID-19 cases. Providing proper treatment to infected patients requires very close attention. There should be an efficient mechanism to predict the expected number of patients daily in order to prepare the medical system to accommodate them. This paper applies various deep learning models to predict the number of COVID-19 cases. The paper also compares different deep learning techniques with traditional time series analysis, such as ARIMA, and concludes that the hybrid CNN-LSTM deep learning model predicts better compared to other models.
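A minimal sketch of a CNN-LSTM forecaster for a daily case series is given below; the window length, layer sizes, and the file daily_cases.csv are assumptions, not the authors' exact architecture or data.

```python
# Minimal sketch: CNN-LSTM mapping the previous 14 days of counts to the next day's count.
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, LSTM, Dense

def make_windows(series, window=14):
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = np.array(series[window:])
    return X[..., None], y            # shape (samples, window, 1)

daily_cases = np.loadtxt("daily_cases.csv")      # hypothetical 1-D series
X, y = make_windows(daily_cases)

model = Sequential([
    Conv1D(64, kernel_size=3, activation="relu", input_shape=(14, 1)),
    MaxPooling1D(pool_size=2),
    LSTM(50),
    Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=100, batch_size=16, verbose=0)
print("next-day forecast:", model.predict(X[-1:])[0, 0])
```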
ISTA-07.2 14:45 A Spot Rainfall Prediction During Cyclones by Using Time Series Analysis Model
Forecast models of climate and rainfall are highly nonlinear and complex, and require detailed models to obtain precise predictions. Weather forecasting is an application to predict atmospheric changes at a particular location, but unexpected changes in weather conditions are a major threat to the forecasts. Numerous tools and procedures have been created for weather forecasting, but still more accurate methods are needed to produce forecasts at the right time to avoid the destruction of property and loss of lives. In this work, time series analysis models with scatter plots and smooth lines have been used to predict forthcoming rainfall. This research article focuses on correlating cyclones and observed rainfall using a statistical as well as a data analytical model. The rainfall data during the cyclone periods at the Nagapattinam station (10°76' N and 79°85' E) of the Bay of Bengal during the north-east monsoon season for the period 2015 to 2019 are extracted from the National Oceanic and Atmospheric Administration (NOAA) of the World Meteorological Organization. Fourteen cyclones occurred during this five-year period in the region and are analyzed using the statistical as well as the data analytical model. The model is verified and validated with training and testing data sets, and the results are found to be encouraging.
ISTA-07.3 15:00 An Experimental Stack Overflow Chatbot Architecture Using NLP Techniques
Rahul Namboodiri and Kanishka Singla (Mukesh Patel School Of Technology Management & Engineering, NMIMS University, India); Priyanka Verma (Nmims University, India)
In computer programming, it is very common to come across faults or bugs that may result in unintended outcomes or none at all. To get the program to function correctly so as to obtain the desirable results, debugging is vital. In some scenarios, errors are noticeable, obvious and easy to understand while in others, with only a cryptic code present, it sometimes becomes tedious to debug. While there are several community question answering (CQA) websites such as Stack Overflow, it is noted that there is a general difficulty in finding the appropriate post similar to the user's query, especially for beginners. In this paper, we have proposed and implemented a task-specific chatbot model that is also capable of handling general conversations. The bot uses the community question answering website Stack Overflow's posts to be capable of answering programming related queries. For implementation of the proposed model, a custom Stack Overflow general dialogue dataset has been used. An intent classifier classifies the user's input and if the user's query is unique to a task, the task-specific classifier classifies the programming language for which the question is being asked. Long Short-Term Memory (LSTM), a powerful Recurrent Neural Network (RNN) deep learning model has been utilized to achieve this classification. Additionally, word embeddings enable transformation of the dataset and users' queries into word vectors to facilitate similarity-based fetching. The experimental analysis indicated that the chatbot could identify user motives with 99.15 percent accuracy and the question programming language with 82.77 percent accuracy.
ISTA-07.4 15:15 Comparison of Face Embedding Approach Vs CNN Based Image Classification Approach for Human Race Detection from Face
Rupesh Bapuji Wadibhasme, Amit Nandi and Bhavesh Wadibhasme (University of Pune, India); Sandip Sawarkar (IIT Bombay, India)
Human race detection from the face using deep-learning techniques is an active research area. It helps expand growing areas like human-computer interfaces and understanding user demographics. It provides great insight into a better understanding of demographics and diversity among the population. Understanding ethnic diversification among a user base can help many commercial applications to improve and optimize their products and services to be better suited to community needs. Development around race detection is already an active area of research, and improving its performance and speed is one of the goals. In this study, we compare a FaceNet architecture-based feature extraction technique for detecting race with plain CNN based classification techniques. The comparative results support the claim that the race detection problem is better handled by an embedding based approach than by a plain image classification approach. Embedding based techniques also provide a competitive edge over the other methods used in this comparative study.
ISTA-07.5 15:30 Deep Learning Classification to Improve Diagnosis of Cervical Cancer Through Swarm Intelligence Based Feature Selection Approach
Priya S and Karthikeyan Nk (Coimbatore Institute of Technology, India)
Cervical cancer is one of the predominant cancers that cause death in women globally. This disease progresses slowly and is a curable cancer, if detected well in advance. Various studies on different data mining models used for diagnosing this disease have been carried out. Different approaches including SVM-RFE (Support Vector Machine and Recursive Feature Elimination) and SVM-PCA (Support Vector Machine and Principal Component Analysis) had been designed for cervical cancer diagnosis. However, these pose various challenges including low accuracy and high processing time for classification. The proposed system addresses these issues by designing a Long Short-Term Memory with Artificial Bee Colony (LSTM-ABC) algorithm for cervical cancer detection. The study takes the cervical cancer dataset as an input and uses Synthetic Minority Oversampling Technique (SMOTE) for solving the class imbalance issue. From the preprocessed data, the feature selection is performed using Artificial Bee Colony (ABC) algorithm. Long Short-Term Memory (LSTM) scheme is then employed for classifying cervical cancer based on the selected features. Experimental outcomes demonstrate that the proposed system delivers superior results than the previous works with respect to sensitivity, accuracy, and specificity.
ISTA-07.6 15:45 Deep Neural Network Based Multi-Class Arrhythmia Classification
Akhila Naz K A (CET, India); Jeena R S (University of KERALA, India); Niyas P (CET, India)
An arrhythmia is a condition in which the heart beats irregularly: too fast, too slow, or too early compared to a normal heartbeat. Diagnosis of various cardiac conditions can be done through the proper analysis, detection, and classification of life-threatening arrhythmia. Computer-aided automatic detection can provide accurate and fast results when compared with manual processing. This paper proposes a reliable and novel arrhythmia classification approach using deep learning. A Deep Neural Network (DNN) with three hidden layers has been developed for arrhythmia classification using the MIT-BIH arrhythmia database. The network classifies the input ECG signals into six groups: normal heartbeat and five arrhythmia classes. The proposed model was found to be very promising with an accuracy of 99.45 percent.
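The sketch below shows the shape of such a three-hidden-layer network for a six-class output; the layer sizes and optimizer are assumptions, not the values reported in the paper.

```python
# Minimal sketch: a DNN with three hidden layers for six-class heartbeat classification.
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense

def build_dnn(n_features, n_classes=6):
    model = Sequential([
        Dense(128, activation="relu", input_shape=(n_features,)),  # hidden layer 1
        Dense(64, activation="relu"),                              # hidden layer 2
        Dense(32, activation="relu"),                              # hidden layer 3
        Dense(n_classes, activation="softmax"),                    # six-class output
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# usage (with pre-extracted ECG features): model = build_dnn(n_features=X_train.shape[1])
# model.fit(X_train, y_train, epochs=50, validation_split=0.2)
```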
ISTA-07.7 16:00 Diabetes Prediction Using Machine Learning Techniques
Sriraj Vuppala, Nuha Varier and Sangita Khare (Amrita Vishwa Vidyapeetham, India)
One of the most common and serious diseases worldwide is Diabetes Mellitus, which is observed in people of all age groups, ranging from children to the elderly. Annually, the cure and diagnosis of people suffering from diabetes costs a lot of money. The customary strategies for diagnosis are tedious. Therefore, accurate prediction and the usage of reliable methods become the most important and top-priority concern. Type-1 diabetes is rare, and the only cure is insulin, whereas Type-2 diabetes is more common and there are many factors leading to it that need to be predicted accurately. Most diabetics are unaware of their health condition or are in the dark regarding the risk factors. Nowadays, a huge range of tools and computational methods for data analysis is available. Advancements in technology for developing classification models can very well be applied to detect the presence of Type-2 diabetes in a patient. The goal of this work is to detect the existence of Type-2 Diabetes Mellitus in a person. If the existence of diabetes in a patient is known beforehand, it is easy for doctors to understand the complexity of the disease. In this era of technological advancement, where diagnosis is done using various sophisticated technologies, it is easy to identify a patient with diabetes using technology.
ISTA-07.8 16:15 Overview of Deep Learning in Food Image Classification for Dietary Assessment System
Bhoomi Shah and Hetal Bhavsar (The Maharaja Sayajirao University of Baroda, India)
In modern days people are very attentive about their health and diet. High-calorie food intake can be harmful and may lead to serious health conditions. Food image identification plays a very important role in today's era. The food domain can be divided into two parts: first, recognizing food items, and second, estimating calories. Accurate methods for food identification and calorie estimation can help people fight obesity and overweight. Recognition of food is therefore the first step towards a successful healthy diet. Classification of food images is a very challenging task, as food image datasets do not vary linearly due to the large variations in food shape, volume, texture, color, and composition. To recognize food items accurately, image processing can be used. Image processing techniques include image preprocessing, image segmentation, feature extraction, and image classification for the recognition of food objects. Deep learning is an active research field nowadays in object recognition, natural language processing, speech recognition, and many other areas. This paper describes the role of deep learning techniques based on convolutional neural networks for food object recognition. Many papers have been studied that have used deep learning as a tool for food object recognition and calorie estimation and achieved impressive results. This encourages us to do a detailed study in the food domain through deep learning. This paper analyses deep learning frameworks, services, food item datasets, methods of segmentation and classification, and various food recognition techniques. Every method has its pros and cons. The main idea of this survey is to carry out a detailed study of current food item recognition techniques through deep learning. The results achieved by deep learning for the recognition of food images will attract more researchers to put effort into the food domain through deep learning in the future.
ISTA-07.9 16:30 FEM Simulation of Palladium Thin Film Coated Surface Acoustic Wave Hydrogen Sensor for High Frequency Applications
Sheeja P George (College of Engineering Chengannur, India); Johney Isaac (Cochin University of Science and Technology, India); Jacob Philip (Amaljyothi College of Engineering, Kanjirappally, India)
A higher operating frequency is desirable for Surface Acoustic Wave (SAW) based sensors as they become more sensitive at high frequencies. The acoustic wave gets more confined near the surface at high frequencies and becomes more sensitive to external stimuli. This makes SAW devices suitable for sensing chemicals in the gaseous state. SAW devices have become a basic building block of wireless sensor networks, with their advantages enabling remote sensing. In this paper, a SAW based hydrogen sensor is realized through the finite element analysis tool ANSYS. Even though hydrogen has a significant role in many industries, its explosive nature demands constant monitoring. A SAW delay line with XY-LiNbO3 as the substrate and a thin layer of palladium coated along the delay length as the sensing element is modeled. Palladium, with its high affinity for hydrogen, absorbs it and undergoes changes in properties like density and stiffness. This disturbs the surface wave propagation and, in turn, affects the operating frequency, which is the sensor response parameter. A frequency shift of 1.91 MHz is obtained for a hydrogen concentration of 0.3 a.f., as compared to 0.49 MHz with YZ-LiNbO3. The operating frequency also shifts to a higher range as the acoustic velocity of the substrate increases.

ISTA-08: ISTA-08: Intelligent Tools, Techniques and Applications (Short Papers)

ISTA-08.1 14:30 Case-Based Expert System for Smart Air Conditioner with Adaptive Thermoregulatory Comfort
Nalinadevi Kadiresan (Amrita University & Amrita School of Engineering, Ettimadai, Coimbatore, India); Akshaya Sundaram (Amrita Vishwa Vidyapeetham University, India); Hamsini Ravishankar (Amrita School of Engineering, Coimbatore Amrita Vishwa Vidyapeetham, India); Uma Subbiah (Amrita School of Engineering, Coimbatore, Amrita Vishwa Vidyapeetham, India); R Karthika (Amrita Viswa Vidyapeetham, India)
With the onset of the internet revolution and development in hardware, IoT has become an integral part of our lives, optimizing various aspects of day-to-day activities. An important role of IoT is played in providing thermal comfort through smart HVAC (Heating, ventilation, and Air Conditioning) systems. The smart AC proposed in this paper aims to create a pleasant thermal environment with minimal human intervention, according to various personal and environmental parameters using a case-based expert system. The novelty of this system is a case based expert system that has been implemented with querying and analytics to identify and set the temperature of the AC. Based on users' prior settings, preferences, and ambient conditions, the knowledge base will continue to expand. Moreover, if the user overrides the actuated temperature with a preference of his/her own, this will be stored as a new case in the case database. This smart AC system also considers the theory that resulted from studies on the variation in human temperature caused by human thermoregulation. Integrating all of these factors, the most similar case is extracted from the case database using the k-NN regression similarity criteria. The system has been optimized to minimize the number of re-computations with respect to actuation and the number of retrievals performed on the knowledge base. Equipped with cloud-based data utilization, behavioral query analytics, and historical query analytics, this futuristic system possesses the ability to automatically perform optimal actuation on the air conditioner, making it more convenient and user friendly than its present-day counterparts.
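The sketch below illustrates only the case-retrieval step with k-NN regression over a tiny case base; the feature set, case values, and set-points are assumptions, and feature scaling and the thermoregulation model are omitted.

```python
# Minimal sketch: retrieving a set-point from similar past cases with k-NN regression.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# hypothetical case base: [outdoor_temp, humidity, occupancy, hour] -> chosen set-point (°C)
cases = np.array([
    [34.0, 0.60, 2, 14],
    [30.0, 0.70, 1, 21],
    [38.0, 0.55, 3, 12],
    [28.0, 0.80, 2, 23],
])
set_points = np.array([22.0, 24.0, 21.0, 25.0])

knn = KNeighborsRegressor(n_neighbors=3, weights="distance")
knn.fit(cases, set_points)

current = np.array([[33.0, 0.65, 2, 15]])        # current ambient/context query
print("suggested set-point (°C):", knn.predict(current)[0])
```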
ISTA-08.2 14:45 Data-Driven Based Disruption Prediction in GOLEM Tokamak with Missing Values
Jayakumar Chandrasekaran (SASTRA Deemed to be University, Thanjavur, Tamilnadu, India); Surendar M and J Sangeetha (SASTRA Deemed to be University, Tirumalaisamudram, Tamilnadu)
The sensor readings from the vessel of the Tokamak are inconsistent: some sensors always produce output, while others provide output only occasionally, resulting in missing values across the data. Typically, imputation methods are applied to both train and test data while training and validating the model offline with complete data. But in a real-time application, where the decision must be taken based on in-stream sensor data, imputation techniques are not practical. Hence, in this article, a data-driven approach is employed using algorithms that inherently handle missing values and algorithms that have a provision to deal with missing values through a replacement technique. Individual, bagging, and boosting algorithms are utilized to classify normal and disruptive shots in the GOLEM Tokamak dataset, which consists of 117 normal and 70 disruptive shots. Boosting algorithms, having an in-built feature to handle missing values, provided better results than the other algorithms. Categorical Boosting (CatBoost), with its ordered boosting feature, gave the best metrics. Optimal thresholds for the Receiver Operating Characteristic (ROC) and Precision-Recall (P.R.) curves of the models are determined. The optimal P.R. values are utilized to obtain improved results. A comparison with the widely employed stand-alone machine learning algorithms and ensemble algorithms is presented. The results show the excellent performance of the CatBoost model, with an F1 score of 0.943 using optimal P.R. values. The developed predictive model would be capable of warning the human operator with feedback about the feature(s) causing the disruption.
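A minimal sketch of the CatBoost-with-missing-values plus precision-recall thresholding idea is given below; the file golem_shots.csv and the "disruptive" label column are assumptions, and the threshold here is simply the one maximizing F1 on the P-R curve.

```python
# Minimal sketch: CatBoost trained on data containing NaNs, with a P-R based threshold.
import numpy as np
import pandas as pd
from catboost import CatBoostClassifier
from sklearn.metrics import precision_recall_curve
from sklearn.model_selection import train_test_split

df = pd.read_csv("golem_shots.csv")              # hypothetical file with NaNs
X, y = df.drop(columns=["disruptive"]), df["disruptive"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = CatBoostClassifier(iterations=500, verbose=False)
model.fit(X_tr, y_tr)                            # NaNs in numeric features are handled natively

probs = model.predict_proba(X_te)[:, 1]
precision, recall, thresholds = precision_recall_curve(y_te, probs)
f1 = 2 * precision * recall / (precision + recall + 1e-12)
best = thresholds[np.argmax(f1[:-1])]            # threshold maximizing F1 on the P-R curve
print("optimal P-R threshold:", best)
```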
ISTA-08.3 15:00 Hybrid Deep Neural Architecture for Detection of DDoS Attacks in Cloud Computing
Aanshi Bhardwaj (Academic Block 1, UIET Panjab University Sector 24 Chandigarh India, India); Veenu Mangat (Panjab University Chandigarh, India); Renu Vig (Academic Block 1, UIET Panjab University Sector 24 Chandigarh India, India)
Detection and prevention of Distributed Denial of Service (DDoS) attacks are considered a keystone of network security. Though a good number of potential solutions have been provided for the detection of attacks, due to the frequent change in attack vectors a competent technique is essential for combating these new attacks. In this paper, we propose a hybrid method which uses a Deep Neural Network (DNN) model for distinguishing DDoS attacks in a cloud environment, using Ant Colony Optimization (ACO) for learning the prime or important hyperparameters for effective classification by the DNN. The use of optimal parameters in the DNN makes it more accurate for the detection of attacks. The proposed approach is validated by comparing its performance, in terms of detection accuracy and detection rate, with the results of three other recent methods based on machine learning. Experiments have been conducted on the CICIDS2017 dataset, which is a new benchmark dataset in the area of network security. The proposed approach gives promising results on the CICIDS2017 dataset. The detection rate and accuracy are 95.74% and 98.25% respectively, which are better than state-of-the-art methods.
ISTA-08.4 15:15 Identifying Network Intrusion Using Enhanced Whale Optimization Algorithm
Anne Dickson (University of Kerala, India); Ciza Thomas (Senior Joint Director, Directorate of Technical Education, Kerala)
Advances in networking technology and automation through numerous interfaces have led to an exponential hike in network threats and flaws in information security. The limited capabilities of intrusion detection and prevention systems have enormously reduced the detection rate. Optimizing the conflicting objectives is the best way to alleviate this security challenge. Population based search techniques are deployed for optimization by selecting the best optimal element from an available set of possible solutions. Whale optimization is a neoteric methodology for solving complex problems through computational intelligence. Intelligence technologies have proved that even a tiny creature like an ant can be a source of inspiration. This paper proposes a methodology to locate the attack detection point through the process of spiral updation. The goal of this work is to enhance the detection rate of IDS by searching the most promising regions located by search agents using an Enhanced Whale Optimization Algorithm (EWOA). The NSL-KDD dataset is used for evaluating the procedure. Analysis is done using five machine learning classifiers: Naive Bayes, Logistic, SMO, J48, and Random Forest. Empirical results show that the Random Forest classifier performs best, with a maximum true positive rate of 0.998, followed by a true positive rate of 0.914 for the J48 classifier, through spiral updation for variant sets of random particles in the search space.
ISTA-08.5 15:30 IoT Based Home Vertical Farming
Abhay V S, Fahida V H, Reshma T R and Sajan C K (Mar Athanasius College of Engineering, Kothamangalam, India); Siddharth Shelly (Mar Athanasius College of Engineering Kothamangalam)
Worldwide, around three million hectares of agricultural land are lost each year as a result of soil degradation and conversion for various development purposes, which in turn reduces crop yield. Vertical farming is a farming method which cultivates crops using vertical structures and a controlled environment, maximizing production and efficiency from a minimum area. The proposed system cultivates crops in a carefully controlled environment where saplings are planted on nutrient media in chambers illuminated with LED lights. This study also integrates the vertical farming structure with the Internet of Things. Hence, it automates the farm activities with less human intervention, eradicates issues that may arise in between, and allows control of the farming activities from remote places using a mobile application. The crop yield forecasting feature uses Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) to improve the yield.
ISTA-08.6 15:45 Map Reduce Driven Rough Set Fuzzy Classification Rule Generation for Big Data Processing
Hanumanthu Bhukya and Sadanandam Manchala (Kakatiya University, India)
The term "Big data" has become one of the essential research discussions nowadays. Due to a large amount of data availability and data processing nowadays, the topic of data science and big data becomes of prominent interest in the present research. Big data applications can be accomplished through the MapReduce Programming model because these are mostly concerning scalability. The MapReduce models are intended to categorize data into various groups that are processed in parallel and whose outcome gathered to offer a single solution. There are various incremental frameworks for analyzing and extracting data from vast data sources presented by different authors. But, the large amount of data and diversity of the data sources there is a necessity for instant intelligent response pretense a severe problem to the current learning algorithms. Various classification models modified to this new framework, this paper proposes a Rough Set Fuzzy Classification Rule Generation Algorithm (RS-FCRG) to present attractive results with a MapReduce model for big data. This algorithm achieves an interpretable pattern that can handle massive data. It provides a significant accuracy with better execution time because the algorithm applies the MapReduce programming model in the Hadoop platform; it is one of the most beneficial frameworks to dispense with significant collections of data. The experiment takes on the UCI census (KDD) info dataset. The experimental results show that the proposed algorithm gets high accuracy with 95% when compared with the chi-FRBCS-Bigdata max algorithm.
ISTA-08.7 16:00 Multivariate VMD Based Analysis on Stock Sectors
Silpa B (Amrita Viswa Vidyapeetham, India); Vijay Krishna Menon, Gopalakrishnan E A and Soman K P (Amrita Vishwa Vidyapeetham, India)
The price of a single stock is seldom independent; brokers and fund managers have long known that stock prices heavily influence each other. Portfolios are built on the premise of minimizing such dependencies between stocks. There have been several efforts to quantify these dependencies, predominantly using conventional statistics and correlations. Most of them rely on single-variable methods that compare two stock price signals at a time. In this paper we analyze stock sectors containing multiple stocks each, presenting a multivariate case. We apply two multivariate analysis techniques, Multivariate Variational Mode Decomposition (MVMD) and Spatio-Temporal Intrinsic Mode Decomposition (STIMD), to analyze stock sectors based on their individual day-wise price series. Sector-wise data is downloaded from Google Finance. Furthermore, we quantify the dependence of a sector on each of its constituent stocks by decomposing this multivariate signal and selectively reconstructing it multiple times, excluding one stock at a time from the reconstruction process. The error of reconstruction in each case serves as a measure of how much the other stocks depend on the one that was excluded.
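One plausible reading of the leave-one-stock-out procedure described above is sketched below; the decomposition routine is assumed to be supplied by the caller (an MVMD-style implementation), and the paper's exact exclusion scheme may differ.

```python
import numpy as np

def reconstruction_error(prices, decompose, exclude):
    """Leave-one-stock-out reconstruction error for a sector price matrix.

    prices    : array of shape (T, N), day-wise prices of N stocks in a sector
    decompose : callable returning modes of shape (K, T, N-1) whose sum over the
                first axis approximately reconstructs its input (an MVMD-style
                routine; assumed, not provided here)
    exclude   : index of the stock left out of the decomposition
    """
    kept = np.delete(prices, exclude, axis=1)      # drop the excluded stock
    modes = decompose(kept)                        # decompose the remaining channels
    recon = modes.sum(axis=0)                      # reconstruct from the modes
    return np.linalg.norm(kept - recon) / np.linalg.norm(kept)
```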
ISTA-08.8 16:15 Prediction of Solar Power in an IoT Enabled Solar System in an Academic Campus of India
Kumar Padmanabh (EBTIC, United Arab Emirates); Dhruvraj Singh Rawat (LNM Institute of Information Technology, India)
Solar power is available in abundance in India. Not only does it help minimize carbon emissions, it is also very cost effective. Multiple arrays of solar panels are installed on the academic and residential buildings of the LNM Institute of Information Technology (LNMIIT) campus, helping LNMIIT reduce its electricity bills. However, the installation is not treated as reliable because LNMIIT does not know how much power it will produce at any particular instant; hence, solar power is confined to non-critical applications. The power produced by such a system depends on the weather conditions of the moment. We argue that if the weather can be predicted, then solar power can be predicted as well. This paper proposes to make solar power more reliable by predicting its output. Several existing algorithms are customized for prediction, and a mechanism is proposed to dynamically select the best-performing algorithm. The accuracy of the proposed prediction model has been more than 92%.
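The dynamic selection of the best-performing algorithm mentioned in the abstract can be illustrated with a simple sketch that fits several regressors on a recent window and keeps the one with the lowest validation error; the candidate models and weather features are assumptions, not those used in the paper.

```python
# Illustrative "pick the currently best model" step for solar-power
# prediction; regressors and features are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_absolute_error

def best_model(X_recent, y_recent, X_val, y_val):
    candidates = {
        "linear": LinearRegression(),
        "knn": KNeighborsRegressor(n_neighbors=5),
        "forest": RandomForestRegressor(n_estimators=100, random_state=0),
    }
    scores = {}
    for name, model in candidates.items():
        model.fit(X_recent, y_recent)                # fit on the recent window
        scores[name] = mean_absolute_error(y_val, model.predict(X_val))
    name = min(scores, key=scores.get)               # lowest recent error wins
    return name, candidates[name]

# X columns might be [temperature, cloud cover, humidity, hour of day];
# y is the measured panel output. Shapes below are purely illustrative.
rng = np.random.default_rng(1)
X, y = rng.normal(size=(500, 4)), rng.normal(size=500)
print(best_model(X[:400], y[:400], X[400:], y[400:])[0])
```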
ISTA-08.9 16:30 Transformed WLS Based Data Reconciliation for a Large-Scale Process Network
Boddu Chakradhar, Gautham Shanmugam and K L N Sree Datta (Amrita School of Engineering, India); R Jeyanthi (Amrita Vishwa Vidyapeetham, India); Oguri Sravani and S M Dhakshana (Amrita School of Engineering, India)
Modern process plants and industries have to modify their systems in order to constantly maintain the highest performance standards. Process data is used to estimate the current state of the system, which supports decision-making to ensure smooth and safe plant operation. These process data readings are taken from sensors but are not transmitted accurately and are inconsistent due to online errors. Such erroneous measurements affect normal plant operation, which in turn hampers the safety and economics of the plant. Hence, data reconciliation (DR) techniques are applied to the measured data to suppress the errors and recover the actual measurements. Weighted Least Squares (WLS), a popular DR technique, has been used to eliminate random errors from process data. In this paper, along with conventional WLS, modified DR techniques, namely Moving Average WLS (MA-WLS) and Exponentially transformed WLS (E-WLS), have been implemented. The reconcilability of each of these techniques has been analysed, and a comparative study has been carried out to identify the best-performing DR technique. The analysis is done for a large-scale process network in a Python-based simulation environment.
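For linear balance constraints of the form A x = 0, conventional WLS data reconciliation admits a closed-form adjustment, x̂ = y − Σ Aᵀ (A Σ Aᵀ)⁻¹ A y, where Σ is the measurement covariance; the toy three-stream splitter below illustrates this textbook solution and is not the paper's large-scale network or its MA-WLS/E-WLS variants.

```python
# Textbook weighted-least-squares data reconciliation for linear
# constraints A x = 0; the 3-stream splitter and variances are a toy
# example, not the network studied in the paper.
import numpy as np

def wls_reconcile(y, A, sigma):
    """Return reconciled measurements x_hat satisfying A @ x_hat = 0."""
    S = np.diag(sigma ** 2)                        # measurement covariance
    gain = S @ A.T @ np.linalg.inv(A @ S @ A.T)    # adjustment gain matrix
    return y - gain @ (A @ y)

# Mass balance: stream 1 splits into streams 2 and 3  =>  x1 - x2 - x3 = 0
A = np.array([[1.0, -1.0, -1.0]])
y = np.array([100.0, 64.0, 38.0])                  # noisy flow measurements
sigma = np.array([2.0, 1.0, 1.0])
x_hat = wls_reconcile(y, A, sigma)
print(x_hat, A @ x_hat)                            # constraint residual ~ 0
```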

Saturday, October 17 16:30 - 17:20 (Asia/Calcutta)

Keynote: Artificial Intelligence and the Internet of Things: Security and Autonomy

Speaker: Michael Losavio, Department of Criminal Justice, College of Arts and Science, University of Louisville, KY, USA

The Internet of Things and Artificial Intelligence offer tremendous opportunities through powerful analytics applied to a vast distribution of sensors. They enable the unprecedented collection of data for new, powerful analytics that create new knowledge, from government processes to industrial systems.

Yet all of these touch elements of the human condition that may affect the impact of their implementation, for good and for ill. The AI/IoT/Big Data world presents significant challenges in law, ethics and public policy. Such analytics and data-driven actions manifest all the challenges relating to the use of expert systems and information assurance, including authentication, validation and protection of the data. And we have seen that data generation, transmission and collection begin to parallel issues seen in information security and assurance.

With more and more modeling and analysis, the predictive ability for behavioral inference has increased, posing new legal, administrative and political concerns. We examine analyses of applications of these powerful new ways to learn for the cautionary lessons they can teach.