Program for 2015 International Conference on Advances in Computing, Communications and Informatics (ICACCI)

Monday, August 10

Monday, August 10 9:00 - 14:00 (Asia/Kolkata)

R1: Registration Starts

Room: Atrium

Monday, August 10 10:00 - 11:00 (Asia/Kolkata)

I: Conference Inauguration

Room: MSH

Monday, August 10 11:30 - 12:30 (Asia/Kolkata)

K1: Main Track Keynote-1: Visual Information Retrieval: Advances, Challenges and Opportunities

Prof. Oge Marques, Florida Atlantic University (FAU), USA
Room: MSH

Visual information retrieval (VIR) is an active and vibrant research area, which attempts at providing means for organizing, indexing, annotating, and retrieving visual information (images and videos) from large, unstructured repositories. The goal of VIR is to retrieve matches ranked by their relevance to a given query, which is often expressed as an example image and/or a series of keywords. During its early years (1995-2000), the research efforts were dominated by content-based approaches contributed primarily by the image and video processing community. During the past decade, it was widely recognized that the challenges imposed by the lack of coincidence between an image's visual contents and its semantic interpretation, also known as semantic gap, required a clever use of textual metadata (in addition to information extracted from the image's pixel contents) to make image and video retrieval solutions efficient and effective. The need to bridge (or at least narrow) the semantic gap has been one of the driving forces behind current VIR research. Additionally, other related research problems and market opportunities have started to emerge, offering a broad range of exciting problems for computer scientists and engineers to work on. In this lecture, we pay special attention to the field of content-based image retrieval (CBIR) and highlight the most relevant advances, pending challenges, and promising opportunities in CBIR and related areas.

Monday, August 10 12:30 - 13:30 (Asia/Kolkata)

K2: Main Track Keynote-2: New Acceleration Approaches for Evolutionary Computation and Evolutionary Approaches for Analyzing Human Characteristics

Prof. Hideyuki TAKAGI, Kyushu University, Japan
Room: MSH

First, we introduce two approaches for accelerating evolutionary computation (EC). The first approach uses frequency information of a fitness landscape. Regard a fitness landscape as a sound or image signal; its frequency information can then be used to guide the EC search. This acceleration approach approximates the fitness landscape with a sine curve obtained from its frequency domain and estimates the rough area of the global optimum. The second acceleration approach estimates the global optimum analytically from the moving trajectories of individuals. EC approaches the global optimum iteratively; the proposed method estimates the convergence point of the individuals mathematically. These two acceleration approaches are still at an early research stage and must overcome several difficulties before they become complete methods. However, their viewpoints are unique and informative for future research. Secondly, we introduce a new research direction for interactive EC (IEC). IEC is a method that optimizes a target system based on human evaluations. We may therefore analyze human evaluation characteristics indirectly by analyzing the target system optimized by the human, which is somewhat similar to reverse engineering. We introduce some such challenging approaches, including analyzing human mental scales, finding unknown facts, and modeling the human awareness mechanism.

Monday, August 10 14:30 - 19:30 (Asia/Kolkata)

S1: Data and Knowledge Engineering - I

Room: 302
Chairs: Philip Samuel (Cochin University of Science & Technology, India), Shahidul Khan (Bangladesh University of Engineering & Technology (BUET), Bangladesh)
S1.1 Simulation of Flow Past a Wing Inspired by Flying Snakes
Vertika Saxena and Balajee Ramakrishnananda (Amrita Vishwa Vidyapeetham, India); Rajesh Senthil Kumar T (Amrita Vishwa Vidyapeetham, India)

Wings of airplanes, ornithopters and micro-aerial vehicles were inspired by the wings of birds and insects. A flying snake found in South and South-East Asia converts its entire body into a morphing wing, which helps it glide very efficiently. The aerodynamics of this species of snake is not well understood. Two-dimensional computational and experimental studies of the snake's rather unusual cross-section have been done in earlier works. In the current work, a three-dimensional simulation of the flow over a wing inspired by the snake's body geometry is solved using steady laminar assumptions. Solutions were obtained for angles of attack from 0 to 55 degrees in steps of 5 degrees. Interesting features such as wake interaction with downstream sections and complex vortex shapes come to light. At low angles of attack, transverse flows reduce the strength of the wake leaving the wing. Gentle stall characteristics with a high stall angle and an almost linear increase in drag with angle of attack are noticed. Bending of streamlines indicative of high lift production is clearly visualized at the maximum lift condition.

S1.2 Data Cleaning: An Abstraction-based Approach
Dileep Koshley and Raju Halder (IIT Patna, India)

Bertossi et al. proposed a data-cleaning technique based on matching dependencies and matching functions, which in practice is intractable in some cases when matching dependencies are applied in random orders. Moreover, applying a single matching dependency to a dirty database instance yields a set of clean instances whose size depends on the number of dirty tuples, which results in a high computational overhead as well as a large space requirement. The aim of this paper is to propose an improvement of Bertossi's approach based on the Abstract Interpretation framework. This yields a single clean abstract database instance which is a sound approximation of all possible concrete clean instances. The convergence of the cleaning process can also be guaranteed by using widening operators in the abstract domain. The proposal significantly improves the efficiency and performance of query systems w.r.t. Bertossi's.
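For readers unfamiliar with matching dependencies, the following Python sketch illustrates one chase step of a single matching dependency on a small relation. The attribute names, similarity predicate and matching function are hypothetical illustrations, not the authors' code or Bertossi's formalism.

```python
def apply_md(tuples, sim_attr, match_attr, similar, match):
    """One chase step of a matching dependency: whenever two tuples are
    similar on sim_attr, both their match_attr values are replaced by the
    merged value produced by the matching function."""
    tuples = [dict(t) for t in tuples]  # work on a copy
    for i in range(len(tuples)):
        for j in range(i + 1, len(tuples)):
            if similar(tuples[i][sim_attr], tuples[j][sim_attr]):
                merged = match(tuples[i][match_attr], tuples[j][match_attr])
                tuples[i][match_attr] = tuples[j][match_attr] = merged
    return tuples
```

Applying the step to two records that are similar on a name attribute forces their phone values to agree, which is the kind of concrete cleaning result the abstract domain in the paper over-approximates with a single abstract instance.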

S1.3 Exploring Effect of Preprocessing on Classifier Ensembles in Imbalanced Dataset Classification
Uma Salunkhe (Pune University, India); Suresh Mali (University of Pune, India)

During the last few years, the imbalanced data classification issue has gained a great deal of attention. Many real-life applications suffer from an imbalanced distribution of data, which can be handled using different approaches such as data-level methods, algorithm-level methods or classifier ensembles. Single-level as well as multi-level classifier ensemble techniques have shown improvement in classification performance. Also, data-level approaches are independent of the classifier being used. In the past few years, combinations of data-level and classifier ensemble techniques have been applied and have proved to be effective. This paper explores the impact of a preprocessing algorithm on the performance of the classifier ensemble approach for imbalanced data sets. The aim of this study is to investigate the effect of preprocessing on two-level classifier ensemble approaches. Experimental work and analysis of results show that preprocessing is not beneficial for the Random Subspace Method, for which results reflect performance degradation, while AdaBoost has shown improvement due to the application of preprocessing.

S1.4 Novel Self-learning based Crawling and Data Mining for Automatic Information Extraction
Arun Kumar AV (TCS Research & Innovation, India); Hemant Kumar Rath (Tata Consultancy Services, India); Shameemraj Mohinuddin Nadaf (Tata Consultancy Services Ltd, India); Anantha Simha (Tata Consultancy Services, India)

In this paper, we propose techniques using a novel combination of self-learning based crawling and rule based data mining. Using the crawling techniques, smaller relevant data sets pertaining to a domain can be obtained from multi-dimensional data sets available in online as well as offline sources. We then process the crawled data sets and mine them to extract meaningful information. Our techniques are generic in nature and can be used for automatic information extraction in different domains such as biomedical, health-care, enterprise infrastructure planning, etc. The proposed schemes have reduced time, space and processor complexity due to the assisted and learning nature of the crawling. The data mining is based on configurable classification rules and decision trees, which are scalable and easy to implement in practice. We evaluate our proposed techniques through a Java based implementation and integration with the TCS in-house enterprise network design tool "NetDes".

S1.5 Security Maturity in NoSQL Databases - Are they Secure Enough to Haul the Modern IT Applications?
Sethuraman Srinivas (IBM, USA); Archana Nair (Amrita Vishwa Vidyapeetham, India)

The NoSQL phenomenon has taken the database and IT application world by storm. The growth and penetration of NoSQL applications, driven by Silicon Valley giants like Facebook, Twitter, Yahoo, Google and LinkedIn, has created an unprecedented database revolution, inspiring smaller companies to join the NoSQL bandwagon. While the expansion and growth of these databases are adding glory and success to many corporate IT departments, it is very pertinent to explore the security aspects of these new-era databases. Confidentiality, integrity and availability (CIA) are the very foundation of data protection and privacy. In this paper a sincere attempt is made to survey and assess the maturity of NoSQL databases through the lens of the CIA triad. While CIA for data has its origins in relational databases, it is very important to understand, survey and delineate the security capabilities of these new-generation databases in terms of CIA fulfillment.

S1.6 A Hybrid Approach for Recommendation System with Added Feedback Component
Kavinkumar V, Rahul Reddy Rachamalla, Rohit Balasubramanian, Sridhar M, Sridharan K and Venkataraman Durai subbu (Amrita Vishwa Vidyapeetham (University), India)

With increasing e-commerce and online shopping, there is a need for recommendation systems that help customers in decision making and suggest potential goods for purchase. In domains such as automobiles there are many websites, but most of them lack enhanced recommendation systems that enable easy decision making. We have therefore taken the initiative of building a dataset with multiple parameters, based on a survey of the community's needs using potential blogs, and created a recommendation system using user-based and item-based collaborative filtering. In addition to the combined collaborative filtering techniques, we propose a framework that includes feedback analysis to improve the recommendation system. The enhanced model aids customers in decision making. We propose the feedback system at two levels. One is external feedback, where comments are gathered from public platforms like social media and automobile websites. The other is internal feedback, i.e. feedback taken from users who have been provided with recommended items. The opinions extracted from such varied comments broaden the system and its results. Our proposed hybrid model with feedback analysis improves on the current system by providing better suggestions to customers.

S1.7 An Efficient Approach for Privacy Preserving Distributed Mining of Association Rules in Unsecured Environment
Chirag Modi (NIT Goa, India); Ashwini Patil (Sardar Vallabhbhai National Institute of Technology, India); Nishant Doshi (PDPU, India)

Distributed data mining techniques are widely used for many applications, viz. marketing, decision making, statistical analysis, etc. In a distributed data environment, each of the participating sites contains local information, which is combined to extract a global mining result. However, these techniques raise privacy and security concerns about each individual site's information. To solve this problem, many cryptographic techniques have been investigated. Still, there is room for further improvement. In this paper, we propose an efficient approach for privacy preserving distributed association rule mining. We use the onion routing protocol to exchange information among participating sites. We use elliptic curve (EC) based cryptography to achieve security and privacy of individual sites' information in an unsecured distributed environment. Finally, we analyze the proposed solution in terms of security, privacy, computational cost and communication cost.

S1.8 Automatic Detection of k with Suitable Seed Values for Classic k-means Algorithm Using DE
Chayan Bala, Tripti Basu and Abhijit Dasgupta (Jadavpur University, India)

The k-means algorithm, in spite of its computational efficiency and fast convergence, has some serious drawbacks, such as its tendency to get stuck in local optima and the requirement of supplying the number of clusters before execution. Our algorithm uses Differential Evolution (DE) as a preprocessor to overcome those bottlenecks. Experiments show that the augmented version of the clustering algorithm produces improved results.
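A minimal sketch of the idea, assuming a standard rand/1/bin DE over flattened centroid vectors followed by classic Lloyd iterations; the paper's exact encoding and its automatic detection of k are not reproduced here.

```python
import numpy as np

def sse(centroids, X):
    # Within-cluster sum of squared errors for a candidate centroid set.
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return d.min(axis=1).sum()

def de_seed(X, k, pop=20, gens=50, F=0.5, CR=0.9, rng=None):
    # Differential Evolution (rand/1/bin) over sets of k centroids,
    # minimizing SSE, to supply good seeds for k-means.
    rng = np.random.default_rng(rng)
    lo, hi = X.min(0), X.max(0)
    P = rng.uniform(lo, hi, size=(pop, k, X.shape[1]))
    fit = np.array([sse(p, X) for p in P])
    for _ in range(gens):
        for i in range(pop):
            idx = rng.choice([j for j in range(pop) if j != i], 3, replace=False)
            a, b, c = P[idx]
            trial = np.where(rng.random(P[i].shape) < CR, a + F * (b - c), P[i])
            f = sse(trial, X)
            if f < fit[i]:
                P[i], fit[i] = trial, f
    return P[fit.argmin()]

def kmeans(X, centroids, iters=100):
    # Classic Lloyd iterations started from the DE-supplied seeds.
    labels = None
    for _ in range(iters):
        labels = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1).argmin(1)
        new = np.array([X[labels == j].mean(0) if (labels == j).any() else centroids[j]
                        for j in range(len(centroids))])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels
```

On well-separated data the DE stage reliably places one seed per basin, which is what lets the subsequent Lloyd iterations avoid poor local optima.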

S1.9 User Interest Expansion using Spreading activation for Generating Recommendations
Punam Bedi (University of Delhi, India); Richa Singh (Central University of South Bihar, India)

In this paper, a novel user interest expansion approach for generating recommendations is proposed. The approach utilizes the interests of the user as well as the semantic relatedness between items, along with context, to generate recommendations. Ontologies are used to represent domain knowledge. A spreading activation technique uses the relatedness between concepts of user interest in the domain ontology to expand the user's interests, which results in the generation of diverse recommendations. A prototype of the system has been designed and developed using various Java technologies for the restaurant domain, and its performance is evaluated using precision, recall, F1 and diversity metrics. The performance of the proposed Context Aware Recommender System with Expansion of user interest (ECARS) is compared with a Context Aware Recommender System (CARS) and a Content Based Recommender System (CB) for generating recommendations.

S1.10 Component Based Reliability Assessment from UML Models
Vaishali Chourey (Medicaps Institute of Technology and Management, India); Meena Sharma (IET - DAVV Institute of Engineering & Technology, Indore, India)

Model based development and testing techniques have opened diverse research directions for assuring the quality of software products. Models developed during the architecture and design phases are efficient tools to assess quality at an early development stage. However, testing the extra-functional or non-functional properties of software systems, e.g. reliability, is not frequently practiced. The motivation for our work is to model the context of execution, which is significant in system reliability analysis. In this paper we visualize the components of complex software systems and their interactions in the form of a Functional Flow Diagram (FFD). This notation specifies the dynamic aspect of system behavior as the context of execution. To further assess reliability, the FFD is translated into a Reliability Block Diagram (RBD). The relative importance of the components in terms of reliability is evaluated and is associated with the prioritization of the components. The model is simple but significant for system maintenance, improvement and modification. It supports analysis and testing through a better understanding of the interacting components and their reliabilities.

S1.11 Normalized Weighted and Reverse Weighted Correlation Based Apriori Algorithm
Amimul Ehsan (NITK, India); Nagamma Patil (National Institute of Technology, Karnataka, India)

Data mining, needed in the modern era of technology where data matters most, plays a prodigious role. Among the existing data mining techniques, association rule mining is one of the most important tasks; it is devoted to discovering frequent itemsets and drawing correlations among the items in them. In recent research on association rule mining, different support thresholds and pruning techniques have been incorporated into the Apriori algorithm, which is supposed to control the generation of frequent itemsets without neglecting any item that matters, i.e. affects the transactions. The normalized weighted and reverse weighted correlation (NWRWC) based Apriori algorithm is important for mining frequent and infrequent itemsets in a repository where items have different importance. Some studies have proposed methods of applying weights according to the importance of the items, but in these methods many items with a high support but a low weight get pruned. The NWRWC based Apriori algorithm is proposed to deal with this situation by applying direct normalized weights as well as reverse normalized weights to the items. It further establishes the relevance between itemsets using weighted correlation methods. Since not only frequent but also infrequent itemsets play a pivotal role in association rule mining, both have been computed using both weights and reverse weights. The experimental results demonstrate the efficiency and effectiveness of this approach.
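For illustration only, a common formulation of normalized weighted support that schemes like NWRWC build on might look as follows; the paper's exact direct/reverse weighting and correlation formulas are not reproduced here, and the reverse weight shown is one plausible choice.

```python
def normalized_weights(weights):
    # Scale raw item weights into (0, 1] by the maximum weight.
    m = max(weights.values())
    return {item: w / m for item, w in weights.items()}

def reverse_weights(norm_weights):
    # One plausible "reverse" weighting: emphasize low-weight items.
    return {item: 1.0 - w for item, w in norm_weights.items()}

def weighted_support(itemset, transactions, weights):
    # Weighted support: mean item weight of the itemset times the
    # fraction of transactions containing the whole itemset.
    itemset = frozenset(itemset)
    freq = sum(1 for t in transactions if itemset <= t) / len(transactions)
    avg_w = sum(weights[i] for i in itemset) / len(itemset)
    return avg_w * freq
```

With such a measure, an item with high frequency but low weight keeps a nonzero reverse-weighted support, which is the situation the paper's dual weighting is designed to preserve.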

S3: Medical Informatics / Fourth Symposium on Recent Advances in Medical Informatics (RAMI'15)

Room: 303
Chairs: Aaradhana Arvind Deshmukh (Aalborg University, Denmark & University of Pune, India), Martin Staemmler (Fachhochschule Stralsund, Germany)
S3.1 Wavelet based Non Local Means Filter for Despeckling of Intravascular Ultrasound Image
Debarghya China (Indian Institute of Technology Kharagpur, India); Pabitra Mitra (Indian Institute of Technology, Kharagpur, India); Chandan Chakraborty (IIT Kharagpur, India); K Mandana (Cardiothoracic & Vascular Surgery, India)

In this paper, a novel methodology is proposed for speckle noise reduction in intravascular ultrasound (IVUS). IVUS, a standard coronary artery diagnostic imaging protocol, is degraded by speckle noise due to the coherent interference of the ultrasound reflected from scatterers. The presence of such noise creates problems in the segmentation and classification of these images. A non-local means filter is applied in the wavelet domain to smooth out noise interference and improve the visual characteristics of the IVUS images. Finally, a comparison study is performed with various filters, viz. the anisotropic diffusion filter, the nonlinear median filter and the geometric nonlinear diffusion filter. Mean squared error (MSE) and peak signal to noise ratio (PSNR) are used to evaluate the efficiency of the proposed filtering methodology.

S3.2 An Affordable Approach for Detecting Drivers' Drowsiness using EEG Signal Analysis
Ahmed Abdel-Rahman (Helwan University & Faculty Of Engineering, Egypt); Ahmed Farag Seddik (Helwan, Egypt); Doaa Shawky (Cairo University, Faculty of Engineering, Egypt)

Recently, the number of accidents caused by drowsy drivers has increased, so avoiding these types of accidents is very important. Achieving this goal requires a real-time monitoring system that continuously monitors the driver's behavior and awareness. This paper proposes a new technique that instantaneously detects whether the driver is awake or in a stage-one sleep condition. The proposed technique is based on a real-time system that monitors and analyzes the EEG signal of the driver using a single dry-sensor EEG headset. Using this system, we were able to detect the driver's state in real time. When the EEG waves of the driver match those of stage-one sleep, the system produces an audible alarm to alert the driver. The system has an average accuracy of 97.6% in detecting sleep in a sample of 60 subjects using the statistical characteristics of the EEG waves. Compared with similar approaches, the proposed approach is more affordable, more time-efficient and less complex.

S3.3 Trimiti - A Real-time Stereoscopic Vision System for Neurosurgery Training using Surgical Microscope
Pritam Prakash Shete (Bhabha Atomic Research Centre, India); Ashish Suri (All India Institute of Medical Sciences, India); Dinesh Sarode, Mohini Laghate and Surojit Bose (Bhabha Atomic Research Centre, India)

In this research work, we propose and realize a real-time stereoscopic vision system for neurosurgery training using a surgical stereoscopic microscope. We connect a pair of high resolution IP cameras to a surgical microscope through a dual port microscope beam splitter and a pair of microscope video adaptors. We make use of the specifically designed chessboard calibration pattern and an open source OpenCV library for system calibration. We perform local calibration to align individual IP camera views with each other, whereas global calibration is carried out to line up local calibrated camera views parallel with the user defined reference coordinate system. We calibrate our stereoscopic system using a single input image of the calibration pattern. We perform real-time stereo image remapping for the comfortable stereoscopic vision using a GPU. A media server is developed and introduced in between the IP camera pair and viewer applications to support multiple stereo image streams as well as to provide one more layer of protection to IP cameras. We make use of a NVIDIA 3D Vision Pro system and a 3D ready 120Hz monitor for active stereoscopic vision. We utilize HDMI-1.4 frame packing for passive stereoscopic vision on a standard 3D-TV. We measure network latency of our stereoscopic system with and without using the media server application, which is within acceptable limits. Finally, we compare NVIDIA 3D Vision Pro based active stereoscopic vision and passive stereoscopic vision using a 3D-TV with their pros and cons.

S3.4 UI Design for Language Translator module in Swasthya Slate (m-Health Tool)
Sangeetha Rajesh (K. J. Somaiya Institute of Management Studies and Research, India); Lifna Challissery Samu (VES Institute of Technology, University of Mumbai, India)

Modern information technology is increasingly used in healthcare with the goal of improving and enhancing medical services and reducing costs. Nowadays, in urban areas, advancements in information technology are adopted quickly by the healthcare industry and their outcomes are offered to patients. The obstacles faced by healthcare providers and patients in rural areas are vastly different from those in urban areas. Rural areas often suffer from a lack of access to healthcare. Many appreciated initiatives have been taken by the Government to overcome the problems in rural healthcare. In this paper we propose a framework for obtaining the details of ailments from patients in rural areas in their regional language and getting a prescription as soon as possible, using emerging information technology techniques such as speech recognition and language translation.

S3.5 TCmed - A secure Telecollaboration Network for Medical Professionals including Workflow Support and Patient Participation
Martin Staemmler (Fachhochschule Stralsund, Germany); Klaus Dieter Luitjens (Westküstenklinikum Heide, Germany); Thomas van Bömmel (BGU Murnau, Germany); Uwe Engelmann and Heiko Münch (CHILI GmbH, Germany); Ulrich Hafa (Klinikverbund der Gesetzlichen Unfallversicherung, Germany); Johannes Sturm (Akademie der Unfallchirurgie AUC, Germany)

Telecollaboration between medical professionals is well established for images, covering medical domains like radiology, cardiology and pathology. While images profit from the functionality of the DICOM standard, the exchange of non-DICOM objects like documents or reports lacks support and, in particular, workflow integration with existing legacy systems. Electronic health records serve to manage images, documents and reports, but their adoption varies, as does their support for standardized interfaces. This paper presents a light-weight approach for establishing telecollaboration supporting images, documents and reports. It pays particular attention to process integration with legacy systems by relying on standards like DICOM and HL7. In addition, the telecooperation is extended to allow patient participation by accessing and providing data. The approach has been implemented and is in routine use.

S3.6 Cardiogenic Shock Monitoring System for Ambulance
Suma Kv (M. S. Ramaiah Institute of Technology, India); S Sandeep (MS Ramaiah Institute of Technology & VTU, India); S Vikram, Karthik Hanjar and S m Sudarshan (MS Ramaiah Institute of Technology, India)

Cardiogenic shock is a life-threatening medical condition with mortality rates of 70-90%, which can be reduced to 40-60% with aggressive and immediate treatment. The aim of this paper is to design a wireless transmission system for a three-lead ECG, temperature and pulse monitoring system for use in an ambulance during cardiogenic shock, which can serve the purpose of immediate intimation to the hospital and timely diagnosis of the victim. In such cases ECG, pulse and body temperature become the most vital parameters, as they give information about the current cardio-activity of the patient. The ECG measuring instruments available today are costly, hence there is also a need to build an economical system. The patient has to be continuously monitored and the data has to be sent immediately to the doctor in the hospital to analyze and suggest first aid, which will help the crew in the ambulance take necessary action according to the doctor's suggestion. Also, the doctors at the hospital are informed of the severity of the patient's condition and hence can make appropriate preparations for the treatment, particularly in cases of life-threatening circulatory shock. Thus, considering all the design challenges, in this project we have built a health monitoring system to acquire the signals and have used wireless RF modules to transmit the acquired signal.

S3.7 A Service Oriented Collaborative Model for Cattle Health Care System
Ajay D Parikh (Gujarat Vidyapith, India); Bhairavi Shah (E-infochips, India); Kanubhai K Patel (Charotar University of Science & Technology & CMPICA, India)

Animal husbandry and dairy development generate gainful employment in rural India. There is a huge gap between the information available and the information utilized by the stakeholders involved in this area. Taking steps towards better health care, proper insemination techniques and information management of cattle may increase milk production; it also improves breed quality and production. This paper proposes innovative ways of applying information and communication technology to bridge the gap between the stakeholders involved in this area: a collaborative approach to deliver cattle health care related services to cattle owners, based on web services and a Service Oriented Architecture (SOA) approach. The paper provides the methodology and concept of the technical solutions proposed and planned to overcome the difficulties of cattle owners and stakeholders, so that animal husbandry becomes an encouraging and lucrative business opportunity for young rural people to start their own ventures in their villages.

S3.8 Multi-modal evolutionary ensemble classification in medical diagnosis problems
Søren Atmakuri Davidsen and Padmavathamma Mokkala (Sri Venkateswara University, India)

Expert systems for classification tasks in medical diagnosis require two properties: the true positive rate should be very high, as should the true negative rate, i.e. the system should correctly catch those who are ill and correctly dismiss those who are healthy. The multi-modal evolutionary classifier uses a genetic algorithm to learn a reference vector for each class, and classification is done by measuring the distance of a new example to the reference vectors. For complex datasets such as those in medical diagnosis, interactions between features are typically complex, and the multi-modal classifier's single reference vector is not able to capture this. In this work an extension to the algorithm is proposed, which learns sets of multi-modal classifiers using resampling and forms an ensemble from them using a genetic algorithm. The algorithm is evaluated on a sample of publicly available medical diagnosis datasets. While this is a work in progress, the initial finding is that, compared to the base classifier, using evolutionarily learned ensembles improves accuracy in all cases and is a direction for future work.

S3.9 Statistical Features based Epileptic Seizure EEG Detection - An Efficacy Evaluation
Gopika Gopan K (India); Neelam Sinha (International Institute of Information Technology & Bangalore, India); Dinesh Babu Jayagopi (IIIT Bangalore, India)

Electroencephalographic (EEG) patterns are electrical signals generated in association with neural activities. Most anomalies in brain functioning manifest with their signature characteristics in the EEG pattern. An epileptic seizure, a brain abnormality well studied through EEG analysis, is an abnormal synchronous neural activity in the brain characterized by the presence of spikes in the EEG. Automated detection of epileptic seizures proves useful to neurologists in the diagnosis of epileptic patients. This work contributes to the efficacy evaluation of statistical features for the classification of EEG data as Ictal, Inter-Ictal and Normal. The statistical features considered are energy, entropy, median absolute deviation, interquartile range, skewness and kurtosis. The features extracted from a real dataset of 500 time series, comprising 100 Ictal, 200 Inter-Ictal and 200 Normal, are given to classifiers such as Support Vector Machine (SVM), Fuzzy k-Nearest Neighbor (Fuzzy k-NN), k-Nearest Neighbor (k-NN) and Naive Bayes for three-class classification. Each of the features was used separately for classification to determine its individual efficacy. Alongside, the popular feature ranking method 'ReliefF' was used to rank the features. Both evaluations resulted in entropy being ranked as the feature with maximum efficacy.
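The six statistical features listed in the abstract can be sketched as below, using common textbook definitions; the authors' exact estimator choices (e.g. the histogram binning used for entropy) are assumptions here, not taken from the paper.

```python
import numpy as np

def eeg_features(x):
    # Energy, entropy, median absolute deviation, interquartile range,
    # skewness and kurtosis of one EEG time-series segment.
    x = np.asarray(x, dtype=float)
    energy = float(np.sum(x ** 2))
    # Shannon entropy of a normalized amplitude histogram (assumed estimator)
    hist, _ = np.histogram(x, bins=16)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = float(-(p * np.log2(p)).sum())
    mad = float(np.median(np.abs(x - np.median(x))))
    q75, q25 = np.percentile(x, [75, 25])
    iqr = float(q75 - q25)
    m, s = x.mean(), x.std()
    skewness = float(((x - m) ** 3).mean() / s ** 3)
    kurtosis = float(((x - m) ** 4).mean() / s ** 4)  # Pearson (normal -> 3)
    return {"energy": energy, "entropy": entropy, "mad": mad,
            "iqr": iqr, "skewness": skewness, "kurtosis": kurtosis}
```

Each segment of the dataset would yield one such six-dimensional feature vector, which is then fed to the SVM, k-NN, Fuzzy k-NN or Naive Bayes classifier.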

S3.10 Glottal pathology discrimination using ANN and SVM
Ashwini Visave (Veermata Jijabai Technological Institute, India); Pramod Haribhau Kachare (IIT Bombay & Ramrao Adik Institute of Technology, India); Amutha Jeyakumar (Veermata Jijabai Technological Institute (V.J.T.I.), India); Alice N Cheeran (Department of Electrical Engineering & Veermata Jijabai Technological Institute, India); Jagannath Nirmal (Mumbai University, India)

The use of modern technological advances in real-time biomedical analysis is very crucial. The current work focuses on glottal pathology discrimination based on non-invasive speech analysis techniques. A primary setback in developing such a method is the irregular performance degradation of several state-of-the-art acoustic features. To avoid such problems, we use the glottal-to-noise excitation ratio, which predicts the breathiness quotient of the speech signal and is supported by a characteristic mean pitch value. To build a decision model, we use an Artificial Neural Network (ANN) and a Support Vector Machine (SVM). Categorization performance is compared using well-known parameters like true positive rate, true negative rate and accuracy. Results of the analysis show slightly better performance for the SVM based decision system.

S3.11 Prototyping an NFC based Personal Health Device
Javaid Nabi and Abhijit Doddamadaiah (Samsung Research Institute, Bangalore, India); Raghav Lakhotia (Indian Institute of Management, Udaipur, India)

Near Field Communication (NFC), an emerging player in the healthcare sector, offers tremendous potential for penetrating the healthcare and medical fields, driven by the smartphone boom. The NFC Forum's recently released technical specification for Personal Health Device Communication (PHDC) addresses the role of NFC in personal health management. This specification enables medical devices like blood pressure monitors, weighing scales and glucose meters to exchange health data via NFC technology with external computers for monitoring and analysis by physicians. As of the writing of this paper, there is no known implementation or medical device on the market supporting the NFC Forum's PHDC. In this paper we demonstrate a prototype of a health device based on the Tizen platform wherein simulated health data is exchanged using the NFC PHDC specification.

S3.12 A Statistically Resilient Method of Weight Initialization for SFANN
Apeksha Mittal (Guru Gobind Singh Indraprastha University, India); Pravin Chandra (Guru Gobind Singh Indraprastha University, New Delhi, India); Amit Prakash Singh (Guru Gobind Singh Indraprastha University, India)

Proper weight initialization is one of the important requirements for faster training in feedforward artificial neural networks. Conventionally, these weights are initialized to small, uniformly distributed random values so as to break the symmetry of weights during training, that is, to allow the weights to acquire different values. In this work, we propose a new weight initialization technique (NWIT) for sigmoidal feedforward artificial neural networks. The proposed method ensures that the outputs of neurons are in the active region and that the range of the activation function is fully utilized. The proposed routine is compared with the random weight initialization technique (RWIT) on 11 function approximation tasks, and performs as well as, if not better than, RWIT.

S3.13 Integrating contemporary technologies with Ayurveda: examples, challenges, and opportunities (Invited Paper)
Oge Marques (Florida Atlantic University, USA)

In this paper we examine examples and provide suggestions of how information and communication technologies (ICT), modern engineering, and computer science techniques, devices, and algorithms can be used to expand the reach of the wisdom of Ayurveda, the traditional Indian medicine system.

Monday, August 10 14:30 - 18:30 (Asia/Kolkata)

SSCC-01: Authentication and Access Control Systems / Security in Cloud Computing

Room: 305
Chair: Kester Quist-Aphetsi (University of Brest France, France)
SSCC-01.1 An Efficient Fingerprint Minutiae Detection Algorithm
Y. Prashanth Reddy and Kamlesh Tiwari (Indian Institute of Technology Kanpur, India); Vandana Kaushik (Harcourt Butler Technological Institute, India); Phalguni Gupta (Indian Institute of Technology Kanpur, India)

Fingerprint is one of the most preferred biometric traits for automatic human authentication. Similarity between two fingerprints is determined by matching, which depends mostly on the properties of minutiae points. False minutiae, which can be induced by bad fingerprint quality or erroneous evaluation by the localization algorithm, adversely affect the performance of the system. This paper proposes an algorithm to extract the true minutiae from fingerprint images. Extraction of minutiae points involves background suppression, image enhancement, binarization, thinning, minutiae localization, and cleaning. Experimental results on two databases show that the proposed algorithm detects true minutiae with higher accuracy.

SSCC-01.2 An Optimal Authentication Protocol Using Certificateless ID-based Signature in MANET
Vimal Kumar (MMM University of Technology, Gorakhpur); Rakesh Kumar (MMM University of Technology, India)

Mobile Ad hoc Networks (MANETs) represent an emerging research field of ubiquitous computing. Nowadays, security issues regarding MANETs are gaining remarkable research interest. However, security in MANETs is frequently hampered by resource constraints. An ID-based cryptosystem enables users to generate their public keys without exchanging any certificates; users can construct public/private key pairs certificate-free. The idea of bilinear pairings makes the system simple and efficient for providing basic security. In order to provide secure communication in MANETs, many researchers have used different schemes to provide authentication. However, existing techniques have drawbacks such as forgery on adaptive chosen plaintext messages and back secrecy in traditional authentication schemes. In this paper, we propose an ID-based signature scheme for achieving reliability and secure authentication in MANETs. We show that the proposed scheme is secure against existential forgery in the random oracle model under the inverse CDHP assumption. The proposed scheme is computationally more efficient than other traditional schemes.

SSCC-01.3 Cryptanalysis and Improvement of ECC - Based Security Enhanced User Authentication Protocol for Wireless Sensor Networks
Anup Kumar Maurya (Goa Institute of Management, India); Sastry N Vinjamuri (IDRBT, India); Siba Kumar Udgata (University of Hyderabad, India)

User authentication and secret session key exchange between a user and a sensor node are important security requirements of wireless sensor networks (WSNs) for retrieving important, confidential, and real-time information from the sensor nodes. In 2014, Choi et al. proposed an elliptic curve cryptography based user authentication protocol with enhanced security for wireless sensor networks. After a security analysis of their protocol, we find that it has some security drawbacks: (1) it is not resilient against node capture attacks, (2) it is insecure against stolen smart card attacks, and (3) it is vulnerable to sensor node energy exhausting attacks. Based on this security analysis, we propose a scheme that withstands these security weaknesses of WSNs. Furthermore, a comparative security and computational performance analysis indicates that our proposed scheme is relatively more secure and efficient.

SSCC-01.4 Heart Rate Variability for Biometric Authentication Using Time-Domain Features
Nazneen Akhter (Babasaheb Ambedkar Marathwada University, India); Hanumant Gite (Dr Babasaheb Ambedkar Marathwada University, Aurnagabad, India); Gulam Rabbani (Maulana Azad College of Arts, Science and Commerce, India); Karbhari Kale (Babasaheb Ambedkar Marathwada University, India)

Heart Rate Variability (HRV) is a natural property of heart rate. For the last two decades, medical science has regarded it as a diagnostic and prognostic tool. This study is aimed at harnessing the HRV property of the heart for authentication purposes. For measuring the RR-intervals for HRV analysis, we used a photoplethysmography (PPG) based pulse sensor and an in-house designed microcontroller-based RR-interval measurement system. Data acquisition is done on a PC via a serial-to-USB bridge adaptor. Seven time-domain features are generated using standard statistical techniques. Out of 10 samples per subject, five are used for template creation and the other five for testing. The system resulted in 6% EER. The FAR and FRR graphs against threshold and the ROC curve are presented.
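The abstract does not enumerate its seven features, but time-domain HRV statistics of this kind are conventionally computed directly from the RR-interval series. A minimal sketch using standard HRV definitions (mean RR, SDNN, RMSSD, pNN50, mean heart rate; these are textbook feature names, not necessarily the paper's exact set):

```python
import math

def hrv_time_domain(rr_ms):
    """Compute common time-domain HRV features from RR intervals (ms)."""
    n = len(rr_ms)
    mean_rr = sum(rr_ms) / n
    # SDNN: sample standard deviation of all RR intervals
    sdnn = math.sqrt(sum((r - mean_rr) ** 2 for r in rr_ms) / (n - 1))
    diffs = [rr_ms[i + 1] - rr_ms[i] for i in range(n - 1)]
    # RMSSD: root mean square of successive differences
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    # pNN50: fraction of successive differences exceeding 50 ms
    pnn50 = sum(1 for d in diffs if abs(d) > 50) / len(diffs)
    mean_hr = 60000.0 / mean_rr  # beats per minute
    return {"mean_rr": mean_rr, "sdnn": sdnn, "rmssd": rmssd,
            "pnn50": pnn50, "mean_hr": mean_hr}

# illustrative RR series in milliseconds (not the paper's data)
feats = hrv_time_domain([812, 790, 805, 842, 798, 760, 815, 830])
```

A matching template could then simply be the feature vector of the enrollment samples, compared against test vectors under a distance threshold.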

SSCC-01.5 Modeling Fuzzy Role Based Access Control Using Fuzzy Formal Concept Analysis
Chandra Mouliswaran Subramanian (VIT University, India); Aswani Kumar Cherukuri (Vellore Institute of Technology, Vellore, India); C Chandrasekar (Periyar University Salem, Tamil Nadu, India)

Role based access control (RBAC) is the widely accepted and used access control model. However, mapping among the sets of users, roles, and permissions in RBAC is a major challenge, which leads to errors in practical applications. Incorporating human decisions on the mappings of RBAC could resolve this issue, but in practice human decisions are fuzzy in nature. So, fuzzy techniques can be incorporated into RBAC through fuzzy role based access control (FRBAC). Fuzzy formal concept analysis (FFCA) is a mathematical model for representing uncertain information in the form of a formal context. However, to the best of our knowledge, there is no work on modelling fuzzy RBAC through fuzzy FCA. The objective of this paper is to propose a model representing FRBAC in the form of FFCA. The initial results of our experiments show that the proposed model can implement the major features of RBAC.

SSCC-01.6 Polynomial Construction and Non-Tree Based Efficient Secure Group Key Management Scheme
Purushothama B R (National Institute of Technology Goa, India); B B Amberker (National Institute of Technology, Warangal, India)

Designing an efficient key management scheme for secure group communication is challenging. We focus on non-tree based group key management. In this paper, we propose a non-tree based secure group key management scheme based on a polynomial construction method. In the proposed scheme, a user is supposed to store only two keys (a private shared key and the group key). When a new user joins the group, only one encryption is required for rekeying, and when an existing user leaves the group, only one polynomial construction is required for rekeying. The storage at the KDC is also reduced in the proposed scheme. We analyze the security of the scheme and show that collusion of any subset of users cannot obtain the secret key of any non-colluding user. We compare the proposed scheme with non-tree based schemes relying on polynomial construction and show that the proposed scheme is efficient.
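The paper's exact construction is not given in the abstract, but a standard polynomial-masking idea conveys how a single broadcast polynomial can rekey a group: the KDC broadcasts the coefficients of f(x) = prod(x - k_i) + GK mod p, and each current member recovers GK by evaluating f at their own private key (the product vanishes there), while revoked users' keys are simply left out of the roots. The prime and key sizes below are illustrative assumptions, not the authors' scheme:

```python
import random

P = 2**61 - 1  # illustrative large prime modulus

def poly_from_roots(roots, constant, p=P):
    """Coefficients (low -> high) of f(x) = prod(x - r_i) + constant mod p."""
    coeffs = [1]
    for r in roots:
        new = [0] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):
            new[i + 1] = (new[i + 1] + c) % p      # c * x^(i+1) term
            new[i] = (new[i] - c * r) % p          # -r * c * x^i term
        coeffs = new
    coeffs[0] = (coeffs[0] + constant) % p
    return coeffs

def evaluate(coeffs, x, p=P):
    """Horner evaluation of f(x) mod p."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

# KDC knows each member's private key; only coefficients are broadcast.
random.seed(7)
member_keys = [random.randrange(1, P) for _ in range(5)]
group_key = random.randrange(1, P)
broadcast = poly_from_roots(member_keys, group_key)

# every member recovers the group key: f(k_i) = 0 + GK
recovered = [evaluate(broadcast, k) for k in member_keys]
```

On a leave event, the KDC rebuilds the polynomial over the remaining keys with a fresh GK, which matches the abstract's "one polynomial construction per leave" cost shape.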

SSCC-01.7 Reducing Vulnerability of a Fingerprint Authentication System
Athira Ram A and Jyothis T S (University of Calicut, India)

A fingerprint authentication system usually suffers from privacy problems. A third-party intruder can steal the information stored in the database and try to recreate the original fingerprint. Here a system is proposed which prevents the possibility of generating fingerprints from the information in the database. Two different fingerprints are acquired from a person. Orientation is calculated from the first fingerprint, and minutiae points are extracted from a reference area in the second fingerprint. They are combined to form a mixed template which is encrypted using the Blowfish cipher. The encrypted template serves as a virtual biometric. This prevents the revealing of the original fingerprints to third-party intruders. Moreover, the attacker may not even be aware that a mixed template is used rather than an original fingerprint.

SSCC-01.8 Secure and Privacy Preserving Biometric Authentication using Watermarking Technique
Doyel Pal (LaGuardia Community College, CUNY, USA); Praveen Khethavath (LaGuardia Community College, USA); Johnson Thomas (Oklahoma State University, USA); Tingting Chen (California State Polytechnic University, Pomona, USA)

Biometric authentication ensures user identity by means of users' biometric traits. Though biometrics is unique and secure, it can still be stolen or misused by an adversary. In this paper we propose a secure and privacy-preserving biometric authentication scheme using a watermarking technique. Watermarking is used for content authentication, copyright management, tamper detection, etc. In this paper we watermark the user's face image with a fingerprint and encrypt the watermarked biometric to protect its privacy from an adversary. We use the watermarked biometric for privacy-preserving authentication. The analysis proves the correctness, privacy, and efficiency of our scheme.

SSCC-01.9 A Dynamic Multi-domain Access Control Model in Cloud Computing
Dapeng Xiong (Academy of Equipment, China); Peng Zou (Academy of Equipment); Jun Cai (National Innovation Institute of Defense Technology, China); Jun He (Academy of Equipment, China)

Access control technology is an important way to ensure the safety of the cloud platform, but the new features of the cloud computing environment have brought new challenges to access control technology. To address the existing problems of flexibility and timeliness in multi-domain access control in current clouds, this paper puts forward a dynamic access control policy based on a task-driven idea. The new method combines the advantages of RBAC and the task-driven model to implement a more flexible and efficient access control model. Through comparative experiments, we show that the new policy improves the flexibility and availability of the role-based multi-domain access control model.

SSCC-01.10 Comparing the Efficiency of Key Management Hierarchies for Access Control in Cloud
Naveen Kumar (DA-IICT, India); Anish Mathuria (Dhirubhai Ambani Institute of Information and Communication Technology, India); Manik Lal Das (DAIICT, India)

Existing key management solutions for data access control in the cloud rely on user-based hierarchies, where each node represents a group of users. Although such hierarchies provide essential features like granting read authorization, key management becomes expensive for dynamic operations such as extending read authorization and revoking a user. In this work, we discuss the effectiveness of a resource-based hierarchy for data access control in the cloud, then analyze and compare it with existing user-based hierarchies. We show that a resource-based hierarchy is more efficient in terms of communication and computation cost for the above dynamic operations, without sacrificing other essential features.

SSCC-01.11 Design of an Efficient Verification Scheme for Correctness of Outsourced Computations in Cloud Computing
Ronak Vyas and Alok Singh (National Institute of Technology Goa, India); Jolly Singh (National Institute of Technology, Goa, India); Gunjan Soni and Purushothama B R (National Institute of Technology Goa, India)

As cloud computing allows consumers or clients to store and delegate their sensitive data, cloud service providers have a lot of power over this data. Hence, the service providers in cloud computing cannot be trusted. To protect itself from being cheated by an untrusted service provider, the client needs a way to verify, on its own, the correctness of computations returned by the cloud server. This verification should have very low computational cost compared to the original cost of the outsourced computations. Operations on vectors are among the major computations performed by cloud servers, as many applications hosted on them use vectors. One of the major operations on vectors is the inner product, which matrix multiplication uses extensively. In this work, we present an efficient algorithm to verify the correctness of inner products of vectors on the client side, and we apply this scheme efficiently to verify the product of matrices. We also present an extensive security analysis and prove mathematically that the client can verify the correctness of the computations with a significant probability of success. We demonstrate the efficiency and correctness of the proposed algorithm and show that the scheme is efficient in comparison with existing schemes.
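The authors' verification scheme is not specified in the abstract; a classical illustration of cheap client-side verification of an outsourced matrix product is Freivalds' algorithm, which checks A·B = C in O(n²) work per round (versus O(n³) to recompute) with one-sided error. This is a stand-in example, not the paper's method:

```python
import random

def freivalds_verify(A, B, C, rounds=30):
    """Probabilistically check A*B == C using O(n^2) work per round.
    A wrong C is accepted with probability at most 2**-rounds."""
    n = len(A)
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]  # random 0/1 vector
        # compute A*(B*r) and C*r, both as matrix-vector products
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # definitely an incorrect product
    return True  # correct with high probability

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
good = [[19, 22], [43, 50]]   # the true product A*B
bad = [[19, 22], [43, 51]]    # one corrupted entry
```

The one-sided error is the useful property here: a rejection is always a proof of cheating, and repetition drives the acceptance error down geometrically.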

SSCC-01.12 Intelligent Intrusion Detection System for Private Cloud Environment
Balasundaram Muthukumar (Sathyabama University, India); Praveen Kumar Rajendran (Cognizant Technology Solutions, India)

From the day cloud computing gained popularity, security and performance have been the two important issues faced by cloud service providers and clients. Although cloud computing has taken information technology into another dimension, securing the data stored in the cloud and securing the transactions made using the cloud remain major issues. Since cloud computing is a virtual pool of resources provided in an open environment (the Internet), identifying intrusion by unauthorized users is one of the greatest challenges for cloud service providers and cloud users. Identification of intrusion is a tedious process when the number of transactions is large. An artificial intelligence technique is proposed in this paper to identify intrusion by unauthorized users in a cloud environment. Research on cloud computing usually focuses primarily on one of these issues; in our paper, the proposed algorithm addresses the security aspects of cloud computing, while performance testing of the implementation addresses its performance aspects.

SSCC-01.13 Multilevel Threshold Secret Sharing in Distributed Cloud
Doyel Pal (LaGuardia Community College, CUNY, USA); Praveen Khethavath (LaGuardia Community College, USA); Johnson Thomas (Oklahoma State University, USA); Tingting Chen (California State Polytechnic University, Pomona, USA)

Security is a highlighted concern in cloud and distributed cloud systems. Threshold secret sharing is a widely used mechanism to secure different computing environments: a secret is split into multiple shares that are stored in different locations. In this paper we propose a multilevel threshold secret sharing scheme to enhance the security of a secret key in a distributed cloud environment. We create replicas of secret shares and distribute them among multiple resource providers to ensure availability. We also introduce dummy shares at each resource provider to detect the presence of an outside attacker. Our experimental results show that our scheme is feasible and secure.
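The multilevel and replica aspects are specific to the paper, but the underlying (t, n) threshold primitive is typically Shamir's scheme: the secret becomes the constant term of a random degree-(t-1) polynomial over a prime field, each share is one point on the polynomial, and any t shares reconstruct the secret by Lagrange interpolation at x = 0. A minimal sketch (the prime and parameters are illustrative):

```python
import random

P = 2**61 - 1  # prime field for share arithmetic (illustrative)

def split(secret, n, t, p=P):
    """Split secret into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(p) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation mod p
            acc = (acc * x + c) % p
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares, p=P):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % p
                den = (den * (xi - xj)) % p
        secret = (secret + yi * num * pow(den, -1, p)) % p
    return secret

shares = split(123456789, n=5, t=3)
recovered = reconstruct(shares[:3])
```

Fewer than t shares reveal nothing about the secret, which is what makes distributing (and replicating) shares across independent resource providers safe.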

SSCC-01.14 Secure Sharing of Data in Cloud Computing
Deepnarayan Tiwari (Institute for Development and Research in Banking Technology & Central University of Hyderabad, India); Gr Gangadharan (IDRBT, India)

Cloud computing is emerging as an increasingly popular computing paradigm. Sharing of data in cloud environments raises issues of confidentiality, integrity, and availability. In this paper, we propose a framework and methodology for data sharing over an untrusted cloud, using proxy re-encryption based on the elliptic curve discrete logarithm problem. The proposed methodology imposes the access control policies of the data originator, preventing cloud storage providers from unauthorized access and illegal authorization to access the data.

SSCC-01.15 Adaptive and Secure Application Partitioning for Offloading in Mobile Cloud Computing
Dhanya NM (Amrita University, India); Kousalya Govardhanan (Anna University & Coimbatore Institute of Technology, India)

Smartphones are capable of providing smart services to users, very much like laptops and desktop computers. Despite all these capabilities, battery life and computational capability are still lacking. Combining mobiles with the cloud reduces these disadvantages, because the cloud has virtually unlimited processing resources. But in the cloud, security is a major concern. Since mobile devices contain private data, secure offloading of applications is necessary. In this paper we propose a secure partitioning of applications, so that the most sensitive or vulnerable parts of an application can be kept on the mobile while the rest is offloaded to the cloud.

Monday, August 10 14:30 - 19:30 (Asia/Kolkata)

S4: Advances in Adaptive Systems and Signal Processing

Room: 306
Chairs: Asutosh Kar (BITS Pilani, Hyderabad, India), Mini M G (Model Engineering College, Ernakulam, Kerala, India)
S4.1 An Unique Adaptive Noise Canceller with Advanced Variable-Step BLMS Algorithm
Asutosh Kar (BITS Pilani, Hyderabad, India); Monali Dhal (IIIT Bhubaneswar, India); Monalisa Ghosh (IIT Kharagpur, India)

In recent times, noise reduction has been a vital issue, as noise produces undesired disturbances in the process of communication. Active Noise Cancellation (ANC) is the most effective technique to cancel noise. ANC has been an active area of research, and various adaptive methodologies have been employed to achieve a better ANC scheme. In the ANC technique, the aim is to minimize the noise interference that corrupts the original signal. ANC has a broad variety of applications in common commercial products, industrial uses, and other machinery. An analysis of the prevailing adaptive methodologies is necessary for future research, as we need to know the demerits of each of them. This paper provides an analysis of various adaptive algorithms for noise cancellation and a comparison between them; the strengths, weaknesses, and practical effectiveness of all the algorithms are discussed. The paper deals with cancellation of noise on a speech signal using three existing algorithms, the Least Mean Square (LMS), Normalized Least Mean Square (NLMS), and Recursive Least Square (RLS) algorithms, and a proposed advanced Block Least Mean Square (BLMS) algorithm. The algorithms are simulated in MATLAB and a tabular comparison is drawn. Finally, conclusions are drawn by choosing the algorithms that provide efficient performance with less computational complexity.
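As a concrete reference point for the algorithms compared above, a plain LMS adaptive noise canceller fits in a few lines: an adaptive filter shapes a reference noise input to track the noise component of the primary input, and the residual error is the cleaned signal. The step size, tap count, and toy signals below are illustrative choices, not the paper's configuration:

```python
import math
import random

def lms_cancel(primary, reference, taps=4, mu=0.02):
    """LMS adaptive noise canceller: filter the reference noise so its
    output matches the noise in the primary; the error is the cleaned
    estimate of the desired signal."""
    w = [0.0] * taps
    cleaned = []
    for n in range(len(primary)):
        # latest `taps` reference samples, zero-padded at the start
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(taps)]
        y = sum(wi * xi for wi, xi in zip(w, x))            # noise estimate
        e = primary[n] - y                                   # cleaned sample
        w = [wi + 2 * mu * e * xi for wi, xi in zip(w, x)]   # LMS update
        cleaned.append(e)
    return cleaned

random.seed(1)
N = 4000
signal = [math.sin(2 * math.pi * 0.01 * n) for n in range(N)]
noise = [random.uniform(-1, 1) for _ in range(N)]
# the noise reaches the primary sensor through an unknown 2-tap path
primary = [signal[n] + 0.7 * noise[n] + 0.3 * (noise[n - 1] if n else 0.0)
           for n in range(N)]
cleaned = lms_cancel(primary, noise)
# mean squared residual after the filter has had time to converge
err_late = sum((cleaned[n] - signal[n]) ** 2
               for n in range(N // 2, N)) / (N // 2)
```

NLMS normalizes the step by the input power, RLS replaces the gradient step with a recursive least-squares solution, and BLMS updates the weights once per block of samples; all share this same error-feedback structure.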

S4.2 Multiple Predictors based RW Scheme with Adaptive Image Partitioning
Hirak Maity (College of Engineering and Management, Kolaghat, India); Santi Prasad Maity (Indian Institute of Engineering Science and Technology, Shibpur, India)

Reversible watermarking (RW) is one of the best possible solutions for content authentication of digital data. In RW the decoder can recover both the hidden and the original information losslessly. Existing works suggest that prediction error expansion (PEE) based RW schemes ensure higher embedding capacity with low imperceptibility. In general, PEE based RW schemes use a single predictor, but different types of images, as well as different regions of an image, behave differently during embedding. So, this work presents a RW scheme based on the local characteristics of an image, where multiple predictors are used to enhance the embedding bit rate. To this aim, an image is partitioned into smooth, texture, and edge regions using adaptive threshold values. The threshold values are calculated by maximizing the fuzzy conditional entropy of the gray values, where the optimal set of parameters for the fuzzy membership functions is found by the differential evolution method. A large set of simulation results highlights its improved rate-distortion performance over existing works.

S4.3 Performance Evaluation of Band-Limited LPC Vocoder and Band-limited RELP Vocoder in Adaptive Feedback Cancellation
Ankita Anand (Guru Gobind Singh Indraprastha University, India); Richa Bhatia (NSUT, India)

Feedback oscillation is one of the major problems faced by hearing aid users. Adaptive feedback cancellation (AFC) suppresses the feedback by using an adaptive filter to estimate the feedback path. However, this estimate is biased due to the high correlation between the original input signal and the loudspeaker input signal. The bias problem produces modeling error and can result in cancellation of the desired signal. Bias reduction can be obtained by using a band-limited linear predictive coding (BLPC) based technique or a band-limited residual excited linear predictive coding (BRELP) based technique. The paper compares the performance of the BLPC-based and BRELP-based vocoders in sound quality and bias reduction. BLPC-AFC achieves more reduction in high-frequency correlation than BRELP-AFC; however, the BRELP vocoder has better output sound quality.

S4.4 Multilevel digital sonar power amplifier with modified unipolar SPWM
Bineesh P Chacko and Panchalai Vadamalai (NPOL, Kochi); Sivakumar Narayanan (NPOL, Kochi, India)

Sonar projectors are resonant transducers operated around the resonance frequency, and the power amplifiers of sonar transmitters are designed to drive a non-linear, frequency-dependent load. The input electrical admittance of these projectors varies rapidly in the frequency band of operation, so the limits of the load offered to a power amplifier for a wide-bandwidth projector have to be maximized. For limited-bandwidth operation, existing digital power amplifiers do their best for a limited range of load variation, but for wide-bandwidth operation other concepts should be explored and evaluated. This paper discusses a multilevel power amplifier with a shared-leg concept and modified unipolar switching, in order to handle the increased load limits with reduced voltage stress on the switching devices.

S4.5 Design and Implementation of Multipurpose Single Channel Bio Signal Amplifier
Sandesh R, S and Nithya Venkatesan (VIT University, India)

In this study, the authors have designed a multipurpose single-channel bio-signal amplifier for recording ECG/EEG/EMG for clinical applications. To accomplish this, only three dry electrodes are used, along with simple EEG leads to obtain EEG signals. The authors have used the low-noise INA 217 and OP-07C amplifiers. The results indicate that the extracted signal is largely free from noise. Furthermore, performance analysis of the preamplifier shows a low noise of 1.69 nV/√Hz, an SNR of approximately 57.79 dB, a very high input impedance of 711.11 kΩ, and a low output impedance of 150 Ω.

S4.6 Multiband Circularly Polarized Symmetrical Fractal Boundary Microstrip Antenna for Microwave Applications
Navya sree Yadavalli, Ajay Kumar Koduru and Avinash Reddy P v (K L University, India); Gokul Krishna Paruchuri (K L University); Lakshmi Narayana Jammula (K L University, India)

A new fractal antenna is designed for multiband operation. The designed antenna is a single-layered symmetrical fractal boundary microstrip antenna. Multiband operation is attained by altering the geometry of the fractal boundary of the square patch. The proposed antenna operates in nearly ten frequency bands. Four antennas were designed with a square slot at the middle for circular polarization (CP) operation. The proposed antennas operate at 3-5 GHz, 6.9 GHz, and 11-19.5 GHz, and at 6.9 GHz they show excellent gain and 10-dB return loss.

S4.7 On Spectrum Utilization in CRN using Sequential Sensing applying Adaptive Weighting in Time Varying Channel
Amit Baghel (ABV-IIITM Gwalior, India); Aditya Trivedi (ABV-Indian Institute of Information Technology and Management Gwalior, India)

Cognitive radio opportunistically utilizes wireless spectrum when the licensed user is not present. Accordingly, in cognitive radio, spectrum sensing is an important technique for detecting the primary user's presence. This paper discusses a sequential spectrum sensing technique considering a time-varying channel. In sequential sensing, current and previous energy observations of the cognitive radio nodes are averaged to enhance spectrum utilization. The moving-average method with adaptive weighting is discussed. When the primary user changes its state, results show that adaptive weighting gives better performance in terms of spectrum utilization by the cognitive radio user.

S4.8 Performance of DP-QPSK Transmission for a Nyquist-WDM system using all Raman Amplification and Spectral Inversion Technique
Raman Jee (MeitY Govt. of India, India); Somnath Chandra (DEITY, India)

We have estimated the transmission performance of a DP-QPSK based Nyquist-WDM transmission system over 800 km using a combination of the optical spectral inversion (OSI) technique and a distributed Raman amplifier (DRA). This study focuses on the effectiveness of OSI-DRA in minimizing fiber nonlinear effects in terms of eye-opening penalty (EOP) and Q value for BER.

S4.9 Synchronization in IEEE 802.15.4 Zigbee Transceiver using Matlab Simulink
Gorantla Kavya and Venkata Mani Vakamulla (National Institute of Technology Warangal, India)

The Zigbee standard has a set of protocols which support low data rate, low power consumption, low cost, and short range communications. The main application of Zigbee is Wireless Sensor Networks (WSNs). In digital communications, data is transmitted by converting the input bit stream to sample functions of analog waveforms. The analog RF signal passes through a band-limited channel, which results in signal degradation in terms of symbol delay, carrier frequency, and phase offsets. Various synchronization techniques can be used to estimate these characteristics in coherent receivers. In this paper, the phase offset is estimated using a Costas loop carrier recovery circuit, and an early-late gate timing recovery algorithm is used to estimate symbol timing before the data is decoded. The transceiver design is developed using Matlab Simulink, and its performance is analysed over an Additive White Gaussian Noise (AWGN) channel.

S4.10 Three-Dimensional Geometrical Channel Modeling with Different Scatterer Distributions
Priyashantha Tennakoon (Sri Lanka Technological Campus, Sri Lanka); Chandika B. B. Wavegedara (Women's Campus, United Arab Emirates)

The problem of three dimensional (3D) stochastic geometrically-based channel modeling with non-uniform scatterer distributions is addressed for multistory indoor environments. To this end, we consider a geometrical channel model, where scatterers are assumed to be Gaussian or Rayleigh distributed about the receiver within a spheroid having the transmitter and the receiver located at its focal points. Closed-form expressions are obtained for the joint and marginal probability density functions (PDFs) of the angle of arrival (AOA) in both the elevation and azimuth planes and the time of arrival (TOA). The analytically-derived PDFs of the AOA and TOA obtained for Gaussian and Rayleigh scatterer distributions are compared against those obtained from ray-tracing simulation of a typical indoor office environment. The standard deviation values of Gaussian and Rayleigh scatterer distributions are chosen to provide the best possible approximation to the PDFs of the AOA and the TOA obtained from simulation. Our results clearly indicate that the analytically-derived PDFs of the AOA and the TOA for Gaussian and Rayleigh scatterer distributions are in much closer agreement with those obtained from ray-tracing simulation than for uniform scatterer distribution.

S4.11 A Novel Variable Step-Size Feedback Filtered-X LMS Algorithm for Acoustic Noise Removal
Asutosh Kar (BITS Pilani, Hyderabad, India); Ashok Behuria (IIIT Bhubaneswar, India)

The priority in the current era of noise cancellation is blocking low-frequency noise, since most real-life noises operate below 1 kHz. The noise which creates obstruction in everyday communication needs to be dealt with effectively, and Active Noise Cancellation (ANC) is hence regarded as the most sought-after solution. ANC has created its own niche in this field, and a wide range of industrial and commercial products rely on it. While traditional solutions like enclosures and barriers had shortcomings (large, costly, and ineffective at low frequency), modern approaches cancel noise through continuous adaptation of an adaptive filter. This change owes its success to the advent of suitable adaptive algorithms in ANC, which block noise selectively with potential benefits in size, weight, volume, and cost. In this paper we provide an improved approach for ANC. After an initial analysis of existing filtered-x algorithms, the mathematics of the newly proposed algorithm is provided. The proposed algorithm is then applied to noise cancellation along with the existing FxLMS and FB-FxLMS algorithms, and the results of each are presented to make a suitable comparison between the existing algorithms and the proposed one.

S4.12 Study of Preprocessing Sensitivity on Laser Induced Breakdown Spectroscopy (LIBS) Spectral Classification
Tapan Kumar Sahoo (Indian School of Mines, Dhanbad); Atul Negi and Manoj Gundawar (University of Hyderabad, Hyderabad)

Laser induced breakdown spectroscopy (LIBS) is an atomic emission based spectroscopy that uses a laser pulse as the source of excitation. The laser is focused to form hot plasma, which atomizes and excites the sample. In the LIBS spectrum, each "feature" is the amplitude or intensity detected at different wavelengths in the range of 200-1000 nm. Pattern recognition techniques were applied to samples with similar elemental composition, which yield almost identical LIBS spectra that are visually very difficult to differentiate. It was observed that the classification results obtained from different classifiers were sensitive to data preprocessing. The outlier detection and removal techniques PCA, dendrogram with an agglomerative algorithm, editing by nearest neighbour (NN), and distance matrix approaches were used in the preprocessing step. After removing outliers, the resulting training patterns were used to model the k-Nearest Neighbour (k-NN), Principal Component Analysis (PCA), dendrogram, multiclass Support Vector Machine (SVM), and decision tree classifiers. In k-NN, removing outliers increased the average classification accuracy by 2% for high energy materials (HEM), but gave no improvement for non high energy materials (Non HEM) or in top-level classification (deciding between HEM and Non HEM); for the other classifiers, the classification accuracy was reduced. Finally, instead of removing outliers, dimensionality reduction by thresholding was applied, and the classification accuracy increased by 4% in k-NN for HEM, and by 38% in multiclass SVM for HEM and 4% for Non HEM.
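Of the classifiers compared above, k-NN is the simplest to make concrete: a spectrum is classified by majority vote among its k nearest training spectra under Euclidean distance. The toy 2-D points below stand in for the (much higher-dimensional) LIBS feature vectors:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify query by majority vote among its k nearest training points."""
    # sort labelled training points by Euclidean distance to the query
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# illustrative 2-D feature vectors; real LIBS features span hundreds of bins
train = [((0.0, 0.0), "A"), ((0.1, 0.2), "A"), ((0.2, 0.1), "A"),
         ((1.0, 1.0), "B"), ((0.9, 1.1), "B"), ((1.1, 0.9), "B")]
label = knn_predict(train, (0.15, 0.1), k=3)
```

Because every training point votes, k-NN is exactly the kind of classifier whose accuracy moves when outliers are edited out of the training set, which is the sensitivity the study measures.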

S4.13 Performance Analysis of MIMO Systems under Multipath fading channels using Linear Equalization Techniques
Khushboo Pachori (JUET, India); Amit Mishra (Jaypee Institute of Engineering & Technology, Raghogarh, Guna, M.P., India)

MIMO systems provide features such as spatial diversity, multiplexing gain, and spectral-efficiency gain while keeping the bandwidth and transmission power comparable to other systems. The major concern in such systems is the inter-symbol interference (ISI) caused by the channel. An equalizer is deployed at the receiver to mitigate the effect of ISI. In this paper, linear equalization techniques for MIMO systems are simulated under a Rayleigh fading environment. Results show that the ISI introduced by the channel can be effectively diminished with the help of linear equalization techniques. A combined scheme is proposed for cancelling the interference with a lower bit error rate while diminishing the effect of noise enhancement. The results show that the performance of this technique is better than that of conventional techniques.
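As a concrete sketch of the linear-equalization idea, the MMSE detector (a regularized zero-forcing solution) for a small real-valued 2x2 channel is W = (HᵀH + σ²I)⁻¹Hᵀ; the channel coefficients, BPSK symbols and noise variance below are illustrative assumptions, not the paper's setup.

```python
def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def inv2(A):
    """Closed-form inverse of a 2x2 matrix."""
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def mmse_equalizer(H, noise_var):
    """W = (H^T H + sigma^2 I)^{-1} H^T  (real-valued 2x2 sketch)."""
    Ht = transpose(H)
    G = matmul(Ht, H)
    G[0][0] += noise_var
    G[1][1] += noise_var
    return matmul(inv2(G), Ht)

# 2x2 MIMO: two BPSK streams mixed by the channel, then separated
H = [[1.0, 0.4], [0.3, 0.9]]
s = [1.0, -1.0]
y = mat_vec(H, s)                       # noiseless received vector
x_hat = mat_vec(mmse_equalizer(H, 0.01), y)
bits = [1.0 if v > 0 else -1.0 for v in x_hat]
```

The σ²I term is what distinguishes MMSE from plain zero forcing: it keeps the inverse well conditioned and limits noise enhancement on near-singular channels.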

Monday, August 10 14:30 - 17:30 (Asia/Kolkata)

T1: Tutorial -1: Antennas for Millimeter-wave based Wireless Communications

Dr. T Rama Rao (SRM University, India)
Room: 308

In the coming years, New Generation Wireless Networks (NGWN) are expected to provide new paradigms over existing networks, with a wide variety of wireless applications in all walks of life, and aim to carry increasing multimedia traffic for billions of wireless devices (WDs) with low latency, wider coverage, flexible operation across different radio access technologies, reliable performance, and low-cost, energy-efficient designs and architectures. In this perspective, progress in computational electromagnetics (EM) and advances in utilizing millimeter-wave (MmW) radio frequency (RF) bands are propelling these bands as candidates for high-resolution, high-speed wireless systems and applications. Antenna performance plays a critical role in determining the communication range and quality of service for WDs, especially at MmWs. As devices must increasingly support the challenging requirements of wide bandwidth and small product size, antenna design becomes crucial to the success of new wireless products and applications. This tutorial offers participants technical insights into the vital aspects of MmW antenna design from an academic and practical perspective, particularly in the unlicensed 60 GHz band. It covers fundamental theory, concepts and definitions of the features, specifications and performance of different types of commonly used and advanced antennas utilizing MmWs, including SAR distributions. Practical implementation strategies and approaches to overall product design for optimum antenna performance will also be presented.

Monday, August 10 14:30 - 19:30 (Asia/Kolkata)

S6: S6-Third International Symposium on Control, Automation, Industrial Informatics and Smart Grid (ICAIS'15) - I

Room: 309
Chair: Ramesh Kumar P (Government Engineering College Thrissur, India & Kerala Government, unknown)
S6.1 Direct Torque Control of Induction Motor Drive with Flux Optimization
Hadhiq Khan (University of Kashmir, India); Shoeb Hussain and Mohammad Abid Bazaz (National Institute of Technology Srinagar, India)

A MATLAB/SIMULINK implementation of the Direct Torque Control scheme for induction motors is presented in this paper. Direct Torque Control (DTC) is an advanced control technique with a fast, dynamic torque response. The scheme is intuitive and easy to understand, as a modular approach is followed. The computed values of the stator flux and electromagnetic torque are compared with their reference values, and the digital outputs of the comparators are fed to hysteresis-type controllers. To limit the flux and torque within predefined bands, the hysteresis controllers generate the necessary control signals. The two hysteresis controller outputs, together with the location of the stator flux space vector in the two-dimensional complex plane, determine the state of the Voltage Source Inverter (VSI). The output of the VSI is fed to the induction motor model. A flux optimization algorithm is added to the scheme to achieve maximum efficiency. The output torque and flux of the machine in the two schemes are presented and compared.
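The hysteresis-and-lookup logic described above can be sketched compactly. The two-level flux comparator, three-level torque comparator and switching table below follow the common Takahashi-style textbook convention; the band handling and sector numbering are illustrative assumptions, not the paper's exact implementation.

```python
def flux_comparator(err, band, prev):
    """Two-level hysteresis: 1 = increase flux, 0 = decrease flux."""
    if err > band:
        return 1
    if err < -band:
        return 0
    return prev                # inside the band: hold previous output

def torque_comparator(err, band):
    """Three-level hysteresis: +1 increase, -1 decrease, 0 apply zero vector."""
    if err > band:
        return 1
    if err < -band:
        return -1
    return 0

def select_vector(sector, dflux, dtorque):
    """Takahashi-style table: pick active vector V1..V6 (0 = zero vector)
    from the flux/torque demands and the stator-flux sector (1..6)."""
    if dtorque == 0:
        return 0
    # flux up: step one vector ahead/behind; flux down: step two
    offset = dtorque if dflux else 2 * dtorque
    return (sector - 1 + offset) % 6 + 1
```

For example, in sector 1 with both flux and torque to be increased, V2 is applied; with flux to be decreased and torque increased, V3.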

S6.2 Analysis of Power Transfer Capability of a Long Transmission Line using FACTS Devices
Soubhik Bagchi (Budge Budge Institute of Technology, India); Dr Rupam Bhaduri (Dayananda Sagar Institutions- 2nd Campus (DSATM), Bangalore, India); Priyanath Das (National Institute of Technology, India); Subrata Banerjee (NIT, Durgapur, India)

This paper gives insight into the improvement of power transfer capability by analyzing and comparing several FACTS devices, namely the Static Var Compensator (SVC), Static Synchronous Compensator (STATCOM) and Unified Power Flow Controller (UPFC), in a long transmission line. These devices have been placed at different locations: the sending end, the middle and the receiving end of the transmission line. The suitable location and the performance of each model have been analyzed. First, the real and reactive power profiles of the uncompensated system were studied and the corresponding results produced. These results were then compared with those obtained after compensating the system with the aforementioned FACTS devices. The overall analysis indicates that the best power transfer (87.24%) is achieved when the SVC is connected at the middle of the transmission line. All simulations were carried out in MATLAB/SIMULINK.

S6.3 Dynamics and Control System Design of a Polar Low-Earth Orbit Nano-Satellite 'Parikshit'
Raunaq Rakesh, Smit Kamal, Carina Pereira, Naman Saxena, Revathi Ravula and Faraz Haider (Manipal Institute of Technology, India); Sidhharth Mayya (Georgia Institute of Technology); Karun Potty (Manipal Institute of Technology, India)

This paper describes the control systems employed by the nano-satellite 'Parikshit' to satisfy the payload requirement of limiting the angular velocity of the satellite to within 1 deg/sec. Also described is the satellite kinematics and dynamics model of the space environment, used to test and evaluate the controller's ability to meet the desired stability. The entire dynamics of the satellite have been modelled in terms of quaternions, and their advantages over Euler angles are discussed. A concise description of the numerical integration techniques and step-size determination is also given. To simulate the environmental conditions, the influential disturbance torques and their effects have been evaluated and estimated. The results from the simulations used to test the attitude controller in the space environment have been analysed and used to evaluate the controller's performance. All graphs pertaining to the simulations are analysed and discussed.

S6.4 Super Twisting Controller for the Position Control of Stewart Platform
Ramesh Kumar P (Government Engineering College Thrissur, India & Kerala Government, unknown); Bijnan Bandyopadhyay (Indian Institute of Technology Bombay, India)

This paper proposes a super-twisting-algorithm-based control strategy for the position control of a Stewart platform. The conventional sliding mode controller makes use of discontinuous feedback control, hence the control effort is discontinuous in nature. This discontinuity induces dangerous high-frequency vibrations, called chattering, which are highly undesirable in practical applications. The proposed super twisting controller is continuous in nature, and chattering is eliminated. The desired position of the platform is achieved using the proposed method even in the presence of matched disturbances. The effectiveness of the proposed controller is demonstrated through simulation results.
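The continuous super-twisting law has the standard form u = -k₁|s|^{1/2} sign(s) + v, with v̇ = -k₂ sign(s). Below is a toy Euler simulation on first-order sliding dynamics ṡ = u + d(t) with a bounded matched disturbance; the gains, disturbance and step size are illustrative assumptions, not the paper's Stewart-platform model.

```python
import math

def sign(x):
    return (x > 0) - (x < 0)

def super_twisting(k1=2.0, k2=1.5, dt=1e-3, T=10.0):
    """Drive s to zero in finite time with a continuous control signal,
    despite the matched disturbance d(t) = 0.5*sin(2t) (|d'(t)| <= 1)."""
    s, v = 1.0, 0.0
    hist = []
    t = 0.0
    while t < T:
        d = 0.5 * math.sin(2.0 * t)
        u = -k1 * math.sqrt(abs(s)) * sign(s) + v   # continuous control
        v += -k2 * sign(s) * dt                     # integral (twisting) term
        s += (u + d) * dt                           # sliding-variable dynamics
        hist.append(s)
        t += dt
    return hist

trace = super_twisting()
```

Unlike first-order sliding mode, u itself contains no switching term: the discontinuity is buried under an integrator, which is exactly what removes chattering.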

S6.5 Control System Design to Counter the Effect of Tether Ejection System on a Nano-satellite
Raunaq Rakesh, Smit Kamal, Carina Pereira, Naman Saxena, Revathi Ravula, Faraz Haider, Sidhharth Mayya and Karun Potty (Manipal Institute of Technology, India)

This paper describes the nominal control system design to counter the effect of deploying an electro-dynamic tether on a polar low-Earth orbit nano-satellite. The complete tether assembly, including the deployment mechanism, choice of material, length of the tether and preferred door axis for deployment, is explained in detail. The effect of tether deployment on the satellite is analysed, along with the controller's ability to stabilise the satellite thereafter. Also described is the satellite kinematics and dynamics model of the space environment, used to test and evaluate the controller's ability to meet the desired stability. The results from the simulations used to test the attitude controller in the space environment have been analysed and used to evaluate the controller's performance. All graphs pertaining to the simulations are analysed and discussed.

S6.6 An Empirical Analysis of Implicit Trust Metrics in Recommender Systems
Swati Gupta (NSIT, DELHI, India); Sushama Nagpal (Netaji Subhas Institute of Technology & University of Delhi, India)

A recommender system is an intelligent solution to the information overload problem. Classical collaborative-filtering-based recommender systems suffer from the cold start and data sparsity problems. Incorporating trust into classical recommender systems has the potential to improve their overall performance. Trust has been researched extensively, and its influence is manifest in recommender systems. Because explicit trust information is rarely available, various implicit trust metrics have been developed to deduce trust from users' online behavior. In this paper, we conduct an empirical study of six implicit trust metrics on two different real-world datasets, and perform a comparative analysis of these metrics against classical user-based collaborative filtering.

S6.7 Frequency estimator to improve short range accuracy in FMCW radar
Anuja Chaudhari (Mumbai University, India); Sapna Prabhu (Fr. Conceicao Rodrigues College of Engineering, India); Raymond Pinto (SAMEER, IIT MUMBAI, India)

Frequency Modulated Continuous Wave (FMCW) radars are widely used for short- and medium-range detection. Applications of FMCW radar in which measuring short distances is critical require high accuracy. In an FMCW radar system, the received signal is a delayed version of the transmitted signal, and the measured beat frequency determines the range. The range resolution of the radar is inversely proportional to its transmitted bandwidth, so achieving better range resolution requires increasing the bandwidth, which has its own limitations. Increasing the FFT bin resolution of the beat signal instead requires a higher sampling frequency, which increases system complexity. A frequency estimator is therefore implemented to enhance the accuracy of short-range detection in FMCW radar without increasing either the bandwidth or the sampling frequency of the system. It compensates the error by calculating an offset value that yields a fine beat-frequency estimate.
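The geometry behind the beat-to-range mapping is R = c·f_b·T/(2B), and the bin-resolution limit can be softened by refining the coarse DFT peak with a fractional-bin estimator. Below is one simple such estimator (parabolic fit on the log magnitude of the three bins around the peak); this is a generic technique, not necessarily the paper's estimator, and the tone parameters are illustrative assumptions.

```python
import math

def beat_to_range(f_beat, bandwidth, t_sweep, c=3.0e8):
    """FMCW range from beat frequency: R = c * f_b * T / (2 * B)."""
    return c * f_beat * t_sweep / (2.0 * bandwidth)

def estimate_frequency(x, fs):
    """Coarse DFT peak, refined by a parabola fitted to the log magnitude
    of the three bins around the peak (fractional-bin offset)."""
    N = len(x)
    mags = []
    for k in range(N // 2):
        re = sum(x[n] * math.cos(2 * math.pi * k * n / N) for n in range(N))
        im = sum(-x[n] * math.sin(2 * math.pi * k * n / N) for n in range(N))
        mags.append(math.hypot(re, im))
    k = max(range(1, N // 2 - 1), key=lambda i: mags[i])
    a, b, c_ = (math.log(mags[k - 1]), math.log(mags[k]), math.log(mags[k + 1]))
    delta = 0.5 * (a - c_) / (a - 2.0 * b + c_)     # fractional-bin offset
    return (k + delta) * fs / N

# demo: a tone deliberately placed between DFT bins (bin width ~= 39.06 Hz)
fs = 10000.0
f_true = 1234.5
x = [math.sin(2 * math.pi * f_true * n / fs) for n in range(256)]
f_est = estimate_frequency(x, fs)
```

The refined estimate lands well inside the coarse bin, which is the effect the paper exploits: finer beat-frequency (hence range) accuracy without raising fs or B.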

S6.8 Observations from Study of Pre-Islanding Behaviour in a Solar PV System Connected to a Distribution Network
Shashank Vyas (The Energy and Resources Institute, India); Rajesh Kumar (Malaviya National Institute of Technology, India); Rajesh Kavasseri (North Dakota State University, USA)

Unintentional islanding is increasingly relevant today due to the rising penetration of solar photovoltaic systems on distribution grids. As the smart grid matures, better techniques for mitigating this problem must evolve. Commercially available, inverter-resident islanding-detection methods are plentiful, and many others are being developed at the research level. This work is oriented towards understanding the distributed-generation system dynamics that build up to an island-forming situation. It is felt that inverter dynamics, along with transitory power system events, can provide a clue to an impending unintentional islanding event. Approaches resulting from such a study can equip the system with predictive intelligence regarding the possibility of an unwanted islanding condition. The study also yields interesting observations regarding the impact of capacitive reactive power present on the network, in excess of the load requirement, on the grid-side current in the load-carrying phase of the islandable feeder section. Arcing current inside the island and distortion in this current before islanding are the recorded effects.

S6.9 Multilevel Converter for Excitation of Underwater Transducers
Anand Sreekumar (Cochin University of Science and Technology, India); Panchalai Vadamalai (Scientist, India); Bineesh P Chacko (NPOL, Kochi, India); Preethi Thekkath (TocH Institute of Science and Technology, India)

Switched-mode class-D power amplifiers are more efficient than linear power amplifiers and are compact in nature. A digitally controlled class-D power amplifier provides the additional advantage of programmability. However, the use of class-D power amplifiers is limited to frequencies below 15 kHz. Class-S power amplifiers are an option in the 15 kHz to 100 kHz range, but at the cost of the quality of the sine waveform. The proposed system aims to design and implement a class-S multilevel converter for driving underwater acoustic transducers with improved waveform quality. Two full-bridge converters supplied from a single source and operated in class-S mode are connected in series to form a multilevel converter. In the class-S technique the switches are operated at the modulating frequency, where only two switching transitions are present, thereby increasing the efficiency of the system. The gating signals for this converter are generated using a low-cost digital controller to provide a staircase waveform (a close approximation of the input sine signal) at the output of the multilevel converter. Simulation studies followed by prototype development and analysis are planned in this project, and the advantages and disadvantages are discussed.

S6.10 A Key Based Security Mechanism for Payment Status in Cloud Service Access Control System
Gagandeep Kaur and Arvinder Kaur (Chandigarh University, India)

Cloud computing is a widely accepted, dominant paradigm that provides resources and cost-effective software services to clients on demand, such as Software as a Service, Platform as a Service and Infrastructure as a Service. Although these services provide many benefits to clients, there remains a need to secure data against unauthorized access to services. Access control is an important aspect of cloud computing for enhancing system security. This paper focuses on securing the pay-as-you-go model used by the cloud service access control (CSAC) model, using a combination of the RSA and AES algorithms to ensure that only authorized users access services from the cloud. It also provides a mechanism to detect conflicts between policies and to inhibit access when a conflict arises in the cloud computing environment. The paper gives insight into how the proposed access control model enhances the security of service access in the cloud. With this aim, a secure access control model that allows only legal access to services is presented.

S6.11 Error Propagation in Linear and Non-Linear Systems for False Data Injection Attack
Sindhuja Mangalwedekar (FRCRCE, Mumbai University, India); Sunil Surve (FrCRCE, India); Harivittal Mangalvedekar (VJTI-Mumbai University, India)

Due to technological advancement, the integration of cyber systems with the physical power system has increased security concerns. Cyber security issues and the impact of various attacks have become an integral concern for the smart grid. False Data Injection Attack (FDIA) is one of many ways to compromise a system: measurements are biased by the deliberate addition of errors, which in turn affect the state variables of the system. This paper discusses the impact of FDIA on the smart grid, analyses its effect on the non-linear state estimator, and compares that impact with the impact on the linear state estimator. The comparison is explained using propagation of error.

S6.12 A Node Scheduling Approach in Community Based Routing in Social Delay Tolerant Networks
Nikhil Gondaliya and Mehul B Shah (Gujarat Technological University, India); Dhaval Kathiriya (Anand Agriculture University, India)

Delay Tolerant Networks (DTNs) provide communication to groups of people carrying mobile devices such as smartphones, laptops and palmtops. By exploiting human characteristics, it becomes possible to infer social properties that are useful in selecting an appropriate relay node. Many routing protocols have demonstrated the value of these properties (e.g., community and centrality) and improved performance in terms of delivery ratio and delivery delay. The performance of community-based routing protocols depends mainly on forming the correct community structure. The admission criterion for adding a node to a local community is based on the aggregated contact duration or contact frequency between a node pair: until this exceeds a predefined threshold, the nodes cannot be added to each other's local communities. Moreover, it is difficult to predict a suitable threshold in advance for any dataset. By analyzing the community structure at different times, we find that community-detection similarity is very low in the early phase of the simulation. Within the network, some nodes, called hub or central nodes, are more popular and interact with more nodes than others; these nodes play an important role in improving the delivery ratio. Hence, we propose a simple approach that schedules nodes for message transmission based on two centrality measures, betweenness and degree, in the absence of community information about the message's destination. We validate the proposed scheme using real traces from two different environments, a conference and a campus, and compare it with an existing scheme. The simulation results show a higher delivery ratio and a lower average hop count for the traces of both environments.
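The fallback scheduling idea — rank encountered nodes by centrality when nothing is known about the destination's community — can be sketched with degree centrality computed from observed contacts. The contact-log format below is an illustrative assumption, and the paper additionally uses betweenness centrality.

```python
def degree_centrality(contacts):
    """contacts: iterable of (u, v) encounter pairs; the degree of a node
    is the number of distinct peers it has met."""
    peers = {}
    for u, v in contacts:
        peers.setdefault(u, set()).add(v)
        peers.setdefault(v, set()).add(u)
    return {node: len(s) for node, s in peers.items()}

def schedule_relays(encountered, contacts):
    """Order the currently encountered nodes for message hand-off by
    descending degree centrality (hub nodes first)."""
    deg = degree_centrality(contacts)
    return sorted(encountered, key=lambda n: deg.get(n, 0), reverse=True)

# toy contact log: node "a" has met three distinct peers, "d" only one
log = [("a", "b"), ("a", "c"), ("a", "d"), ("b", "c")]
order = schedule_relays(["d", "b", "a"], log)
```

Hub nodes are offered the message first, which is what raises delivery ratio when community membership of the destination is still unknown.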

S6.13 Onboard processor validation for space applications
Savitha A (ISAC & ISRO, India)

ISRO Satellite Centre (ISAC) is the lead centre of the Indian Space Research Organisation for the development and operationalisation of satellites for communication, navigation and remote sensing applications. A well-orchestrated, highly advanced embedded system plays a major role in the success of all such satellites. The Dynex MAR31750 was the processor used for all our GSAT, INSAT and IRS satellites; we have now moved to an HX1750-processor-based CPU designed for the standard AOCE of the GSAT 17/18 programmes. The validation of this processor, the result analysis of module time measurement, and its automation are presented in this paper. Special attention is given to verifying and comparing the results of the Dynex and HX1750 processors. The onboard computational load increases with new requirements and with the need to incorporate onboard autonomy. To accomplish all the computational requirements, a time-segment-based design is adopted; hence, timing measurement of modules plays a major role in evaluating the processor. This paper mainly presents the realization of the same.

S2: S2-Biomedical Computing/Biomedical Imaging and Instrumentation in Healthcare

Room: 310
Chairs: Jessy John (Indian Institute of Technology Bombay, India), Kiran Sree Pokkuluri, KSP (Sri Vishnu Engineering College for Women, India)
S2.1 Modeling Pattern Abstraction in Cerebellum and Estimation of Optimal Storage Capacity
Asha Vijayan (Amrita University, India); Chaitanya Medini (Amrita Vishwa Vidyapeetham ( Amrita University), India); Anjana Palolithazhe and Bhagyalakshmi Muralidharan (Amrita University, India); Bipin Nair (Amrita Vishwa Vidyapeetham ( Amrita University), India); Shyam Diwakar (Amrita Vishwa Vidyapeetham, India)

Precise fine-tuning of motor movements is known to be a vital function of the cerebellum and is critical for maintaining posture and balance. The Purkinje cell (PC) plays a prominent role in this fine-tuning through the association of inputs and outputs, alongside learning through error correction. Several classical studies showed that the PC exhibits perceptron-like behavior, which can be used to develop cerebellum-like neural circuits that address association and learning. With respect to its input, the PC learns motor movements through updates of synaptic weights. To understand how cerebellar circuits associate spiking information during learning, we developed a spiking neural network using the adaptive exponential integrate-and-fire (AdEx) neuron model, based on a cerebellar molecular-layer perceptron-like architecture, and estimated the maximal storage capacity at the parallel fiber-PC synapse. In this study, we explored information storage in cerebellar microcircuits using this abstraction. Our simulations suggest that a perceptron mimicking PC behavior was capable of learning the output through modification via a finite-precision algorithm. The study evaluates pattern processing in cerebellar Purkinje neurons via a mathematical model that estimates storage capacity from input patterns, and indicates the role of sparse encoding by granular-layer neurons in such circuits.

S2.2 Identification of Motor Imagery Movements from EEG Signals Using Dual Tree Complex Wavelet Transform
Syed Khairul Bashar (Bangladesh University of Engineering and Technology, Bangladesh); Ahnaf Rashik Hassan (Goran, Dhaka & Bangladesh University of Engineering & Technology (BUET), Bangladesh); Mohammed Imamul Hassan Bhuiyan (Bangladesh University of Engineering and Technology, Bangladesh)

In this paper, a Dual Tree Complex Wavelet Transform (DTCWT) domain feature extraction method is proposed to identify left- and right-hand motor imagery movements from electroencephalogram (EEG) signals. After first performing auto-correlation of the EEG signals to enhance the weak brain signals and reduce noise, the signals are decomposed into several bands of real and imaginary coefficients using the DTCWT. The energy of the coefficients from the relevant bands is extracted as a feature, and one-way ANOVA analysis, scatter plots, box plots and histograms show these features to be promising for distinguishing various kinds of EEG signals. The publicly available benchmark BCI Competition 2003 Graz motor imagery dataset is used for the experiments. Among the classifiers developed, including support vector machine (SVM), probabilistic neural network (PNN), adaptive neuro-fuzzy inference system (ANFIS) and K-nearest neighbor (KNN), the KNN classifier provides a good mean accuracy of 91.07%, which is better than several existing techniques.

S2.3 Performance Analysis of Adaptive Filtering Algorithms for Denoising of ECG signals
Nasreen Sultana (Bhoj Reddy Engineering college for women, Vinaynagar, Santoshnagar cross roads, Saidabad, Hyderabad); Yedukondalu Kamatham (CVR College of Engineering & Vastunagar, Mangalpally V, Ibrahimpatnam M R R Dist Hyderabad, India); Kinnara Bhavani (Vignan Institute of Technology and Science, India)

The electrocardiogram (ECG) helps diagnose a range of diseases, including heart arrhythmias, heart enlargement, heart inflammation (pericarditis or myocarditis) and coronary heart disease. ECG recordings are contaminated by non-stationary noise that affects the reliability of the ECG waveform. In this paper, adaptive filters for denoising the ECG signal based on the Least Mean Squares (LMS), Normalized Least Mean Squares (NLMS), Affine Projection LMS (APA-LMS) and Recursive Least Squares (RLS) algorithms are presented, with encouraging experimental results. The performance of these algorithms is compared in terms of parameters such as SNR, PSNR, MSE and SD. To validate the proposed methods, real recorded data from the MIT-BIH database is used. The RLS algorithm is found to exhibit lower MSE and higher SNR than the other algorithms; the results therefore demonstrate the superior performance of the adaptive RLS filter for denoising ECG signals.
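Among the compared algorithms, the NLMS update normalizes the step size by the instantaneous input power: w ← w + (μ/(ε+‖x‖²)) e x. Below is a toy adaptive-noise-canceller sketch of that update; the synthetic pulse-train "ECG" and 50 Hz mains interference are illustrative assumptions, not MIT-BIH data.

```python
import math

def nlms_denoise(primary, reference, L=8, mu=0.05, eps=1e-6):
    """Adaptive noise canceller: the filter learns the path from the noise
    reference to the noise component of the primary input; the error
    e(n) = primary - estimated noise is the denoised output."""
    w = [0.0] * L
    out = []
    for n in range(len(primary)):
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(L)]
        y = sum(wk * xk for wk, xk in zip(w, x))
        e = primary[n] - y
        norm = eps + sum(xk * xk for xk in x)        # instantaneous power
        w = [wk + (mu / norm) * e * xk for wk, xk in zip(w, x)]
        out.append(e)
    return out

# toy data: synthetic pulse train ("ECG") plus 50 Hz mains pickup
fs = 500.0
N = 3000
clean = [math.exp(-((n % 97) - 50) ** 2 / 20.0) for n in range(N)]
ref = [math.sin(2 * math.pi * 50.0 * n / fs) for n in range(N)]
noise = [0.8 * ref[n] + (0.3 * ref[n - 1] if n > 0 else 0.0) for n in range(N)]
primary = [c + v for c, v in zip(clean, noise)]
denoised = nlms_denoise(primary, ref)
```

The normalization makes the convergence speed insensitive to the reference signal's power, which is the practical advantage of NLMS over plain LMS.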

S2.4 Clustering of Dynamic Functional Connectivity Features Obtained from Functional Magnetic Resonance Imaging Data
Vajarala Ashikh (Dayananda Sagar College of Engineering, India); Gopikrishna D and Rangaprakash D (Auburn University, USA); Narayana Dutt D (Dayananda Sagar College of Engineering, India)

Clustering is one of the most important methods for organizing a database into groups. In this paper, the Ordering Points To Identify the Clustering Structure (OPTICS) algorithm is used to cluster functional Magnetic Resonance Imaging (fMRI) data. Dynamic functional connectivity features computed on fMRI data (ADNI database) from subjects with early mild cognitive impairment (E-MCI), late mild cognitive impairment (L-MCI), Alzheimer's disease and healthy controls are used for the study. OPTICS is observed to cluster the subjects into the four inherent groups with a very high success rate. This result suggests applications in determining latent groups indicative of various brain disorders.

S2.5 Brain Computer Interface Based Assistive Device
Ramesh C R (MHRD, National Institute of Technology & Calicut, India); Lyla B Das (NITC, India)

The human brain is one of the most complex structures in the universe, and numerous systematic experiments and studies have been carried out to analyze its characteristics. As a consequence of the availability of high-speed electronics and efficient algorithms, various highly efficient Brain Computer Interface (BCI) devices have been developed. The dry-electrode BCI device implemented here falls into the non-invasive category and assists a person with a speech impediment by artificially restoring the ability to call a person in on-demand situations. The brain electrophysiological signals used by BCIs can currently be divided into three categories: specific frequency components of the electroencephalogram produced spontaneously during thinking, such as alpha and beta waves; evoked event-related potentials (ERPs), i.e., neural electrical activity of the cerebral cortex induced by a specific sensory stimulus or event; and the electrical activity of groups of neurons acquired directly from detection electrodes implanted in a specific region of the cerebral cortex. Here, the advantages of the ERP signal are exploited to implement the various functions of the BCI-based assistive device.

S2.6 Low Power Amplifier For Biopotential Signal Acquisition System
K Pratyusha, Sanjeev Kumar and Anita Kumari (Lovely Professional University, India)

In the biomedical field there is a great need for VLSI design of integrated bioamplifier circuits that amplify low-amplitude, low-frequency signals. Due to the heavy demand for implantable and wearable devices that process biopotential signals, a low-power bioamplifier is proposed in this paper, and the dependence of bioamplifier power on different current-mirror configurations is discussed. In the proposed bioamplifier architecture, a cascode-configured OTA is used to increase the gain and lower the power consumption of the circuit. All transistors in the OTA operate in the weak inversion region, consuming nano-amperes of current. The proposed bioamplifier is designed in a 180 nm CMOS process with a supply voltage of 1.8 V. Simulation results show that the bioamplifier operates with a low power dissipation of 6.25 uW and a mid-band gain of 45.38 dB, and passes signals in the frequency range of 5.02 Hz to 2.927 kHz.

S2.7 Brain Tumor Extraction from MRI Brain Images Using Marker Based Watershed Algorithm
Benson C C and Lajish Lajish V L (University of Calicut, India); Kumar Rajamani (Robert Bosch Engineering and Business Solutions Limited, India)

The human brain is the most complex and mysterious part of the human body; many complex functions are controlled by the brain. Brain imaging is a widely applicable method for diagnosing many brain abnormalities such as brain tumor, stroke and paralysis. Magnetic Resonance Imaging (MRI) is one of the methods used for brain imaging and allows internal structures to be analysed in detail. A brain tumor is an abnormal mass of tissue in which cells grow and multiply uncontrollably, seemingly unchecked by the mechanisms that control normal cells. The aim of this paper is to extract the tumor region from brain MRI images using a watershed algorithm based on different feature combinations such as colour, edge, orientation and texture. The results are compared with ground-truth images. A marker-based watershed algorithm is used to extract the tumor region, and the Dice and Tanimoto coefficients are used to compare the results. The method proposed here is found to produce promising results.

S2.8 Historical Document Enhancement using Shearlet Transform and Mathematical Morphological Operations
Ranganatha D (VTU, India); Ganga Holi (Global Academy of Technology)

Document image binarization is the process of converting a document image into a binary image containing text as the foreground and plain white as the background, or vice versa. Characters must be extracted from the binarized image in order to recognize them, so the performance of a character recognition system depends entirely on the binarization quality. In this paper, an efficient hybrid binarization method is proposed to binarize degraded document images. The proposed method is based on Adaptive Histogram Equalization (AHE), the Shearlet Transform (ST) and simple morphological reconstruction techniques, and proves better at enhancing degraded document images. The proposed technique tolerates the high inter- and intra-intensity variation found in degraded document images.

S2.9 Parallelization of Searching and Mining Time Series Data using Dynamic Time Warping
Ahmed Shabib (PES Institute of Technology, India); Anish Narang (PESIT, India); Chaitra Prasad, Madhura Das, Rachita Pradeep, Varun Shenoy and Prafullata Auradkar (PES Institute of Technology, India); Vignesh TS (GE Global Research, India); Dinkar Sitaram (Pes University, India)

Among the various algorithms for data mining, the UCR Dynamic Time Warping (DTW) suite provides a way to search and mine large time-series datasets more efficiently than the previously existing method of using Euclidean distance. The UCR DTW algorithm was developed for a single CPU core. In this paper, we consider two methods of parallelizing the DTW algorithm: first a multi-core implementation, followed by a cluster implementation using Spark. From the multi-core implementation, we achieve nearly linear speedup. In the Spark implementation, we find that a straightforward port of DTW does not perform well. This is because a major step in DTW is the parallel computation of a lower bound, a paradigm not well supported by Spark, which offers (i) broadcast variables, i.e., broadcasts of read-only variables, and (ii) accumulator variables, which represent distributed sums. We show how to compute distributed lower bounds efficiently in Spark and achieve nearly linear speedup with DTW in the Spark computation as well.
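The pruning logic that both parallel versions build on pairs a cheap envelope lower bound with full banded DTW: candidates whose lower bound already exceeds the best distance found so far are skipped without running DTW. Below is a minimal single-core sketch of that idea (LB_Keogh, as in the UCR suite); the signals and band width are illustrative assumptions.

```python
import math

def dtw(a, b, w):
    """Full DTW distance with a Sakoe-Chiba band of half-width w."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - w), min(m, i + w) + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def lb_keogh(query, cand, w):
    """LB_Keogh: squared distance from cand to the envelope of query.
    Cheap to compute and never larger than the true DTW distance."""
    lb = 0.0
    for i, c in enumerate(cand):
        window = query[max(0, i - w):i + w + 1]
        lo, hi = min(window), max(window)
        if c > hi:
            lb += (c - hi) ** 2
        elif c < lo:
            lb += (c - lo) ** 2
    return lb

def best_match(query, candidates, w=5):
    """UCR-style search: skip the full DTW whenever the lower bound
    already exceeds the best distance found so far."""
    best, best_d = None, float("inf")
    for idx, cand in enumerate(candidates):
        if lb_keogh(query, cand, w) >= best_d:
            continue                      # pruned without running DTW
        d = dtw(query, cand, w)
        if d < best_d:
            best, best_d = idx, d
    return best, best_d

query = [math.sin(n / 8.0) for n in range(64)]
shifted = [math.sin((n + 3) / 8.0) for n in range(64)]
ramp = [0.05 * n for n in range(64)]
idx, dist = best_match(query, [ramp, shifted])
```

It is this lower-bound step, trivially data-parallel on a multi-core machine, that maps awkwardly onto Spark's broadcast/accumulator model.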

S2.10 Improve the channel performance of Wireless Multimedia Sensor Network using MIMO properties
Arjav Bavarva (Marwadi University, India); Preeti Vinayakray-Jani (Dhirubhai Ambani Institute of Information and Communication Technology, India)

Wireless Multimedia Sensor Networks (WMSNs) are designed to transmit audio and video streams, still images and scalar data. Multimedia transmission over wireless sensor networks has many killer applications, such as video surveillance, object tracking, telemedicine, theft control and traffic monitoring. Researchers face many challenges in realizing these applications, including higher data rates, lower energy consumption, reliability, signal detection and estimation, uncertainty in network topology, Quality of Service and security. Multiple Node Multiple Input Multiple Output (MN-MIMO) properties have been used to improve system performance in terms of data rate, energy consumption and channel capacity. In this paper, a mathematical model is presented to calculate and analyze various parameters of the network, such as SNR, channel capacity and data rate. Simulation results demonstrate the effect of various channel models in a deep-fading environment, and the proposed channel model performs better for WMSNs than a non-adaptive system in terms of Bit Error Rate.

S2.11 The Effect of DC Coefficient on mMFCC and mIMFCC for Robust Speaker Recognition
Diksha Sharma and Israj Ali (KIIT University, India)

In a Speaker Recognition (SR) system, feature extraction is one of the crucial steps, where speaker-specific information is extracted. The state-of-the-art algorithms for this purpose are the Mel Frequency Cepstral Coefficient (MFCC) and its complementary feature, the Inverted Mel Frequency Cepstral Coefficient (IMFCC). MFCC is based on the mel scale and IMFCC on the inverted mel (imel) scale. We have proposed two further sets of features, mMFCC and mIMFCC. In the state-of-the-art system, the DC coefficient of the DCT is discarded from the feature set. In this paper, the DC coefficient and its effect on the recognition accuracy of MFCC-IMFCC, as well as mMFCC-mIMFCC, is studied. This has been verified on two different standard databases: YOHO for clean speech and POLYCOST for telephone-based speech. The recognition accuracy of the proposed features is better than that of their respective baseline features, both when the DC coefficient is included and when it is not.
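To make the "DC coefficient" concrete: in MFCC extraction, a type-II DCT is applied to the log filter-bank energies, and coefficient c[0] (the DC term) summarises overall log-energy. The sketch below shows only that final step under simplified assumptions (no framing or mel filter banks; function names are illustrative, not from the paper).

```python
import math

def dct(x):
    """Type-II DCT, as applied to log filter-bank energies in MFCC."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for n in range(N))
            for k in range(N)]

def cepstral_features(log_energies, keep_dc=False):
    """Return cepstral coefficients, optionally retaining the DC term c[0],
    which is conventionally discarded from the feature set."""
    c = dct(log_energies)
    return c if keep_dc else c[1:]
```

For a constant energy vector, all information sits in c[0], which is exactly why including or excluding it can change recognition behaviour.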

S2.12 Exploration of carboxyl functionalized eugenol for preventing metastasis of pancreatic cancer
Arun PS, Lekshmysre Nair and Karthik Manogaran (IIITMK, India); Vidya Vinodini M D and Manojkumar TK (IIITM-K, India)

An in silico exploration has been carried out on carboxyl-functionalized eugenol to explore its activity towards preventing the interaction with MUC-16, so as to evade further metastasis and increased cell motility in pancreatic cancer (PC). The carboxyl-functionalized eugenol was tailored and explored on the basis of HOMO-LUMO analysis using DFT, a docking study and ADME analysis.

S2.13 Pyranone-Benzene complexes as potential nano-flippers: A DFT study
Bindu P Nair (Sir Syed College, Thaliparamba, India); Manojkumar TK (IIITM-K, India); Sreedhar KM (Amrita Vishwa Vidyapeetham University, Amritapuri Campus, India); Mohamed Asraf and Zeinul Hukuman (Sir Syed College, Thaliparamba, India)

Computational studies were carried out on the Pyranone-Benzene complex in different charged states to study the properties of the complexes and their preferred orientation in the neutral and anionic states. M05/6-31+G* studies indicate that the orientations of the neutral and anionic molecules are entirely different, and this property can be used for designing novel nano-mechanical devices.

S8: S8-Multimedia Security and Forensics

Room: 310
Chairs: Ajinkya S. Deshmukh (Uurmi System Pvt. Ltd., India), Ram Ratan (SAG, DRDO, India)
S8.1 Multi-band Sum of Spectrogram based Audio Fingerprinting of Indian film songs for Multi-lingual Song Retrieval
Sriranjani Seetharaman (Indian Institute of Technology, India); Kannan Karthik (IIT Guwahati, India); Prabin Kumar Bora (Indian Institute of Technology Guwahati, India); V Abdulkareem (College of Engineering, Cherthala, India)

Film music compositions are highly diversified, exhibiting not just changes in background scores and singers' voices; even the lyrical embellishments are morphed into different languages to suit regional audiences. Given this diversified prevalence amongst recorded film music, retrieval becomes extremely challenging. In this paper we propose an approach based on a multi-band sum of the spectrogram, executing a delicate tradeoff: tolerance to the pitch jitters incurred by lyrical and singer-voice changes while keeping the melodic signature intact. The top-3 retrieval accuracy for the multi-band sum of spectrogram has been found to be around 91% for an STFT window size of 128 ms.
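The core fingerprinting idea can be sketched in a few lines: collapse each STFT frame into a small number of per-band sums, so fine pitch jitter within a band is averaged away while the coarse melodic envelope survives. This is a simplified illustration under assumed inputs (a precomputed magnitude spectrogram), not the paper's exact pipeline.

```python
def band_sums(spectrogram, n_bands):
    """Collapse each STFT frame (a list of bin magnitudes) into n_bands
    per-band sums; the sequence of these small vectors acts as a
    jitter-tolerant fingerprint of the melody."""
    fingerprint = []
    for frame in spectrogram:
        size = len(frame) // n_bands
        fingerprint.append([sum(frame[b * size:(b + 1) * size])
                            for b in range(n_bands)])
    return fingerprint
```

Two renditions of the same tune in different languages would then be compared by a distance between their band-sum sequences rather than between raw spectra.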

S8.2 Improving Performance Analysis of Multimedia Wireless Sensor Network: A Survey
Budhewar Anupama Shankarrao (SGGSIE&T, India); Ravindra Thool (SGGSIE&T, Vishnupuri Nanded, India)

In recent times, wireless sensor networks (WSNs) have become one of the most interesting networking technologies, since they can be deployed without communication infrastructure. Wireless sensor networks face certain limitations, such as power, data redundancy and high bandwidth requirements, when used for multimedia data. Researchers have focused on video encoding at the multimedia sensor side, applying real-time compression to the video data at a base station. This analytical study examines how the size of the video affects bandwidth and time. These networks now support a wide range of applications, and real-time multimedia applications have rigorous requirements for delay and loss during network transport. The objective of this paper is to present a classification of block matching algorithms and evaluation criteria for video compression. Finally, a detailed comparative study is made of the various algorithms that have been implemented for video compression, along with their advantages and disadvantages.

S8.3 Robust Audio Steganography based on Advanced Encryption Standards in Temporal Domain
Aniruddha Kanhe (National Institute of Technology Puducherry Karaikal, India); Aghila Gnanasekaran (National Institute of Technology Puducherry Karaikal); Ch. Yaswanth Sai Kiran (National Institute Of Technology Puducherry, Karaikal, India); Ch. Hanuma Ramesh, Gabbar Jadav and M. Gowtham Raj (National Institute of Technology Puducherry, Karaikal, India)

In this paper a robust audio steganography technique is proposed that randomizes and dynamically changes the embedding sequence. The Advanced Encryption Standard (AES) is used to provide additional security and robustness, and the algorithm is tested on 30 speech files. The addition of cryptography to steganography increases robustness and introduces a higher level of security, since the key is required to decrypt the secret message. To evaluate the quality of the stego files, SNR and correlation coefficients are calculated, and a listening test with 10 listeners is performed to identify any perceptible change between the cover audio and the stego audio.

S8.4 An efficient shadow removal method using HSV color space for video surveillance
Shraddha Singh and Tushar Patnaik (CDAC, Noida, India)

An approach to detect and remove cast shadows of moving objects is proposed in this paper. A Gaussian mixture model with a single learning rate is used for background subtraction and modeling. An initial classification of foreground pixels into object pixels and shadow pixels is performed using the saturation property of the HSV color space. In the shadow detection step based on hue difference or brightness ratio, a mixture of two Gaussian density functions is used to model the density of the hue-difference or brightness-ratio values, with the Expectation Maximization (EM) algorithm used to estimate the Gaussian parameters. Thresholds based on the estimated parameters are then used to obtain the set of shadow pixels. A local-region-based shadow detection step uses the local brightness ratio property to obtain a further set of shadow pixels. Results of experiments performed on different scenarios show that the proposed approach is robust and accurate.
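The HSV intuition behind such shadow tests can be sketched as follows: a cast shadow darkens the value (V) channel by a bounded ratio while leaving hue and saturation close to the background. The thresholds below are illustrative defaults, not the EM-estimated ones of the paper.

```python
def is_shadow(bg_hsv, fg_hsv,
              beta_low=0.4, beta_high=0.9, sat_drop=0.1, hue_diff=0.1):
    """Classify a foreground pixel as cast shadow: brightness drops by a
    bounded ratio while hue and saturation stay close to the background.
    All channels are assumed normalized to [0, 1]."""
    bh, bs, bv = bg_hsv
    fh, fs, fv = fg_hsv
    ratio = fv / bv if bv else 0.0
    return (beta_low <= ratio <= beta_high      # darker, but not too dark
            and fs - bs <= sat_drop             # saturation does not rise
            and abs(fh - bh) <= hue_diff)       # hue roughly unchanged
```

A moving object, by contrast, typically changes hue or saturation as well as brightness, so it fails one of the three conditions.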

S8.5 Detection of Purple Fringing Aberration Patterns and its Application to Forensic Analysis of Spliced Images
Parveen Malik and Kannan Karthik (IIT Guwahati, India)

Morphing, fusion and stitching of digital photographs from multiple sources is a common problem in the recent era. While images may appear visually normal despite a splicing operation, there are domains in which a consistency or anomaly check can be performed to detect a covert digital stitching process. Most digital and low-end mobile cameras have certain intrinsic sensor aberrations, such as purple fringing (PF), seen in image regions with contrast variations and shadowing. This paper proposes an approach based on fuzzy clustering to first identify regions containing purple fringing, which is then used as a forensic tool to detect splicing operations. The accuracy of the fuzzy clustering approach is comparable with state-of-the-art PF detection methods, and the approach has been shown to remain effective through standard interpolation and stitching operations performed using ADOBE PHOTOSHOP.

S8.6 Dissolve Detection in Videos Using an Ensemble Approach
Hrishikesh Bhaumik (RCC Institute of Information Technology, Canal South Road, Beliaghata, Kolkata, India); Siddhartha Bhattacharyya and Manideepa Chakraborty (RCC Institute of Information Technology, India); Susanta Chakraborty (Indian Institute of Engineering Science and Technology, India)

Detection of shot transitions is an important preprocessing step for content based video retrieval systems. Out of the various types of transitions present in videos, dissolve detection is the most challenging one. This is due to the inherent complexity induced by the component frames making up the dissolve transition. In this work, a two-phased approach is presented for detecting the dissolve sequences. The first phase is concerned with identifying candidate dissolve sequences based on the parabolic nature of the mean fuzzy entropy data computed on the composing frames of the video. In the second phase, the candidates are filtered through multiple stages where each stage is based on a low-level feature of the video stream. The threshold in each stage is based on the data obtained for that feature from the constituent frames of the video. The final set of dissolve sequences is obtained at the end of the filtration stage. The proposed method is also able to detect the span of the dissolve sequence with an error of at most one frame. Comparisons reveal that the proposed method outperforms the state-of-the-art methods in terms of both recall and precision.

S8.7 Text Localization in Video/Scene Images using Kirsch Directional Masks
Smitha ML (KVG College of Engineering, Sullia, India); B H Shekar (Mangalore University, India)

Text plays a vital role in visual content analysis and understanding. Videos contain text with diverse patterns against complex backgrounds. In this paper, we propose an approach based on a compass operator for detecting edges. We obtain edge maps by convolving the Kirsch directional masks, along eight different directions, with the preprocessed video frame. The resultant images are binarized, and the edge maps are analyzed and concatenated to obtain a single binarized image. Further, we employ geometrical rules and shape properties to eliminate false positives. The text regions are then identified using connected component analysis, and bounding boxes are placed around the detected text regions so that the text is localized. Experimental results obtained on standard datasets reveal that the proposed method effectively detects and localizes text of various sizes, fonts and colors in videos and scene images.
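The eight Kirsch compass masks mentioned above are fixed 3x3 kernels: rotations of the border values 5, 5, 5, -3, -3, -3, -3, -3 around a zero centre. A minimal per-pixel sketch (helper names are illustrative; a real implementation would convolve whole frames):

```python
def kirsch_masks():
    """The eight 3x3 Kirsch compass masks, generated by rotating the
    border values 5,5,5,-3,-3,-3,-3,-3 around the zero centre."""
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    base = [5, 5, 5, -3, -3, -3, -3, -3]
    masks = []
    for r in range(8):
        m = [[0] * 3 for _ in range(3)]
        for i, (y, x) in enumerate(ring):
            m[y][x] = base[(i - r) % 8]
        masks.append(m)
    return masks

def kirsch_edge(image, y, x):
    """Maximum response over the eight directions at pixel (y, x)."""
    return max(sum(m[dy][dx] * image[y + dy - 1][x + dx - 1]
                   for dy in range(3) for dx in range(3))
               for m in kirsch_masks())
```

Because each mask's entries sum to zero, flat regions give zero response, while text strokes light up strongly in at least one of the eight directions.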

S8.8 Classification of Distorted Text and Speech Using Projection Pursuit Features
Rajesh Asthana, Neelam Verma and Ram Ratan (SAG, DRDO, India)

Information exchanged between two parties needs compression for efficient transmission, and encoded information gets distorted during transmission over a channel due to noise. For monitoring and analysis of such noisy traffic of an adversary over communication networks, it is required to find the type of information, whether text or speech, and then to restore it for further interpretation. Identification of text and speech helps in taking preventive measures to avoid plain communication of sensitive information. In this paper, we consider a minimum-distance-criterion-based pattern classification technique to classify distorted (noisy) encoded text and speech, using multidimensional feature vectors and their projection pursuits obtained through Sammon's and Chang's algorithms. The feature extraction technique computes the longest runs of ones in blocks of the bit-stream of noisy text and speech data. The classification results show that highly noisy text and speech can be classified with almost 100% success using Chang's projection pursuit technique.
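The feature extraction step described above (longest run of ones per block of the bit-stream) is simple enough to sketch directly; block size and function names are illustrative choices, not values from the paper.

```python
def longest_one_run(bits):
    """Length of the longest run of consecutive ones in a bit sequence."""
    best = cur = 0
    for b in bits:
        cur = cur + 1 if b == 1 else 0
        best = max(best, cur)
    return best

def run_features(bitstream, block_size):
    """Feature vector: longest run of ones in each block of the bit-stream,
    the raw input to the minimum-distance classifier."""
    return [longest_one_run(bitstream[i:i + block_size])
            for i in range(0, len(bitstream), block_size)]
```

The resulting multidimensional vectors are what Sammon's and Chang's projection pursuit algorithms then map down for classification.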

S8.9 Logarithmic Dyadic Wavelet Transform Based Face Recognition
Rajesh D S and B H Shekar (Mangalore University, India); Sharmila Kumari M (PA College of Engineering, India)

In this paper, we present a new local descriptor based on the logarithmic dyadic wavelet transform. Interest points at different scales are identified, and a region of size 40x40 is considered around each interest point to obtain a gradient-orientation histogram in the transformed space. The chi-square distance metric is used for classification. Extensive experiments have been conducted on standard face datasets such as ORL and LFW to demonstrate the suitability of the proposed descriptor for face recognition. A comparative analysis with some well-known methods is also provided to argue that the proposed method performs much better than the existing methods.

S8.10 A Coupled Chaos Based Image Encryption Scheme Using Bit Level Diffusion
Kavitha C (Anna University, Chennai, India); Devaraj P (Anna University, India)

A chaos-based image encryption scheme with bit-level operations and a nonlinear chaotic map is proposed in this paper. The bit-plane permutation and diffusion, using bits from neighboring pixels and the other two colour channels, introduce confusion and diffusion both within and between the colour components. Simulation results show that the security of the scheme is enhanced when compared with certain known existing schemes.
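To illustrate the general idea of chaos-driven bit-level permutation (using the plain logistic map as a stand-in; the paper's coupled nonlinear map and diffusion stage are not reproduced here), a keyed chaotic sequence can order the bit positions, and the inverse permutation recovers them:

```python
def logistic_sequence(x0, n, mu=3.99):
    """Chaotic keystream from the logistic map x -> mu*x*(1-x)."""
    x, xs = x0, []
    for _ in range(n):
        x = mu * x * (1 - x)
        xs.append(x)
    return xs

def permute_bits(bits, x0):
    """Permute bit positions by sorting them on the chaotic values;
    x0 acts as the secret key. Returns the scrambled bits and the order."""
    seq = logistic_sequence(x0, len(bits))
    order = sorted(range(len(bits)), key=seq.__getitem__)
    return [bits[i] for i in order], order

def unpermute_bits(scrambled, order):
    """Invert the permutation given the same key-derived order."""
    out = [0] * len(scrambled)
    for j, i in enumerate(order):
        out[i] = scrambled[j]
    return out
```

Permutation alone supplies confusion; schemes like the one above add a diffusion pass so that changing one plaintext bit alters many ciphertext bits.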

S8.11 Dual Stage Text Steganography Using Unicode Homoglyphs
Sachin Hosmani and H G Rama Bhat (National Institute of Technology Karnataka); Chandrasekaran K (National Institute of Technology Karnataka, India)

Text steganography is the hiding of text in text: a hidden text is concealed in a cover text to produce a plain-looking stego text, which is posted as a message that no one suspects of containing anything concealed. Today, text messages are a common mode of communication over the internet, associated with a huge amount of traffic. Steganography is an added layer of protection that can be used for security and privacy. In this paper, we describe a text steganography approach that provides good capacity and maintains a high difficulty of decryption. Our algorithm makes use of space manipulation, linguistic translation and Unicode homoglyphs, and our implementation is in Python. We also explain a parallel approach for hiding large hidden texts in large cover texts.
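The Unicode-homoglyph component can be sketched as follows; the three-entry glyph table is a tiny hypothetical subset (Latin letters and their visually identical Cyrillic twins), not the paper's actual tables, and the bit encoding is one obvious choice among many.

```python
# Hypothetical two-glyph alphabet: Latin letter -> Cyrillic homoglyph.
HOMOGLYPHS = {'a': '\u0430', 'e': '\u0435', 'o': '\u043e'}

def embed(cover, bits):
    """Hide one bit at each cover letter that has a homoglyph:
    bit 1 -> substitute the Cyrillic twin, bit 0 -> keep the Latin letter."""
    out, i = [], 0
    for ch in cover:
        if ch in HOMOGLYPHS and i < len(bits):
            out.append(HOMOGLYPHS[ch] if bits[i] else ch)
            i += 1
        else:
            out.append(ch)
    return ''.join(out)

def extract(stego):
    """Read back the bits from every position that could carry one."""
    inverse = {v: 1 for v in HOMOGLYPHS.values()}
    return [inverse.get(ch, 0) for ch in stego
            if ch in HOMOGLYPHS or ch in inverse]
```

The stego text renders identically to the cover text in most fonts, which is what makes the channel hard to notice by eye.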

S9: S9-Hybrid Intelligent Models and Applications

Room: 311
Chairs: Manjunath Aradhya (Sri Jayachamarajendra College of Engineering, India), Sanjaya Kumar Panda (National Institute of Technology, Warangal, India)
S9.1 Design, Simulation and Implementation of Cascaded Path Tracking Controller for a Differential Drive Mobile Robot
Vinodraj N (National Institute of Technology Calicut, India); Abraham T Mathew (NIT Calicut, India)

The problem of motion planning and control of mobile robots is a key research area in view of its relevance in applications. This paper proposes a cascaded control topology for reference path tracking of a differential drive mobile robot. Kinematic and dynamic modelling of the robot is presented. Both master and slave controllers have a PID configuration. Control parameters obtained from simulation are fine-tuned for the hardware implementation using PCB-mounted potentiometers. Simulation and experimental results demonstrate the performance and robustness of the proposed controller.

S9.2 Automated Surveillance of Computer Monitors in Labs
Khalid Babutain, Saied Alaklobi, Anwar Alghamdi and Sreela Sasi (Gannon University, USA)

Object detection and recognition are still challenging fields in computer vision. They commonly find application in surveillance systems to enhance the level of security. University computer labs may or may not have surveillance video cameras, but even where a camera is present, there may not be any automated, intelligent software system that can detect the absence of a computer monitor. In the present study, a system called Automated Surveillance of Computer Monitors in Labs (ASCML) was designed and developed for automatic detection of any such absence. This system can also detect the person responsible for removing the monitor. If this is an authorized person, the system will display their name and facial image from the database of all employees. If the person in question is unauthorized, the system will generate an alarm and will display that person's face, and an automated email will instantly be sent to the security department of the university with that facial image. The research confirms that this automated system could be used for monitoring any computer labs.

S9.3 Classification and Clustering for Neuroinformatics: Assessing the efficacy on reverse-mapped NeuroNLP data using standard ML techniques
Nidheesh Melethadathil (Amrita School of Biotechnology, Amrita Vishwa Vidyapeetham, India); Priya Chellaiah (Amrita University, India); Bipin Nair (Amrita Vishwa Vidyapeetham ( Amrita University), India); Shyam Diwakar (Amrita Vishwa Vidyapeetham, India)

Neuroinformatics Natural Language Processing (NeuroNLP) relies on clustering and classification for information categorization of biologically relevant extraction targets and for interconnections to knowledge-related patterns in event- and text-mined datasets. The accuracy of machine learning algorithms depends on the quality of the text-mined data, while efficacy relies on the context of the choice of techniques. Although the development of automated keyword extraction methods has made differences in the quality of data selection, the efficacy of Natural Language Processing (NLP) methods using verified keywords remains a challenge. In this paper, we studied the role of text classification and document clustering algorithms on datasets whose features were obtained by mapping to manually verified MeSH terms published by the National Library of Medicine (NLM). NLP data classification involved comparing 11 techniques, and unsupervised learning was performed with 8 clustering algorithms. Most classification techniques, except the meta-based algorithms stacking and vote, achieved 86% or higher training accuracy. Test accuracy was high (≥95%), probably due to the limited test dataset. Logistic Model Trees had a 30-fold higher runtime compared to other classification algorithms, including Naïve Bayes, AdaBoost and the Hoeffding tree. The grouped error rate in clustering was 2-16%. Runtime-wise, clustering was faster than classification on the MeSH-mapped NLP data, suggesting that clustering methods are adequate for Medline-related datasets and text-mining big data analytic systems.

S9.4 Estimator Based Controllers for Hybrid Systems with Uncertainties - An Experimental Validation
Shijoh Vellayikot (NIT Calicut, India); Mathew Varghese Vaidyan (National Institute of Technology Calicut, India)

Extended Kalman Filter (EKF) and Unscented Kalman Filter (UKF) based controllers for a class of hybrid systems, namely autonomous hybrid systems (AHS), are proposed in this work. The stability of the performance of the EKF- and UKF-based controllers was analyzed using an experimental setup of a hybrid three-tank system under a variety of real-time uncertainties and operating conditions, such as servo-regulatory operations, process-model parameter mismatch, initial condition variations and some hand-valve faults. A quantitative comparison of the performance was made in terms of the integral square error criterion and control signal generation time. The results confirm the efficacy and robustness of the controllers under the considered operating conditions.

S9.5 An Adaptive Method for Mining Frequent Itemsets Efficiently: An Improved Header Tree Method
Jamsheela Nazeer (University of Kannur, India); Raju G (Kannur University, India)

Data mining has become an important field and has been applied extensively across many different areas. Mining frequent itemsets from a transaction database is crucial for mining association rules. The FP-growth algorithm has been widely used for frequent pattern mining; it is one of the most important algorithms proposed to efficiently mine association rules, because it can dramatically improve performance compared to the Apriori algorithm. Many investigations have shown that the FP-growth method outperforms Apriori-like candidate generation. The performance of the FP-growth method depends on many factors: the data structures, recursive creation of pattern trees, searching, sorting, insertion and more. All algorithms that use an FP-tree employ a header table with sorted items. The header table is an important data structure in the mining process, since the main data structure (the frequent trees) is created with its use. In this paper we propose a new Binary Search Header Tree (BSHT) and an Improved Header Tree mining algorithm (IHT-growth) to improve the performance of frequent pattern mining. Experimental results show that mining with BSHT is efficient for frequent pattern mining.

S9.6 Hybrid Carrier Based Space Vector Modulation for PV Fed Asymmetric Cascaded Multilevel Inverter
Sujitha Nachinarkiniyan and Krithiga S. (VIT University, Chennai)

In this paper, a hybrid carrier based space vector modulation technique (HCBSVM) is designed for an asymmetric cascaded multilevel inverter (CMLI). The asymmetric CMLI is used in high-power, medium-voltage applications. In the proposed system, a photovoltaic (PV) array and a dc source are used as the asymmetric sources. The proposed HCBSVM scheme reduces the harmonics in the output voltage of the CMLI. The cascaded inverter has been analysed for both symmetric and asymmetric sources. Simulation studies of the proposed system have been carried out in the MATLAB/Simulink environment, and the results of the symmetric and asymmetric inverter topologies are compared.

S9.7 Novel Leases for IaaS Cloud
Sanjaya Kumar Panda (National Institute of Technology, Warangal, India); Prasanta Kumar Jana (Indian Institute of Technology(ISM) Dhanbad, India)

Infrastructure as a Service (IaaS) clouds offer virtualized resources to customers in the form of leases. The IaaS cloud provider allocates resources to leases using various modes, such as advance reservation, best effort and immediate. An advance reservation lease requires the resources at a specific time; a best effort lease is placed in a FIFO (first in, first out) queue and allocated resources only when sufficient resources are available; an immediate lease is assigned on arrival if the resources are available, otherwise it is rejected. Haizea is an open source lease manager that provides these leases by creating virtual machines. However, it is not always possible to satisfy the requirements of the leases, as no cloud provider has unlimited resource capacity at any given time. In this paper, we propose four new modes of leases, namely period reservation (PR), token reservation (TR), advertisement (AD) and soft and hard deadline (SHD), for lease managers such as Haizea and Amazon EC2. These modes are more efficient and flexible than the existing leases. A theoretical analysis clearly shows that the proposed leases outperform the existing leases in terms of cost and lease acceptance.

S9.8 Securing the contents of Document Images using Knight Moves and Genetic Approach
Jalesh Kumar and S Nirmala (J. N. N. College of Engineering, India)

In this paper, a novel approach is proposed to secure the contents of document images. In the proposed approach, the input document image is treated as a chessboard, and the encryption process is based on the legal moves of the knight piece in chess and the crossover operation of a genetic algorithm. The approach comprises two stages. In the first stage, a number of knights with initial positions are selected and moved legally a specific number of times, each knight acquiring a new position after each move. In the second stage, a crossover operation is applied to two randomly selected knight positions to generate the encrypted image. Exhaustive experiments are carried out by varying the number of knights and the number of moves. The performance of the approach is measured in terms of entropy and Peak Signal to Noise Ratio. The proposed approach is compared with the encryption method in [12], and the comparative analysis reveals that the proposed algorithm enhances the security of the information contents of input document images.
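The first (knight-walk) stage can be sketched as a keyed pixel scramble; this is an illustrative reconstruction only (the seed-driven walk and swap rule are assumptions, and the genetic crossover stage is not shown).

```python
import random

# The eight legal knight displacements on a chessboard.
KNIGHT_MOVES = [(1, 2), (2, 1), (-1, 2), (-2, 1),
                (1, -2), (2, -1), (-1, -2), (-2, -1)]

def knight_scramble(image, start, n_moves, seed):
    """Walk a knight from `start` over the image-as-chessboard, swapping the
    pixel values at consecutive positions of its legal moves. Deterministic
    given the seed, so the walk can be replayed in reverse to decrypt."""
    rng = random.Random(seed)
    h, w = len(image), len(image[0])
    y, x = start
    for _ in range(n_moves):
        legal = [(y + dy, x + dx) for dy, dx in KNIGHT_MOVES
                 if 0 <= y + dy < h and 0 <= x + dx < w]
        if not legal:          # defensive: no legal move from this square
            break
        ny, nx = rng.choice(legal)
        image[y][x], image[ny][nx] = image[ny][nx], image[y][x]
        y, x = ny, nx
    return image
```

Since every step is a swap, the pixel histogram is preserved; the paper's crossover stage would then mix the positions further.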

S9.9 Approximate Radial Gradient Transform based Face Recognition
Rajesh D S and B H Shekar (Mangalore University, India)

In this paper, we propose a new local descriptor based on the approximate radial gradient transform. Initially, interest points are detected using difference of box filters. Then, a circular region of a certain size around each interest point is represented using the approximate radial gradient transform, forming the descriptor. We have conducted experiments on well-known face datasets such as ORL and LFW to demonstrate the success of the proposed method, and a comparative analysis is provided to argue that the proposed method is on par with other contemporary face recognition methods.

S9.10 Recognition of Stonefish from Underwater Video
Sreela Sasi and Hattan Ashour (Gannon University, USA)

There are thousands of organisms under the water, and every group of organisms has many types or species. Some are dangerous and will attack when touched, and others will attack directly without any provocation. In this research, a method is presented that can recognize a stonefish, the most venomous fish in the world, from a video, to help divers or swimmers in open water avoid danger. When a stonefish hides, it keeps its head visible until it can attack small fish quickly; this property is used to formulate the algorithm. The video is enhanced using Gaussian, median and Wiener filters. The method uses a database containing images of the head of the stonefish. Features of the stonefish head are detected in the video and compared with the features of the head images in the database using part of the Speeded Up Robust Features (SURF) method, the k-Nearest Neighbor algorithm and a histogram. The result of the comparison decides whether there is a stonefish in the video. Finally, if a stonefish is present, the system generates a warning signal to help divers or swimmers move away from it immediately.

S9.11 On Selection of Attributes for Entropy Based Detection of DDoS
Sidharth Sharma (Indian Institute of Technology Bombay, India); Santosh Sahu (National Institute of Technology, Rourkela, India); Sanjay Jena (NIT Rourkela, India)

A Distributed Denial of Service (DDoS) attack is an attempt to prevent legitimate users from using the services provided by service providers, by flooding their server with unnecessary traffic. These attacks have been performed on prestigious web sites such as Yahoo and Amazon, and on various cloud service providers. The severity of such an attack is very high: the server can go down for an indefinite period of time. Various methods have been proposed to detect such attempts. In this paper, an entropy-based approach is used to detect DDoS attacks. We have analyzed the effect of a DDoS attack on the entropy of all the useful packet attributes and tested their usefulness against well-known types of distributed denial of service attacks. Based on this analysis, we explain the proper choice of attributes for obtaining a better detection threshold when entropy-based DDoS detection is used.
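The entropy mechanism itself is standard Shannon entropy over a packet attribute in a traffic window; the threshold value and attribute choice below are illustrative, since selecting them well is exactly the question the paper studies.

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy (bits) of an attribute, e.g. source IPs in a window."""
    counts = Counter(values)
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def ddos_alert(window, threshold):
    """Flag a window whose attribute entropy drops below the threshold:
    a flood from few sources concentrates the distribution."""
    return entropy(window) < threshold
```

Normal traffic from many distinct sources has high entropy; a flood dominated by a few spoofed or fixed values drives it toward zero, which is what the threshold test catches.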

S9.12 A Modified MFCC Feature Extraction Technique For Robust Speaker Recognition
Diksha Sharma and Israj Ali (KIIT University, India)

In a Speaker Recognition (SR) system, feature extraction is one of the crucial steps, where speaker-specific information is extracted. The state-of-the-art algorithms for this purpose are the Mel Frequency Cepstral Coefficient (MFCC) and its complementary feature, the Inverted Mel Frequency Cepstral Coefficient (IMFCC). MFCC is based on the mel scale and IMFCC on the inverted mel (imel) scale. In this paper, another complementary set of features is proposed, which is also based on the mel-imel scale, with a filtering operation that makes these features different from MFCC and IMFCC. In the proposed features, the filter banks are placed linearly on the nonlinear scale, which distinguishes them from the state-of-the-art feature extraction techniques. We call these two features mMFCC and mIMFCC: mMFCC is based on the mel scale, whereas mIMFCC is based on the imel scale. mMFCC is compared with MFCC, and mIMFCC with IMFCC. The results have been verified on two standard databases, YOHO and POLYCOST, using the Gaussian Mixture Model (GMM) as the speaker modeling paradigm.

S5: S5-Artificial Intelligence and Machine Learning- I

Room: 501
Chair: Kannan Balakrishnan (Cochin University of Science and Technology, India)
S5.1 Sign Language Recognition Through Kinect Based Depth Images And Neural Network
Varun Tiwari and Vijay Anand (Visvesvaraya National Institute of Technology, India); Avinash Keskar (Visvesvaraya National Institute of Technology); Vishal Satpute (VNIT Nagpur, India)

Sign language is the language of people with hearing and speaking disabilities. In it, the hands are moved in particular ways which, along with facial expressions, produce a meaningful thought that the signer wishes to convey. Using sign language, people with speaking and hearing disabilities can communicate easily with others who know the language, but interaction becomes difficult with those who do not. There is therefore a need for an intermediate system to improve interaction between people with hearing disabilities and everyone else. In this paper we present a sign language recognition technique using a Kinect depth camera and a neural network. Using the Kinect camera, we obtain a depth image of the person standing in front of the camera and crop the hand region from it. We pre-process that image using morphological operations to remove unwanted regions, find the contour of the hand sign, and from the contour positions generate a signal to which the Discrete Cosine Transform (DCT) is applied. The first 200 DCT coefficients of the signal are fed to the neural network for training and classification, and finally the network recognizes the sign. A data set of signs from 0 to 9 was formed using the Kinect camera; we tested on 1236 images in the database, achieving 98% training accuracy and an average recognition accuracy of 83.5% over all signs.

S5.2 RTWPCAMARM: A Dynamic Real Time Weather Prediction System with 8 Neighborhood Hybrid Cellular Automata and Modified Association Rule Mining
Kiran Sree Pokkuluri, KSP (Sri Vishnu Engineering College for Women, India); Ssssn Usha Devi N (University College of Engineering, India)

Real-time weather prediction has been among the most technologically and innovatively difficult problems of the last century. In this research paper we propose a classifier that combines 8-neighborhood Cellular Automata with a modified association rule mining technique to predict variations in temperature, wind speed and rainfall. The prediction method is based on statistical and CA-MAR based climatological methods united with knowledge discovery. We collected local SCADA data sets pertaining to the East Godavari region, and rigorous experiments were conducted to improve the adaptability and accuracy of the classifier. The ability of this classifier to identify temperature, wind speed and rainfall in advance can be expected to improve the standard of living, particularly for farmers.

S5.3 Cloud Hopfield Neural Network Analysis And Simulation
Narotam Singh (IMD, Ministry of Earth Sciences, India); Amita Kapoor (University of Delhi & Shaheed Rajguru College of Applied Sciences for Women, India)

In this paper we present modifications to the dynamics of the Hopfield neural network. We compare our modified retrieval algorithms with both the synchronous and asynchronous retrieval algorithms used in Hopfield dynamics. Our results show that a modified Hopfield neural network consisting of a cloud of r unique neurons (in the simulation given in this paper, r = 3%, i.e. 4 neurons out of 120 in total) is better in terms of both retrieval capability and convergence time than the asynchronous retrieval algorithm. Moreover, unlike the synchronous retrieval algorithm, it does not enter oscillation states.
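
The baseline the paper modifies, Hebbian storage with asynchronous retrieval, can be sketched as follows (a minimal illustration; the proposed cloud of r unique neurons is not reproduced here):

```python
def train_hebbian(patterns):
    """Hebbian weight matrix for bipolar (+1/-1) patterns, zero diagonal."""
    n = len(patterns[0])
    return [[0.0 if i == j else
             sum(p[i] * p[j] for p in patterns) / n
             for j in range(n)] for i in range(n)]

def retrieve_async(W, state, max_sweeps=10):
    """Classic asynchronous retrieval: neurons are updated one at a time
    (index order here), each seeing the latest values of the others."""
    s = list(state)
    for _ in range(max_sweeps):
        changed = False
        for i, row in enumerate(W):
            new = 1 if sum(w * v for w, v in zip(row, s)) >= 0 else -1
            if new != s[i]:
                s[i], changed = new, True
        if not changed:          # fixed point reached
            break
    return s
```

Updating one neuron at a time guarantees the network energy never increases, which is why asynchronous retrieval converges to a fixed point while synchronous (all-at-once) updates can oscillate between two states.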

S5.4 Extraction of relevant dataset for support vector machine training: A Comparison
Adeena K d (Amrita University, India); Remya R (Amrita Vishwa Vidyapeetham, India)

Support Vector Machine (SVM) is a popular machine learning technique for classification, but it becomes computationally infeasible on large datasets due to its long training time. In this paper we compare three different methods for reducing SVM training time. Different combinations of Decision Tree (DT), Fisher Linear Discriminant (FLD), QR Decomposition (QRD) and Modified Fisher Linear Discriminant (MFLD) produce reduced datasets for SVM training. Experimental results indicate that SVM with QRD and MFLD achieves good classification accuracy with significantly shorter training time.

S5.5 Mining the Impact of Object Oriented Metrics for Change Prediction using Machine Learning and Search-based Techniques
Ruchika Malhotra and Megha Khanna (Delhi Technological University, India)

Change in software is crucial for incorporating defect corrections and the continuous evolution of requirements and technology. Thus, developing quality models to predict the change-proneness attribute of software is important for effectively utilizing and planning the finite resources available during the maintenance and testing phases. A variety of techniques are currently available for developing such models: statistical techniques, Machine Learning (ML) techniques and Search-Based Techniques (SBT). In this work, we assess the performance of ten machine learning and search-based techniques using data collected from three open-source software projects. We first develop a change prediction model using one data set and then perform inter-project validation using two other data sets in order to obtain unbiased and generalized results. The results indicate that SBT perform comparably to the statistical and ML techniques employed. The study also supports inter-project validation, as we successfully applied the model built on the training data of one project to other similar projects and obtained good results.

S5.6 A Technique for Associating Political Alignment to Users
Rishi Singh (IIT Roorkee, India); Rajdeep Niyogi (Indian Institute of Technology Roorkee, India)

Today people use social media (e.g., Twitter) to express their views on different topics, and many political parties have used such platforms for their election campaigns. Twitter data can provide a lot of information about a user's personality. We study how the personality and interests of an individual are linked to his/her political associations. We used the data sets of two political parties (AAP and BJP) pertaining to the 2014 general elections of India. We obtained 1000 users from the network around the prominent leaders of these parties, manually annotated the relevant data and trained an SVM classifier on it. The results showed that interest, personality, and both considered together are relevant in determining a user's political associations, yielding a classifier accuracy of 0.60.

S5.7 Centroid Based Binary Tree Structured SVM for Multi Classification
Aruna Govada and Bhavul Gauri (BITS-Pilani KK Birla Goa Campus, India); Sanjay K. Sahay (BITS Pilani, India)

Support Vector Machines (SVMs) were primarily designed for 2-class classification, but they have been extended to N-class classification to meet the need for multiple classes in practical applications. Although N-class classification using SVM has received considerable research attention, minimizing the number of classifiers needed at training and testing time is still an open problem. We propose a new algorithm, CBTS-SVM (Centroid-Based Binary Tree Structured SVM), which addresses this issue. We build a binary tree of SVM models based on the similarity of the class labels, determined from their distances to the corresponding centroids at the root level. The experimental results demonstrate accuracy comparable to OVO with reasonable gamma and cost values, while compared with OVA, CBTS gives better accuracy with less training and testing time. Furthermore, CBTS is scalable, as it is able to handle large data sets.

S5.8 Tsallis entropy and particle swarm optimization-based cyclone image vortex localization
Harish Anil Jamakhandi and Tilak D (M S Ramaiah Institute of Technology, India); Manikantan K (Visvesvaraya Technological University & M S Ramaiah Institute of Technology, India); Seetharaman Ramachandran (Visvesvaraya Technological University, India)

Cyclone vortex localization under varying conditions of saturated spiral bands is challenging. This paper presents a unique combination of image processing techniques, viz., Sequential Cross-Correlation (SCC) and Multi-Level Thresholding (MLT), for vortex localization. SCC is used for cyclone detection in full-disk satellite imagery, and is based on the high degree of correlation in the sequence of cyclone stages. MLT is used for vortex localization in the detected Tropical Cyclone (TC), and is based on Tsallis entropy and particle swarm optimization (PSO); these account for the unimodal distribution of pixel intensity and the non-extensive nature of the cyclone during image segmentation. The vortex coordinates thus obtained provide a reliable estimate of the TC's vortex and are further used for TC tracking. The proposed algorithm is applied to full-disk visible and infrared (IR) imagery of size 744 x 676 obtained from the Geostationary Operational Environmental Satellites GOES-12 and GOES-13, and the experimental results indicate that it tracks the TC efficiently, with a best average Euclidean distance error of 23 per TC.

S5.9 Opposition Based Particle Swarm Optimization with Exploration and Exploitation through gbest
Biplab Mandal (Bankura Unnayani Institute of Engineering, India); Tapas Si (Bankura Unnayani Institute of Engineering, Bankura, WB, Inida, India)

Particle Swarm Optimizer (PSO) is a swarm intelligence algorithm that simulates the flocking of birds and the schooling of fish. This paper presents an improved opposition-based Particle Swarm Optimizer. In the proposed method, generalized opposition-based learning is first incorporated in population initialization and in the particle's personal best position. Second, a controlled mechanism of exploration and exploitation is employed through the global best position of the swarm. The proposed method is applied to the 28 CEC2013 benchmark problems, and a comparative study is made with the standard Particle Swarm Optimizer and its other opposition-based variants. The experimental results show that the proposed method statistically outperforms the other methods.
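
Opposition-based initialization of the kind the abstract refers to is commonly realized as below (a sketch of generalized opposition-based learning as described in the literature; the authors' exact scheme may differ):

```python
import random

def generalized_opposition(x, a, b, k):
    """Generalized opposition-based learning: the opposite of x within
    the search interval [a, b] is k*(a + b) - x; k = 1 recovers plain
    opposition-based learning."""
    return k * (a + b) - x

def obl_initialize(pop_size, dim, a, b, fitness, seed=1):
    """Evaluate a random population together with its opposite
    population and keep the fittest pop_size individuals (minimization)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(a, b) for _ in range(dim)] for _ in range(pop_size)]
    opp = [[generalized_opposition(x, a, b, 1.0) for x in ind] for ind in pop]
    merged = sorted(pop + opp, key=fitness)
    return merged[:pop_size]
```

The intuition is that if a random guess is far from the optimum, its opposite has a fair chance of being closer, so evaluating both before selection speeds up early convergence.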

S5.10 A Model for Controlling Variance in the Artificial Bee Colony Algorithm
Satish Chandra and Vivek Kothari (Jaypee Institute of Information Technology, India); Mudita Sharma (FJG, United Kingdom (Great Britain))

Solving large optimization problems has gained attention due to the increasing number of constraints involved. Among the numerous techniques used for such problems, one that has gained much popularity in recent times is stochastic population-based search; evolutionary and swarm-based search algorithms are based on this principle. The Artificial Bee Colony (ABC) algorithm is a new entrant into this field. Unlike several other population-based search algorithms, however, ABC shows large variance across its runs, and the algorithm has not yet been modeled with any significant degree of success. After briefly covering the Artificial Bee Colony algorithm (used in a recent solution to the variation problem), this paper proposes and analyzes a mathematical model for the algorithm (and for others showing the same characteristics). The work then gives an overview of the Genetic Algorithm and examines a recently proposed strategy for controlling variation using genetic operators. It concludes by analyzing the modification, outlining the generality and applicability of the proposed model, and discussing future work.

S5.11 A Parallel GWO Technique for Aligning Multiple Molecular Sequences
Jayapriya Jayakumar and Michael Arock (National Institute of Technology, India)

Sequence analysis paves the way for structural and functional analysis in bioinformatics, and its preliminary step is aligning the molecular sequences. This paper introduces parallelism into multiple sequence alignment by parallelizing a bio-inspired algorithm, the Grey Wolf Optimizer (GWO). Owing to the tradeoff between solution accuracy and computational time, many heuristic algorithms have been developed. The GWO algorithm involves search agents, which are treated as initial solutions to the optimization problem. Data parallelism is employed in the initialization and generation phases. The technique is implemented using threads on a Quadro 4000, a CUDA-based GPU. The results show that the proposed algorithm reduces computational time compared with existing approaches.

S5.12 An Integrated Approach for Robot Training using Kinect and Human Arm Kinematics
Abhishek Jha and Shital Chiddarwar (Visvesvaraya National Institute of Technology Nagpur, India); Mayur Andulkar (BTU Cottbus Senftenberg & Chair of Automation Technology, Germany)

In this paper, a new approach based on the amalgamation of a Kinect-based motion sensing system and human arm kinematics is proposed for the motion control of an industrial robot. The proposed approach uses the human arm motion during task demonstration to attain positional control over the robot's motion. To achieve this, the human arm configurations from the demonstration workspace are mapped to the robot workspace by integrating the incremental inverse kinematic models of the human arm and robot with the Kinect. The proposed method is implemented on an industrial robot, demonstrating hand-path imitation in real time. Its performance is evaluated on the basis of the reproduction of the geometrical path and repeatability in task imitation. The experimental results indicate that the method provides a consistent way to control robot motion by utilizing natural motions of the human arm.

S5.13 Mining Defect Reports for Predicting Software Maintenance Effort
Rajni Jindal (Delhi College of engineering, India); Ruchika Malhotra and Abha Jain (Delhi Technological University, India)

Software maintenance is a crucial phase of the software development lifecycle, beginning once the software has been deployed at the customer's site. It is a very broad activity and includes almost everything done to change the software, if required, to keep it operational after delivery. Considerable maintenance effort is required to change software once it is in operation. Therefore, predicting the effort and cost associated with maintenance activities such as correcting and fixing defects has become a key issue that must be analyzed for effective resource allocation and decision-making. In view of this, we have developed a model based on text mining techniques using a statistical method, Multinomial Multivariate Logistic Regression (MMLR). We apply text mining to identify the relevant attributes from defect reports and relate these attributes to software maintenance effort prediction. The proposed model is validated on the 'Camera' application package of the Android Operating System. Receiver Operating Characteristic (ROC) analysis is used to interpret the prediction results via the Area Under the Curve (AUC), sensitivity and a suitable threshold criterion known as the cut-off point. The results show that the model's performance depends on the number of words considered for classification, with the best results obtained for the top 100 words; the performance is independent of the type of effort category.

Monday, August 10 14:30 - 17:30 (Asia/Kolkata)

T2: Tutorial -2: Big Data Mining in Smart Grid: An Indian Perspective

Dr. Kumar Padmanabh, Robert Bosch, India
Room: LT1

Electricity has been in use for the last 150 years, and Information Technology has changed human life over the last 30. However, Information Technology had little impact on electricity infrastructure until the smart grid was conceptualized. With the advancement of information and communication technologies, different components of the electrical grid will produce different kinds of data, and additional sensors will be required for monitoring various processes, generating yet more data. The size of this data will reach several petabytes worldwide. Intelligent mining of these data can solve contemporary issues of the electrical grid: for example, it can help in finding different types of electrical losses and assessing the health of grid assets, and help stakeholders perform predictive maintenance. The proposed tutorial throws light on (i) the different kinds of new data available within the ambit of the Smart Grid, (ii) the traditional issues with the electrical grid, (iii) what can be inferred from these additional data, and (iv) what kind of data mining is required to achieve the goal. The tutorial uses real-life case studies to explain the concepts. It will help researchers in academia and industry who intend to work on the Smart Grid, the Internet of Things and Data Mining, and will especially help research scholars find precise research problems.

T3: Tutorial -3: Stochastic models in Computer Science; Graph-theoretic and other applications

Dr. Snehanshu Saha (PES Institute of Technology, Bangalore, India)
Room: LT2

A simple yet fascinating continuous-time stochastic process known as Brownian motion has marveled natural scientists for decades. The topic of this tutorial is the growth and evolution of random methods of computation and experimentation, from their origins in the physical sciences to their ever-expanding reach into modern computing. The lectures and demos strive to explain computers as stochastic processes and to pose deterministic events, i.e. the existence of events, in a probabilistic manner. The idea is to explain and demonstrate the guiding principles in lucid language, with the aid of several analytical examples and videos/demos, and to motivate young students and researchers to explore this powerful and elegant paradigm.

Outline, with a short summary of every section:

  • Discrete probability models in Computer Science: randomness, random variables, the real number line and the set of integers; playing an ancient game of probability with simulation (video demo, web application); the fair coin-tossing game and its implications in discrete sampling; discrete distributions and their applications in Computer Science (demo using a toolkit developed by the presenter); Fermi's neutron experiment.
  • Measures of central tendency and their implications: expectation of a discrete random variable, the probability mass function; variance and estimation, with implications for randomized sort algorithms; Monte Carlo simulation and its application in econometric modeling.
  • Random graphs: posing a deterministic problem via the theory of uncertainty; the Erdos-Renyi theorem; probability of edge connectivity and the existence of cycles/cliques/independent sets in undirected graphs.
  • Applications in computer modeling: the Markov birth-death process; M/M/1 and M/M/K queues and applications in facility management and system performance analysis; Bayesian classification and risk bounds in identifying/marking vulnerabilities in application security (original research by the presenter).
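
The random-graph portion of the outline can be illustrated with a few lines of simulation code (an illustrative sketch, not the tutorial's own material):

```python
import random

def erdos_renyi(n, p, seed=42):
    """Sample an Erdos-Renyi G(n, p) graph: each of the n*(n-1)/2
    possible edges is included independently with probability p."""
    rng = random.Random(seed)
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p}

def is_connected(n, edges):
    """Depth-first search connectivity check from vertex 0."""
    adj = {i: set() for i in range(n)}
    for i, j in edges:
        adj[i].add(j)
        adj[j].add(i)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()] - seen:
            seen.add(w)
            stack.append(w)
    return len(seen) == n
```

Sampling many such graphs for varying p exhibits the sharp connectivity threshold around p = ln(n)/n that the Erdos-Renyi theory predicts.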

Monday, August 10 14:30 - 19:30 (Asia/Kolkata)

S7: SET-CAS / Computer Architecture and VLSI - II

Speakers: Dr. Alex Pappachen James, Nazarbayev University, Kazakhstan & Dr. Shyam Diwakar, Amrita University, Kollam
Room: MSH
Chair: Alex James (IIITMK, India)

SET-CAS Keynote 1: Neuron Models with Memristive VLSI Circuits in Advanced Pattern Recognition Applications, Dr. Alex Pappachen James, Nazarbayev University, Kazakhstan

SET-CAS Keynote 2: Cerebellum-like Spatial-temporal Pattern Recognition Circuits using Spiking Neurons and their role in Bio-robotics, Dr. Shyam Diwakar, Amrita University, Kollam

S7.1 VLSI Architecture of Pairwise Linear SVM for Facial Expression Recognition
Sumeet Saurav (CEERI Pilani, India); Shradha Gupta (SSPL, DRDO, Delhi & IIT Delhi, India); Anil Saini (CEERI & ACSIR, India); Sanjay Singh (CSIR - Central Electronics Engineering Research Institute (CSIR-CEERI) & Academy of Scientific & Innovative Research (AcSIR), India); Ravi Saini (CSIR-CEERI Pilani, India)

In this paper, we present a VLSI architecture of a Pairwise Linear Support Vector Machine (SVM) classifier for multi-class classification on FPGA. The objective of this work is to facilitate real-time classification of facial expressions into three categories: neutral, happy and pain, which could be used in a typical patient monitoring system. The challenge here is to achieve good performance without compromising the accuracy of the classifier; to this end, pipelining and parallelism (key methodologies for improving performance/frame rates) have been utilized in our architecture. We use a pairwise SVM classifier because of its greater accuracy and architectural simplicity. The architecture has been designed using a fixed-point data format. The training phase of the SVM is performed offline, and the extracted parameters are used to implement the testing phase on hardware. According to simulation results, a maximum frequency of 241.55 MHz and a classification accuracy of 97.87% have been achieved, which shows the good performance of our proposed architecture.

S7.2 VLSI Architecture of Exponential Block for Non-Linear SVM Classification
Shradha Gupta (SSPL, DRDO, Delhi & IIT Delhi, India); Sumeet Saurav (CEERI Pilani, India); Sanjay Singh (CSIR - Central Electronics Engineering Research Institute (CSIR-CEERI) & Academy of Scientific & Innovative Research (AcSIR), India); Anil Saini (CEERI & ACSIR, India); Ravi Saini (CSIR-CEERI Pilani, India)

In this work, we present a dedicated hardware implementation of an exponential function computation unit using the CORDIC (Coordinate Rotation Digital Computer) algorithm for an extended range of input arguments. The hardware architecture is designed with a view to its integration into a hardware implementation of the Radial Basis Function (RBF) based Support Vector Machine (SVM) classifier. The architecture is prototyped on a field programmable gate array (FPGA) to meet the specific performance requirements, and the proposed design operates at a maximum clock frequency of 249 MHz, showing good performance in terms of speed. Synthesis results also reveal that the proposed architecture is resource efficient.
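
A software analogue of such a shift-and-add exponential unit, using only a small table of logarithms plus additions and binary-weighted multiplications, can be sketched as follows (illustrative only; the paper's CORDIC architecture and input-range extension are not reproduced here):

```python
import math

def exp_shift_add(x, n_iter=30):
    """Approximate e^x for x in [0, 1] by decomposing x into a sum of
    ln(1 + 2^-i) terms, so the result is a product of hardware-friendly
    (1 + 2^-i) factors -- the idea behind CORDIC-style exp units."""
    assert 0.0 <= x <= 1.0
    table = [math.log1p(2.0 ** -i) for i in range(n_iter)]
    y = 1.0
    for i in range(n_iter):
        # each factor (1 + 2^-i) may be applied more than once
        while x >= table[i]:
            x -= table[i]
            y *= 1.0 + 2.0 ** -i
    return y * (1.0 + x)   # first-order fix-up for the tiny residual
```

Larger arguments are typically handled by range reduction first (e.g. e^x = 2^k * e^r with r in a small interval), which is the kind of extension the paper's "extended range" refers to.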

S7.3 Implementing a Cloud Based Xilinx ISE FPGA Design Platform for Integrated Remote Labs
Jinalkumar Doshi and Pratiksha Patil (Mumbai University, India); Zalak Dave and Ganesh Gore (Eduvance, India); Jonathan Joshi (Eduvance); Reena Sonkusare (Mumbai University, India); Surendra Rathod (Sardar Patel Institute of Technology, Mumbai, India)

This paper describes the implementation of a cloud-based Xilinx ISE platform that users can access remotely. The remote Xilinx environment provides remote access to the Xilinx Integrated Software Environment (ISE). The main aim of the research presented is to highlight how users can access FPGA design resources from anywhere, in combination with a potential remote FPGA lab. The architecture of the cloud-based platform is described together with a load analysis for the server. The cloud-based approach is proposed and a comparative analysis is discussed based on the results obtained. The remote environment is developed on the Ubuntu (open source) operating system using the Python and Hypertext Preprocessor (PHP) scripting languages. The open-source Apache server is used to run the Xilinx environment on a server, and open-source analysis tools are used to perform server load analysis for the Xilinx environment running on the server system.

S7.4 Memristor Load Current Mirror Circuit
Alex James (IIITMK, India); Irina Dolzhikova and Olga Krestinskaya (Nazarbayev University, Kazakhstan)

Simple current mirrors with semiconductor resistive loads suffer from large on-chip area, leakage currents and thermal effects. In this paper, we report the feasibility of using memristive loads as a replacement for semiconductor resistors in a simple current mirror configuration. We report power, area and total harmonic distortion, along with the corner conditions on resistance tolerances.

S7.5 An Efficient approach for Testing of FPGA Programming using LabVIEW
Naresh Kumar Reddy, B (DUK, India); Suresh N (K LU, India); Ramesh Jvn, Pavitra T and Krupa Bahulya Y (K L U, India); Pranose J Edavoor (N I T, India); Janaki Ram S (L B R C E, India)

Programming of Field Programmable Gate Arrays (FPGAs) has long been the domain of engineers with VHDL or Verilog expertise. FPGAs have caught the attention of algorithm developers and communication researchers who want to use them to instantiate systems or implement DSP algorithms. These efforts, however, are often stifled by the complexities of programming FPGAs: RTL programming in either VHDL or Verilog is generally not at the high level of abstraction needed to represent the world of signal-flow graphs and complex signal processing algorithms. This paper describes programming FPGAs using a graphical language rather than Verilog or VHDL, with the help of LabVIEW, and presents the features of the LabVIEW FPGA environment.

S7.6 FPGA Implementation of multiplication algorithms for ECC
Ravi Kishore Kodali, Lakshmi Boppana, Venkata Sai Kiran A and Chandana Amanchi (National Institute of Technology, Warangal, India)

Various cryptographic techniques use finite field multiplication, so its efficient implementation is essential. In particular, elliptic curve cryptography, which provides high security with shorter key lengths, performs many multiplications during encryption and decryption, making it important to choose a fast and resource-frugal multiplier. Many algorithms have been proposed in the literature for implementing finite field multiplication. This paper discusses the Sunar-Koc, Karatsuba and Booth multiplication algorithms and compares them in terms of time and resource utilization. All three algorithms are implemented for key lengths of 194, 233 and 384 bits.
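
Of the three, Karatsuba is the easiest to illustrate in software: it trades one multiplication for extra additions at each recursion level. The integer version below is an analogue of the finite-field (GF(2^m)) hardware multipliers the paper compares, not the paper's implementation:

```python
def karatsuba(x, y):
    """Karatsuba multiplication: three half-size products instead of
    four, giving O(n^1.585) instead of the schoolbook O(n^2)."""
    if x < 16 or y < 16:                       # small operands: direct
        return x * y
    h = max(x.bit_length(), y.bit_length()) // 2
    hi_x, lo_x = x >> h, x & ((1 << h) - 1)    # split x = hi_x*2^h + lo_x
    hi_y, lo_y = y >> h, y & ((1 << h) - 1)
    p_hi = karatsuba(hi_x, hi_y)
    p_lo = karatsuba(lo_x, lo_y)
    # middle term recovered from one product of sums
    p_mid = karatsuba(hi_x + lo_x, hi_y + lo_y) - p_hi - p_lo
    return (p_hi << (2 * h)) + (p_mid << h) + p_lo
```

In GF(2^m) hardware the same recursion applies with XOR replacing addition and no carries, which is what makes Karatsuba attractive for ECC multipliers.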

S7.7 Performance Analysis of Low Power Microcode based Asynchronous P-MBIST
Yasha Jyothi Shirur, Sr. (12th main, 27th cross, Banashankari 2nd stage & BNMIT, India); Bilure Chetana Bhimashankar (BNMIT, India); Veena S Chakravarthi (BNM Institute of Technology, Bangalore & Consultant, Pereira Ventures, US, India)

In today's VLSI world, designers concentrate on low-power design while neglecting test methodology; defining a low-power test methodology is the need of the day. In this paper, a microcode-based asynchronous P-MBIST is implemented, measured and compared with a synchronous P-MBIST of similar features. The implemented core gives power and area advantages of 95.44% and 23.95% respectively, but with a 10.04% timing increase over the synchronous P-MBIST. The design is a synthesizable core that can be extended to multiple-memory fault testing. The implemented design is synthesized on two different technologies, 180 nm and 45 nm, and the adopted methodology retains its power advantage in the scaled-down technology.

S7.8 Low Power Analog VLSI Implementation of Cortical Neuron with Threshold Modulation
Ethoti Radhika, Sanjeev Kumar and Anita Kumari (Lovely Professional University, India)

The neuron is the basic entity that transmits and processes information through generated action potentials, or spikes, in neuromorphic systems. In reality there is no fixed threshold in a cortical neuron; it varies for every neuron. In this work, a neuron with threshold modulation capability is implemented in analog VLSI using CMOS technology. The proposed neuron circuit generates a time-varying threshold voltage that is applied to the neuron input. The circuit is capable of generating a variety of spiking patterns with diversity similar to that of a real biological neuron cell. The paper describes how threshold modulation is achieved, in addition to the operation of the circuit. Simulation results for the different patterns are presented, along with the threshold modulation applied to the circuit and a power analysis for each pattern. The circuit is implemented in 0.18 um technology in the Cadence Design Environment. Because it consumes low power, the neuron circuit is well suited for use in microcircuits.

S7.9 SPICE Algorithm Implementation For Optical Network Analysis
Soumya Sajjan (VTU & RVCE, India); Anant Kulkarni (Senior Technology Specialist, India); Vikram Seshasai (Principal Engineer, India); Gurupadappa Sadashivappa (VTU, India)

This paper aims at exploiting the computational advantages of the SPICE algorithm in the optical domain for analyzing optical network links with DWDM capability. The proposed methodology focuses on computing power levels, dispersion and noise values throughout a given optical link using the nodal analysis technique of the SPICE algorithm, an optimal computing technique in the electrical domain. An example DWDM optical link is considered, and the detailed matrix formulation and its solution using LU factorization are explained for computing the parameters given above.
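
The nodal-analysis core, solving the system matrix by LU factorization, can be sketched as follows (a generic Doolittle factorization without pivoting; the paper's optical-domain matrix formulation is not reproduced here):

```python
def lu_solve(A, b):
    """Solve A x = b by LU factorization (Doolittle, no pivoting),
    the linear-algebra step at the heart of SPICE nodal analysis."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):                  # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        L[i][i] = 1.0
        for j in range(i + 1, n):              # column i of L
            L[j][i] = (A[j][i] -
                       sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    y = [0.0] * n                              # forward substitution L y = b
    for i in range(n):
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    x = [0.0] * n                              # back substitution U x = y
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(U[i][k] * x[k]
                           for k in range(i + 1, n))) / U[i][i]
    return x
```

Factoring once and reusing L and U is what makes this attractive when the same link matrix must be solved for many wavelengths or excitation vectors.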

S7.10 Optical Theremin based True Random Number Generation (TRNG) System
Rahul Sharma, Ramya Ullagaddimath, Amit Roy and Apratim Halder (BMS College of Engineering, Bangalore, India); Veena Hegde (BMS College of Engineering, India)

The aim of this paper is to develop a hardware Random Number Generator (RNG) using an optical Theremin. The Theremin is a non-contact electronic musical instrument based on the heterodyne principle; it has two antennas, forming a pair of resonant RLC oscillators, that control the volume and pitch of the sound. In the presented system, optical sensors (photodiodes) are used to produce the notes of the Theremin by varying the intensity of an incident light beam; two photodiodes control the gain and pitch of the sound notes. These sound waves are then sampled to give amplitude levels, which form the desired random number sequence, unique for every tune produced by the Theremin. Since random numbers are at the core of all encryption algorithms and of network security protocols such as SSH/SSL, generating high-entropy random numbers is the prime need of the hour.

S7.11 Design of Vedic-Multiplier using Area-Efficient Carry Select Adder
Gayatri Gokhale and Prabhavati Bahirgonde (N. K. ORCHID, India)

In this paper, an area-efficient Vedic multiplier is designed using a modified Carry Select Adder (CSLA). As multiplication is essentially a process of repeated addition, the adder is an important block in the design of a multiplier. A simple Ripple Carry Adder (RCA) can be used to implement a multiplier, but digital adders suffer from carry propagation delay, so a carry select adder, known to be one of the fastest adder structures, is used instead. Rather than conventional multipliers such as the add-and-shift or array multiplier, a Vedic multiplier based on the ancient Vedic multiplication technique is implemented here. The goal of this paper is to design an area-efficient Vedic multiplier based on the crosswise-and-vertical algorithm. Conventional CSLA designs are compared with the proposed design to prove its efficiency: on average, the modified CSLA has 29% less area than the Binary to Excess-one Converter (BEC) based CSLA for different bit widths, and the proposed Vedic multiplier, designed using the modified CSLA, has approximately 10% less area than the BEC-based Vedic multiplier, showing improved performance in terms of area. The proposed design is also compared with the Booth multiplier and shows better results.
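
The crosswise-and-vertical (Urdhva-Tiryagbhyam) scheme behind Vedic multipliers can be illustrated in software (digit lists are least-significant first; this sketches the classical algorithm, not the paper's hardware):

```python
def urdhva_multiply(a_digits, b_digits, base=10):
    """Urdhva-Tiryagbhyam (vertical and crosswise) multiplication:
    for each output position k, sum every crosswise product whose
    digit indices add to k, then propagate carries once at the end."""
    n, m = len(a_digits), len(b_digits)
    out = [0] * (n + m)
    for k in range(n + m - 1):
        out[k] = sum(a_digits[i] * b_digits[k - i]
                     for i in range(max(0, k - m + 1), min(k, n - 1) + 1))
    for k in range(n + m - 1):                 # carry propagation
        out[k + 1] += out[k] // base
        out[k] %= base
    return out
```

In hardware all the crosswise partial products for a position are generated in parallel, which is why the critical path reduces to the adder tree and makes the choice of adder (here, the modified CSLA) decisive.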

S7.12 Design of Digital Down Converter Using Computation Sharing Multiplier Architecture
Avinash M (PES Institute of Technology, India); Sumanth Sakkara (PES University, India)

In the field of software-defined radio, the DDC plays a pivotal role in defining the optimum sampling rate without any loss of information. The essential part of any DDC is the low-pass filtering operation. Traditionally, CIC filters are used for the low-pass filters in the DDC, which poses disadvantages in area occupied and delay on FPGA devices. This paper focuses on a shared-resource technique for the design of the FIR filter operation, using a special type of multiplier called the Computation Sharing Multiplier (CSHM). This technique enhances the performance of the DDC and also reduces component utilization in the FPGA. Xilinx ISE is used to simulate and synthesize the design.

S7.13 Microcontroller Based RR-Interval Measurement Using PPG Signals for Heart Rate Variability based Biometric Application
Nazneen Akhter and Sumegh Tharewal (Babasaheb Ambedkar Marathwada University, India); Hanumant Gite (Dr Babasaheb Ambedkar Marathwada University, Aurnagabad, India); Karbhari Kale (Babasaheb Ambedkar Marathwada University, India)

Heart Rate Variability (HRV) is a natural property of heart rate, which medical science has viewed for the last two decades as a diagnostic and prognostic tool. This study is aimed at harnessing the HRV property of the heart for person identification. RR-intervals, the durations between consecutive heart beats, are the only requirement for HRV analysis. Traditionally they are measured from electrocardiography (ECG) signals, but we use a photoplethysmography (PPG) based pulse sensor and an in-house designed microcontroller-based RR-interval measurement system. PPG sensors come in two basic types, transmission and reflection, and we have tested the hardware with both. This article documents the performance analysis of both types of PPG sensors from the perspective of biometric identification based on HRV analysis of RR-intervals collected at the fingertips. Classification is done using a KNN classifier.
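
Typical time-domain HRV features computed from such an RR-interval series look like this (SDNN and RMSSD are standard HRV measures; the paper's exact feature set is not specified in the abstract):

```python
import math

def hrv_features(rr_ms):
    """Two standard time-domain HRV features from an RR-interval
    series in milliseconds: SDNN (standard deviation of intervals)
    and RMSSD (root mean square of successive differences)."""
    n = len(rr_ms)
    mean = sum(rr_ms) / n
    sdnn = math.sqrt(sum((r - mean) ** 2 for r in rr_ms) / n)
    diffs = [rr_ms[i + 1] - rr_ms[i] for i in range(n - 1)]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return sdnn, rmssd
```

Feature vectors of this kind, computed per subject, are what a KNN classifier would then compare between enrollment and verification recordings.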

S7.14 A novel design of Optical Logic Gate AND, OR and NOT with Polyvinyl-Chloride Multiwalled Carbon Nano Tube
Mihir Desani (R K University, India); Arjav Bavarva (Marwadi University, India); Vishal Parsotambhai Sorathiya (Parul Institute of Engineering and Technology & Parul University, India)

Faster computation requires higher speed and higher data rates in digital circuits. Photonic digital circuits are one solution to this requirement. Here we present the design of AND, OR and NOT optical logic gates with a Polyvinyl-Chloride Multiwalled Carbon Nano Tube (PVC-MWCNT) waveguide. The PVC-MWCNT material was chosen for its near-constant optical response over the 800 nm to 1000 nm wavelength range; it is also beneficial for achieving a smaller waveguide design. The logic gates are designed in a 10 μm × 4.5 μm size. All optical operations follow the truth tables of the corresponding digital circuits over the given wavelength span. The logic gates were investigated with the Finite Difference Time Domain (FDTD) method. A comparison with available logic gate designs is also reported in this research. All the logic gates are amenable to further expansion into logic circuits.

Monday, August 10 17:30 - 19:30 (Asia/Kolkata)

SRS: Fourth Student Research Symposium (SRS): Poster Sessions

Fourth Student Research Symposium (SRS) - List of Accepted Posters

  • Digital Watermarking in Audio Using Least Significant Bit and Discrete Cosine Transform, Priyanka Pattanshetti, Shabda Dongaonkar and Sharmila Karpe
  • Fuzzy Tuned Proportional Integral Derivative Control of Paper Machine Headbox, Daisy Sharma, Rajesh Kumar and Vishal Verma
  • Drina Based Forward Aware Factor Energy Balanced Routing Method For Wireless Sensor Networks, P. C. Sarik and S. Swapna Kumar
  • Integration of USB Based Multifunctional Data Acquisition (DAQ) Module with ICRH DAC, Aniruddh Mali, Ramesh Joshi, H. M. Jadav, S. V. Kulkarni and Hemal Nayak
  • Analysing Student Engagement Using Gross Facial Gestures, A. Harikrishnan, S. B. Karthick and Nachiket Kulkarni
  • Analysis of Text Detection In Video, Dinesh Annaji Kene and D. J. Pete
  • Twitter Mining For Prognosticating The Irruption Of Pandemic Diseases, Umesh S. Padashetty and S. Natarajan
  • A Novel Error Minimization Method for Stock Index Movement Prediction using Instantaneous Frequency Algorithm, S. Soumya and S. Hema
  • Driver Drowsiness Detection using Eye Movements, Regina Antony and G. S. Ajith
  • Design and Implementation of Communication Interface for Machine to Machine Based Data Acquisition and Monitoring System, Niraj S. Dhangar and S.M. Patil
  • Rare and Frequent Weighted Itemset Optimization using Homologous Transactions, Sheethal Abraham and Sumy Joseph
  • Implementation of an Adaptive Acoustic Modem for Underwater Sensor Networks, Omkar R. Damale and D.J. Pete
  • Image Change Detection using Stereo Imagery and Digital Surface Model, Roshan V. Patil and D.J. Pete
  • Detecting Double JPEG Compression using Error-Based Statistical Features, Nilu Treesa Thomas, Anju Joseph and V. Anjana
  • Co-Salient Object Detection in Multiple Images for Foreground Segmentation, Hima Anns Roy and V. Jayakrishna
  • Lossless Data Compression Technique for Data Transmission in VANET, Lida K. Kuriakose and Neethu C. Sekhar
  • Similarity Measure for K-Nearest Neighbour Classifier, Anisha Mariam Thomas and M.G. Resmipriya
  • Providing QoS using Buffered Admission Control and Priority Queues, Tom Kurian and Deepu Benson
  • A Face Identification and Matching System using Genetic Algorithm in Forensic Applications, Anju Joseph, Nilu Treesa Thomas and Anishamol Abraham
  • Table Driven Source Routing Protocol for Mobile Adhoc Networks, Thomas Mathew and G.S. Santhoshkumar
  • Channel Estimation for OFDM System using Monte Carlo Simulation in Frequency Selective Channel, S. Gorlosa and A. Misra
  • Finite Element Analysis of Honeycomb Sandwich Beam with Multiple Delaminations Using ANSYS, M. S. Vaisali, A. S. Nisha and Tennu Syriac
  • Data Mining Approach for Detection of Masquerade Data, Varsha M. V., Vinod P and Dhanya K. A
  • Detecting Multiple Instance of an Object, Anju Pankaj and Sonal Ayyappan
  • Edge based Object Tracking, Jeena Rita K. S. and Bini Omman
  • Numerical Investigation On Hydrodynamic Behaviour Of Container Ship (S175) Under Slamming Loads, Anit and Sheeja Janardhanan
  • Numerical Investigation on Hydrodynamic Analysis of a Fixed Cylinder in Waves, Nithya S and Sheeja Janardhanan
  • EPR Embedding using DCT based Crypto-Watermarking Approach, Anna Babu and Sonal Ayyappan
  • Numerical Investigation on Hydrodynamic Analysis of Container Ship (S175) in Waves, Anu Elishuba Mammen and Sheeja Janardhanan
  • Panorama Creation from Non-overlapping Images, Anly Antony M and Bini Omman

Tuesday, August 11

Tuesday, August 11 9:00 - 14:00 (Asia/Kolkata)

R2: Registration Starts

Room: Atrium

Tuesday, August 11 9:30 - 10:30 (Asia/Kolkata)

K3: Main Track Keynote-3: New Paradigm of Machine Learning for Big-data Mining

Dr. Parag Kulkarni, CEO and Chief Scientist, EKLaT, Pune, India
Room: MSH

Our traditional machine learning methods are simply not capable of dealing with big data and some of the complexities of real-life decision-making. Simple event- and pattern-based learning can handle routine scenarios but fails in dynamic scenarios. Though Reinforcement Learning and some of its variants deal with dynamic scenarios such as gaming, there is a need for holistic learning methods to cope with these challenges. What new methods can meet this challenge? This talk will introduce new methods and paradigms of systemic machine learning. It will throw light on incremental, co-operative and multi-perspective learning while elaborating the concept of systemic machine learning.

Tuesday, August 11 10:30 - 11:20 (Asia/Kolkata)

WCI-K1: WCI Keynote -1: Software Bloat, Lost Throughput and Wasted Joules

Dr. Suparna Bhattacharya, Hewlett-Packard India Software Operations, India
Room: 308

There appears to be an inherent tension between development-time productivity and run-time efficiency of IT solutions. Software systems tend to accumulate "bloat" as a side-effect of the same software engineering trends that have been extremely successful in fueling the widespread growth of frameworks that enable applications to rapidly evolve in scale and function to handle ever more complex business logic, integration requirements and unanticipated changes in demanding environments. In the past, the benefits of such flexibility have far outweighed the cost of overheads incurred. Today, shifting hardware technology trends and operational challenges motivate a need for paying greater attention to these inefficiencies and devising techniques for mitigating their impact on energy consumption and system performance. However, distinguishing excess resource utilization due to "bloat" from essential function is highly non-trivial without a deep knowledge of intended semantics of very complex software. This talk presents a systems perspective of the problem of runtime bloat in large framework based Java applications and its implications for energy efficient design, including results from recent research and interesting open questions to tackle.

K4: Main Track Keynote-4: Enabling Self-Management in 5G Networks: the SELFNET EU Project Perspective

Prof. Gregorio Martinez, University of Murcia, Spain
Room: MSH

5G has been defined as an end-to-end ecosystem with the main intention of enabling a fully mobile and connected society. This definition imposes a large number of challenges in at least three planes: radio, network, and operations and management. This talk will consider the last of these and describe how a self-management perspective can fit into the control plane of 5G networks. By self-management we refer to self-configuring, self-healing, self-optimising and self-protecting capabilities, all of them related to both the network and the services provided on top of it. The SELFNET EU project, which will be presented in detail during this talk and which is part of the 5G-PPP EU initiative, considers the last three of these capabilities as running use cases, as well as a composed and coordinated use case integrating all of them.

Tuesday, August 11 11:40 - 12:30 (Asia/Kolkata)

WCI-K2: WCI Keynote -2: Hybridization with Rough Sets: Application to Bioinformatics and Biomedical Imagery

Prof. Sushmita Mitra, Indian Statistical Institute, Kolkata
Room: 308

Tuesday, August 11 12:30 - 13:20 (Asia/Kolkata)

WCI Panel: WCI Panel Discussion: On deciding Career Paths for Women: Academics, IT, Research, Entrepreneurship

Room: 308

Tuesday, August 11 14:00 - 17:45 (Asia/Kolkata)

S10: S10-Data and Knowledge Engineering- II

Room: 302
Chair: Yashwant Singh (Jaypee University of Information Technology, India)
S10.1 Efficiency Analysis of Kernel Functions In Uncertainty Based C-Means Algorithms
Dishant Mittal (VIT University); B. K. Tripathy (VIT University, India)

The application of clustering algorithms to real-life data has attracted many researchers, and vague approaches, or their hybridization with other analogous approaches, have gained special attention due to their effectiveness. Recently, the rough intuitionistic fuzzy c-means algorithm was proposed by Tripathy et al [3], who established its supremacy over the other algorithms in the same family. Replacing the Euclidean distance metric with a kernel-induced metric makes it possible to cluster objects that are linearly inseparable in the original space. In this paper a comparative analysis is performed of the Gaussian, hyper tangent and radial basis kernel functions through their application to various vague clustering approaches: rough c-means (RCM), intuitionistic fuzzy c-means (IFCM), rough fuzzy c-means (RFCM) and rough intuitionistic fuzzy c-means (RIFCM). All clustering algorithms have been tested on synthetic, user knowledge modeling and human activity recognition datasets taken from the UCI repository against standard accuracy indexes for clustering. The results reveal that for small datasets the Gaussian kernel produces more accurate clustering than the radial basis and hyper tangent kernel functions, whereas for considerably larger datasets the hyper tangent kernel is superior to the other kernel functions. All experiments have been carried out in C, and Python libraries have been used for statistical plotting.
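A minimal sketch of the kernel-induced distance idea mentioned above. The exact kernel definitions and the sigma parameter vary across papers; the forms below are common choices, not necessarily the ones used in this work.

```python
import math

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian kernel K(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-d2 / (2 * sigma ** 2))

def hyper_tangent_kernel(x, y, sigma=1.0):
    """One common hyper-tangent form: 1 - tanh(||x - y||^2 / (2 sigma^2))."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return 1.0 - math.tanh(d2 / (2 * sigma ** 2))

def kernel_distance(kernel, x, y):
    """Kernel-induced squared distance:
    D(x, y) = K(x, x) + K(y, y) - 2 K(x, y)."""
    return kernel(x, x) + kernel(y, y) - 2.0 * kernel(x, y)
```

Substituting `kernel_distance` for the Euclidean distance inside a c-means update is what lets these algorithms separate clusters that are not linearly separable in the input space.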

S10.2 Email Forensic Analysis Based on k-means Clustering
Arya Nampoothiri (University of Kerala, India); Minu Madhavu (Sree Buddha College of Engineering, India)

Computer crime activities are increasing, which brings a great threat to network security. Email is used for several computer crime activities due to its simplicity; in this scenario, email forensics is needed. This paper proposes an email forensic method using k-means clustering. We collect and analyze the email data of suspicious users. Filtering and clustering are then performed to obtain an email communication network graph. Finally, we apply spam filtering to remove spam mails from the network graph and k-means clustering on the email messages to obtain an accurate communication graph. The algorithm can identify the core members and the structure of a criminal organization.
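For readers unfamiliar with the underlying algorithm, a bare-bones k-means is sketched below. This is the generic textbook procedure, not the paper's email-specific pipeline, and the naive first-k initialisation is an assumption made for brevity.

```python
def kmeans(points, k, iters=20):
    """Plain k-means: assign each point to its nearest centroid,
    then recompute centroids, for a fixed number of rounds."""
    centroids = points[:k]  # naive initialisation, for illustration only
    clusters = []
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: sum(
                (a - b) ** 2 for a, b in zip(p, centroids[i])))
            clusters[idx].append(p)
        for i, c in enumerate(clusters):
            if c:  # keep the old centroid if a cluster empties
                centroids[i] = [sum(dim) / len(c) for dim in zip(*c)]
    return centroids, clusters
```

Applied to email data, each point would be a feature vector derived from a message or a sender, so that a cluster groups related correspondents.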

S10.3 Personalization of Web Search based on Privacy Protected and Auto-Constructed User Profile
Rasika Kaingade and Hemant Tirmare (Shivaji University, India)

Web search engines are widely used to find huge amounts of data on the web in a minimum amount of time, but sometimes it is difficult for users to get exact results for a given query. Personalized Web Search (PWS) provides better search results for individual user needs and improves the quality of the search result based on the user profile. However, users' unwillingness to disclose their private information during search has become a major obstacle to the wide adoption of PWS. This paper presents a scalable way for users to build rich hierarchical user profiles automatically and to provide privacy for those profiles. The Extended User customizable Privacy-preserving Search (E-UPS) framework generalizes the user profile for each query according to user-specified privacy requirements. It improves search quality while hiding the existence of private content in the user profile, and it gives protection against a typical model of privacy attack.

S10.4 Analysis of Data Management and Query handling in Social Networks using NoSQL Databases
Anita Mathew (NIT Calicut, India)

Traditional RDBMSs (Relational Database Management Systems) have been the predominant technology for storing and retrieving structured data in web and business applications since the 1980s. However, relational databases have started losing their importance due to strict schema reliance and costly infrastructure, which has led to problems with hardware-software upgrades and with modeling relationships between objects. Another major issue is the enormous growth of Big Data. A new database model, NoSQL, plays a vital role in Big Data analytics. This paper focuses on four different types of NoSQL databases used by social networking sites such as Facebook, LinkedIn, Twitter, MySpace, Foursquare, Flickr and Friendfeed. Features such as scalability of the data model, concurrency control, consistency in storage, availability during partitioning, durability, transactions, implementation language, query support possibilities and programming characteristics are compared and analyzed. These features are compared for the sub-categories of the four NoSQL databases with respect to swift querying from social networking sites. In the detailed analysis presented, data storage and the fast-retrieval phase of query processing are given primary importance. We also compare the time taken by insert and read operations on Facebook social network data associating friends, intimate friends, family, and groups such as hometown and workplace. The results are compared, and the most suitable database among the NoSQL graph database sub-categories for insert and read operations in Facebook is identified.

S10.5 Twitter Sentiment Classification Using Machine Learning Techniques for Stock Markets
Mohammed Qasem, Ruppa K. Thulasiram and Parimala Thulasiraman (University of Manitoba, Canada)

Sentiment classification of Twitter data has been successfully applied to prediction in a variety of domains. However, using sentiment classification to predict stock market variables is still challenging and an area of ongoing research. The main objective of this study is to compare the overall accuracy of two machine learning techniques (logistic regression and neural network) in assigning a positive, negative or neutral sentiment to stock-related tweets. Both classifiers are compared using bigram term frequency (TF) and unigram term frequency - inverse document frequency (TF-IDF) weighting schemes. The classifiers are trained on a dataset of 42,000 automatically annotated tweets, comprising positive, negative and neutral tweets covering four technology-related stocks (Twitter, Google, Facebook and Tesla) collected using the Twitter Search API. Both classifiers give the same overall accuracy (58%). However, empirical experiments show that unigram TF-IDF outperforms TF.
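The unigram TF-IDF weighting compared in the study can be sketched as follows. The smoothing-free idf variant shown is one common convention; the paper does not specify which variant it uses, and the two-tweet corpus is purely illustrative.

```python
import math

def tfidf(docs):
    """Unigram TF-IDF weights for a tiny corpus of tokenised tweets."""
    n = len(docs)
    df = {}  # document frequency of each term
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    weights = []
    for doc in docs:
        w = {}
        for term in doc:
            tf = doc.count(term) / len(doc)      # term frequency
            idf = math.log(n / df[term])         # inverse document frequency
            w[term] = tf * idf
        weights.append(w)
    return weights

docs = [["stock", "up"], ["stock", "down"]]  # hypothetical tokenised tweets
weights = tfidf(docs)
```

A term appearing in every document (here, "stock") gets zero weight, which is exactly why TF-IDF can outperform raw TF: uninformative common words are suppressed.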

S10.6 Spam Filtering Using Hybrid Local-Global Naive Bayes Classifier
Rohit Solanki and Karun Verma (Thapar University, Patiala, India); Ravinder Kumar (Thapar Institute of Engineering and Technology, Patiala, India)

This paper proposes a novel learning framework for classifying messages as spam or legitimate. We introduce a classification method based on feature space segmentation. The Naive Bayes (NB) model is a statistical filtering process which uses previously gathered knowledge. Instead of using a single classifier, we propose the use of local and global classifiers based on a hierarchical Bayesian framework. This helps in achieving multi-task learning, as knowledge can be extracted simultaneously while maintaining classification accuracy, and knowledge can be shared among different tasks while learning task-specific classifiers.
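The base Naive Bayes model on which the proposed local-global framework builds can be sketched as a word-count classifier with Laplace smoothing. The hierarchical local/global extension itself is the paper's contribution and is not shown; the tiny training data is hypothetical.

```python
import math

def train_nb(messages, labels):
    """Fit word counts, class totals and priors with the two labels
    'spam' and 'legit' (an assumption for this sketch)."""
    counts = {"spam": {}, "legit": {}}
    totals = {"spam": 0, "legit": 0}
    priors = {"spam": 0, "legit": 0}
    for words, label in zip(messages, labels):
        priors[label] += 1
        for w in words:
            counts[label][w] = counts[label].get(w, 0) + 1
            totals[label] += 1
    vocab = {w for c in counts.values() for w in c}
    return counts, totals, priors, vocab

def classify_nb(model, words):
    """Pick the label maximising log P(label) + sum log P(word | label),
    with add-one (Laplace) smoothing."""
    counts, totals, priors, vocab = model
    n = sum(priors.values())
    best, best_score = None, float("-inf")
    for label in counts:
        score = math.log(priors[label] / n)
        for w in words:
            p = (counts[label].get(w, 0) + 1) / (totals[label] + len(vocab))
            score += math.log(p)
        if score > best_score:
            best, best_score = label, score
    return best
```

In the paper's scheme, one such model would be trained globally and others locally on segments of the feature space, with their evidence combined.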

S10.7 Generating and Visualizing Topic Hierarchies from Microblogs: An Iterative Latent Dirichlet Allocation Approach
Anoop V S (Rajagiri College of Social Sciences (Autonomous), India); Prem sankar C (University of Kerala, India); Asharaf S (IIIT M K, India)

Research in social networks is attracting more attention in the recent past due to the explosive growth in the creation and sharing of information over social media. As the volume of information grows exponentially, we need efficient computational techniques to analyze this information and to synthesize the hidden knowledge associated with it. Being a suite of text understanding algorithms, topic modeling discovers the topics or themes within a huge collection of documents. In this work, we employ a powerful topic modeling algorithm to analyze the hidden knowledge contained in the information spread across the popular social network platform Twitter, using a novel iterative topic modeling approach. Additionally, we visualize the extracted knowledge using a sunburst chart so that even a naive user can interpret the hidden knowledge extracted from tweets.

S10.8 Knowledge Representation and Assessment Using Concept Based Learning
Nandu C Nair (Amrita E-Learning Research Lab & Amrita Vishwa Vidyapeetham, India); Archana Jalaja Surendran (Amrita E-Learning Research Lab, India); Shiffon Chatterjee (Amrita Vishwa Vidyapeetham, India); Kamal Bijlani (Amrita University, India)

The process of learning can be improved with proper and timely feedback. This paper proposes a system that provides feedback for both teacher and student using concept-based learning. Various types of concepts are defined, and assessment is done for each concept. The level of knowledge acquired by students can be clearly represented using this approach. The feedback provided is goal-oriented, actionable, personalized, timely, ongoing and consistent, which accounts for the novelty of this approach. Results from this study show that concept-based assessment and feedback motivate the students and help them improve their examination scores.

S10.9 An Automated Approach for Classification of Plant Diseases Towards Development of Futuristic Decision Support System in Indian Perspective
Yogesh H Dandawate (Vishwakarma Institute of Information Technology, India); Radha Kokare (Vishwakarma Institute of Information Technology)

Plant diseases are a major cause of decreased quality and quantity of agricultural output. Farmers encounter great difficulties in detecting and controlling plant diseases, so it is of great importance to diagnose plant diseases at early stages so that appropriate and timely action can be taken to avoid further losses. The project focuses on an image-processing approach to the detection of diseases of soybean plants. The soybean images are captured using a mobile camera with a resolution greater than 2 megapixels. The purpose of the proposed project is to provide inputs for a Decision Support System (DSS), developed to advise farmers as and when required over the mobile internet. Our proposed work classifies images of soybean leaves as healthy or diseased using a Support Vector Machine (SVM). The algorithm comprises four major steps: image acquisition, extraction of the leaf from a complex background, statistical analysis and classification. The pre-processing step includes conversion from RGB to HSV (Hue Saturation Value) color space. For extracting the region of interest (ROI) from the original image, multi-thresholding is used, and color-based and cluster-based methods are used for segmentation. The algorithm uses the Scale Invariant Feature Transform (SIFT) technique, which automatically recognizes the plant species based on leaf shape. The SVM classifier proves its ability in automatic and accurate classification of images. The experimental results show that this approach can classify the leaves with an average accuracy of 93.79%. The proposed system will enable farmers to get advice from agricultural experts with minimal effort.
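The RGB-to-HSV pre-processing step mentioned above can be illustrated with Python's standard colorsys module. The 0-255 channel scaling is an assumption for this sketch; the paper does not specify its implementation.

```python
import colorsys

def rgb_to_hsv(r, g, b):
    """Convert 8-bit RGB channels to HSV; colorsys expects values in [0, 1].
    Returns (h, s, v) with h, s, v each in [0, 1]."""
    return colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)

h, s, v = rgb_to_hsv(0, 255, 0)  # a pure green leaf pixel
```

Working in HSV separates chromatic content (hue) from intensity, which makes thresholding leaf regions more robust to lighting than raw RGB.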

S10.10 Sub Pixel Level Arrangement of Spatial Dependences to Improve Classification Accuracy
Suresh Merugu (CMR College of Engineering and Technology & Indian Institute of Technology, Roorkee, India); Arun Rai and Kamal Jain (Indian Institute of Technology, Roorkee, India)

Colors in a scene have sharp boundaries: it is known precisely where a color starts and where it ends, each color communicates details about the targets in the scene, and this detailed information can be used to further refine the interpretation of an imaging system. In this paper, the proposed subpixel-level arrangement of spatial dependences provides super-resolved landuse/landcover information using the fractional values output by a soft classifier. The output of the soft classifier satisfies the constraints of non-negativity and of summing to 1, instead of whatever the "natural" total of fractional abundances within the pixels would be. This phenomenon is also discussed when defining mixed pixels: a pixel at a boundary contains both colors in proportion, so that it appears a color different from either of the two. The main goal of this paper is to extract information from mixed pixels through subpixel analysis, with subpixel-level arrangements of spatial dependences, to obtain the super-resolved information.

S10.11 Privacy-Preserving Frequent Itemset Mining in Outsourced Transaction Databases
Iyer Gurumurthy and Pallav Kumar Baruah (Sri Sathya Sai Institute of Higher Learning, India); Ravi Mukkamala (Old Dominion University, USA)

Cloud computing has ushered in new interest in a paradigm called Data-mining-as-a-service. This paradigm is aimed at organizations that lack the technical expertise or the computational resources to perform data mining themselves, enabling them to outsource their data mining tasks to a third-party service provider. One of the main issues in this regard is the confidentiality of the valuable data at the server, which the data owner considers private. In this work, we study the problem of privacy-preserving frequent itemset mining in outsourced transaction databases. We propose a novel hybrid method to achieve k-support anonymity based on statistical observations of the datasets. Our comprehensive experiments on real as well as synthetic datasets show that our techniques are effective and provide moderate privacy.

Tuesday, August 11 14:00 - 17:50 (Asia/Kolkata)

S11: S11-Image Analysis and Image Enhancement

Room: 303
Chairs: Mahesh Chavan (KIT's College of Engineering, India), Ajinkya S. Deshmukh (Uurmi System Pvt. Ltd., India)
S11.1 Offline Handwritten Signature Verification Using Low Level Stroke Features
Mohitkumar A Joshi, Mukesh M Goswami and Hardik Adesara (Dharmsinh Desai University, India)

Signatures are the most widely used biometric identity for verification of a person. The signature has long been legally accepted as a mark of identification and authorization in almost all commercial, social and jurisdictional documents. Signature verification is the process of automatically recognizing a human handwritten signature and differentiating between original and forged signatures. In this research, we have used low-level stroke features, originally proposed for the recognition of printed Gujarati text, for offline handwritten signature verification. Experiments were performed on the ICDAR 2009 Signature Verification Competition dataset, which contains both genuine and forged signatures. Recognition is performed using a Support Vector Machine (SVM) classifier with 3-fold cross validation. The Equal Error Rate (EER) achieved is 15.59, which is comparable with the ICDAR 2009 Signature Verification Competition results.
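The Equal Error Rate reported above is the operating point where the false accept and false reject rates coincide. A simple threshold-sweep estimate over similarity scores is sketched below; this is illustrative only, not the competition's evaluation code, and the score lists are hypothetical.

```python
def equal_error_rate(genuine, forged, steps=1000):
    """Sweep a decision threshold over similarity scores and return the
    rate at the point where FAR and FRR are closest (the EER estimate)."""
    lo = min(genuine + forged)
    hi = max(genuine + forged)
    best = None
    for i in range(steps + 1):
        t = lo + (hi - lo) * i / steps
        frr = sum(1 for s in genuine if s < t) / len(genuine)   # false rejects
        far = sum(1 for s in forged if s >= t) / len(forged)    # false accepts
        gap = abs(far - frr)
        if best is None or gap < best[0]:
            best = (gap, (far + frr) / 2)
    return best[1]
```

With perfectly separable scores the estimate is 0; real verification systems, like the one above with EER 15.59, trade off the two error rates at a nonzero crossing point.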

S11.2 DPCM Block-based Compressed Sensing With Frequency Domain Filtering and Lempel-Ziv-Welch Compression
Soham Bhattacharjee (Indian Institute of Engineering Science and Technology, Shibpur, India); Saikat Kundu Chowdhury (Indian Institute of Engineering Science and Technology, India); Shrayan Das (Indian Institute of Technology, Kharagpur, India); Ankita Pramanik (Indian Institute of Engineering Science & Technology, Shibpur, India)

Block Compressed Sensed (BCS) images reconstructed by the Smoothed Projected Landweber (SPL) equations are severely degraded in visual quality. This work focuses on removing the noise present in the BCS-SPL reconstructed image. To remove the noise, its nature is studied first, and a suitable frequency domain filter to mitigate it is proposed. Differential Pulse Coded Modulation coupled with Block Compressed Sensing and the proposed filtering shows significant improvement over many similar techniques in which smoothing filters, or no filters, are generally used. Along with Differential Pulse Coded Modulation, this work proposes the use of the Lempel-Ziv-Welch coding technique for further compression of the data. With the incorporation of LZW, significant compression is achieved for medical images compared to other coding techniques.
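The Lempel-Ziv-Welch step can be illustrated with the textbook string-table algorithm, shown here on text for clarity; the paper applies it to image data, and this sketch is not the authors' implementation.

```python
def lzw_compress(data):
    """Textbook LZW: grow a dictionary of seen strings, emit integer codes."""
    table = {chr(i): i for i in range(256)}
    next_code = 256
    current = ""
    out = []
    for ch in data:
        if current + ch in table:
            current += ch
        else:
            out.append(table[current])
            table[current + ch] = next_code
            next_code += 1
            current = ch
    if current:
        out.append(table[current])
    return out

def lzw_decompress(codes):
    """Rebuild the dictionary on the fly, handling the code-not-yet-known case."""
    table = {i: chr(i) for i in range(256)}
    next_code = 256
    prev = table[codes[0]]
    out = [prev]
    for code in codes[1:]:
        entry = table[code] if code in table else prev + prev[0]
        out.append(entry)
        table[next_code] = prev + entry[0]
        next_code += 1
        prev = entry
    return "".join(out)
```

Because the decoder reconstructs the same dictionary, no table has to be transmitted, which is what makes LZW attractive as a final lossless compression stage.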

S11.3 An Efficient Hybrid Scheme for Key Frame Extraction and Text Localization in Video
Monika Singh (ITM University, Gurgaon, India); Amanpreet Kaur (The NORTHCAP University, India)

Efficient algorithms for caption text and scene text detection in video sequences are highly in demand in the area of multimedia indexing and data retrieval. Due to challenges like low resolution, low contrast, complex backgrounds and texts with multiple orientations/styles/colors/alignments, scene text extraction from video images is undoubtedly the more challenging task. In this paper, a method is proposed to efficiently extract key frames from videos based on color moments; text localization is then done only on the key frames. Since the text information does not change with each frame, performing text extraction only on key frames helps reduce the computational/processing time of the algorithm. Further, this paper proposes a hybrid robust method to localize scene and graphic text in the video frames using the 2-D Haar discrete wavelet transform (DWT), a Laplacian of Gaussian filter and the maximum gradient difference method. The DWT provides a fast decomposition of the image into an approximation and three detail components. The three detail components contain information about the vertical, horizontal and diagonal edges of the image, which is used to easily differentiate text from the rest of the image. The maximum gradient difference method is used to further refine the text localization process, and the gradient difference magnitude is used in the thresholding process. A dynamic thresholding technique is used to convert the images into binary form. Since this technique obtains different threshold values for different images, it can be used for automatic text localization in video sequences. Two mask operators are employed to obtain an equation which, when applied to each pixel, provides the intended threshold value. False positives are eliminated using morphological operations, and connected component analysis is done to finally localize the text. The comparison metrics in the results show that the proposed method gives good performance in terms of detection rate, false alarm rate and misdetection rate.
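The one-level 2-D Haar DWT decomposition into an approximation and three detail sub-bands, as used above, can be sketched as follows. Normalization and sub-band naming conventions vary between implementations; this sketch uses a 1/4 scaling and is not the paper's code.

```python
def haar2d(img):
    """One-level 2-D Haar DWT on a list-of-lists grey image with even
    dimensions: returns the approximation (LL) and the detail sub-bands
    carrying vertical-, horizontal- and diagonal-edge information."""
    h, w = len(img), len(img[0])
    LL = [[0.0] * (w // 2) for _ in range(h // 2)]
    LH = [[0.0] * (w // 2) for _ in range(h // 2)]
    HL = [[0.0] * (w // 2) for _ in range(h // 2)]
    HH = [[0.0] * (w // 2) for _ in range(h // 2)]
    for i in range(0, h - 1, 2):
        for j in range(0, w - 1, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            LL[i // 2][j // 2] = (a + b + c + d) / 4.0  # local average
            LH[i // 2][j // 2] = (a - b + c - d) / 4.0  # vertical edges
            HL[i // 2][j // 2] = (a + b - c - d) / 4.0  # horizontal edges
            HH[i // 2][j // 2] = (a - b - c + d) / 4.0  # diagonal edges
    return LL, LH, HL, HH
```

Text regions produce strong responses in all three detail sub-bands simultaneously, which is why combining them helps separate text from background.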

S11.4 Adaptive Digital Scan Variable Pixels
Sherin Sugathan (University of Bergen, Norway); Reshma Scaria (Enview Research & Development Labs, India); Alex James (IIITMK, India)

The square and rectangular shape of the pixels in the digital images for sensing and display purposes introduces several inaccuracies in the representation of digital images. The major disadvantage of square pixel shapes is the inability to accurately capture and display the details in the objects having variable orientations to edges, shapes and regions. This effect can be observed by the inaccurate representation of diagonal edges in low resolution square pixel images. This paper explores a less investigated idea of using variable shaped pixels for improving visual quality of image scans without increasing the square pixel resolution. The proposed adaptive filtering technique reports an improvement in image PSNR.
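The PSNR figure of merit used above is computed as 10 log10(peak^2 / MSE). A minimal implementation, with an 8-bit peak value assumed for illustration:

```python
import math

def psnr(original, processed, peak=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-sized grey images
    given as lists of rows; higher means closer to the original."""
    flat_o = [p for row in original for p in row]
    flat_p = [p for row in processed for p in row]
    mse = sum((a - b) ** 2 for a, b in zip(flat_o, flat_p)) / len(flat_o)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak * peak / mse)
```

Reporting PSNR at a fixed pixel count is what lets the paper claim a visual-quality gain without increasing the square-pixel resolution.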

S11.5 Latent Fingerprint Preprocessing: Orientation Field Correction using Region Wise Dictionary
Sachin Kumar and Leela Rengaraj (National Institute of Technology Tiruchirappalli, India)

Latent fingerprint images have been extensively used by law enforcement agencies in investigating crime scenes, and the information obtained is used as evidence against criminals in court. Although important breakthroughs have already been made in plain biometric recognition, identifying biometrics such as faces in CCTV footage and latent fingerprints in uncontrolled, uncooperative and hostile environments is still an open research problem. Poor quality, lack of clarity and the absence of a proper mechanism make latent fingerprint preprocessing one of the most persistent and challenging problems in extracting reliable features. Dictionary-based learning techniques have given significant results, in contrast to conventional orientation field estimation methods, by reconstructing an orientation field to enhance the latent image. The orientation field is corrected using orientation patches from good quality fingerprints in a region-wise dictionary. This paper proposes a fresh idea: constructing the dictionary region-wise to correct the distorted orientation field in the latent image. To verify the accuracy of the enhanced image, a statistical evaluation has been performed with good results. This study concentrates on the latent fingerprint preprocessing module as a step towards reliable and efficient (optimal) latent fingerprint identification.

S11.6 Method For Characterize Landslide Caused By Heavy Rainfall By Using Smoothing Method
Sayali Kokane (Savitribai Phule Pune University, India); Rohini Agawane (KJ College of Engineering and Management Research, Pune, India)

Landslides are natural phenomena of mass movement on the earth's surface. Heavy rainfall and earthquakes are the two primary triggers of landslides, and the distribution of landslide area is their most essential quantitative parameter. Accordingly, the motivation behind this research is to characterize the extent and spatial distribution of rainfall-induced landslides as compared with earthquake-triggered ones. Owing to recurring episodes of rainfall and earthquakes, such events are a constant danger to people's lives. In this paper, the interpretation of the data is evaluated against identification requirements. Multisource high-resolution data, such as SPOT satellite imagery, Light Detection and Ranging (LiDAR) data and aerial orthophotos, were used to build the attribute space for landslide research. Landslides were recognized by an object-oriented strategy combining edge-based segmentation and a Support Vector Machine (SVM) method, and the delineation results are assessed against those obtained by manual interpretation. Two cases, the Malin village landslide and the heavy rainfall in Uttarakhand, are examined. Both cases show that the object-based SVM method is superior to a pixel-based framework in classification accuracy.

S11.7 A Quantum Bi-Directional Self-Organizing Neural Network (QBDSONN) for Binary Image Denoising
Debanjan Konar (SRM University - AP & Indian Institute of Technology Delhi, India); Siddhartha Bhattacharyya (RCC Institute of Information Technology, India); Nibaran Das (Jadavpur University, India); Bijaya Panigrahi (Indian Institute of Technology - Delhi, India)

A Quantum Bi-directional Self-Organizing Neural Network (QBDSONN) architecture suitable for binary image denoising in real time is proposed in this article. It is composed of three inter-connected layers of neurons (represented by qubits) based on a second-order neighborhood topology, known as the input, intermediate and output layers. It does not use any quantum back-propagation algorithm for the adjustment of its interconnection weights; instead, it resorts to a counter-propagation of the quantum states of the intermediate and output layers. In the proposed architecture, the inter-connection weights and activation values are represented by rotation gates. The quantum neurons in each layer of the network follow a cellular network architecture and are fully intra-connected to each other. QBDSONN self-organizes the quantized input image information by means of the counter-propagation of the quantum network states of the intermediate and output layers. A quantum measurement at the output layer collapses the superposition of quantum states of the processed information, thereby yielding the desired outputs once the network attains stability. Applications of QBDSONN are demonstrated on the denoising of a synthetic image and a real-life spanner image with different degrees of uniform and Gaussian noise. Comparative results indicate that QBDSONN outperforms its classical counterpart in terms of time, while retaining the shapes of the denoised images with great precision.

S11.8 Feature Extraction and LDA based Classification of Lung Nodules in Chest CT scan images
Taruna Aggarwal, Asna Furqan and Kunal Kalra (Guru Gobind Singh Inderprastha University, India)

This paper presents a computational system for the detection and classification of lung nodules from chest CT scan images. In this study we consider the case of primary lung cancer. Optimal thresholding and gray-level characteristics are used for segmentation of lung nodules from the lung volume area. After detection of lung mass tissue, geometrical features are extracted. Simple image processing techniques such as filtering and morphological operations are used on CT images collected from the Cancer Imaging Archive database to make the study effective and efficient. To distinguish between nodules and normal pulmonary structures, geometrical features are combined with an LDA (linear discriminant analysis) classifier. The GLCM technique is used for calculating statistical features. The results show that the proposed methodology successfully detects nodules and distinguishes them from normal anatomical structures based on geometrical, statistical and gray-level characteristics, achieving 84% accuracy, 97.14% sensitivity and 53.33% specificity.
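
The abstract does not give implementation details for the GLCM statistics; as an illustrative sketch (the function names and the two Haralick descriptors shown are assumptions, not taken from the paper), a normalized co-occurrence matrix and two of its standard statistics can be computed as follows:

```python
def glcm(image, levels, dx=1, dy=0):
    """Normalized Gray-Level Co-occurrence Matrix for one pixel offset."""
    h, w = len(image), len(image[0])
    m = [[0.0] * levels for _ in range(levels)]
    total = 0
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                m[image[y][x]][image[ny][nx]] += 1
                total += 1
    return [[v / total for v in row] for row in m]

def contrast(m):
    # Weighted by squared gray-level difference: high for sharp transitions.
    return sum(m[i][j] * (i - j) ** 2
               for i in range(len(m)) for j in range(len(m)))

def energy(m):
    # Sum of squared entries: high for homogeneous textures.
    return sum(v * v for row in m for v in row)
```

Such statistics, stacked per region, form the feature vector fed to a classifier.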

S11.9 Image Segmentation using Thresholding for Cell Nuclei Detection of Colon Tissue
Archana Nawandhar (CMRIT, VTU, India); Lakshmi Yamujala (Centre for Development of Telematics, India); Navin Kumar (Amrita University & School of Engineering, India)

In this paper, an image segmentation process using thresholding techniques is discussed. We focus our study on nuclei detection in Hematoxylin and Eosin (HE) stained colon tissue for cancer detection. Detecting cancer cells at an early stage is very important, and several studies are being carried out. In medical imaging, the processing of microscopic tissue images, and especially the detection of cell nuclei, is increasingly done with digital imagery techniques. However, the many available techniques vary in complexity and quality of detection. In this paper, different thresholding techniques are applied to HE-stained colon tissue to detect cell nuclei in the image. In the proposed method, a binary mask of the same size as the original image is formed, in which the centres of cell nuclei are represented in white and the background in black. This mask is superimposed over the original image, and the outcome is a segmented image with the cell nuclei detected. The process is relatively simple and the results are encouraging.
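
The abstract does not name the specific thresholding techniques compared; as a hedged illustration, Otsu's classic method, a standard choice for this kind of global thresholding, can be sketched in pure Python (the function name and histogram size are assumptions):

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(levels))
    sum_b = 0.0   # cumulative intensity of the background class
    w_b = 0       # background pixel count
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b
        if w_f == 0:
            break
        sum_b += t * hist[t]
        mu_b = sum_b / w_b
        mu_f = (sum_all - sum_b) / w_f
        var = w_b * w_f * (mu_b - mu_f) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

The binary mask described in the abstract is then simply `1` where a pixel exceeds the threshold and `0` elsewhere.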

S11.10 De-noising of Ultrasound Image using Discrete Wavelet Transform by Symlet Wavelet and Filters
Ashwani Kumar Yadav (Amity University Rajasthan, India); Ratandeep Roy (Amity University Rajasthan, India); Praveen Archek (Amity University, India); Cheruku. Kumar (Amity University Rajasthan, India); Shailendra Dhakad (BITS PILANI K K BIRLA GOA CAMPUS & Bits Pilani, India)

Noise is a major factor degrading the quality of various medical images (MRI, CT scan, ultrasound, etc.). Speckle noise is the most common noise affecting imaging systems, including medical ultrasound images. For diagnosis it is essential to recover the useful data in its original form, and transformations are required for this. The Discrete Wavelet Transform (DWT) is a widely used and effective technique for image denoising. This paper presents a study of various techniques for the removal of speckle noise from medical images based on wavelet multiresolution analysis and filtering. A comparative analysis of different methods, DWT with different wavelet families (Haar and Symlet) combined with Wiener and median filtering, is presented. Results are compared in terms of PSNR, Mean Squared Error (MSE) and processing time.
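
The general transform-threshold-reconstruct pipeline behind wavelet denoising can be sketched in one dimension with the Haar wavelet (the paper works on 2-D images with Symlet wavelets and Wiener/median filters; this minimal 1-D Haar sketch with soft thresholding is only an illustration of the principle):

```python
import math

def haar_forward(signal):
    """One level of the orthonormal 1-D Haar DWT: averages and differences."""
    approx, detail = [], []
    for i in range(0, len(signal) - 1, 2):
        a, b = signal[i], signal[i + 1]
        approx.append((a + b) / math.sqrt(2))
        detail.append((a - b) / math.sqrt(2))
    return approx, detail

def haar_inverse(approx, detail):
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) / math.sqrt(2))
        out.append((a - d) / math.sqrt(2))
    return out

def soft_threshold(coeffs, t):
    # Shrink small (mostly noise-driven) detail coefficients towards zero.
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

def denoise(signal, t):
    a, d = haar_forward(signal)
    return haar_inverse(a, soft_threshold(d, t))
```

With a threshold of zero the transform reconstructs the signal exactly; increasing the threshold suppresses detail coefficients dominated by noise.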

S11.11 Train Rolling Stock Segmentation with Morphological Differential Gradient Active Contours
Polurie Venkat Vijaya Kishore (K L University College of Engineering & K L University, India); Ch Raghava Prasad (KL University, India)

Rolling examination, as it is called by the railway maintenance staff of Indian Railways, is the visual and auditory examination of the moving bogies of a train for defects. The undercarriage moving parts of the train are called the rolling stock. This paper attempts to segment the rolling stock from video frames for further analysis. It focuses on the Chan-Vese (CV) active contour model for segmenting the rolling stock, and presents a modified version of Chan-Vese using a morphological differential gradient (CVMDG). The rolling stock videos were captured under four different lighting conditions near Guntur railway station in India. For better segmentation of the rolling stock, video frames are contrast-enhanced with virtual-exposure wavelet image fusion. The segmented rolling stock is compared with a ground truth model to assess the usability of the proposed method for rolling stock segmentation.
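
The morphological differential gradient at the heart of CVMDG is not specified in the abstract; a common definition, grayscale dilation minus erosion over a small window, can be sketched as follows (the window radius and function name are assumptions):

```python
def morph_gradient(image, r=1):
    """Morphological gradient: dilation minus erosion over a (2r+1)^2 window.

    Flat regions give 0; pixels near intensity edges give large values,
    which is what drives an edge-seeking active contour.
    """
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            win = [image[j][i]
                   for j in range(max(0, y - r), min(h, y + r + 1))
                   for i in range(max(0, x - r), min(w, x + r + 1))]
            out[y][x] = max(win) - min(win)
    return out
```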

S11.12 Crowd Density Analysis and Tracking
Polurie Venkat Vijaya Kishore (K L University College of Engineering & K L University, India); R Rahul, K Sravya and A. S. C. S. Sastry (K L University, India)

Crowd Density Analysis (CDA) aims to compute the concentration of a crowd in surveillance videos. The core of this paper is to estimate crowd concentrations using crowd feature tracking with optical flow. Local features are extracted per frame using the Features from Accelerated Segment Test (FAST) algorithm. Optical flow tracks the features between frames of the surveillance video, identifying the crowd features in consecutive frames. A kernel density estimator then computes the crowd density in each successive frame. Finally, individual people are tracked using the estimated flows. The drawback of this method, shared by most estimation methods in this class, is reliability; hence testing with three popular optical flow models is carried out to find the best optical flow: Horn-Schunck (HSOF), Lucas-Kanade (LKOF) and correlation optical flow (COF). Five feature extraction methods were tested along with the three optical flow methods. FAST features with Horn-Schunck estimate crowd density better than the remaining methods, and the people-tracking application with this algorithm gives good tracks compared to the other methods.
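
A kernel density estimate over tracked feature points can be sketched as follows (the Gaussian kernel and bandwidth are assumptions; the abstract does not state which kernel the paper uses):

```python
import math

def kde_density(points, query, bandwidth=1.0):
    """Gaussian kernel density estimate at `query` from tracked 2-D points."""
    n = len(points)
    norm = 1.0 / (n * 2 * math.pi * bandwidth ** 2)
    total = 0.0
    for (px, py) in points:
        d2 = (query[0] - px) ** 2 + (query[1] - py) ** 2
        total += math.exp(-d2 / (2 * bandwidth ** 2))
    return norm * total
```

Evaluating this on a grid of query points per frame yields a density map whose peaks mark crowded regions.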

S11.13 Classification of Handwritten Gujarati Numeral
Archana Vyas and Mukesh M Goswami (Dharmsinh Desai University, India)

This paper addresses the problem of recognizing handwritten numerals for the Gujarati language. Three methods are presented for feature extraction: one belongs to the spatial domain and the other two belong to the transform domain. In the first technique, a new spatial-domain method is proposed based on the Freeman chain code. This method obtains the global direction by considering an n x n neighbourhood, thus eliminating the noise that occurs due to local direction. In the second and third methods, 85-dimensional Fourier descriptors and Discrete Cosine Transform coefficients were computed and treated as feature vectors. A comparative analysis has been done for these three methods, tested with three different classifiers, namely K-Nearest Neighbour, Support Vector Machine and Back-Propagation Neural Network. Experimental results were evaluated using 10-fold cross validation. The highest recognition rates obtained for the full data set of 3000 digits are 85.67%, 93.60% and 93.00% using the modified chain code, DFT and DCT respectively.
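
The Freeman chain code underlying the first method can be illustrated as follows; this is the standard 8-directional code over an already-traced contour, not the paper's modified n x n variant:

```python
# 8-directional Freeman codes: 0=E, 1=NE, 2=N, ..., 7=SE (y axis pointing up).
DIRS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
        (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(contour):
    """Freeman chain code for a list of 8-connected contour points."""
    codes = []
    for (x0, y0), (x1, y1) in zip(contour, contour[1:]):
        codes.append(DIRS[(x1 - x0, y1 - y0)])
    return codes
```

A histogram of these direction codes is a typical shape feature for digit classification.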

S11.14 Fractal Image Compression for HD Images With Noise Using Wavelet Transforms
Prashanth N (Sai Vidya Institute of Technology & VTU, India); Arun Vikas Singh (PES Institute of Technology, India)

In this paper, fractal encoding of images using the wavelet transform is proposed. Fractal encoding is used to produce images of good visual quality in less encoding time. Quadtree partitioning is applied iteratively during encoding, and fractal coding is executed for JPEG, PNG and BMP HD images under two conditions: with noise added to the input image and without, and the results are computed for both conditions. A median filter is used after decoding to remove the noise present in the image. Results show that PSNR is significantly improved and that the proposed method is able to produce images of good visual quality with a compression ratio of 21.33%.
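
The quadtree partitioning step can be sketched as follows (the variance-based split criterion and the thresholds are assumptions; the fractal coding of the resulting blocks is omitted):

```python
def quadtree(image, x, y, size, max_var, min_size, leaves):
    """Recursively split a square block until its variance is small enough.

    Smooth blocks stay large; detailed blocks are subdivided, so more
    range blocks (and bits) are spent where the image has structure.
    """
    vals = [image[j][i] for j in range(y, y + size) for i in range(x, x + size)]
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    if var <= max_var or size <= min_size:
        leaves.append((x, y, size))
        return
    h = size // 2
    for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
        quadtree(image, x + dx, y + dy, h, max_var, min_size, leaves)
```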

S11.15 A Vision Based Motion Estimation in Underwater Images
Pushpendra Kumar (Indian Institute of Technology Roorkee, India); Sanjeev Kumar (IIT Roorkee, India); Balasubramanian R (IIT Roorkee, India)

Motion estimation from underwater images is an active research area of vision systems devoted to robotic applications. In this paper, a vision-based system for tracking the motion of moving objects is presented. The aim is to achieve optimal performance against radiometric effects such as non-uniform lighting, blurring and noise. Moving object detection is performed by means of optical flow, which is determined by minimizing a variational functional. The proposed functional combines the global model of Horn and Schunck (1981) and the classical model of Nagel and Enkelmann (1986) as a new regularization functional. The formulated variational functional is based on total variation regularization and the L1 norm, and is solved by an efficient numerical scheme, which makes the model more robust and preserves discontinuities. Finally, a number of experimental results on several underwater images verify the validity of the proposed algorithm.

S17: S17-Energy Efficient Wireless Communications and Networking/Sensor Networks, MANETs and VANETs- II

Room: 305
Chairs: Ravi Kishore Kodali (National Institute of Technology, Warangal, India), Bheemarjuna Reddy Tamma (IIT Hyderabad, India)
S17.1 GAE3BR: Genetic Algorithm based Energy Efficient and Energy Balanced Routing Algorithm for Wireless Sensor Networks
Ram Narayan Shukla (OPJIT, Raigarh); Suneet Kumar Gupta (Bennett University Gr Noida, India); Arvind Chandel (OPJIT Raigarh, India); Jainendra Jain and Ashok Bhansali (OPJIT, Raigarh, India)

An important role of Wireless Sensor Networks (WSNs) is to sense a region and forward the sensed data to a remotely placed Base Station (BS), either directly or using multihop communication. One of the most important constraints in such networks is energy consumption. In this paper we propose a novel approach to minimize and balance energy consumption. The proposed algorithm is based on a Genetic Algorithm (GA) and generates a routing scheme that establishes a trade-off between energy efficiency and energy balancing. The algorithm addresses energy consumption by minimizing the total distance covered in a round; energy balancing is handled by diverting the incoming traffic of relay nodes with low residual energy to relay nodes with high residual energy. Based on the current network state, the algorithm quickly computes a new routing schedule. The experimental results show that the proposed algorithm performs better than existing techniques.
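
The abstract's trade-off between total route distance and energy balancing suggests a GA fitness of roughly the following shape (the inverse-residual penalty term and the `alpha` weight are illustrative assumptions, not the paper's formulation):

```python
import math

def fitness(routing, coords, residual, alpha=1.0):
    """Lower is better: total hop distance plus a load-imbalance penalty.

    `routing` maps each node to its next-hop relay; `coords` holds node
    positions and `residual` the relays' residual energies (hypothetical
    inputs, not taken from the paper).
    """
    dist = sum(math.hypot(coords[n][0] - coords[r][0],
                          coords[n][1] - coords[r][1])
               for n, r in routing.items())
    # Penalize chromosomes that route traffic through low-energy relays.
    penalty = sum(1.0 / residual[r] for r in routing.values())
    return dist + alpha * penalty
```

A GA would evolve a population of `routing` chromosomes, selecting those with the lowest fitness.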

S17.2 Energy Efficient Routing in Multi-level LEACH for WSNs
Ravi Kishore Kodali (National Institute of Technology, Warangal, India)

Energy-efficient routing is of paramount importance in any wireless sensor network (WSN) so that the network can sustain itself without human intervention. Various energy-efficient protocols have been proposed, multi-level Low Energy Adaptive Clustering Hierarchy (LEACH) being one of them. The multi-level LEACH protocol involves the election of cluster heads at various hierarchical levels. If the deployment area is large, the farther cluster heads cannot reach the base station directly; in such cases an energy-efficient routing technique is needed for communication among cluster heads to forward the data towards the base station. In this multi-level LEACH protocol, energy consumption is reduced by adopting routing mechanisms based on multi-hop and inverted-tree structures. This work deals with multi-level LEACH comprising 1L-LEACH and 2L-LEACH. Directed diffusion carried out in 1L-LEACH and a flat topology are compared with flooding; in directed diffusion, routing between a source and a sink optimises the path taken. The NS-3 simulation platform has been used and the results are presented.
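
The cluster-head election at the core of every LEACH variant uses the standard randomized threshold T(n) = p / (1 - p (r mod 1/p)), where p is the desired cluster-head fraction and r the round number; a minimal sketch (function names are assumptions):

```python
import random

def leach_threshold(p, r):
    """Standard LEACH threshold T(n): the probability that an eligible
    node elects itself cluster head in round r."""
    return p / (1 - p * (r % round(1 / p)))

def elect_cluster_head(p, r, eligible):
    """A node becomes cluster head if its uniform draw falls below T(n);
    `eligible` means it has not already served as head this epoch."""
    return eligible and random.random() < leach_threshold(p, r)
```

The threshold grows over the epoch, so nodes that have not yet served become increasingly likely to be elected, rotating the energy-hungry head role.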

S17.3 Middle Position Dynamic Energy Opportunistic Routing For Wireless Sensor Networks
Mayank Sharma (Jaypee University of InformationTechnology, India); Yashwant Singh (Jaypee University of Information Technology, India)

Wireless Sensor Networks (WSNs) are networks in which many tiny sensor nodes are deployed in a target region. The sensor nodes collect information about physical or chemical phenomena and transfer this information towards the base station for further processing. To achieve high throughput over unreliable wireless links, Opportunistic Routing (OR) makes all the sensor nodes along the path collaborate while forwarding data packets, taking advantage of the broadcast nature of WSNs for multi-hop communication. In this paper, Middle Position Dynamic Energy Opportunistic Routing (MDOR) is proposed for efficient multi-hop communication between a source and destination pair in a WSN. MDOR uses dynamic energy consumption when a packet is transmitted between nodes. With dynamic energy consumption there is a trade-off between end-to-end delay and network lifetime: the average end-to-end delay of the Energy Efficient Opportunistic Routing (EEOR) protocol is higher than that of the Multi-hop Optimal position Opportunistic Routing (MOOR) protocol, but the network lifetime of EEOR is better. The proposed protocol optimizes both end-to-end delay and network lifetime through the use of dynamic energy consumption.

S17.4 Energy Efficient Detection of Malicious Nodes Using Secure Clustering With Load Balance and Reliable Node Disjoint Multipath Routing in Wireless Sensor Networks
Pavithra Bhat (Visvesvaraya Technological University, Belgaum & Cambridge Institute of Technology, Bangalore, India); Satyanarayan Reddy Kalli (Cambridge Institute of Technology & Visvesvaraya Technological University, India)

To increase network lifetime and address the security bottlenecks induced by camouflaged malicious nodes in a Wireless Sensor Network, residual energy and trust values are used to form secure clusters; network lifetime is further increased by using backup nodes to balance the load among clusters; and a reliable node-disjoint multipath route discovery algorithm is proposed. Simulation results on the NS2 platform show that the proposed method can minimize the effect of intrusion attacks on the sensor network, improving reliability and prolonging the lifetime of the sensor network by balancing the trust values and residual energy of the sensor nodes.

S17.5 Energy Efficient Cognitive Cross-layer MAC Protocol
Deepti Singhal (MVJCE, Bangalore, India); Rama Garimella (IIIT Hyderabad, India)

The ever-increasing demand for communication bandwidth and the inefficient usage of the existing spectrum have led to spectrum scarcity; in this light, spectrum should be managed as a scarce resource. For radio communication systems, efficient utilization of spectrum is the key requirement. Usage of the existing spectrum can be improved through opportunistic access to the licensed bands without interfering with the primary users, which introduces the concepts of dynamic spectrum access and cognitive radio. This paper proposes an energy-efficient medium access protocol for cognitive networks. For performance evaluation, the proposed solution is implemented in NS2 and a comparative analysis is carried out against an existing solution from the literature, validating that the proposed solution gives better results.

S17.6 Weightage based Secure Energy Efficient Clustering Algorithm in MANET
Kanwaljeet Kaur (Punjab Technical University, Jalandhar & Global Institute of Management & Emerging Technology, Amritsar, India); Jaspinder Singh (Punjab Technical University, India); Er Himani (Punjab Technical University, Jalandhar & Global Institute of Management & Emerging Technology, Amritsar, India)

Research in Mobile Ad hoc Networks remains attractive due to the desire to obtain better performance and scalability. A mobile ad hoc network is an unstructured, self-forming radio mesh of moving nodes, with no centralized mechanism for the interaction of mobile nodes. In a MANET, nodes are mobile in nature and it becomes challenging to handle them while preserving energy; misbehavior, mobility and congestion are factors that always degrade network performance. In this paper we propose WSEEC (Weightage based Secure Energy Efficient Clustering), an algorithmic approach towards energy-efficient clustering and node security. The aim of this algorithm is to form secure and energy-efficient cluster heads, where a trust value for each node is calculated to assess its behavior. The performance of the proposed WSEEC algorithm is compared with WCA using five metrics: network lifetime, energy consumption, throughput, delay, and packet delivery ratio.

S17.7 Cross Layer Best Effort QoS Aware Routing Protocol for Ad hoc Network
Mahadev A Gawas (Directorate of Higher Education Goa India & Goa University, India)

Mobile Ad Hoc Networks (MANETs) are self-organizing and adaptive wireless networks. MANETs currently face an advanced challenge in providing Quality of Service (QoS) support for real-time application data streaming through the network. These applications are delay sensitive and are affected by congestion in the network; performance degrades due to high data loss, frequent link breakage and excessive retransmission, and the strict layered network structure makes it even more difficult to provide solutions for such issues. We propose a protocol that cooperates between adjacent layers and performs cross-layer communication: a novel QoS routing protocol for MANETs called Cross Layer Best effort QoS aware routing (CLBQ), which considers link quality, data rates and MAC delay as the QoS parameters. The proposed protocol implements cross-layer interaction between the PHY, MAC and network layers. The simulation results show best-effort QoS service to the network in route discovery and data transfer; our analysis shows that the protocol has improved throughput and low network overhead.

S17.8 A Fault Tolerant Approach to Extend Network Life Time of Wireless Sensor Network
Ashish B. Jirapure (PCE Nagpur, India); Ravindra Kshirsagar (RTM Nagpur University, Nagpur (M.S.), India)

In a wireless sensor network, the delivery of data packets from source to destination may fail for various reasons, chiefly the failure-prone environment of such networks: topology changes, node failure due to battery exhaustion, or breakdown of the communication module in a wireless node, all resulting in link failure. This paper addresses the major problem of link failure due to node failures in WSNs, with the aim of providing robust solutions satisfying the stringent QoS-based end-to-end requirements of communication networks. We propose a new solution by modifying the existing extended fully distributed cluster-based routing algorithm (EFDCB). In the proposed algorithm, faulty nodes, or nodes more prone to failure, in every cluster of the network are identified through data exchange and mutual testing among neighbor nodes. When the path between source and destination is established, these faulty nodes are excluded from the path selection process, so a more stable path, less prone to failure, is formed. The performance of this modified fault-tolerant fully distributed cluster-based routing algorithm is evaluated by simulation in the NS2 environment. Simulation results show that it performs better than the existing algorithm, provides a novel solution for fault detection and fault management along QoS paths, and achieves a high degree of fault tolerance.

S17.9 Low Power Wireless Health Monitoring System
Vinayak Kini (Vivekanand Education Society's Institute Of Technology, India); Chinmay Patil (Vivekanand Education Society's Institute of Technology, India); Siddhesh Bahadkar, Sharvil Panandikar, Akhilesh Sreedharan and Abhay Kshirsagar (Vivekanand Education Society's Institute Of Technology, India)

Low Power Wireless Health Monitoring System (LoWHMS) is a sensor network that aims to monitor the vital signs of a patient remotely. It provides real-time feedback to medical personnel in order to alert them when life-threatening changes occur. The network is self-healing, so that it can reconfigure itself when network links are broken. Ultra-low-power microcontrollers are used to reduce power consumption drastically. LoWHMS is a low-cost solution that focuses on keeping doctors frequently updated about the health status and vital signs of a patient. It also aims at eliminating physical delays arising due to a lack of facilities in a particular hospital.

S17.10 Energy Efficient QoS Aware MAC Layer Time Slot Allocation Scheme for WBASN
Tamanna Puri (NITTTR, Chandigarh, India); C. Rama Krishna (NITTTR, India); Navneet Kaur (Chandigarh University, Mohali, India)

With the technological advancements in the field of Wireless Sensor Networks (WSNs) in the last decade, a number of remote monitoring and health care applications have made their presence felt through implanted and wearable biosensors. These biosensors can be used for real-time patient monitoring, creating a Wireless Body Area Sensor Network (WBASN). Since the biosensors are equipped with limited batteries, energy-efficient schemes are required for WBASN protocols. The biosensor nodes are heterogeneous in terms of buffer size, traffic type, inter-arrival time and payload size, and the data they generate may have different requirements based on quality parameters: a biosensor node producing electroencephalography (EEG) readings may require reliable transmission, while another may generate delay-sensitive data. These differing requirements lead to the need for QoS-aware schemes in WBASNs. In this paper, an energy-efficient Quality of Service (QoS) aware time slot allocation scheme is proposed for the WBASN Media Access Control (MAC) layer. The scheme takes the heterogeneity of biosensor nodes into consideration; slot allocation is based on QoS-aware priority classes. Unlike most existing schemes, the time slot duration is traffic-adaptive, and variable-length slots are allocated based on the different payload sizes of the sensor nodes. The scheme improves energy efficiency by reducing the chances of packet drops and the energy wasted in idle listening, and sleep time for sensor nodes is increased, leading to lower energy consumption. The proposed time slot allocation scheme is compared with the slot allocation scheme of the eMC-MAC protocol; simulation results show a significant improvement in energy efficiency over eMC-MAC.

S17.11 Energy Efficient m-level LEACH protocol
Ravi Kishore Kodali, Venkata Sai Kiran A and Govinda Swamy (National Institute of Technology, Warangal, India)

Wireless Sensor Networks (WSNs) comprise a large number of sensor nodes, which sense and measure parameters related to various physical phenomena and transmit the measured data towards the base station, with neighbouring nodes acting as relay nodes. To extend the lifetime of a WSN application, it is necessary to distribute the energy dissipated among the nodes evenly across the network and improve overall system performance. The network lifetime depends on the underlying routing protocol. This paper reviews various widely used energy-efficient routing protocols and presents a performance comparison of the direct transmission, MTE and LEACH protocols and of improved and multi-level LEACH protocols such as M-LEACH, DD-LEACH and TL-LEACH. This work also proposes an energy-efficient, improved multi-level LEACH protocol and a DD-TL-LEACH protocol. For simulation analysis, the NS-3 platform has been used.

S17.12 Efficient Bandwidth Utilization during Message Dissemination among Authentic Vehicles in VANET
Bidisha Bhabani (National Institute of Technology, Rourkela, India); Sulata Mitra (Indian Institute of Engineering Science and Technology, India)

Vehicular ad hoc networks perform crucial functions in road safety, detection of traffic accidents and reduction of traffic congestion. They allow vehicles on roads to disseminate messages for avoiding road accidents and congestion, enhancing driving safety and providing comfort for automotive users. Hence, efficient message dissemination among authentic vehicles is an important research issue, to ensure safe and reliable message communication among vehicles using the limited available bandwidth in a vehicular ad hoc network. Efficient message dissemination among authentic vehicles in a vehicular ad hoc network is proposed in the present work. Each vehicle generates a beacon message periodically and broadcasts it to its neighbors. Emergency (warning) messages are generated by a vehicle after the occurrence of an emergency (warning) event within its coverage area. A vehicle transmits emergency (warning) messages to its neighbors and receives emergency (warning) messages from its neighbors in multiplexed form to utilize the available bandwidth efficiently and to reduce network congestion. The performance of the proposed scheme is evaluated qualitatively and quantitatively, and it outperforms the existing schemes.

S17.13 Addressing Node Isolation Attack in OLSR Protocol
Balaji Shivankar (Rajarambapu Institute of Technology, India); Sandeep Thorat (Shivaji University, India)

A mobile ad hoc network (MANET) is self-configuring, highly dynamic with a changing topology, and infrastructure-less. MANETs have no fixed infrastructure or central access points and use no backbone infrastructure, while carrying secret and sensitive information. MANETs are used for designing specific types of applications, and providing security for such applications is one of the major tasks for researchers. In this paper we observe the network vulnerabilities of a proactive routing protocol. Analyzing an attack on the proactive Optimized Link State Routing (OLSR) protocol, we propose a reputation-based mechanism to prevent attacks. The mechanism is useful for securing the OLSR protocol against a denial-of-service attack, the node isolation attack. It is capable of finding whether a node is presenting correct network topology information or not by confirming its HELLO messages. Experimental simulation results show that the protocol is able to achieve a routing security mechanism while increasing throughput, packet delivery frequency and control overhead. Our proposed mechanism is lightweight because it does not involve high computational complexity for securing OLSR against node isolation attacks in MANETs; it requires few modifications and remains compatible with the OLSR protocol. Network parameters are evaluated using the NS-2 network simulator.

S17.14 Energy, Link Stability and Queue Aware OLSR for Mobile Ad hoc Network
Rohit Patil and Ashwini Patil (Rajarambapu Institute of Technology, India)

A Mobile Ad hoc Network (MANET) is formed of autonomous nodes (i.e., nodes are mobile in nature, with no central coordination). A MANET is infrastructure-less: there are no routers or access points in the network, and the nodes themselves act as routers. Although MANETs have many attractive features, their characteristics raise certain issues that degrade network performance. Due to mobility and the limited residual energy of nodes, selecting a stable and durable path for communication is a challenge, and the remaining queuing capacity also affects packet loss. Optimized Link State Routing (OLSR) is a proactive routing protocol for MANETs whose key concept is multi-point relays (MPRs), which find a path for each node in the network while reducing the number of broadcasts. To improve network performance, a new version of the original OLSR is proposed based on three new criteria for MPR selection: link stability, residual energy, and the queuing capacity of the node. Based on these criteria, the selected path is more stable and durable for communication in the network. Performance evaluation is done with the help of the NS2 simulator.

S13: S13-Pattern Recognition and Analysis

Room: 306
Chairs: Sharada Chougule (Finolex Academy of Management and Technology, Ratnagiri, India), Alex James (IIITMK, India)
S13.1 Palmprint Retrieval based on Match Scores and Decision-Level Fusion
Ilaiah Kavati (IDRBT & University of Hyderabad, India); Munaga V N K Prasad (IDRBT, India); Chakravarthy Bhagvati (University of Hyderabad, India)

This paper proposes a new indexing mechanism for biometric databases based on match scores between images. First, the proposed approach computes a match score for each image by comparing it against a preselected representative image. The computed match score of the image against the representative acts as its key in the database. The proposed approach uses these keys (i.e., match scores) to arrange the biometric images in sorted order, like traditional database records, so that a fast search is possible. During identification, the approach computes the key of the query (i.e., its match score against the representative image) and retrieves the set of images with similar keys from the database. We evaluated this approach with multiple representatives and extended it to multi-modal biometrics. The approach enrolls users dynamically without disturbing the existing system. Experimental results on the benchmark PolyU palmprint database show a significant performance improvement of the proposed system compared to existing retrieval methods.
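
Sorted match-score keys admit classic binary-search retrieval; a minimal sketch of the idea (the class and method names are assumptions, and the tolerance-window lookup is one interpretation of retrieving images with similar keys):

```python
from bisect import bisect_left, bisect_right, insort

class ScoreIndex:
    """Index biometric templates by match score against a representative."""
    def __init__(self):
        self.keys = []   # sorted match scores
        self.ids = []    # template ids, kept parallel to keys

    def enroll(self, template_id, score):
        # Dynamic enrollment: insert in sorted position, O(n) per insert.
        i = bisect_left(self.keys, score)
        self.keys.insert(i, score)
        self.ids.insert(i, template_id)

    def candidates(self, query_score, tol):
        # All templates whose key lies within +/- tol of the query key.
        lo = bisect_left(self.keys, query_score - tol)
        hi = bisect_right(self.keys, query_score + tol)
        return self.ids[lo:hi]
```

Only the returned candidate set then needs full one-to-one matching, which is the source of the speed-up.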

S13.2 Automatic Classification of Cotton Boll using Signature Curve and Boundary Descriptors
Sandeep Kumar (Indian Institute Of Information Technology and Management, Gwalior, India); Manish Kashyap (IIITM Gwalior, India); Yogesh Choudhary (ABV-Indian Institute Of Information Technology and Management, Gwalior, India); Swaraj Singh Pal and Mahua Bhattacharya (Indian Institute Of Information Technology and Management, Gwalior, India)

Correlating environmental features with image features of cotton bolls is a necessary step for pattern recognition, and translating those features into a form a machine can use is the main challenge in distinguishing mature cotton bolls from immature ones. The present work addresses this problem using shape-based features, with a fuzzy classifier introduced for decision making. Improper acquisition of cotton boll images, such as intense illumination or deep shadows (which are, of course, absent in natural settings), will produce incorrect results.

S13.3 Finger Vein Recognition Using Discrete Wavelet Packet Transform Based Features
Santosh Shrikhande (Swami Ramanand Teerth Marathwada University, Nanded & Sub-Centre, Latur, India); Hanumant Fadewar (Swami Ramanand Teerth Marathwada University, Nanded, India)

Finger vein biometrics has become a most promising recognition method due to its accuracy, reliability, and security. This paper discusses a novel technique for finger vein feature extraction based on the Discrete Wavelet Packet Transform (DWPT). A DWPT without HH-subband decomposition is applied to the 96x64 ROI of a finger vein image up to the third level. The average standard deviation and average energy of each decomposition level are used to create the feature vector database. Euclidean, City Block, and Canberra distance classifiers are used to classify the finger vein images. The performance of the proposed method is evaluated on the standard finger vein ROI database of Shandong University (SDUMLA). Experimental results show that the proposed method gives better results compared to the standard Discrete Wavelet Transform (DWT) and DWPT methods.
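The distance-based classification step can be illustrated with a toy nearest-neighbour sketch; the feature values and template names below are invented, whereas the real feature vectors come from DWPT subband statistics:

```python
def city_block(u, v):
    """L1 (Manhattan) distance between two feature vectors."""
    return sum(abs(a - b) for a, b in zip(u, v))

def canberra(u, v):
    """Canberra distance; zero-zero coordinate pairs are skipped."""
    return sum(abs(a - b) / (abs(a) + abs(b)) for a, b in zip(u, v) if a or b)

def classify(query, templates, dist=canberra):
    """Return the enrolled label whose template is nearest to the query."""
    return min(templates, key=lambda label: dist(query, templates[label]))

templates = {"finger_A": [2.0, 8.0, 1.0], "finger_B": [9.0, 1.0, 4.0]}
print(classify([2.1, 7.5, 1.2], templates))  # -> finger_A
```

Canberra's per-coordinate normalization makes it sensitive to small absolute differences near zero, which is one reason it is often tried alongside Euclidean and City Block for energy-style features.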

S13.4 Computer Based RR-Interval Detection System with Ectopy Correction in HRV Data
Nazneen Akhter (Babasaheb Ambedkar Marathwada University, India); Hanumant Gite (Dr Babasaheb Ambedkar Marathwada University, Aurangabad, India); Sumegh Tharewal and Karbhari Kale (Babasaheb Ambedkar Marathwada University, India)

Heartbeat rhythm contains information related to many physiological factors. The interbeat (RR) interval and its variability can be used effectively to understand the relation between the sympathetic and parasympathetic nervous systems. Because the duration between two consecutive heartbeats (the RR-Interval) changes from beat to beat, it forms a time series. This series contains many disease indicators, and there are reports that it can be used to predict expected cardiac disease well in advance. We designed and constructed a heartbeat detection and RR-Interval data acquisition system to measure and store HRV data. This data is prone to several types of noise that get in the way of drawing diagnostic conclusions; for reliable results, external noise, including ectopic beats, is therefore removed. The role of ectopic beat removal is examined using a Poincaré map and the Fourier transform. Typical collected data is presented and analyzed to demonstrate the role of ectopic beat removal.

S13.5 Adaptive Visual Tracking on Euclidean Space Using PCA
Shreenandan Kumar, Suman Kumari, Sucheta Patro, Tushar Shandilya and Anuja Acharya (KIIT University, India)

In this paper, we present a simple and elegant tracking algorithm that incrementally updates the covariance matrix descriptor using an update mechanism based on Principal Component Analysis (PCA) on a Euclidean subspace. The target window is represented by a covariance matrix descriptor computed from the features extracted within that window. The covariance matrix is independent of window size, so it can be compared across regions without being limited to a constant window size, and it has low dimensionality. We use the multivariate Hotelling's T2 test, which is based on the Mahalanobis distance, to detect the object. We also incorporate a PCA-based update mechanism to increase tracking efficiency over longer trajectories; it adapts effectively to both intrinsic and extrinsic variations. The experimental analysis shows the effectiveness of the proposed approach.
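The covariance descriptor of a target window can be sketched as follows, assuming each pixel is summarized by a small feature vector; this is a generic sample-covariance computation, not the paper's exact feature set:

```python
def covariance_descriptor(samples):
    """Sample covariance of per-pixel feature vectors (one row per pixel)."""
    n, d = len(samples), len(samples[0])
    mean = [sum(s[j] for s in samples) / n for j in range(d)]
    return [[sum((s[i] - mean[i]) * (s[j] - mean[j]) for s in samples) / (n - 1)
             for j in range(d)] for i in range(d)]

# Three pixels, each described by two features (e.g. intensity and gradient).
window = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
print(covariance_descriptor(window))  # -> [[4.0, 4.0], [4.0, 4.0]]
```

Note the descriptor's size is d x d regardless of how many pixels the window contains, which is exactly the size-independence the abstract relies on.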

S13.6 An Approach for Reducing Morphological Operator Dataset and Recognize Optical Character based on Significant Features
Ashis Pradhan (Sikkim Manipal Institute of Technology & Sikkim Manipal University, India); Mohan Pradhan and Amit Prasad (Sikkim Manipal Institute of Technology, India)

Pattern matching is useful for recognizing characters in a digital image. OCR is one such technique: it reads characters from a digital image and recognizes them. Line segmentation is initially used to identify characters in an image, later refined by morphological operations such as binarization, erosion, and thinning. This work discusses a recognition technique that defines a set of morphological operators based on their orientation within a character. These operators are further categorized into groups of similar shape but different orientation for efficient utilization of memory. Finally, characters are recognized according to the frequency of occurrence in the hierarchy of significant patterns of those morphological operators, compared against the existing database for each character.

S13.7 Features based classification of hard exudates in retinal images
Anup V Deshmukh (Vishwakarma Institute of Technology Pune, India); Tejas Patil and Sanika Patankar (Vishwakarma Institute of Technology, India); Jayant Kulkarni (Vishwakarma Institute of Technology, Pune, India)

Diabetes mellitus is a major disease spread across the globe. Long-standing diabetes mellitus causes a complication in the retina called Diabetic Retinopathy (DR), which results in visual loss and sometimes blindness. In this paper, we discuss a simple and effective algorithm for segmentation of the optic disk (OD) and of bright lesions such as hard exudates from color retinal images. Color fundus images are enhanced using a brightness transform function. A morphological operator along with the Circular Hough Transform (CHT) is used for optic disk segmentation. Further, a region growing technique based on local mean and entropy is applied to classify exudate and non-exudate pixels in retinal images. The performance of the proposed algorithm has been tested on publicly available standard Messidor database images with varied disease levels and non-uniform illumination. Experimentation yields a 94% success rate for localization of the optic disk, 99% accuracy in classifying exudate and non-exudate pixels, and subject-level accuracies of 93% and 67% in identifying abnormal (with exudates) and normal (without exudates) images, respectively.

S13.8 A Proposed Model of Graph Based Chain Code Method for Identifying Printed & Handwritten Bengali Character
Arindam Pramanik and Sreeparna Banerjee (West Bengal University of Technology, India)

The problem of Optical Character Recognition is as follows: the input is a scanned image of printed or handwritten text, and the output is a computer-readable version of its content. A great deal of research work has been done on OCR worldwide for different languages. Although Bengali is the second most popular script and language in the Indian subcontinent and the fifth most popular language in the world, comparatively few research publications address it. The application areas of character recognition are growing remarkably. Here we propose a new approach to handwritten Bengali character recognition using a graph-based chain code method. Our aim is to achieve the maximum recognition rate with minimum processing time.

S13.9 Spike Encoding for Pattern Recognition: Comparing Cerebellum Granular Layer Encoding and BSA algorithms
Chaitanya Medini (Amrita Vishwa Vidyapeetham ( Amrita University), India); Asha Vijayan, Ritu Maria Zacharia and Lekshmi Priya Rajagopal (Amrita University, India); Bipin Nair (Amrita Vishwa Vidyapeetham ( Amrita University), India); Shyam Diwakar (Amrita Vishwa Vidyapeetham, India)

Spiking neural encoding models allow classification of real-world tasks to suit brain-machine interfaces, in addition to serving as internal models. We developed a new spiking encoding model inspired by the cerebellum granular layer and tested different classification techniques, such as SVM, Naïve Bayes, and MLP, for training spiking neural networks to perform pattern recognition tasks on encoded datasets. As a precursor to spiking-network-based pattern recognition, in this study real-world datasets were encoded into spike trains. The objective was to encode information from datasets into spiking neuron patterns relevant both for spiking neural networks and for conventional machine learning algorithms. In this initial study, we present a new approach modeled on cerebellum granular layer encoding and compare it with the BSA encoding technique. We have also compared the efficiency of the encoding across different datasets and with standard machine learning algorithms.

S13.10 Analyzing Hardware Constraints of Gabor Filtering Operation for Facial Expression Recognition System
Sumeet Saurav (CEERI Pilani, India); Nidhi Sharma (Banasthali Vidhyapeeth, India); Ravi Saini (CSIR-CEERI Pilani, India); Sanjay Singh (CSIR - Central Electronics Engineering Research Institute (CSIR-CEERI) & Academy of Scientific & Innovative Research (AcSIR), India); Anil Saini (CEERI & ACSIR, India)

This paper presents a hardware-constraints analysis of the Gabor filtering operation for its implementation in a real-time Facial Expression Recognition System (FERS). The Gabor filter is the most common feature extractor employed in the realization of such systems; feature extraction using Gabor filters is efficient and has good discrimination capability. In this work, we employ a software-based approach to find the optimum filter and facial image sizes, since these two factors in the Gabor filtering process directly affect hardware resource utilization. We use two versions of the Gabor filter for feature extraction: the original Gabor filtering approach and a modified version based on an image pyramid. A Support Vector Machine (SVM) classifier is used to analyze the performance of the extracted features.

S13.11 Multi-Secret Sharing Threshold Access Structure
Siva Reddy Lebaka (University of Hyderabad, India); Munaga V N K Prasad (IDRBT, India)

This paper proposes a threshold access structure for multi-secret sharing. It uses two-level encoding to generate shares from multiple secret images: the first level uses secure Boolean-based operations, and the second uses the Chinese Remainder Theorem and Lagrange interpolation. The scheme is mainly useful for securely encrypting multiple secret images with these two levels of encoding. It can recover the secret images without any distortion, shares n secret images among k shares, and is free from the pixel expansion problem.
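The second encoding level rests on the Chinese Remainder Theorem; a minimal sketch of CRT recombination (with toy moduli, not the paper's parameters) is:

```python
from math import prod

def crt(residues, moduli):
    """Recover x (mod prod(moduli)) from its residues x mod m_i
    via the standard constructive CRT formula (moduli pairwise coprime)."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x = (x + r * Mi * pow(Mi, -1, m)) % M
    return x

assert crt([2, 3, 2], [3, 5, 7]) == 23   # 23 mod 3, 5, 7 = 2, 3, 2
```

In a sharing scheme the residues play the role of shares: no single residue reveals x, but enough of them pin it down uniquely.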

S13.12 Information retrieval from an image in natural light
Lev B Levitin and Tommaso Toffoli (Boston University, USA)

A most common way to store information is to encode it in the optical properties of an object and to retrieve it by viewing the object by reflected and transmitted natural (thermalized) light---or even by light emitted by the object itself---for a specified time interval. The discreteness of the radiation degrees of freedom and the statistical properties of thermal (incoherent) radiation impose limitations on the amount of the retrieved information. We derive the maximum information that can be retrieved from the object. This amount is always finite and is proportional to the area of the object, the solid angle under which the entrance pupil of the receiver is seen from the object, and the time of observation. An explicit expression for the information in the case where the information recorded by the receiver obeys Planck's spectral distribution is obtained. The amount of information per photon of recorded radiation is a universal numerical constant, independent of the parameters of observation.

SSCC-02: Cryptography and Steganography / Application Security

Room: 308
Chairs: Varghese Paul (CUSAT, India), Ram Ratan (SAG, DRDO, India)
SSCC-02.1 FPGA Implementation of 128- bit Fused Multiply Add Unit for Crypto Processors
Sandeep Kakde (Y C College of Engineering, India); Mithilesh Mahendra (YCCE, India); Atish Khobragade (University of Nagpur, India); Nikit Shah (University of Texas, India)

The Fused Multiply Add (FMA) block is an important module in high-speed math co-processors and crypto processors. The main contribution of this paper is to reduce latency. Binary-128 arithmetic plays an important role in quadruple-precision floating-point applications. The major components of a 128-bit FMA unit with multi-mode operations are the alignment shifter, normalization shifter, multiplier, and a dual adder built from carry look-ahead adders. The major technical challenges in existing FMA architectures are latency and precision; the repeated occurrence of the fractional part affects precision. To reduce latency, the multiplier is designed using a reduced-complexity Wallace multiplier, cutting the latency of the overall architecture by 15-25%; the total delay of this multiplier is found to be 37.673 ns. To obtain higher precision, the alignment and normalization shifters in the FMA unit are designed using barrel shifters, which further reduces latency by 25-35%; the total delay of the alignment and normalization shifters using barrel shifters is found to be 5.845 ns.

SSCC-02.2 A Hybrid Cryptographic and Steganographic Security Approach for Identity Management and Security of Digital Images
Kester Quist-Aphetsi (University of Brest France, France)

Privacy and security of image data are of paramount importance in our ever-growing internet ecosystem of multimedia applications. The identity and security of image content play a major role in forensics and in protection against media property theft, and copyright, ownership management, and source identification are crucial in today's cyberspace. In our work, we propose a hybrid technique for ensuring the security of digital images as well as owner identification by combining cryptographic and steganographic approaches. Steganography is used to embed a secret identification tag into the image, keyed with a secret key and applied directly to the pixel values of the image. The cryptographic technique is used to encrypt the image and conceal its visual content. The implementation was done successfully, and the output results were analyzed using MATLAB.

SSCC-02.3 A Robust and Secure DWT-SVD Digital Image Watermarking Using Encrypted Watermark for Copyright Protection of Cheque Image
Sudhanshu S. Gonge (Symbiosis Institute of Technology, Lavale & Symbiosis International University, Pune, India); Ashok Anandrao Ghatol (Director Genba Sopanrao Moze College of Engineering Pune, India)

Digital watermarking is the process of embedding additional data, called a watermark, into existing data. If the file or data is copied, the watermark can be used to establish the originality of the data. Digital communication facilitates the transfer of digital data such as text, audio, images, and video, and unauthorized users can copy this data and use it anywhere they want, creating problems of security, ownership, and copyright protection. Digital watermarking is used to overcome these problems. During transmission, many attacks, intentional or not, can strike the watermarked image, degrading image quality and potentially destroying the watermark. To secure the watermark, private-key encryption and decryption are used; the encrypted watermark is then embedded using a combined DWT-SVD transform. This paper discusses a robust and secure DWT-SVD digital image watermarking scheme using an encrypted watermark for copyright protection of cheque images: the watermarking technique provides ownership and copyright protection for the cheque image, while the private-key encryption and decryption technique secures the bank's watermark.

SSCC-02.4 An Improved Substitution Method for Data Encryption Using DNA Sequence and CDMB
Ravi Gupta (IMS Engineering College, India); Rahul Singh (Thapar, India)

Cryptography provides solutions in fields such as banking services, digital certificates, digital signatures, and message and image encryption. In this paper, we propose an improved data encryption approach based on Deoxyribonucleic acid (DNA) that exploits two techniques: a substitution technique and the Central Dogma of Molecular Biology. It transforms the message into a protein (the ciphertext) using various complementary rules. DNA provides robust sequences for the substitution technique, drawn from a huge database of approximately 163 million DNA sequences, while the Central Dogma of Molecular Biology supports the encrypted output of the substitution method through transcription and translation.
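A toy sketch of the binary-to-nucleotide substitution idea follows; the 2-bit mapping used here is an illustrative assumption, and the paper's complementary rules and transcription/translation stages are omitted:

```python
# Illustrative 2-bit -> nucleotide mapping (not the paper's actual rules).
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {b: k for k, b in BITS_TO_BASE.items()}

def to_dna(text):
    """Encode each character as 8 bits, then map bit pairs to bases."""
    bits = "".join(f"{ord(ch):08b}" for ch in text)
    return "".join(BITS_TO_BASE[bits[i:i+2]] for i in range(0, len(bits), 2))

def from_dna(seq):
    """Invert the mapping: bases back to bit pairs, bit octets to characters."""
    bits = "".join(BASE_TO_BITS[b] for b in seq)
    return "".join(chr(int(bits[i:i+8], 2)) for i in range(0, len(bits), 8))

print(to_dna("Hi"))  # -> CAGACGGC
```

In a DNA-based scheme this synthetic strand would then be hidden among, or substituted from, real sequences drawn from the reference database.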

SSCC-02.5 Analysis of Neural Synchronization Using Genetic Approach for Secure Key Generation
S Santhanalakshmi (Amrita School of Engineering, India); Sangeeta K (Amrita Vishwa Vidyapeetham & Amrita School of Engineering, India); Gopal Patra (CSIR Centre for Mathematical Modelling and Computer Simulation, India)

Cryptography depends on two components: an algorithm and a key. Keys are used for the encryption of information as well as in other cryptographic schemes such as digital signatures and message authentication codes. Neural cryptography is a way to create a shared secret key. Key generation in a Tree Parity Machine neural network is done by mutual learning: the neural networks receive common inputs and synchronize using a suitable learning rule. This effect makes neural synchronization usable for constructing a cryptographic key-exchange protocol. Faster synchronization of the neural networks has been achieved by generating optimal weights for the sender and receiver through a genetic process. In this paper, the performance of the genetic algorithm is analyzed by varying the neural network and genetic parameters.
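A minimal Tree Parity Machine synchronization sketch, with toy parameters and random (not genetically optimized) initial weights; the structure (K hidden units, N inputs each, weight bound L) and Hebbian rule follow the standard TPM formulation rather than the paper's specific setup:

```python
import random

K, N, L = 3, 4, 3   # hidden units, inputs per unit, weight bound (toy values)

def output(w, x):
    """Hidden-unit signs and their product (the TPM's public output tau)."""
    sigma = [1 if sum(wi * xi for wi, xi in zip(w[k], x[k])) > 0 else -1
             for k in range(K)]
    return sigma[0] * sigma[1] * sigma[2], sigma

def hebbian(w, x, sigma, tau):
    """Update only the hidden units that agree with tau; clip to [-L, L]."""
    for k in range(K):
        if sigma[k] == tau:
            for i in range(N):
                w[k][i] = max(-L, min(L, w[k][i] + tau * x[k][i]))

rng = random.Random(1)
rand_w = lambda: [[rng.randint(-L, L) for _ in range(N)] for _ in range(K)]
wa, wb = rand_w(), rand_w()
for _ in range(20000):
    if wa == wb:
        break
    x = [[rng.choice((-1, 1)) for _ in range(N)] for _ in range(K)]
    (ta, sa), (tb, sb) = output(wa, x), output(wb, x)
    if ta == tb:                                 # learn only on agreement
        hebbian(wa, x, sa, ta)
        hebbian(wb, x, sb, tb)
assert wa == wb                                  # synchronized weights = key
```

The genetic step the paper studies would replace the random initial weights with evolved ones so that this synchronization loop terminates sooner.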

SSCC-02.6 A Fast Discrete Cosine Transform and a Cryptographic Technique for Authentication and Security of Digital Images
Kester Quist-Aphetsi (University of Brest France, France)

In a cyberspace where disparate applications engage in multimedia transmission, data compression is key to maintaining bandwidth efficiency as well as meeting the data requirements of those applications. The security of data in such an environment must be guaranteed, and the cryptographic approaches engaged in the process must be efficient enough to withstand attacks while preserving important visual content after compression. In our work, we propose a fast discrete cosine transform and a cryptographic technique for authentication and security of digital images. The cryptographic approach was applied to the image before compression; after decompression, the image was successfully decrypted without significant loss of visual data, though there was some loss in pixel values due to the compression process. The implementation of the proposed approach was done successfully, and the output results were analyzed using MATLAB.

SSCC-02.7 A Novel Image Encryption Scheme Using an Irrevocable Multimodal Biometric Key
Suchithra M (Amrita Vishwa Vidyapeetham, India); Sikha O k (Amrita Vishwa Vidyapeetham, India)

In today's digital world, the use of a secret key is inevitable in any secure communication over the network, but human beings find it hard to remember lengthy cryptographic keys. One solution is to use the biometric characteristics of human beings, which are unique in nature, making it difficult for an attacker to guess a key generated from these features. Here we propose a new multimodal biometric key generation scheme to secure cryptographic communication. First, feature points of the fingerprint and iris images are extracted using the SLGS feature extraction algorithm; a chaotic mechanism then shuffles the feature vectors, which are finally fused to produce a single biometric key. In this paper we also present a new image encryption technique using the multimodal biometric key, with which we are able to reconstruct the secret image without any loss of pixel quality.

SSCC-02.8 CLCT: Cross Language Cipher Technique
Laukendra Singh and Rahul Johari (GGSIP University, India)

Information security has become an important issue in data communication. As internet and network applications grow rapidly, the importance and value of the data exchanged over the internet and other media increase, and any loss or threat to that information can mean a huge loss to an organization. Encryption is the best defense against intruders. In this paper we formalize a new symmetric cipher, the Cross Language Cipher Technique (CLCT), which is easy to understand and implement. While most cipher techniques work with English alone, CLCT uses two languages, English and Hindi: its first function replaces English plaintext with Hindi text, and its second function encrypts the Hindi text with a function similar to the Caesar cipher. Because CLCT exhibits the diffusion property, recovering the actual plaintext is not an easy task for an intruder, making CLCT a more reliable and powerful cipher technique with high performance and reduced vulnerability to network attack.
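The two CLCT stages can be caricatured as below; the Devanagari mapping and shift value are illustrative assumptions, since the paper's actual conversion tables are not reproduced here:

```python
DEVANAGARI_A = 0x0905   # code point of Devanagari letter 'अ'

def to_hindi(text):
    """Stage 1 (toy): map each lowercase English letter onto Devanagari."""
    return "".join(chr(DEVANAGARI_A + ord(c) - ord("a")) for c in text)

def caesar(text, key):
    """Stage 2 (toy): Caesar-style code-point shift of the Hindi text."""
    return "".join(chr(ord(c) + key) for c in text)

cipher = caesar(to_hindi("attack"), key=3)
assert caesar(cipher, key=-3) == to_hindi("attack")   # decryption round trip
```

The point of the cross-language step is that an attacker mounting a frequency analysis against English letter statistics is working against the wrong alphabet entirely.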

SSCC-02.9 Secure Communication using Digital Watermarking with Encrypted Text Hidden in an Image
Sundari M, Revathi P B and Sumesh S (Amrita University, India)

Increased data transmission through networks has led to the need for greater security to protect against different attacks. Many techniques are used to transfer data safely, such as digital watermarking, fingerprinting, cryptography, digital signatures, and steganography. The aim of this paper is to propose a method to send text data securely over the network using digital watermarking, steganography, and cryptography together in combination, to provide greater security. First, the cover image is compressed using the JPEG compression algorithm. The message to be sent is then encrypted using an improved RSA algorithm. Finally, the encrypted message bits and the bits of a watermark image are embedded into the compressed cover image.

SSCC-02.10 Securing database server using homomorphic encryption and re-encryption
Greeshma Sarath (Amrita Vishwa Vidyapeetham, India); Jayapriya R (Amrita University, India)

A major concern in outsourcing data to a remote server is maintaining the confidentiality, integrity, and availability of the outsourced data, especially when the remote server is not trusted. Encryption can address these security concerns, but the data owner and other authenticated users must still be able to perform queries (especially statistical queries) over the data in the encrypted domain. Symmetric encryption raises further concerns, such as key management and denying a user the ability to query the data; with asymmetric encryption, only the owner of the data can decrypt it with his private key. In this paper we propose a solution to all these problems by designing an asymmetric fully homomorphic encryption algorithm: data is encrypted under the owner's public key, so only the owner can decrypt it with his own private key. If the owner wants to allow a third party (delegate) to query the encrypted data, the owner calculates a re-encryption key for that delegate; authorized delegates can then submit queries to the database server, which re-encrypts the query results with the re-encryption key so that the delegate can decrypt them with his own private key.
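The paper's own algorithm is not reproduced in the abstract; as an illustration of the underlying idea, here is a minimal Paillier-style additively homomorphic sketch with deliberately tiny, insecure parameters, showing how a server can combine ciphertexts so that a SUM-style query decrypts correctly:

```python
import math, random

p, q = 5, 7                        # toy primes; real keys use ~1024-bit primes
n, n2, g = p * q, (p * q) ** 2, p * q + 1
lam = math.lcm(p - 1, q - 1)
L = lambda x: (x - 1) // n
mu = pow(L(pow(g, lam, n2)), -1, n)   # precomputed decryption factor

def encrypt(m):
    """Randomized encryption of m (0 <= m < n) under the public key (n, g)."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def decrypt(c):
    """Decryption with the private key (lam, mu)."""
    return L(pow(c, lam, n2)) * mu % n

a, b = encrypt(3), encrypt(4)
assert decrypt(a * b % n2) == 7    # ciphertext product -> plaintext sum
```

The server never sees 3 or 4, yet multiplying the ciphertexts yields an encryption of their sum, which is exactly what an aggregate query over an encrypted column needs.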

SSCC-02.11 An Automata Based Approach for the Prevention of NoSQL Injections
Swathy Joseph (Amrita Vishwa Vidhyapeetham, India); Jevitha K P (Amrita Vishwa Vidyapeetham, India)

The eminent web applications of today are data-intensive, generating data on the order of petabytes and zettabytes. Using relational databases to store such data complicates storage and retrieval and degrades performance. The big data explosion demanded a more flexible, high-performance storage concept: the NoSQL movement. NoSQL databases were designed to overcome the flaws of relational databases, including security aspects, and they satisfy the criteria of effective performance and efficient storage. As usual, attackers have found their way into the NoSQL databases that were considered secure: injection attacks, among the top-listed attack types against relational databases, pose a threat to non-relational databases as well. MongoDB is one of the prominent NoSQL databases toward which application development trends are shifting. In this paper, we present the different injection attacks on this leading NoSQL database and an automata-based detection and prevention technique for them. We also evaluate its effectiveness on different subjects with a number of legitimate as well as illegitimate inputs. Our results show that our approach was able to detect all the attacks.

SSCC-02.12 Android Users Security via Permission Based Analysis
Pradeep Kumar Tiwari (Bharat Electronics Ltd. & Central Research Laboratory, India); Upasna Singh (Defense Institute of Advanced Technology, India)

Android, being the most popular mobile platform with nearly 80% global market share, attracts mobile application developers who target end users' private information, such as contacts, GPS data, and call logs, or send premium messages, through the use of application permissions. Android permissions are selected by the application developer, and there is no check on whether a requested permission is relevant to the application. This paper proposes a methodology for identifying over-privileged applications and then reducing the set of permissions these applications use. The proposed work demonstrates that an over-privileged application can be used with a reduced set of permissions, successfully denying access to the user's sensitive information.

SSCC-02.13 Data-centric Refinement of Information Flow Analysis of Database Applications
Md. Imran Alam (Indian Institute of Technology Patna, India); Raju Halder (IIT Patna, India)

In the recent age of information, most of the applications are associated with external database states. The confidentiality of sensitive database information may be compromised due to the influence of sensitive attributes on insensitive ones during the computation by database statements. Existing language-based approaches to capture possible leakage of sensitive database information are coarse-grained and are based on the assumption that attackers are able to view all values of insensitive attributes in the database. In this paper, we propose a data-centric approach which covers more generic scenarios where attackers are able to view only a part of the attribute-values according to the policy. This leads to more precise semantic-based analysis which reduces false positives with respect to the literature.

SSCC-02.14 Detection and Diagnosis of Hardware Trojan Using Power Analysis
Eknadh Vaddi and Karthik Gaddam (Amrita Vishwa Vidyapeetham, India); Rahul Karthik Maniam (Amrita Vishwa Vidyapeetham); Sai Abhishek Mallavajjala and Srinivasulu Dasari (Amrita Vishwa Vidyapeetham, India); Bandarupalli Chandini (Amrita School of Engineering, India)

Intentional malicious modification of integrated circuits, referred to as a Hardware Trojan, has emerged as a major security threat. Earlier approaches to checking these threats, such as logic testing (also known as functional testing), have proved no longer effective for detecting large sequential Trojans that are only rarely triggered. Side-channel analysis has been an effective approach for detecting such large sequential Trojans, but increasing process variations and decreasing Trojan sizes have reduced its detection sensitivity. All these approaches also require a golden IC. In this paper, we propose a leakage power analysis approach that does not require a golden IC and whose detection sensitivity is not affected by process variations.

S16: Security, Trust and Privacy - I

Room: 309
Chair: Raghudas P (SCMS School of Engineering & Technology, India)
S16.1 Permutation based image encryption algorithm using block cipher approach
Aditya Rawat, Ipshita Gupta, Yash Goel and Nishith Sinha (Manipal University, India)

Encryption is a process of hiding significant data so as to prevent unauthorized access and ensure the confidentiality of data; it is widely used to transmit data across networks in secure communication. This paper aims to improve the security and efficiency of image encryption by using a highly efficient shuffle-based encryption algorithm, with an equivalent decryption algorithm, driven by random values obtained from a pseudorandom number generator. Due to the immense number of possible instances of the encrypted image that can be generated by shuffling pixels as blocks (or pixel by pixel), the algorithm proves highly impervious to brute-force attacks. The proposed algorithm has been examined using multiple analysis methods that support its robustness.
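The core shuffle idea can be sketched as a seeded pixel permutation; this toy version treats the image as a flat pixel list and uses the PRNG seed as the shared key, omitting the paper's block-level mode and analysis:

```python
import random

def encrypt(pixels, key):
    """Shuffle pixel positions with a permutation derived from the key."""
    perm = list(range(len(pixels)))
    random.Random(key).shuffle(perm)
    return [pixels[i] for i in perm]

def decrypt(cipher, key):
    """Rebuild the same permutation from the key and invert it."""
    perm = list(range(len(cipher)))
    random.Random(key).shuffle(perm)
    plain = [0] * len(cipher)
    for out_pos, src in enumerate(perm):
        plain[src] = cipher[out_pos]
    return plain

pixels = [10, 20, 30, 40, 50]
assert decrypt(encrypt(pixels, key=42), key=42) == pixels
```

Because the PRNG is seeded deterministically, both sides derive the identical permutation, and decryption is exact; the key space is the seed space of the generator.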

S16.2 Extended Visual Cryptography for General Access Structures using Random Grids
Sonu K Mishra (University of California Los Angeles, USA); Kumar Biswaranjan (Indian Institute of Technology Guwahati, India)

Noise-like random shares successfully hide secret images, but they suffer from a management problem: even the dealers themselves cannot differentiate between the shares. This problem is solved by extended visual cryptography schemes, which stamp a different innocent-looking cover image on each share. In this paper, we propose an extended visual cryptography scheme using random grids which can accommodate general access structures. Random grids dispense with pixel expansion and extensive codebook design. The experimental results validate the correctness of the algorithms from a security perspective.
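The basic random-grid construction for a single binary secret (without the cover-image stamping or general access structures) can be sketched as:

```python
import random

def make_shares(secret, rng=random.Random(7)):
    """secret: list of bits, 0 = white, 1 = black.
    Share 1 is a uniformly random grid; share 2 copies it on white pixels
    and flips it on black ones."""
    s1 = [rng.randint(0, 1) for _ in secret]
    s2 = [b if px == 0 else 1 - b for px, b in zip(secret, s1)]
    return s1, s2

secret = [1, 0, 1, 1, 0]
s1, s2 = make_shares(secret)
stacked = [a | b for a, b in zip(s1, s2)]   # physical overlay = pixel-wise OR
assert all(stacked[i] == 1 for i, px in enumerate(secret) if px == 1)
```

Stacking makes every black secret pixel fully black, while white pixels stay half-black on average; this contrast reveals the secret with no pixel expansion and no codebook.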

S16.3 New Secret Sharing Scheme for Multipartite Access Structures with Threshold Changeability
Appala Naidu Tentu (CR Rao AIMSCS, India); Banita Mahapatra and China Vadlamudi (University of Hyderabad, India); V Kamakshi Prasad (JNTUH School of Information Technology, India)

A secret sharing method that realizes a variation of hierarchical access structure is proposed in this paper. The method indirectly shares the main secret, together with one random secret, among a set of players. The shares of some unknown random secrets are then calculated by the players and shared among themselves. At each level, the players can dynamically change their threshold to any arbitrary value. Each subset of players can only recover the secret corresponding to their own level. The main secret can be retrieved by the highest-level players if and only if all the secrets in the lower levels are recovered first. The hierarchical access structure that our scheme realizes is such that players at different levels calculate secrets individually, but recovering the main secret requires the cooperation of players from all levels.
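The classic Shamir threshold scheme on which such hierarchical constructions build can be sketched as follows (only the underlying (k, n) building block; the paper's multipartite levels and threshold changeability are not shown):

```python
import random

P = 2**61 - 1  # Mersenne prime modulus for polynomial arithmetic

def make_shares(secret, k, n, rng=None):
    """Shamir (k, n) threshold sharing: any k of the n shares recover the secret."""
    rng = rng or random.Random(1)
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    # share i is the degree-(k-1) polynomial evaluated at x = i
    return [(x, sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P  # modular inverse via Fermat
    return secret

shares = make_shares(123456, k=3, n=5)
assert recover(shares[:3]) == 123456     # any 3 shares suffice
assert recover(shares[1:4]) == 123456
```

Changing the threshold then amounts to re-sharing under a polynomial of a different degree, which is the operation the paper makes dynamic per level.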

S16.4 Security Improvement in Fingerprint Authentication System using Virtual Biometric
Athira Ram A and Jyothis T S (University of Calicut, India)

Fingerprint authentication systems face the problem of securing the patterns stored in the database. In the proposed system a virtual fingerprint is created and stored in the database instead of a single fingerprint. Two different fingerprints are used: from the first, the orientation is estimated, and from the second, certain minutiae points are extracted. The two are mixed to create a template, which is encrypted and used as a virtual biometric. The system gives a low error rate, and the stored information is not sufficient to recreate the original fingerprint patterns.

S16.5 Evaluating the theoretical feasibility of an SROP attack against Oxymoron
Zubin Mithra and Vipin P. (Amrita University, India)

Many of the defences against exploitation of memory corruption vulnerabilities rely on the randomization of addresses in the process space of a binary. Oxymoron is an exploit mitigation mechanism for x86 processors that relies on reorganizing the instructions in an Executable and Linkable Format (ELF) file to enable randomization at page level. Sigreturn Oriented Programming (SROP) is an exploitation mechanism that requires very few gadgets. It has been shown that these gadgets are either available at constant addresses for a given kernel version or can be leaked. In this paper, we evaluate the theoretical feasibility of an SROP attack against an Oxymoron-protected binary and determine the preconditions necessary to make such an attack possible. As an aid to writing such exploits, we also implement libsrop, a library that generates customizable SROP payloads for x86 and x86-64.

S16.6 A secure SMS protocol for implementing Digital Cash System
Raghu Kisore Neelisetti (Idrbt, India); Supriya Sagi (Tata Consultancy Services, India)

We propose a digital cash system suitable for low-value transactions, together with a secure SMS protocol based on the EC-MQV key agreement protocol and the AES encryption algorithm to operate it. The key security aspects of the protocol are resilience to SIM cloning, SIM swapping attacks and possible message tampering using a GSM ghost base station. It further provides two-factor authentication by using the IMSI number as proof of "what you have" and a user-provided password as proof of "what you know". The total cost of executing a financial transaction is 2 SMS messages. We use the proposed protocol to implement a digital cash system. We strongly believe such a low-cost, secure digital cash system can be a boon for extending financial services to people left out of regular banking services due to the high cost of providing them through existing banking and payment solutions. The low communication cost of each financial transaction makes the system financially viable for handling low-value transactions. The proposed protocol was implemented for both Android and J2ME mobile phones with an easy-to-use interface that any individual with basic numeracy can operate. This makes it easy to deploy in less developed economies where literacy is often a challenge.

S16.7 A Novel approach for Circular Random Grid with Share Authentication
Sandeep Gurung (Sikkim Manipal University, India); Bijoy Chhetri (Centre for Computers and Communication Technology & Sikkim Manipal Institute of Technology, India); M K Ghose (Sikkim Manipal University, India)

Secrecy, integrity and authenticity are the major concerns whenever information is exposed to third parties who can gain access to data through vulnerable links in the system, the communication channel or an individual user's identity. In addressing these issues, cryptography has long been trusted in the computer science fraternity, and visual cryptography is one such methodology, in which the secret is recovered by the human visual system without performing complex calculations. The proposed scheme follows the secret sharing methodology, in which secret information is recovered by overlapping the printed transparencies, each of which is validated for authenticity. This paper discusses the use of circular rings to embed the secret information with certain angular rotations, and validates the individual cipher shares in order to prevent cheating.

S16.8 Using Photomosaic and Steganographic Techniques for Hiding Information inside Image Mosaics
Sameerchand Pudaruth, Maleika Heenaye-Mamode Khan, Christopher Li and Henriette Arthe (University of Mauritius, Mauritius)

In this digital world, transferring sensitive data electronically has become inevitable. The objective of this work is to hide and retrieve confidential information in image mosaics. Different mosaic techniques as well as steganographic techniques have been explored. The photomosaic approach has been used for the creation of the mosaic, and the least significant bit (LSB) technique has been adopted for embedding the hidden information. The photomosaic is constructed by selecting an image and splitting it into smaller images (tiles) of sizes 8x8, 16x16 and 32x32. These tiles are then compared against a very large collection of photos of the same sizes. Next, the user can hide either a secret image or a secret text in them. The final mosaic image contains secret information that is well concealed and impossible to detect with the naked eye. This technique is more robust than modifying the bits of the original image directly.
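The LSB embedding step mentioned above can be sketched on a single strip of pixel values (a minimal illustration; the photomosaic tile matching is omitted):

```python
def embed_lsb(pixels, message_bits):
    """Hide one message bit in the least significant bit of each pixel value."""
    assert len(message_bits) <= len(pixels)
    out = list(pixels)
    for i, bit in enumerate(message_bits):
        out[i] = (out[i] & ~1) | bit       # clear the LSB, then set it to the message bit
    return out

def extract_lsb(pixels, n_bits):
    """Read the message back from the LSBs."""
    return [p & 1 for p in pixels[:n_bits]]

tile = [200, 201, 198, 197, 203, 202, 199, 196]   # one 8-value strip of a mosaic tile
secret = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_lsb(tile, secret)
assert extract_lsb(stego, 8) == secret
assert all(abs(a - b) <= 1 for a, b in zip(tile, stego))  # change is visually negligible
```

Since each pixel value changes by at most 1, the stego tile is perceptually indistinguishable from the original, which is what makes LSB embedding attractive inside mosaic tiles.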

S16.9 Implementing High Interaction Honeypot to Study SSH Attacks
Solomon Zemene Melese (Andhra University Visakhapatnam, India); Ps Avadhani (Andhra University, India)

Honeypots are a recent technology in the area of computer network security. Production systems connected to the Internet are the main targets of various cyber attacks. This paper presents the deployment of a honeypot system in a campus network. The system implements a high-interaction honeypot with Secure Shell installed, to study common SSH attacks in a Linux environment. It records the usernames and passwords attempted by intruders from the Internet, and also captures the detailed activities of attackers while they interact with the target honeypot. Intruders attack SSH servers through dictionary and brute-force mechanisms, followed by intrusion. This paper covers both dictionary attacks and intrusion.

S16.10 Trust Analysis of Execution Platform for Self Protected Mobile Code
Divya Kumar (Motilal Nehru National Institute of Technology, India); Shashank Srivastava (MNNIT Allahabad, India); Shuchi Chandra (IIIT-Allahabad, India)

The malicious host problem remains a challenging phenomenon in agent computing environments. In mobile agent computing, the agent platform has full control over the mobile agent in order to execute it. A host can analyze the code while the mobile agent resides on it, modify the mobile code for its own benefit, and analyze or modify data collected earlier in the agent's itinerary. Hence, to protect the code from a malicious host, we need to identify such hosts. We therefore calculate the risk associated with the code executing on a mobile host using fuzzy logic. When a host performs an attack on the mobile agent, execution takes more time, so some risk is associated with it. If the calculated risk is greater than a user-specified maximum value, the agent code is discarded and the host is identified as malicious. In this paper, we propose a fuzzy-based risk evaluation model integrated with a self-protected security protocol to secure mobile code from insecure execution.

S16.11 Design of Efficient ID-Based Group Key Agreement Protocol Suited for Pay-TV Application
Abhimanyu Kumar (ISM Dhanbad, India); Sachin Tripathi (CSE, ISM, Dhanbad, India); Priyanka Jaiswal (ISM, Dhanbad, India)

Pay-TV is one of the challenging home applications, in which a TV program is delivered to a group of customers. The broadcaster of a Pay-TV system needs to ensure that only legitimate subscribers can watch the subscribed TV program. To deliver a TV program confidentially to its subscribers, the broadcaster may encrypt the program with a group key shared by all subscribers. The Pay-TV system therefore requires a group key establishment protocol to establish a symmetric key among all subscribers of a particular program. The group key agreement protocols for Pay-TV presented so far require bilinear pairing computations, which create a large overhead for subscribers, especially in wireless environments. A protocol is suitable for Pay-TV only when most of the computational load is shifted to the broadcaster rather than the subscribers, because subscribers may have fewer computational resources than the broadcaster. The present paper proposes an ID-based group key agreement protocol suitable for Pay-TV that does not use bilinear pairing. The proposed protocol also has efficient join and leave procedures to allow dynamic subscriptions with forward and backward secrecy. Moreover, the security of the proposed protocol is justified with respect to the necessary security attributes. Finally, the performance of the proposed protocol is compared with some existing protocols, showing that it has comparable communication and computation cost with zero pairing computations.

S16.12 Improving attack detection in self-organizing networks: A trust-based approach toward alert satisfaction
Manuel Gil Pérez, Félix Gómez Mármol and Gregorio Martinez Perez (University of Murcia, Spain)

Cyber security has become a major challenge when detecting and preventing attacks on any self-organizing network. Defining a trust and reputation mechanism is a required feature in these networks to assess whether the alerts shared by their Intrusion Detection Systems (IDS) actually report a true incident. This paper presents a way of measuring the trustworthiness of the alerts issued by the IDSs of a collaborative intrusion detection network, considering the detection skills configured in each IDS to calculate the satisfaction on each interaction (alert sharing) and, consequently, to update the reputation of the alert issuer. Without alert satisfaction, collaborative attack detection cannot become a reality in the presence of ill-intentioned IDSs. Conducted experiments demonstrate better accuracy when detecting attacks.

S12: S12-Natural Language Processing and Machine Translation

Room: 310
Chair: Sumam Mary Idicula (Cochin University, India)
S12.1 An Information Retrieval System for Malayalam Using Query Expansion Technique
Arjun Babu (Cochin University of Science and Technology, India); Sindhu L (College of Engineering Poonjar, India)

Malayalam is the administrative language of the south Indian state of Kerala and of the Lakshadweep islands off the west coast of India. This work describes an efficient monolingual information retrieval system for Malayalam using a query expansion technique, which returns Malayalam documents in response to a query in Malayalam. The proposed system uses synonym mapping to improve the efficiency of document retrieval. This technique helps overcome vocabulary mismatch by expanding the user query with additional relevant terms and reweighting the terms in the expanded query. The developed system is domain independent and can be used in different Natural Language Processing (NLP) tasks for a classical language like Malayalam.
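Synonym-based query expansion with term reweighting can be sketched as follows (the synonym map, weights and English terms are hypothetical stand-ins for the system's Malayalam resources):

```python
# hypothetical synonym map; a real system would draw on a Malayalam thesaurus
SYNONYMS = {
    "car":  ["automobile", "vehicle"],
    "fast": ["quick", "rapid"],
}

def expand_query(terms, weight=1.0, syn_weight=0.5):
    """Return (term, weight) pairs: original terms at full weight,
    synonyms added at a reduced weight to limit query drift."""
    expanded = [(t, weight) for t in terms]
    for t in terms:
        expanded += [(s, syn_weight) for s in SYNONYMS.get(t, [])]
    return expanded

q = expand_query(["fast", "car"])
assert ("car", 1.0) in q          # original term keeps full weight
assert ("quick", 0.5) in q        # synonym enters with reduced weight
```

Down-weighting the added synonyms is a common way to gain recall against vocabulary mismatch without letting expansion terms dominate the ranking.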

S12.2 A Novel Approach in Channel Independent Speaker Verification System for Malayalam Database Using GMM-SVM Frame work
Gayathri S (Mahatma Gandhi University, India); Anish Babu K K (Rajiv Gandhi Institute of Technology, Kottayam, India)

Speaker verification is the task of confirming the identity of a claim using a hypothesized speaker model and a speaker model database. This work develops a speaker verification system that combines GMM and SVM. The feature vectors used for modelling are Mel Frequency Cepstral Coefficients (MFCC). The Malayalam database is collected through different recording devices, which are treated as different channels. To reduce channel effects, a method called feature mapping is implemented. Modelling a speaker with an SVM requires both positive and negative files, so this work performs comparative studies on the modelling rate using different types of negative files. The proposed system is then tested on the Malayalam database; the maximum modelling rate obtained is in the range of 95-99%. The accuracy of the system with each type of negative file is calculated and the results are compared. The system is developed using MATLAB 7.12.0 (R2011a) and the LIBSVM toolkit.

S12.3 Classification of Offline Gujarati Handwritten Characters
Swital Macwan and Archana Vyas (Dharmsinh Desai University, India)

Intelligent Character Recognition (ICR) is a specific form of optical character recognition (OCR) dealing mostly with handwritten text. Due to their specificity, ICR systems are usually more adept at interpreting different styles and fonts of handwriting, eventually providing higher recognition rates. Factors like the language's constructs and the amount of ICR research on the language essentially determine the success achieved in its character recognition. This research deals with the recognition of handwritten Gujarati characters. We have considered 34 consonants and 5 vowels, a total of 39 Gujarati characters. The structure and lexicon of the language posed a challenge during the initial segmentation phase, for which we have proposed a new segmentation algorithm that addresses these concerns effectively. Algorithms from different domains have been considered for comparative analysis: from the transform domain, DWT, DCT and DFT; from the spatial domain, a geometric method (gradient features) and a structural method (Freeman chain code). We also propose a new combination of structural features (Freeman chain code, Hu's invariant moments and centre of mass) and a statistical method (Zernike moments) to extract feature vectors, which yields good accuracy. The extracted feature vectors were supplied as input to Support Vector Machines, and the resulting accuracies were analyzed using 10-fold cross validation. SVMs perform well on data sets with many attributes and can also handle a large number of classes.

S12.4 Automated Question Generation Tool for Structured Data
Asmita Shirude (College of Engineering, India); Shefali Totala (College of Engineering Pune, India); Shiril Nikhar (College of Engineering, Pune, India); Vahida Attar (College of Engineering Pune, India); Ramanand Janardhanan (Choose To Thinq, India)

For competitive examinations (e.g. GMAT, CAT) and for various offline and online academic courses, test questions need to be written by human authors. These question-writing tasks take significant manual effort and time. To reduce this effort, we propose an Automatic Question Generation system that focuses on questions that can be generated in English from structured data. This paper explains an approach that takes a data table as input and generates questions of varying types. The data is preprocessed and categorized according to its class (numeric or non-numeric). Appropriate templates are used to create various kinds of questions. A custom tagger is built that takes a data entry and assigns it an entity, so that it can be generalized to a particular entity name. For uncategorized or unrecognized entities, options to add or modify entities are also provided. Preliminary experimental work using this template system shows promising results. To the best of our knowledge, this is the first question generation system based on structured data in the form of tables.
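The template-and-class approach can be sketched as follows (the templates, class test and example table are illustrative assumptions, not the authors' actual templates or tagger):

```python
# hypothetical templates keyed by the column's data class
TEMPLATES = {
    "numeric":    "What is the {column} of {entity}?",
    "nonnumeric": "Which {column} is associated with {entity}?",
}

def classify(value):
    """Crude numeric/non-numeric split; the paper's system uses a custom tagger."""
    return "numeric" if str(value).replace(".", "", 1).isdigit() else "nonnumeric"

def generate_questions(table, key_column):
    """One question per (row, non-key column), with the answer attached."""
    questions = []
    for row in table:
        for col, val in row.items():
            if col == key_column:
                continue
            tpl = TEMPLATES[classify(val)]
            questions.append((tpl.format(column=col, entity=row[key_column]), val))
    return questions

table = [{"country": "India", "population": "1380", "capital": "Delhi"}]
qs = generate_questions(table, key_column="country")
assert ("What is the population of India?", "1380") in qs
```

Because each question carries its answer cell, the same pass can also produce answer keys and numeric distractors for multiple-choice variants.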

S12.5 Breast Cancer Staging using Natural Language Processing
Johanna Johnsi Rani G. (Madras Christian College (Autonomous), India); Dennis Gladis (Presidency College, India); Marie Therese Manipadam and Gunadala Ishitha (Christian Medical College, India)

Medical diagnostic reports archived in electronic form are valuable resources for retrospectively understanding the severity of disease among patients and for verifying the correctness of diagnoses. In this work, breast cancer pathology reports are processed using Natural Language Processing (NLP) and Information Extraction (IE) techniques in order to extract the parameters required for cancer staging, namely Tumour (T), Lymph nodes (N) and Metastases (M). An automated system is developed to process the 'Impression' section of the report, classify T and N using the pTNM classification protocol of the American Joint Committee on Cancer (AJCC), and derive the cancer stage S for each patient. T and N are classified using numerical parameters and non-numeric medical conditions given in the natural language text. Metastases (M), which is not evident from pathology reports, is given a default value of M0 for staging. The dataset, consisting of 150 de-identified reports, was reviewed by pathologists to obtain a gold standard for evaluation. The TNM classification and the cancer stage derived by the system were evaluated against the gold standard, and discrepancy reports were generated. The extraction process was then fine-tuned based on the recommendations of the domain experts. The automatic staging process achieved, on average, 73% precision, 82% recall, 59% specificity and 72% accuracy. The limited performance is due to the presence of certain vital information in other sections of the report that are not processed; processing these sections in future work would improve performance.

S12.6 Machine Translation from English to Malayalam Using Transfer Approach
Aasha VC (University of Calicut & Vidya Academy of Science and Technology, India); Amal Ganesh (Vidya Academy of Science & Technology, India)

A machine translation system converts text from one natural language to another while abiding by the syntax and semantics of the latter. The area of interest here is a rule-based machine translation system that translates text from English to Malayalam using the transfer approach. The system is designed to translate sentences from articles in the cricket domain; making the system domain specific improves translation quality for sentences in the chosen domain. Our system depends on the Stanford parser in the preprocessing stage: the input English sentence is tokenized and parsed with the Stanford parser, and a parse tree is generated. According to the rules structured for the target language, the source parse tree is reordered to produce the target parse tree. The words in the target tree are mapped with a bilingual English-Malayalam dictionary to obtain the Malayalam output words. The system also maintains a separate verb dictionary that holds each English verb in root form and its Malayalam meaning with 9 different inflections. A morphological generation phase provides the necessary inflections for the Malayalam words and thereby creates meaningful translations. The domain-specific translation system achieved correct translations for about 86% of the test sets.

S12.7 Investigating the Impact of Combined Similarity Metrics and POS tagging in Extrinsic Text Plagiarism Detection System
Vani K (Amrita Vishwa Vidhyapeetham & ASE, Bangalore, India); Deepa Gupta (Amrita Vishwa Vidyapeetham, India)

Plagiarism is an illicit act that has become a prime concern, mainly in the educational and research domains. This deceitful act, usually referred to as intellectual theft, has increased swiftly with rapid technological development and information accessibility, so an efficient plagiarism detection mechanism is urgently needed. In this paper, different combined similarity metrics for extrinsic plagiarism detection are investigated, with a focus on unfolding the importance of combined similarity metrics over the commonly used single metric in the plagiarism detection task. Further, the impact of part-of-speech (POS) tagging on the detection model is analyzed. Different combinations of four single metrics, namely cosine similarity, Dice coefficient, match coefficient and a fuzzy-semantic measure, are used with and without POS tag information. These systems are evaluated on the PAN-2014 training and test data sets, and the results are analyzed and compared using the standard PAN measures, viz. recall, precision, granularity and plagdet_score.
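Set-based variants of the single metrics and their combination can be sketched as follows (the equal weighting is an illustrative assumption; the paper's fuzzy-semantic measure and POS tagging are not shown):

```python
def cosine(a, b):
    """Set-based cosine (Ochiai) similarity between two token sets."""
    return len(a & b) / ((len(a) * len(b)) ** 0.5) if a and b else 0.0

def dice(a, b):
    """Dice coefficient: twice the overlap over the summed sizes."""
    return 2 * len(a & b) / (len(a) + len(b)) if a or b else 0.0

def match(a, b):
    """Match (overlap) coefficient: overlap over the smaller set."""
    return len(a & b) / min(len(a), len(b)) if a and b else 0.0

def combined(a, b, weights=(1/3, 1/3, 1/3)):
    """Weighted combination of the single metrics, as in a combined-metric detector."""
    scores = (cosine(a, b), dice(a, b), match(a, b))
    return sum(w * s for w, s in zip(weights, scores))

src  = set("the quick brown fox jumps".split())
susp = set("the quick brown dog".split())
far  = set("completely different words here".split())
assert combined(src, susp) > combined(src, far)   # overlapping passages score higher
```

Combining metrics with different biases (size-normalized vs. overlap-dominated) is what lets the combined score separate paraphrased overlap from incidental word sharing better than any single metric.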

S12.8 Transcription of Telugu TV News using ASR
Ram Reddy Malle (Osmania University & NERTU, India); Laxminarayana Parayitam and A V Ramana (Osmania University, India); Markandeya Janaswamy (MathWorks, India); Bhaskar Jupudi and Harish Biruduganti (NERTU, India); Samala Jagadheesh (BITS-Pilani, Hyderabad Campus, Hyderabad, India); Sumalatha Emmela (NERTU, India)

Automatic Speech Recognition (ASR) is the process of converting human speech, in the form of an acoustic waveform, into text. In this paper we discuss building an automatic speech recognition system for Telugu news. A Telugu speech database is prepared along with its transcription and dictionary. The Telugu speech files are collected from Telugu TV news channels. Most of the selected sentences are recorded in a studio environment, while some of the speech files contain unpredictable background noise. WaveSurfer is used to segment the speech into short sentences, and the CMU Sphinx toolkit is used to develop the ASR system. The recognized text is finally displayed using Baraha software (demo version).

S15: S15-Special Session on Social Signal Processing for Personalisation in Smart Cities

Keynote: Signal Processing of Electrical Consumption - Providing Deep Insights into Our Lives, Dr. Amarjeet Singh, Indian Institute of Technology Delhi
Room: 311
Chairs: Deepayan Bhowmik (University of Stirling, United Kingdom (Great Britain)), Arunangsu Chatterjee (University of Plymouth, United Kingdom (Great Britain))

Opening speech: Dr. Deepayan Bhowmik & Dr. Arunangsu Chatterjee

Title of Talk: Signal Processing of Electrical Consumption - Providing Deep Insights into Our Lives
Talk description: Electrical consumption, for both residential and industrial consumers, is currently provided through monthly bills that give little insights into our consumption patterns. This data if collected at a higher resolution - say every minute, can provide deep insights into our personal lives such as what time we wake up in the morning, what time we go to bed and how long do we watch television every day, among others. In addition, big data generated from such high resolution consumption pattern can be used to automatically dis-aggregate home level energy consumption into appliance level energy consumption without monitoring these appliances individually. In this talk, I will use real datasets collected by our group (and other publicly available datasets) to discuss deep learning opportunities possible with high resolution electrical consumption data. I will further talk about recent work in advanced analytics possible by combining this electrical consumption data with data from mobile phone sensors to provide personalized energy consumption feedback.
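A minimal sketch of the kind of step-change event detection that underlies appliance-level disaggregation of high-resolution consumption data (the threshold and synthetic readings are illustrative assumptions, not the speaker's method):

```python
def detect_events(power, threshold=50):
    """Flag step changes in a per-minute power series (watts) as on/off events."""
    events = []
    for t in range(1, len(power)):
        delta = power[t] - power[t - 1]
        if abs(delta) >= threshold:
            events.append((t, "on" if delta > 0 else "off", abs(delta)))
    return events

# synthetic readings: baseline load, then a 600 W appliance switching on and off
series = [120, 118, 121, 720, 722, 719, 121, 119]
assert detect_events(series) == [(3, "on", 599), (6, "off", 598)]
```

Matching the magnitudes of paired on/off events is one simple route from a single whole-home meter to per-appliance consumption estimates, which richer learning methods then refine.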

Panel discussion: Dr. Amarjeet Singh (IIT Delhi), Mr. Suparnakanti Das (DRDO) & Mr. Nobin Matthew (Kalki Technology)
Session closing

S15.1 A method for community recommendation for social networks
Soumyajyoti Banerjee (IIT Roorkee, India); Rajdeep Niyogi (Indian Institute of Technology Roorkee, India)

The ever-increasing popularity of social networks in our daily life can be leveraged to solve and support several serious problems. In this paper we recommend a user-centric community based on users' interest topics, location and relationship network. This can address several significant socio-psychological issues and also help us understand information propagation. Several research works address user recommendation and community detection, but the novelty of our approach is that we consider not only users' topic interests and location but also the network structure with proper edge content (weighted edges), and we recommend only those users who are interested in socialization, as predicted from several features of the user profile. Moreover, we suggest only human users, removing bots and cyborgs from the recommendation set. The relationship between attitude and action is captured from the user model and the user-centric network; based on these features we analyse the intention and attitude of one user toward another. We show that these properties can be leveraged to improve the performance and effectiveness of user-centric community recommendation in online social networks. As a large share of the world's population is active on Twitter (about 190 million users), we evaluated our method on Twitter data. We present several experimental results illustrating the enhanced performance and effectiveness of our approach.

S15.2 Application of IoT in detecting health risks due to flickering artificial lights
Sandip Das, Manojit Ballav and Saheli Karfa (University of Engineering and Management, India)

The Internet of Things (IoT) offers great promise in the field of health care. It can be used extensively to improve access to care and quality of care, and moreover to provide awareness, support and assistance to patients in remote locations, as well as to patients with chronic diseases. In this paper, a cloud-based user-centric approach is proposed to make users aware of the health risks due to the flickering of artificial light around them, and to provide precautionary measures where necessary. The system calculates the flicker values of different artificial light sources using MATLAB R2011a after acquiring a video from the user over the cloud, and notifies the user of the potential risk he or she may face due to exposure to such lighting conditions.
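One standard flicker metric, percent flicker, can be computed from per-frame brightness samples as follows (the paper does not specify its exact flicker computation, so this is only an illustrative sketch on invented sample values):

```python
def percent_flicker(luminance):
    """Percent flicker = 100 * (max - min) / (max + min) over one waveform cycle."""
    hi, lo = max(luminance), min(luminance)
    return 100.0 * (hi - lo) / (hi + lo)

# brightness samples over one cycle for a steady vs. a strongly flickering source
steady   = [0.95, 1.00, 0.97, 0.99, 0.96]
flickery = [0.20, 1.00, 0.25, 0.95, 0.30]
assert percent_flicker(flickery) > 60 > percent_flicker(steady)
```

A user-facing system would extract such luminance samples from the uploaded video frames and compare the resulting metric against a health-risk threshold before notifying the user.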

S15.3 Architecture for Internet of Things to Minimize Human Intervention
Rupali Shanbhag (University of Mumbai, India); Radha Shankarmani (Sardar Patel Institute Of Technology, India)

Today, smart objects in the Internet of Things (IoT) are able to detect their state and share it with other objects across the Internet, thus collaboratively taking intelligent decisions on their own. A large number of objects are quietly introduced into the web every day, and it is impossible for a user to keep track of all the objects available around him, so a search engine that can find smart objects is used. If an exact object yielding the required service is not present, the system requires user input to find alternatives, such as the name of an alternate object that may provide the same service. Humans always find alternatives around them to carry out their work smoothly; service provisioning in IoT should likewise be capable of providing alternate objects that are aligned with user requirements, the current context and previous knowledge, without any human intervention. We propose an architecture that uses naive Bayesian prediction on user history and Ant Colony Optimization to accurately predict objects of user interest, so as to reduce the dependency of IoT applications on human input for finding an object match.

S15.4 Smart Cities in Developing Economies: A Literature Review and Policy Insights
Sheshadri Chatterjee (Indian Institute of Technology Delhi, India)

Nowadays, the cities of both developed and developing countries feel the need to provide public services in the most effective and efficient ways. A considerable share of the rural population is converging on the cities to relish the fruits of development, and this pace of migration is increasing rapidly. Global experience shows that the pace of urbanization is not very fast up to about 30%, but thereafter, at least up to 60-65%, it increases rapidly. Statistics show that the urban population in India is now 31%, so the pace of urbanization will speed up, and cities must be prepared to meet the needs of their inhabitants. In this context the Government of India has decided to set up 100 "Smart Cities" in India (as observed in the Finance Minister's 2014 budget speech), since otherwise the existing cities will very soon become unlivable. Achieving this requires "smart technology", which in particular means improvement of Information and Communication Technology (ICT) along with other essential services. Some basic principles must be adopted by the cities marked for conversion into Smart Cities: coalescence, practicality and involvement. Over the past two centuries, urban landscapes have changed drastically: previously they were relatively small in area and meagre in population, but those small landscapes have turned into urban agglomerations, in keeping with rapid industrial and other infrastructural growth. In India the urban population, currently approximately 31% of the total, contributes over 60% of the national GDP, and within the next 15 years urban India is projected to contribute 75% of the national GDP. For this reason, the cities, as "engines of growth", must be improved to a great extent, and it is for this that the concept of Smart Cities has come to the surface. In this paper a sincere endeavour is made to critically examine and discuss the definition of Smart Cities, and the principles, constraints, characteristics and operational procedures to be adopted for complete success.

S15.5 Smart WSN-based Ubiquitous Architecture for Smart Cities
Satyanarayana V Nandury (CSIR-Indian Institute of Chemical Technology & Academy of Scientific & Innovative Research, India); Beneyaz Ara Begum (Academy of Scientific and Innovative Research (AcSIR) & CSIR-Indian Institute of Chemical Technology, India)

In the quest for a better quality of life and living standards, people living in rural areas are expected to move to urban locales. As this trend continues, more than half of the world's population is expected to make cities their place of dwelling. To meet this massive influx of rural masses into urban areas, cities the world over need to equip themselves with robust infrastructure that provides necessary amenities like adequate & clean power, hygienic water in sufficient quantities, and accommodation that makes optimum use of resources in a sustainable manner. Based on these requirements, a host of applications is being developed, such as smart power generation & distribution, smart traffic management, smart waste management & utilization, and smart governance. This paper discusses a few research issues and challenges related to the development of IT infrastructure for smart cities. Research efforts in this direction, however, appear to be application centric; not much attention has been given to developing an IT-based holistic infrastructural framework for smart cities. To this end, we develop a conceptual architecture, the Smart WSN-based Infrastructural Framework for smart Transactions (SWIFT), that provides a ubiquitous platform for the seamless interaction of various smart objects, devices and systems. In this paper, we introduce the SWIFT architecture and discuss issues related to its implementation.

S15.6 Socio-Physical Interaction Network (SPIN)
Sobin C C (SRM University Amaravati, AP, India); Alark Sharma and Deepak S (IIT Roorkee, India); Vaskar Raychoudhury (Miami University, USA)

Online social networks have long been studied by researchers to unearth the common patterns of socializing among human beings. Recently, there has been considerable interest in the Internet-of-Things, a network of interacting smart physical objects augmented with sensing devices. However, there is hardly any research on interactions among smart objects or between smart physical objects and humans. In this paper, we consider a combined network of humans and smart physical objects, called a Socio-Physical Interaction Network (SPIN), and study their joint interaction patterns and underlying properties. Our extensive experiments show that the SPIN exhibits common social networking properties.

S15.7 Machine Learning Approach for Detection of Cyber-Aggressive Comments by Peers on Social Media Network
Vikas Chavan (PESIT, India); Shylaja S S (PES University, India)

The fast growing use of social networking sites among teens has made them vulnerable to bullying. Cyberbullying is the use of computers and mobiles for bullying activities. Comments containing abusive words affect the psychology of teens and demoralize them. In this paper we devise methods to detect cyberbullying using supervised learning techniques. We present two new hypotheses for feature extraction to detect offensive comments directed towards peers, which are perceived more negatively and result in cyberbullying. Our initial experiments show that using features from our hypotheses in addition to traditional feature extraction techniques such as TF-IDF and N-grams increases the accuracy of the system.
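The abstract does not include implementation details; as a hedged illustration of the TF-IDF weighting it mentions (the corpus and tokenization below are hypothetical, not from the paper), a minimal standard-library sketch of per-comment term weights looks like this:

```python
import math
from collections import Counter

def tfidf(docs):
    """Compute TF-IDF weights for a small corpus of tokenized comments."""
    n = len(docs)
    df = Counter()                       # document frequency of each term
    for doc in docs:
        df.update(set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)                # raw term counts in this comment
        weights.append({t: (tf[t] / len(doc)) * math.log(n / df[t]) for t in tf})
    return weights

comments = [["you", "are", "stupid"], ["have", "a", "nice", "day"], ["you", "are", "great"]]
w = tfidf(comments)
# "stupid" occurs in only one comment, so it receives a higher weight there than "you"
```

In a real classifier these weights would feed a supervised learner; the paper's own feature hypotheses go beyond this baseline.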

S15.8 Vehicle Brake and Indicator Detection for Autonomous Vehicles
R. Sathya, M. KalaiselviGeetha and P. Aboorvapriya (Annamalai University, India)

Automated detection of vehicle lights can be used as part of forward collision avoidance and accident prevention systems. This paper presents an automatic vehicle detection and tracking system using the Haar cascade method. Thresholding in the HSV color space is used to represent the vehicle lights and segment the light areas. The extracted brake and turn indicator signals are morphologically paired, and vehicle light candidate information is extracted by identifying the Region of Interest (ROI). The Canny edge intensity of the vehicle lights is extracted, and from it a novel feature called the Edge Block Intensity Vector (EBIV) is computed. A traffic surveillance system is developed for recognizing moving vehicle lights in traffic scenes using an SVM with polynomial and RBF (Radial Basis Function) kernels. The experiments are carried out on real data collected in a traffic road environment. The approach achieves its highest overall average accuracy of 95.7% with an RBF-kernel SVM using 36 EBIV features, compared with the polynomial-kernel SVM using 9, 16, 25, 36, 64 and 100 EBIV features and the RBF-kernel SVM using 9, 16, 25, 64 and 100 EBIV features.

S15.9 Predicting the Next Move: Determining Mobile User Location using Semantic Information
Nitin Bhyri, Gautham Kidiyoor and Varun SK (PESIT, India); Subramaniam Kalambur and Dinkar Sitaram (PESIT); Chid Kollengode (Dataxu, India)

The location of a mobile user is used to deliver context-sensitive information such as advertisements and deals, and predicting a user's future locations can help target specific services. Nokia provided researchers with data collected from around 200 mobile users over a period of about two years. Previous efforts have attempted either to predict the location of the user or the semantics associated with a location. In our work, we demonstrate that a multi-level approach, which first predicts the semantics of a location and then the specific location within that semantic class, helps improve the accuracy of the prediction. We show that for certain semantic locations, the prediction accuracy can be as high as 90%.
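The paper's model is not specified in the abstract; as a hedged sketch of the multi-level idea, the following hypothetical frequency model first predicts the next semantic class, then the most visited venue within that class (all class and venue names are illustrative):

```python
from collections import Counter, defaultdict

class TwoLevelPredictor:
    """Predict the next semantic class, then the likeliest venue in that class."""
    def __init__(self):
        self.class_after = defaultdict(Counter)   # current class -> next-class counts
        self.venue_in = defaultdict(Counter)      # class -> venue visit counts

    def observe(self, trace):
        """trace is a list of (semantic_class, venue) visits in time order."""
        for (cls_a, _), (cls_b, venue_b) in zip(trace, trace[1:]):
            self.class_after[cls_a][cls_b] += 1
            self.venue_in[cls_b][venue_b] += 1

    def predict(self, current_class):
        nxt = self.class_after[current_class].most_common(1)[0][0]
        venue = self.venue_in[nxt].most_common(1)[0][0]
        return nxt, venue

p = TwoLevelPredictor()
p.observe([("home", "h1"), ("work", "office"), ("food", "cafe_x"),
           ("work", "office"), ("food", "cafe_x"), ("home", "h1")])
nxt = p.predict("work")
# after work this user most often visits a "food" location, specifically cafe_x
```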

WCI-01: Cloud, Cluster and Grid Computing

Room: 402
Chair: Sanjaya Kumar Panda (National Institute of Technology, Warangal, India)
WCI-01.1 A Mobile Based Remote User Authentication Scheme without Verifier Table for Cloud Based Services
Sumitra Binu (Christ University, India); Mohammed Misbahuddin (Centre for Development of Advanced Computing, India); Pethuru Raj Chelliah (IBM, India)

The emerging Cloud computing technology, offering computing resources as a service, is gaining the increasing attention of both the public and private sectors. For the wholehearted adoption of the Cloud, service providers need to ensure that only valid users gain access to the services and data residing within the provider's premises. Ensuring secure access to sensitive resources within the Cloud requires a strong user authentication mechanism using multiple authentication factors. User authentication mechanisms should also consider the increasing need for Internet access through smart phones and other mobile devices and facilitate access through a variety of devices. Traditionally, a user needs to maintain a separate account for each Service Provider whose service he/she desires to use, which may cause inconvenience. Single Sign-On (SSO) addresses this issue by permitting users to create one login credential and access multiple services hosted in different domains. In this scenario, however, a compromise of the single credential can result in account takeover at many other sites, which points to the need to strengthen the authentication mechanism with more than one factor. This paper proposes an SSO-based remote user authentication scheme for a Cloud environment. The proposed protocol uses a password and a mobile token and does not require the server to maintain a verifier table. The protocol is verified using the automated security protocol verification tool Scyther, and the results prove that the protocol protects against man-in-the-middle and replay attacks and preserves the secrecy of the user's credentials.

WCI-01.2 A Robust and Light Weight Authentication Framework for Hadoop File System in Cloud Computing Environment
Mrudula Sarvabhatla (NBKR IST, India); M Chandramouli Reddy (Veltech Technical University & Vaishnavi Institute of Technology for Women, India); Chandra Sekhar Vorugunti (Indian Institute of Information Technology- SriCity, India)

The advancement of web and mobile technologies has resulted in the rapid augmentation of traditional enterprise data, IoT-generated data, and social media data, producing petabytes and exabytes of structured and unstructured data across clusters of servers per day. Storing, processing, analyzing and securing this big data is becoming a serious concern for large and medium enterprises. Hadoop's HDFS is a cloud-based distributed file system for storing and processing voluminous amounts of data across clusters of servers. Along with its huge potential for dynamic processing and scalability, HDFS also brings inherent security drawbacks, such as the lack of authentication and authorization of remote users connecting to a cluster and the absence of encryption of sensitive data at the communication, storage and processing levels. These drawbacks demand a robust, lightweight security framework for HDFS. In this context, we propose a secure and lightweight remote user authentication framework for HDFS which guarantees all the critical security requirements of a distributed file system.

WCI-01.3 Architecting Solutions for Scalable Databases in Cloud
Gitanjali Sharma and Pankaj Deep Kaur (Guru Nanak Dev University, India)

Owing to technological proliferation and the data deluge, the extent of scalability that cloud databases can accomplish is a long-fostered dream for applications that demand high levels of availability, consistency, elasticity and reliability. Scaling cloud databases implies scaling ACID guarantees over geographically distributed and replicated data stores, which is hard to achieve since such applications prefer availability over strong consistency. Past decades have witnessed many architecting decisions and solutions in view of this challenge, such as partitioning, replication, concurrency control mechanisms and consistency models. This paper presents a brief description of the architecting solutions implemented so far and discusses the challenges faced in the ongoing research and development in this domain.

WCI-01.4 Content Based Audiobooks Indexing using Apache Hadoop Framework
Sonal Shetty (B V B College of Engineering & Technology, India); Akash Sabarad (Vidya Nagar & B V B College of Engineering and Technology, India); Hareesh Hebballi (B V B College of Engineering & Technology, India); Moula Husain (Bvb-cet Hubli, India); Meena S M (B. V. Bhoomaraddi College of Engineering and Technology, India); Shiddu Nagaralli (B V B College of Engineering & Technology, India)

In recent years, content based audio indexing has become a key research area, as audio content describes the underlying material precisely while having comparatively low density. In this paper, we present the conversion of audio books into textual information using the CMU SPHINX-4 speech transcriber and the efficient indexing of audio books using term frequency-inverse document frequency (tf-idf) weights on the Apache Hadoop MapReduce framework. In the first phase, audiobook datasets are converted into textual words by training the CMU SPHINX-4 speech recognizer with acoustic models. In the next phase, the keywords present in the text file generated by the speech recognizer are filtered using tf-idf weights. Finally, we index the audio files based on the keywords extracted from the speech-converted text file. As conversion of speech to text and indexing of audio are space- and time-intensive tasks, we ported the execution of these algorithms onto the Hadoop MapReduce framework, which resulted in considerable improvement in time and space utilization. As the amount of data being uploaded and downloaded escalates, this work can be further extended to the indexing of image, video and other multimedia forms.
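The MapReduce indexing step can be sketched in miniature; the mapper, reducer, and two-book corpus below are illustrative assumptions (the paper runs on actual Hadoop with transcribed audiobooks), but the map/sort/reduce flow is the same shape:

```python
from itertools import groupby

# Mapper: emit (word, doc_id) pairs from a transcribed text
def mapper(doc_id, text):
    for word in text.lower().split():
        yield word, doc_id

# Reducer: for each word, build a sorted posting list of documents containing it
def reducer(word, doc_ids):
    return word, sorted(set(doc_ids))

def run_job(docs):
    """Simulate map -> shuffle/sort -> reduce for an inverted index."""
    pairs = [kv for doc_id, text in docs.items() for kv in mapper(doc_id, text)]
    pairs.sort(key=lambda kv: kv[0])                 # the "shuffle" phase
    index = {}
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        w, ids = reducer(word, [doc_id for _, doc_id in group])
        index[w] = ids
    return index

books = {"book1": "the whale hunts the sea", "book2": "the sea is calm"}
index = run_job(books)
# index["sea"] -> ["book1", "book2"]; a tf-idf filter would prune common words like "the"
```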

WCI-01.5 Design of Dependable Task Scheduling Algorithm in Cloud Environment
Suruchi Sharma (KIIT University, Bhubaneswar, India); Pratyay Kuila (National Institute of Technology Sikkim, India)

Cloud computing is an emerging technology that has attracted much research in areas such as resource allocation, virtual machine (VM) allocation, task scheduling, security, and privacy. A large number of requests, containing both independent and dependent tasks, arrive in the cloud continually and must be managed. Dependent tasks delay the execution of other tasks and therefore require a heuristic scheduling approach. In this paper we propose a new heuristic algorithm for dependent-task scheduling that reduces the scheduling period, and we compare the results with the Bounded Number of Processors (BNP) class of scheduling algorithms: Modified Critical Path (MCP), Earliest Time First (ETF) and Dynamic Level Scheduling (DLS). The results show low makespan, low average processor utilization, low scheduling length ratio and high speed-up.

WCI-01.6 Experimental evaluation of Multipath TCP with MPI
Khushi Anand Patel (Devang Patel Institute of Advance Technology and Research-DEPSTAR, India); Jigar Raval (Physical Research Laboratory, Ahmedabad, India); Samuel Johnson (Physical Research Laboratory, India); Bhavesh Patel (Gujarat Technological University, India)

These days, computing devices are equipped with multiple network interfaces: mobiles with 3G and Wi-Fi, laptops with Wi-Fi and Ethernet, etc. However, TCP was not designed to utilize these interfaces simultaneously for providing high bandwidth and/or fail-over. The IETF recognized this problem and developed MPTCP, an extension of TCP that tries to solve the problem of utilizing multiple NICs. As of today, MPTCP has been implemented in iOS 7 running on the iPhone and iPad, and its Linux kernel implementation is available for use in desktops and servers. In the world of High Performance Computing (HPC) clusters, MPI is the de facto standard for communication between the nodes of a cluster. MPI supports different transport layer protocols over Ethernet (e.g., TCP, iWARP, UDP, raw Ethernet frames), shared memory, and InfiniBand. As most HPC clusters are implemented today, they have a primary high-bandwidth, low-latency network for data communication (usually InfiniBand) and a secondary network for backup, fall-back or management purposes. In the majority of cases, it is the network that causes the bottleneck for jobs running on an HPC cluster; thus, the networking technologies used in HPC are continually evolving. In this paper, we explore the possibility of combining these two technologies to achieve high performance in an HPC environment by increasing network throughput through intelligent utilization of all the different types of network interfaces that HPC nodes have. We also address various performance issues that may arise with the use of MPTCP in MPI.

WCI-01.7 Optimal Time Dependent Pricing Model for Smart Cloud with Cost Based Scheduling
Chetan Chawla (Thapar University, India); Inderveer Chana (Thapar Institute of Engineering and Technology-(Deemed to be University), India)

In this paper, we design and develop a "Day-Ahead" pricing model applicable to a smart cloud environment. A Time Dependent Pricing (TDP) scheme is presented to increase the resource provider's revenue and ensure Quality of Service (QoS) for its consumers. The model helps resource providers solve the problem of cost minimization. A resource management system is developed to schedule a consumer's workload in advance based on its previous demands, using a Cost Based (CB) scheduling policy in our simulation. Customers have the opportunity to shift their workload to the hours of the day when traffic is lightest in exchange for rewards (lower prices) from the service provider.
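The paper's actual pricing rule is not given in the abstract; as a hedged sketch of time-dependent pricing, the toy model below (the discount rule, traffic figures, and `max_discount` parameter are all hypothetical) discounts off-peak hours and lets a deferrable workload pick the cheapest eligible slot:

```python
def day_ahead_prices(base_price, traffic, max_discount=0.5):
    """Per-hour price: the discount grows as observed traffic falls (toy rule)."""
    peak = max(traffic)
    return [round(base_price * (1 - max_discount * (1 - t / peak)), 4) for t in traffic]

def best_slot(prices, deadline_hours):
    """Pick the cheapest hour among those that still meet the workload's deadline."""
    return min(deadline_hours, key=lambda h: prices[h])

traffic = [10, 4, 2, 8]              # requests observed per hour slot
prices = day_ahead_prices(1.0, traffic)
slot = best_slot(prices, [0, 1, 2])  # the workload must run in one of the first 3 hours
# the lightest-traffic hour (slot 2) carries the largest reward, so it is chosen
```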

WCI-01.8 Performance based energy efficient techniques for VM allocation in Cloud environment
Bela Shrimali (LDRP Institute of Technology and Research-Gandhinagar, India); Hiren Patel (S. P. College of Engineering, India)

Cloud computing is emerging as a new paradigm for providing services such as platform, infrastructure and software to large-scale distributed computing applications via the Internet. Computing resources are made available in the Cloud through virtualization, which divides a physical machine into multiple partially or fully isolated machines (known as Virtual Machines, or VMs) using various allocation techniques. Identifying a technique that can satisfy quality-of-service requirements while accounting for energy consumption is one of the challenging issues in VM allocation, as there are tradeoffs between energy consumption and performance. In the present research, we survey techniques that combine energy efficiency and performance: different real-world virtual machine allocation policies are explored and performance-based energy-efficient techniques for VM allocation are discussed. This survey may assist researchers who wish to step into the domain of performance-based energy-efficient VM allocation techniques.

WCI-01.9 Physical to Virtual Migration of Ubuntu System on OpenStack Cloud
Pooja Karande (Visvesvaraya National Institute of Technology, Nagpur, India); Sonam Gaherwar and Manish Kurhekar (Visvesvaraya National Institute Of Technology, Nagpur, India)

Cloud computing is an evolving technology which provides reliable computing resources as a utility, without the need to build and maintain computing infrastructure in-house. Today, users and organizations want to migrate from a physical system to a virtual cloud instance; in other words, they want to migrate a traditional operating system, pre-installed on a physical system, into a virtual machine instance running on the Cloud. Often the user requires the instance configuration to be exactly the same as that of his/her personal physical system. In this paper, we implement a process for migrating a physical system to a virtual OpenStack cloud instance. We achieve this migration using system images of the physical system and a semi-automated script.

WCI-01.10 Rainfall Prediction using Artificial Neural Network on Map-Reduce Framework
Namitha K, Jayapriya A and Santhosh Kumar G (Cochin University of Science and Technology, India)

Big data has been a celebrated topic in both the business and research communities for several years. With the Big Data revolution, it is becoming easy and inexpensive to store tremendous amounts of data for future analysis. Weather data accumulates very fast and on a large scale, and thorough analysis and research are required to handle this big data and utilize it for accurate weather prediction. As deterministic weather forecasting models are usually time consuming, it is challenging to use this large volume of data efficiently. Machine learning methods have already proved to be a good replacement for traditional deterministic approaches in weather prediction; these algorithms are popular for their scalability and hence are more suitable for big data solutions. This paper proposes an approach to processing such big volumes of weather data using Hadoop: an artificial neural network implemented on the MapReduce framework for short-term rainfall prediction. Rainfall is predicted one day ahead using the temperature and rainfall data of the immediately preceding days. Temperature and rainfall data for India over the past 63 years (1951-2013) is used for this study.

WCI-01.11 Time and Energy Saving through Computation Offloading with Bandwidth Consideration for Mobile Cloud Computing
Apurva Pawar (Student of Masters of Engineering, India); Vandana Jagtap and Mamta Bhamare (University of Pune, India)

Smartphone applications are gaining popularity, and many complex, computation- and resource-intensive applications are coming to market. These applications require more resources and consume a lot of battery power, but improvements in battery capacity have lagged far behind those of other mobile device components. Clouds, on the other hand, are rich in resources and computational power. Therefore, by utilizing cloud resources for application execution we can reduce battery consumption and in turn increase battery charging intervals. This is known as cyber foraging or computation offloading in Mobile Cloud Computing. Various factors affect computation offloading, such as bandwidth, so we consider bandwidth as the major factor in the proposed offloading decision-making algorithm. Our algorithm is based on a divide-and-conquer strategy, and the system focuses on saving both time and energy.

WCI-01.12 Truthful Resource Allocation Detection Mechanism for Cloud Computing
Sasmita Parida (C V Raman College Of Engineering BPUT, India); Suvendu Nayak (Asst Prof, India)

Resource allocation is an NP-hard problem; in spite of the many proposed algorithms and methods, it remains a challenge. VMs are the basic resource considered in cloud computing. In this paper we consider resource allocation in Haizea, an open-source VM-based lease management architecture that acts as a resource manager. Haizea uses several scheduling mechanisms: Advance Reservation (AR), Best Effort (BE), Immediate (IM) and Deadline Sensitive (DS). In addition to these, swapping and backfilling algorithms have been proposed, but backfilling is not applicable in all situations. In this work we aim to make scheduling truthful through our proposed mechanism, which also identifies whether a lease can be scheduled or not. The allocation detection mechanism is simple and robust.

WCI-02: MANET, VANET, WSN and Social Networks

Room: 403
Chairs: Santhosh Kumar G (Cochin University of Science and Technology, India), Monish Chatterjee (Asansol Engineering College & West Bengal University of Technology, India)
WCI-02.1 Modeling Evolutionary Group Search Optimization Approach for Community Detection in Social Networks
Hema Banati (University of Delhi & DYAL SINGH COLLEGE, India); Nidhi Arora (University of Delhi, India)

Community structure identification is an important area of research in complex social networks. Uncovering hidden communities in social network data can help us visualize and analyze various behavioral and structural phenomena occurring in social networks. Detecting communities in networks implies identifying sets of clusters that show stronger internal cohesion than external cohesion. The problem can be translated into the modularity maximization problem, which is NP-hard. This paper attempts to maximize the modularity of a given network through Mod-GSO, a modification of the Group Search Optimization (GSO) algorithm based on evolutionary animal searching behavior. Mod-GSO modifies the area-copying mechanism of the scrounger animals in GSO by performing a single-point crossover instead of real-coded GA crossover to evolve communities. The proposed modification makes GSO applicable to social network data and evolves scroungers with better community structures and convergence compared to the random evolution of GSO scroungers. Mod-GSO does not require the number of communities to be fixed a priori and works with a smaller population. Experimental results obtained by Mod-GSO were compared with three well-known community detection algorithms (CNM, RB and Multilevel) and two evolutionary community detection algorithms (Firefly and GA-Mod) using the Modularity and NMI metrics on four real-world and eleven well-known benchmark datasets. The best modularity values of Mod-GSO were observed to be higher than those of many of the community finding algorithms, showing the capability of Mod-GSO to detect accurate community structures in comparison to other algorithms.
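The objective being maximized is Newman modularity, which the abstract names but does not define; a minimal sketch (the six-node graph is an illustrative example, not one of the paper's datasets) computes Q for a given partition:

```python
def modularity(adj, communities):
    """Newman modularity Q for an undirected graph given as adjacency sets:
    Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] for pairs in the same community."""
    m = sum(len(nbrs) for nbrs in adj.values()) / 2          # number of edges
    comm_of = {v: c for c, members in enumerate(communities) for v in members}
    q = 0.0
    for i in adj:
        for j in adj:
            if comm_of[i] == comm_of[j]:
                a_ij = 1 if j in adj[i] else 0
                q += a_ij - len(adj[i]) * len(adj[j]) / (2 * m)
    return q / (2 * m)

# Two triangles joined by a single edge: splitting them yields Q = 5/14 ~ 0.357
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5, 6}, 5: {4, 6}, 6: {4, 5}}
q = modularity(adj, [{1, 2, 3}, {4, 5, 6}])
```

Mod-GSO, like the other evolutionary algorithms compared, searches over partitions to maximize this Q.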

WCI-02.2 A Control Mechanism for Power Management of Electrical Devices Using Android Phone in a Zigbee Network
Archana Das (University of Calicut & Vidya Academy of Science and Technology, India); Amal Ganesh (Vidya Academy of Science & Technology, India)

Efficient power management is one of the major problems in the present world, and solving it requires an effective mechanism to control and manage the power used by electrical devices. One such mechanism, presented in this paper, controls the devices within a home with the help of Zigbee wireless and Android technologies. The system is an integration of hardware and software: with it, we can control the electrical devices in our home from outside the home using an Android phone, which works as a remote control performing the switch-ON and switch-OFF functions. The interfacing of the embedded system and the Android phone is done through a Zigbee module. The paper focuses on the design and implementation of the control system that controls the electrical devices. A voice input for the required operation is given to the Android speech recognition page, converted to text, and updated in the server database. From the server database the values are retrieved by the microcontroller via Zigbee, which operates the logic to switch the device ON or OFF. The microcontroller is programmed to calculate the power consumed by the electrical devices connected to the system, and the calculated power is displayed on the Android phone. The system also makes it possible to view the energy meter reading on the Android phone, so power management can be done efficiently.

WCI-02.3 A Routing Protocol for Detecting Holes in Wireless Sensor Networks with Multiple Sinks
Anju Arya (Deenbandhu Chhotu Ram University of Science & Technology, India); Amita Rani (Deenbandhu Chhotu Ram University of Science and Technology, India); Sanjay Kumar (Hindu College of Engineering, Sonepat, India)

In wireless sensor networks, routing data optimally is a challenging task while tackling the limited energy constraint at the same time. Our goal is not only to enhance the network's lifetime for the available limited network energy, but also to detect sub-areas of the network which have run out of energy and become dead zones, i.e., holes. The hole detection algorithm used in our work detects such dead zones and helps the network coordinators mitigate the problem before it hampers the functioning of the whole network. The simulation results show the ability of the protocol to route data packets optimally in a multiple-sink scenario and to detect holes when they occur. Our work covers both coverage hole detection and routing hole detection.

WCI-02.4 CODES: A COllaborative DEtection Strategy for SSDF Attacks in Cognitive Radio Networks
Amar Taggu (North Eastern Regional Institute of Science and Technology, India); Chukhu Chunka (NERIST, India); Ningrinla Marchang (North Eastern Regional Institute of Science and Technology, Arunachal Pradesh, India)

Cognitive Radio Network (CRN) nodes, called Cognitive Radio (CR) nodes, sense the presence of Primary User (PU) activity. The sensing reports from all the CR nodes are sent to a Fusion Centre (FC), which aggregates them and decides whether the PU is present based on some decision rules. Such a collaborative sensing mechanism forms the foundation of any centralised CRN. However, this approach can be misused by malicious Secondary Users (SUs) carrying out Spectrum Sensing Data Falsification (SSDF) attacks. SSDF attacks, which invade the CRN during the spectrum sensing phase, can affect the global decision on spectrum occupancy and thus degrade the overall performance of a CRN. In this paper, a threshold-based detection strategy, CODES, is proposed for the detection of SSDF attacks. Simulation results show successful detection of malicious SUs across a range of SSDF attacks.

WCI-02.5 Design and Implementation of Wireless Robot for Floor Cleaning Application
Kota Solomon Raju (Scientist, India); Kanithi Vijaya Ram Bharadvaj (CSIR-CEERI, India); Sagarika Choudhury (Tezpur University, India)

Efficiency and completeness of coverage are the major constraints in designing an autonomous floor-cleaning robot. This paper presents a simple and inexpensive approach to the design and implementation of a wireless floor-cleaning robot. An MSP430-based ultrasonic sensor system has been developed for complete-coverage navigation, in which the robot adopts a complete-coverage cleaning strategy and executes a back-and-forth cleaning task. The robot operates wirelessly and can complete the cleaning task efficiently in an unknown environment with unexpected obstacles.

WCI-02.6 Detecting Overlapping Communities in LBSNs with Enhanced Location Privacy
Sreelekshmi K (University of Kerala & Sree Buddha College of Engineering, India)

Location based social networks (LBSNs) such as Facebook Places and Twitter provide large amounts of data which allow service providers to create applications like group marketing, friend and location recommendation, and trend inquiry. LBSNs do not, however, provide precise communities that users can subscribe to, join or follow; strengthened community disclosure is needed to capitalize on probable users. The variance of users' behavior and preferences makes these communities overlap, and several procedures have been introduced for detecting overlapping communities in LBSNs. Still, a severe threat in LBSNs is that location data may be misused to steal users' identities, track them, execute home invasions, and even stalk or intimidate them. This work is a framework to discover overlapping communities based on users' check-ins at various locations and user venue attributes, with refined location privacy achieved through secure user-specific coordinate conversions applied to the location data shared with the server, while preserving distances.

WCI-02.7 MACO: Modified ACO for reducing travel time in VANETs
Vinita Jindal, Heena Dhankani, Ruchika Garg and Punam Bedi (University of Delhi, India)

The problem of increased travel time due to congestion on roads is gaining the wide attention of researchers these days, and several approaches have been proposed to address it through various traffic management strategies. This paper proposes a modified version of ant colony optimization (MACO) to reduce the travel time of vehicles on the move. MACO is a variation of classical ACO in which a repulsion effect is used instead of attraction towards pheromones, avoiding congested routes and dispersing traffic towards paths with a lower pheromone value. The proposed algorithm reduces the overall waiting time to maintain fast movement of the traffic irrespective of the path taken by the vehicles. It also ensures that under normal traffic vehicles follow the shortest path, while under congestion vehicles use the MACO algorithm to select a non-congested path. Experiments were conducted to compare the efficiency of MACO on three types of networks: a simulated simple network, a simulated complex network and a real-world network obtained from a map of the University of Delhi. The proposed MACO algorithm reduced travel time in all cases as the number of vehicles increased.
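The repulsion idea can be sketched in a few lines; the selection rule below (probability inversely proportional to pheromone, with a hypothetical `beta` exponent) is an illustration of the inversion MACO describes, not the paper's exact update equations:

```python
import random

def choose_next(edges, pheromone, beta=1.0):
    """Pick the next road segment with probability inversely proportional to its
    pheromone level (repulsion), steering vehicles away from congested segments."""
    weights = [1.0 / (pheromone[e] ** beta + 1e-9) for e in edges]
    total = sum(weights)
    r, acc = random.random() * total, 0.0
    for e, w in zip(edges, weights):
        acc += w
        if r <= acc:
            return e
    return edges[-1]   # guard against floating-point rounding

random.seed(7)
pher = {"A": 9.0, "B": 1.0}      # segment A carries heavy pheromone (congested)
picks = [choose_next(["A", "B"], pher) for _ in range(1000)]
# "B" should be chosen roughly nine times as often as "A"
```

Classical ACO would use `pheromone[e] ** beta` directly as the weight; inverting it is what disperses traffic onto lightly used paths.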

WCI-02.8 Maximizing Fault Tolerance and Minimizing Delay in Virtual Network Embedding using NSGA-II
Angel Bansal (Jaypee University of Engineering and Technology, India); Omprakash Kaiwartya (Universiti Teknologi Malaysia, Malaysia); Ravindra Singh (Jaypee University of Engineering and Technology, India); Shiv Prakash (IIT Delhi, India)

Due to growing interest in network cost optimization through resource sharing, virtual network embedding has significantly attracted the attention of researchers. Recently, various virtual network embedding algorithms have been suggested. The performance of these algorithms is not reliable due to the absence of fault tolerance capability. Therefore, this paper proposes a technique for maximizing fault tolerance and minimizing delay in Virtual Network Embedding (VNE) using Non-dominated Sorting Genetic Algorithm (NSGA-II). The multi-objective optimization problem is mathematically formulated. An adapted NSGA-II is proposed for solving the optimization problem. The major components of adapted NSGA-II are representation of chromosome, computation of fault tolerance and delay, sorting using non-domination, and crossover and mutation operation. Two novel mathematical functions for computing fault tolerance and delay are developed. The analysis of simulation results clearly indicates that the proposed technique effectively optimizes both the considered objectives in VNE.
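The core of NSGA-II's non-dominated sorting is the Pareto dominance test; a minimal sketch (the candidate embeddings and their objective values below are invented for illustration) extracts the first non-dominated front for the two objectives, delay and fault tolerance:

```python
def dominates(a, b):
    """a dominates b when it is no worse in every objective and strictly better
    in at least one. Objectives here: (delay, -fault_tolerance), both minimized."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """First non-dominated front, as built in NSGA-II's sorting step."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# (delay, -fault_tolerance) for four candidate virtual network embeddings
cands = [(3.0, -0.9), (2.0, -0.5), (4.0, -0.95), (3.5, -0.4)]
front = pareto_front(cands)
# (3.5, -0.4) is dominated by (3.0, -0.9): slower AND less fault tolerant
```

The full algorithm then ranks the remaining points into successive fronts and applies crowding distance, crossover and mutation, which the sketch omits.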

WCI-02.9 Intelligent Information Retrieval Technique for Wireless Sensor Networks
Savneet Kaur (Shri Venkateshwara University, India); Deepali Virmani (VIPS-TC COE & GGSIPU, Delhi, India); Geetika Malhotra (Bhagwan Parshuram Institute of Technology, India)

Wireless sensor networks play a vital role in military applications, where it is very important to extract the actual information from transmitted messages. Sentiment analysis is a valuable knowledge resource which analyses the collective sentiment of a text and helps in decision making. This paper proposes an Intelligent Information Retrieval Technique (IIRT) based on SentiWordNet, a lexical resource used for aspect analysis of text messages sent among military groups. The proposed IIRT extracts intelligent information from text messages transmitted in military groups in WSNs. IIRT is based on sentiment analysis: first the aggregated message is filtered, then segregated with the help of lemmatization, and opinion words are gathered by grammatical tagging. Finally, aspect analysis of these opinion words is done with the help of SentiWordNet, and intelligent information is gathered from the original message by calculating the polarity values of the opinion words. The proposed IIRT is executed on time-stamped data messages sent among military groups in wireless sensor networks, and the results demonstrate its correctness and accuracy.
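The final polarity step can be sketched as follows; the per-word (positive, negative) scores below are hypothetical stand-ins in SentiWordNet's style, not values from the actual lexicon:

```python
# Hypothetical (positive, negative) scores per opinion word, SentiWordNet-style
LEXICON = {"secure": (0.75, 0.0), "attack": (0.0, 0.625), "retreat": (0.125, 0.5)}

def message_polarity(opinion_words):
    """Aggregate polarity of the tagged opinion words: >0 positive, <0 negative."""
    score = 0.0
    for w in opinion_words:
        pos, neg = LEXICON.get(w, (0.0, 0.0))   # unknown words count as neutral
        score += pos - neg
    return score

p = message_polarity(["attack", "retreat"])   # negative overall
```

In the full pipeline this step runs after filtering, lemmatization and grammatical tagging have isolated the opinion words.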

S14: S14 - Signal and Image Processing- I

Room: 501
Chairs: Supriya Hariharan (Cochin University of Science and Technology, India), Mohammed Saaidia (University of Souk-Ahras. Algeria, Algeria)
S14.1 Small Bowel Image Classification using Dual Tree Complex Wavelet-Based Cross Co-occurrence Features and Canonical Discriminant Analysis
Guangyi Chen (Concordia University, Canada); Sri Krishnan (Ryerson University, Canada)

In this paper, a modified algorithm is proposed for automatically classifying small bowel images as normal or abnormal. Instead of the shift-invariant overcomplete wavelet transform, our modification applies the dual tree complex wavelet transform (DTCWT) to the small bowel images over three decomposition scales and clamps and linearly scales the DTCWT subbands. As in the original algorithm, we extract a cross co-occurrence matrix from each DTCWT subband and calculate four textural features from each cross co-occurrence matrix. Unlike the original algorithm, we select a subset of the calculated texture features by means of the minimum redundancy maximum relevance (mRMR) algorithm. Like the original algorithm, we use canonical discriminant analysis as the classifier to assign a small bowel image to the normal or abnormal class. Experimental results show that our proposed modification outperforms the original algorithm for small bowel image classification by 5.3% in correct classification rate on the same dataset.

S14.2 Phase Congruency and Morphology Based Approach for Text Localization in Videos
Smitha ML (KVG College of Engineering, Sullia, India); B H Shekar (Mangalore University, India)

In this knowledge era, cognitive learning and media technologies promote highly visual content through videos, which appear in the form of text, graphics, animations, audio or still images. Text present in videos carries important semantic information that is essential for video comprehension. In this context, we propose a novel method for detecting text clusters in video frames based on the phase congruency model and morphology-based approaches. We investigate the matching features extracted by both methods and devise a set of rules using morphological operators for false-positive elimination. Text regions are detected using connected component analysis, and finally the text is localized. We have evaluated the performance of the proposed method on standard datasets, and the results highlight its effectiveness in localizing text in videos and scene images.

S14.3 Remote Sensing Image Fusion using Hausdorff Fractal Dimension in Shearlet Domain
Abhishek Dey (Scottish Church College, India); Biswajit Biswas and Kashi Nath Dey (University of Calcutta, India)

Preservation of spectral information and enhancement of spatial resolution are the most important issues in remote sensing image fusion. In this paper, a new remote sensing satellite image fusion method using the shearlet transform (ST) with a Hausdorff fractal dimension (HFD) estimation method is proposed. First, the ST is applied to the high-spatial-resolution panchromatic (PAN) image and the multi-spectral (MS) image. Then, the low-frequency sub-band coefficients from the different images are combined according to the HFD method, which estimates and selects the modified low-pass band automatically. The composition of the different high-pass sub-band coefficients obtained by the ST decomposition is discussed in detail. Finally, the fusion result is obtained by the inverse ST. Experimental results show that the proposed method outperforms many state-of-the-art techniques in both subjective and objective evaluation measures.

S14.4 Remote Sensing Image Fusion using Statistical Univariate Finite Mixture Model in Shearlet Domain
Biswajit Biswas (University of Calcutta, India); Abhishek Dey (Scottish Church College, India); Kashi Nath Dey (University of Calcutta, India)

Remote sensing image fusion is a process that integrates the spatial detail of a panchromatic (PAN) image with the spectral information of a low-resolution multispectral (MS) image to produce a fused image containing both high spatial and spectral detail. In this paper, a new remote sensing image fusion method is proposed based on a statistical Univariate Finite Mixture Model (UFMM) in the shearlet domain. First, the shearlet sub-bands of the PAN and MS images are obtained by the Shearlet Transform (ST). Then, a novel fusion strategy is designed for both the low-pass and high-pass sub-bands. Finally, the fused image is obtained by the Inverse Shearlet Transform (IST). Comparisons with well-known methods in terms of several quality evaluation indexes on QuickBird and IKONOS images show the superiority of our method.

S14.5 2D Non-Linear State Optimization Using Evolutionary Techniques
Rajiv Kapoor (Delhi Technological University, India); Ashish Dhiman (IIT Gandhinagar, India); Akshay Uppal (Delhi Technological University, India)

For accurately modelling non-Gaussian and non-linear behaviour in systems when estimating the density function, the particle filter (PF) is considered more precise than other filters such as the Kalman filter. Particle filters, also known as Sequential Monte Carlo methods, use sampling to implement recursive Bayesian filters. However, the particle filter has limitations, namely particle degeneracy and sample impoverishment, which pose an immense challenge in the non-linear state estimation of particles. To overcome these limitations, in this paper we present a novel implementation of 2-D state estimation of particles for the bearings-only tracking problem using PF-BBO (biogeography-based optimization) and PF-PSO (particle swarm optimization). The efficacy of the particle filter is expressed in terms of root-mean-square error (RMSE) values, which show the improved estimation accuracy of PF-BBO over PF-PSO.
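A bootstrap particle filter of the kind being optimized here can be sketched in one dimension (the paper works in 2-D and replaces plain resampling with BBO/PSO moves; the random-walk model, noise levels and particle count below are illustrative assumptions):

```python
import math
import random

random.seed(0)

def bootstrap_pf(observations, n=500, proc_std=1.0, obs_std=1.0):
    """Bootstrap particle filter for a 1-D random-walk state; returns the
    posterior-mean estimate at each step."""
    particles = [random.gauss(0.0, 2.0) for _ in range(n)]
    estimates = []
    for z in observations:
        # Predict: propagate each particle through the random-walk model.
        particles = [p + random.gauss(0.0, proc_std) for p in particles]
        # Update: weight each particle by the Gaussian observation likelihood.
        weights = [math.exp(-0.5 * ((z - p) / obs_std) ** 2) for p in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        # Resample: multinomial resampling to fight particle degeneracy.
        particles = random.choices(particles, weights=weights, k=n)
    return estimates

true_track = [0.5 * t for t in range(10)]       # slowly drifting state
obs = [x + random.gauss(0.0, 1.0) for x in true_track]
est = bootstrap_pf(obs)
```

Repeated multinomial resampling is exactly what causes sample impoverishment; the PSO/BBO variants instead move particles toward high-likelihood regions to preserve diversity.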

S14.6 Significance of Implementing Polarity Detection Circuits in Audio Preamplifiers
Deepak Balram (National Taipei University of Technology, Taiwan); Govind D (Koneru Lakshmaiah Education Foundations, India)

The reversal of current directions in audio circuit elements causes polarity inversion of the acquired audio signal with respect to the reference input signal. The objective of the work presented in this paper is to implement a simple polarity detection circuit in audio preamplifiers that provides an indication of signal polarity inversion. The present work also demonstrates the possibilities of polarity inversion in the audio circuits of audio data acquisition devices. Inputs fed into the inverting/non-inverting terminals of audio operational amplifiers (op-amps) cause polarity reversal of the amplitude values of speech/audio signals. Although polarity inversion in audio circuits is perceptually indistinguishable, it yields inaccurate values for speech parameters estimated by processing the speech. The work presented in this paper discusses how polarity inversion is introduced at the circuit level and proposes a polarity detection circuit that indicates polarity reversal after preamplification. The effectiveness of the proposed polarity detection circuit is confirmed by a 100% polarity detection rate on 100 randomly selected audio files from the CMU Arctic database when simulated using Proteus 8.0. The paper concludes by discussing the significance of a VLSI implementation of the proposed polarity detection circuit in commonly used audio preamplifier systems.
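A software analogue of the detection idea — flag inversion when the acquired signal is anti-correlated with the reference input — can be sketched as follows (the circuit itself is analog; the sampled signals below are purely illustrative):

```python
def polarity_inverted(reference, acquired):
    """Flag polarity inversion when the zero-lag correlation between the
    reference input and the acquired signal is negative."""
    corr = sum(r * a for r, a in zip(reference, acquired))
    return corr < 0

# One cycle of a sampled sine as the reference input.
ref = [0.0, 0.7, 1.0, 0.7, 0.0, -0.7, -1.0, -0.7]
same = [0.8 * x for x in ref]       # attenuated, in-phase copy
flipped = [-0.8 * x for x in ref]   # copy inverted by the preamp stage
```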

S14.7 A Modified Approach In CBIR Based On Combined Edge Detection, Color And Discrete Wavelet Transform
Nooby Mariam (Calicut University, India); Rejiram R (University of Calicut, India)

Content-based image retrieval systems use the contents of images to represent and access them. Content basically refers to image descriptors such as the color, texture and shape of the image. Among the different image features, edges are important ones, as they mainly represent local intensity variations. For color images, however, the color of the image must also be considered during retrieval to obtain satisfactory results. This paper describes a new method in which both edge and color features of the images are considered when generating feature vectors. The Discrete Wavelet Transform is used to preserve the detailed content of the images while reducing the size of the feature vector. A comparative study of the effect of the proposed method on the YCbCr and HSI color spaces is also presented.
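The DWT stage can be illustrated with a one-level 2-D Haar transform, the simplest wavelet (the paper does not name its wavelet, so this is only a sketch): the LL quadrant keeps a quarter-size approximation while the detail bands carry the edge information.

```python
def haar2d(img):
    """One-level 2-D Haar transform of an even-sized grayscale image
    (list of lists). Averages go to the top-left (LL); the other three
    quadrants hold horizontal, vertical and diagonal details."""
    h, w = len(img), len(img[0])
    # Rows pass: pairwise averages on the left, differences on the right.
    rows = []
    for row in img:
        lo = [(row[2 * i] + row[2 * i + 1]) / 2 for i in range(w // 2)]
        hi = [(row[2 * i] - row[2 * i + 1]) / 2 for i in range(w // 2)]
        rows.append(lo + hi)
    # Columns pass: pairwise averages on top, differences below.
    out = [[0.0] * w for _ in range(h)]
    for j in range(w):
        col = [rows[i][j] for i in range(h)]
        for i in range(h // 2):
            out[i][j] = (col[2 * i] + col[2 * i + 1]) / 2
            out[h // 2 + i][j] = (col[2 * i] - col[2 * i + 1]) / 2
    return out

img = [[10, 10, 20, 20],
       [10, 10, 20, 20],
       [30, 30, 40, 40],
       [30, 30, 40, 40]]
coeffs = haar2d(img)
# LL (top-left 2x2) keeps a smoothed copy; all detail bands are zero here.
```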

S14.8 Pause Duration Model for Malayalam TTS
Jesin James (Sahrdaya College of Engineering and Technology, Kodakara, India)

In this paper, a CART-based pause duration prediction model is developed for the Malayalam language. Prosodic features such as pause durations and syllable prolongations play an important role in making the speech output of a Text To Speech (TTS) system more intelligible. An analysis of the various factors that affect pause duration in Malayalam has not been conducted to date. Here, inferential and descriptive statistical analysis tools are used to analyze the effect of various factors on pause duration in Malayalam. The identified factors are then used to model pause duration separately for pauses after a word, after a phrase, after a comma and after a sentence. RMSE and correlation values are calculated to evaluate the correctness of the model.

S14.9 Continuous Dynamic Indian Sign Language Gesture Recognition with Invariant Backgrounds
Neha Baranwal (Indian Institute of Information Technology, Allahabad, India); Kumud Tripathi (Indian Institute of Technology, Kharagpur, India); Gora Chand Nandi (IIIT-A, India)

Hand gestures are a strong medium of communication for the hearing-impaired community and are helpful for establishing interaction between humans and computers. In this paper we propose a continuous Indian Sign Language (ISL) gesture recognition system in which gestures are performed with a single hand or with both hands. Our proposed method is also invariant to various backgrounds. To track the useful gesture frames in the continuous stream, a frame-overlapping method is applied that extracts only those frames containing maximum information, which speeds up the recognition process. The discrete wavelet transform (DWT) is then applied to extract features from each image frame, and finally a hidden Markov model (HMM) is used to test probe gestures. Experiments are performed on our own continuous ISL dataset, created using a Canon EOS camera in the Robotics and Artificial Intelligence laboratory (IIIT-A). The experimental results show that our proposed method works on various backgrounds, such as colored backgrounds and backgrounds containing multiple objects, and has very low time and space complexity.

S14.10 Autonomous Control of Flapping Wing Vehicles Using Graphical User Interface
S Sankarasrinivasan (National Taiwan University of Science and Technology, Taiwan); E Balasubramanian (Vel Tech University, India); L J. Yang and F Y. Hsaio (Tamkang University, India)

This paper focuses on the development of a graphical user interface (GUI) for the control of flapping wing vehicles (FWVs) to achieve superior maneuverability. The main objective is to develop a computer control interface for both manual and autonomous operation of FWVs. The vision-based control interface is designed in MATLAB. A color detection algorithm is incorporated to accomplish real-time altitude and yaw control of FWVs with respect to the user's color input. A preliminary simulation is carried out by interfacing the MATLAB GUI with the Proteus simulator. The color-based control algorithm was tested successfully in our lab environment and proved to be consistent. The developed control interface serves as an effective solution for interfacing aerial robots with sophisticated software modules.

S14.11 On Compressed Sensing Image Reconstruction using Linear Prediction in Adaptive Filtering
Sheikh Rafiul Islam and Santi Prasad Maity (Indian Institute of Engineering Science and Technology, Shibpur, India); Ajoy Ray (IIT Kharagpur, India)

Compressed sensing (CS), or compressive sampling, deals with the reconstruction of signals from limited observations/measurements far below the Nyquist rate requirement. This is essential in many practical imaging systems, where sampling at the Nyquist rate may not always be possible due to limited storage, slow sampling rates, or extremely expensive measurements, e.g., in magnetic resonance imaging (MRI). Mathematically, CS addresses the problem of finding the root of an unknown distribution comprising both unknown and known observations. Robbins-Monro (RM) stochastic approximation, a non-parametric approach, is explored here as a solution to the CS reconstruction problem. A distance-based linear prediction using the observed measurements is performed to obtain the unobserved samples, followed by the addition of random noise to act as the residual (prediction error). A spatial-domain adaptive Wiener filter is then used to diminish the noise and reveal new features from the degraded observations. Extensive simulation results highlight the relative performance gain over existing work.
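The distance-based linear prediction of unobserved samples might be sketched, in simplified 1-D form, as inverse-distance interpolation from the nearest observed neighbours on each side (an assumption for illustration; the paper's exact predictor is not given here):

```python
def predict_missing(samples):
    """Fill None entries by inverse-distance-weighted interpolation from
    the nearest observed neighbour on each side."""
    out = list(samples)
    observed = [i for i, v in enumerate(samples) if v is not None]
    for i, v in enumerate(samples):
        if v is not None:
            continue
        left = max((j for j in observed if j < i), default=None)
        right = min((j for j in observed if j > i), default=None)
        if left is None:
            out[i] = samples[right]
        elif right is None:
            out[i] = samples[left]
        else:
            dl, dr = i - left, right - i
            # Inverse-distance weights: the closer neighbour counts more.
            out[i] = (samples[left] * dr + samples[right] * dl) / (dl + dr)
    return out

print(predict_missing([2.0, None, None, 8.0]))  # -> [2.0, 4.0, 6.0, 8.0]
```

In the paper's scheme, random noise is then added to these predictions to mimic the residual before adaptive Wiener filtering.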

WCI-03: WCI-03-Multimedia Systems and Human Computer Interaction/ WCI Data Engineering and Data Mining

Room: 502
Chairs: Viswanath Gopalakrishnan (Nanyang Technological University, Singapore), P Viswanathan (Image Processing & VIT University, India)
WCI-03.1 A Mobile Crowd Sensing Framework for Toll Plaza Delay Optimization
Usha Mahalingam (Sona College of Technology, Anna University & Anna University, India); Leela Manju (Sona College of Technology, India)

This paper presents a novel approach to the problem of reducing delays at toll plazas. The causes of delay at manual toll plazas are the lane selected by the driver, the time taken to pay the toll, and the efficiency of toll collectors. The proposed system uses mobile crowd sensing and crowdsourcing to help travellers pass swiftly through toll plazas by minimizing delay. The system relies on internet-independent ad-hoc Wi-Fi connectivity to crowd-sense the time travellers spend at tollbooths and crowdsources Quality of Experience to suggest the fastest tollbooth at a toll plaza. The system also prepares travellers for tolling well ahead of the plaza by providing advance toll information.

WCI-03.2 Comparative Study of Pitch Contour for Mentally Impaired
Kamakshi Chaudhary (ITM University, India); Sumanlata Gautam (The NorthCap University, India); Latika Singh (ITM University, India)

Humans meet their basic needs by conveying information to each other, most efficiently via the speech signal. In society, however, there are many people with mental impairment whose essential infrastructure of articulation, word stress, accent, pitch and other speech features is deeply affected by damage to the brain. Many factors affect speech, but the fundamental frequency (perceived as pitch) plays the major role in prosodic development. This study is an attempt to observe the pattern of pitch development in normally developing and mentally impaired children and adults, and to investigate possible differences between them. The results reveal that younger children show only a slight difference in pitch compared with age-matched mentally impaired children, but a significant difference is found between normally developing adults and those suffering from neurodevelopmental disorders. These criteria can be considered a first step in the diagnosis of such neurodevelopmental disorders.

WCI-03.3 Face Recognition Based Person Specific Identification for Video Surveillance Applications
Kokila Sivasankaran and Yogameena B (Thiagarajar College of Engineering, India)

Face detection is an important aspect of applications such as biometrics, video surveillance and human-computer interaction. Videos provide abundant information, but they are also affected by temporal variations in pose, expression changes and occlusion. These challenging problems motivate identifying a specific person through face recognition for video surveillance applications. This paper presents a face detection and recognition algorithm to identify a wanted person in a surveillance video. First, face detection is performed with the Viola-Jones algorithm. The detected face is then cropped to recognize the specific person's face, and clustering is applied to the cropped faces to cluster the face parts. HOG and LBP features are obtained for the detected face parts. Existing approaches use the Histogram of Oriented Gradients (HOG) and Local Binary Pattern (LBP) separately to recognize faces in static images. The contribution of this work is to apply HOG and LBP to surveillance video and combine both features to address issues such as pose variation, illumination changes, expression changes and occlusion in face recognition. An SVM classifier is used to separate weak and strong features, and the strong features are used to recognize the person. The proposed algorithm has been tested on various datasets and its performance is found to be good in most cases. Experimental results show that the detection and recognition method achieves very encouraging results with good accuracy and simple computations.
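The LBP half of the feature pair is easy to illustrate: each pixel is encoded by comparing it against its eight neighbours (a textbook 3×3 LBP sketch, not the paper's exact variant; the patch values are invented):

```python
def lbp_code(img, r, c):
    """8-neighbour Local Binary Pattern code for the pixel at (r, c):
    each neighbour >= the centre contributes one bit."""
    centre = img[r][c]
    # Neighbours enumerated clockwise starting from the top-left.
    neighbours = [img[r-1][c-1], img[r-1][c], img[r-1][c+1],
                  img[r][c+1],   img[r+1][c+1], img[r+1][c],
                  img[r+1][c-1], img[r][c-1]]
    return sum(1 << k for k, v in enumerate(neighbours) if v >= centre)

patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
code = lbp_code(patch, 1, 1)
```

The histogram of such codes over a face region is the LBP descriptor that gets concatenated with the HOG descriptor before SVM classification.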

WCI-03.4 Illumination Invariant Video Text Recognition using Contrast Limit Adaptive Histogram Equalization
Smitha ML (KVG College of Engineering, Sullia, India); B H Shekar (Mangalore University, India)

The textual information present in images and videos plays a major role in indexing and retrieval. In this paper, we propose a system for detecting text in illumination-invariant videos using Contrast Limited Adaptive Histogram Equalization (CLAHE). Certain heuristic rules are applied to the preprocessed video frame to detect text clusters. Further, newly designed geometrical rules and morphological operations are employed on the text detection results for text localization. Once the text is localized, text lines are segmented, followed by character segmentation. The segmented characters are then recognized using OCR and synthesized as an audible voice output, so that one can hear and understand the text content of the video. Experimental results obtained on publicly available standard datasets, the TRECVID video dataset and our own video dataset illustrate that the proposed method can detect, localize and recognize texts of various sizes, fonts and colors.

WCI-03.5 Language Modelling and English Speech Prediction System to Aid People with Stuttering Disorder
Pramati Kalwad (National Institute of Technology Karnataka, Surathkal & Samsung R&D Institute, Bangalore, India); Shailja Pattanaik and Chandana Tl (National Institute of Technology Karnataka, Surathkal, India); Ram Mohana Reddy Guddeti (NITK, Surathkal, India)

This paper proposes a novel method to predict speech based on an N-gram language model for the English language. It also shows how speech completion can be combined with stuttering detection to help people suffering from this disorder overcome psychological and social introversion. To the best of our knowledge, such systems exist only for the Japanese language; this paper is therefore the first to introduce such an application for English. The existing work in Japanese uses a vocabulary tree structure for prediction, in contrast to the n-gram language model used in this paper. The basic idea of the proposed work is to examine the user's speech input and detect the repetition of words as stuttering. If word repetition is detected, the next word is predicted after eliminating the repeated word using the n-gram language model, and the predicted word is converted back to speech. Using this methodology, we achieve a prediction accuracy of 87% in a 10-fold test.
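The repeat-then-predict idea can be sketched with a bigram model over a toy corpus (the corpus and the simple equality test for stuttering are illustrative assumptions, not the paper's exact implementation):

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count bigram frequencies from a list of sentences."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def complete(utterance, counts):
    """If the last word is stuttered (repeated), drop the repeats and
    append the most frequent bigram continuation."""
    words = utterance.lower().split()
    if len(words) >= 2 and words[-1] == words[-2]:
        while len(words) >= 2 and words[-1] == words[-2]:
            words.pop()
        following = counts.get(words[-1])
        if following:
            return words + [following.most_common(1)[0][0]]
    return words

# Hypothetical toy corpus standing in for the training text.
corpus = ["i want to go home", "we want to go home", "i want to sleep"]
model = train_bigrams(corpus)
print(complete("i want to go go", model))  # -> ['i', 'want', 'to', 'go', 'home']
```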

WCI-03.6 Session Based Storage finding in Video on Demand System
Soumen Kanrar (Vehere Interactive Pvt Ltd, India); Niranjan Mandal (Vidyasagar, India); Sharmistha Das Kanrar (Bishop Westcott, India)

The performance of video streaming in a video-on-demand system depends heavily on session-oriented storage finding. Data is broken up into manageable packets for Internet delivery, and multiple bonded connections behave like a single connection to the video-on-demand user. Using the session-oriented aggregate bit streaming rate combined with the capacity of the system network provides prior knowledge of the current load on the overall system. A session-based optimum storage finding algorithm reduces the search hop count towards the storage data server. In this work, we present a novel approach to this challenging issue: a session-based storage finding query algorithm at the application server of the distributed data storage nodes in the video-on-demand system that minimizes hop counts. The simulation results, compared against a real-life scenario, show that the hop count for locating content storage in the video-on-demand system is reduced.

WCI-03.7 Subtracted Bit Representation and Frame number reduction using Adjacent Pixel Correlation for Video Compression
Charan Sg (Nokia India, India); Chaya Shettihalli Papareddy (M S Ramaiah Institute of Technology, India)

The size of videos has been increasing very fast in recent times, and different coding and decoding standards have been introduced to decrease it. We propose a novel method by which an existing video can be compressed and stored back in its original format. Adjacent pixels in an image are highly correlated, and the spatial separation of pixels in adjacent video frames is very small. This property is used to compress and store video in its original format irrespective of its original coding and decoding methods. Experiments were conducted on web-based videos. We remove every third frame in the video and code its information into the first two frames, and so on: every pixel of the third frame is coded into alternate pixels of the first two frames. While decoding, we use the adjacent pixel correlation rule. This technique is known as Adjacent Frame Removal. We also propose a second technique in which we subtract consecutive frames and store two pixel values, each of 8 bits, in a single pixel value, since a difference pixel value can be represented by 4 bits; this reduces the size by more than half. This technique is coined Subtracted Bit Representation. Experiments show that the first technique is reliable, with an information loss of at most 3%, and reduces the video size by 30% on average; an enhanced version reduces video size by over 60% at the cost of about 7% information loss. Combining both proposed techniques decreases the size of the video by about 80%.
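The Subtracted Bit Representation step — two 4-bit frame differences stored in one 8-bit value — amounts to simple bit packing, assuming signed residuals confined to the range −8..7 (the escape handling for larger residuals is omitted here):

```python
def pack_pair(d1, d2):
    """Pack two 4-bit signed residuals (each in -8..7) into one byte,
    using 4-bit two's complement for each nibble."""
    assert -8 <= d1 <= 7 and -8 <= d2 <= 7
    return ((d1 & 0xF) << 4) | (d2 & 0xF)

def unpack_pair(byte):
    """Recover the two signed residuals from a packed byte."""
    hi, lo = (byte >> 4) & 0xF, byte & 0xF
    to_signed = lambda n: n - 16 if n >= 8 else n
    return to_signed(hi), to_signed(lo)
```

A frame difference falling outside −8..7 cannot be represented in a nibble, so a full implementation would need an escape mechanism for such pixels.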

WCI-03.8 Teaching Advanced Algebra to Engineering Majors: Dealing with the classroom challenges
Snehanshu Saha (APPCAIR, BITS Pilani K. K. Birla Goa Campus & Center for AstroInformatics, Modeling and Simulation, India); Bhoomika Agarwal (PES Institute of Technology, Bangalore South Campus); Priyal Mehta (PESIT South Campus, India)

The paper is based on the implementation of a simplified Linear Algebra Toolkit developed to help students visualize and learn the intricacies of the subject. The toolkit is implemented as a lightweight, easy-to-use applet with a graphical user interface. It grew out of a series of classes in which it was noticed that students were unable to grasp essential concepts. A follow-up survey demonstrates how the toolkit was instrumental in augmenting learning through visualization.

WCI-03.9 Blending Concept Maps with Online Labs (OLabs): Case Study with Biological Science
Prema Nedungadi (Amrita Vishwa Vidyapeetham, India); Mithun Haridas (Amrita University); Raghu Raman (Amrita Vishwa Vidyapeetham, India)

Experimental learning combined with theoretical learning enhances the conceptual understanding of a subject; the Online Labs (OLabs) platform, which hosts science experiments, was developed with this in mind. OLabs uses interactive simulations together with theory, procedures, animations, videos, assessments and reference material. Our study blended OLabs with concept maps to examine whether this enhances students' learning in Biology. Concept mapping is a framework that provides deeper knowledge of a subject through an understanding of the relationships among concepts. The design of the study was quasi-experimental: a pre-test, a post-test and a satisfaction survey were used as measurement instruments. The study sample comprised 54 students from a school in Haripad, Kerala, India, randomly grouped into a control and an experimental group. The experimental group that used concept maps as a learning aid scored slightly higher, suggesting that blending in concept maps can lead to a deeper understanding of the subject. Gender did not significantly affect the scores.

WCI-03.10 Graph Clustering for Large-Scale Text-Mining of Brain Imaging Studies
Manisha Chawla (IIT Gandhinagar, India); Mounika Mesa (Rajiv Gandhi University of Knowledge Technologies, India); Krishna Miyapuram (IIT Gandhinagar, India)

This paper presents a graph clustering method for identifying functionally similar key concepts for meta-analysis of brain imaging studies. We use an existing database of key concepts created by large-scale automated text mining of brain imaging studies. The key concepts here refer to specific psychological terms of interest (for instance, 'decision', 'memory', etc.) identified by their frequency of occurrence (>1 in 1,000 words) in the article text of each of 5809 studies. The pair-wise distance between all 525 nodes was calculated using the Jaccard metric. A graph was created with 525 nodes representing the key concepts, and an undirected edge was drawn from every node to the node with minimum distance. We present a clustering approach using a simple graph traversal to identify connected components, so that every node belongs to exactly one cluster. The results of our clustering method reveal semantically related concepts, confirming its potential for further use in text-mining approaches for meta-analysis of brain imaging studies.
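The clustering procedure (Jaccard distances, nearest-neighbour edges, connected components via a simple traversal) can be reproduced at toy scale; the concept-to-study sets below are invented for illustration:

```python
def jaccard_distance(a, b):
    """1 - |A ∩ B| / |A ∪ B| over the sets of studies mentioning each concept."""
    union = a | b
    if not union:
        return 0.0
    return 1.0 - len(a & b) / len(union)

def cluster(concepts):
    """Draw an undirected edge from each concept to its Jaccard-nearest
    neighbour, then return the connected components."""
    names = list(concepts)
    adj = {n: set() for n in names}
    for n in names:
        others = [m for m in names if m != n]
        nearest = min(others,
                      key=lambda m: jaccard_distance(concepts[n], concepts[m]))
        adj[n].add(nearest)
        adj[nearest].add(n)
    # Simple graph traversal: every node ends up in exactly one cluster.
    seen, clusters = set(), []
    for n in names:
        if n in seen:
            continue
        comp, stack = set(), [n]
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        clusters.append(comp)
    return clusters

# Hypothetical concept -> set of study IDs in which it occurs.
concepts = {
    "memory":   {1, 2, 3},
    "recall":   {2, 3, 4},
    "decision": {7, 8},
    "choice":   {7, 8, 9},
}
print(cluster(concepts))
```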

WCI-03.11 Privacy and Personalization Perceptions of the Indian Demographic with respect to Online Searches
Saraswathi Punagin (PESIT Bangalore South Campus, India); Arti Arya (PESIT-Bangalore South Campus & VTU, Belagavi, India)

Most internet users' browsing starts with a query submitted to a web search engine. Search engine usage has become so extensive that it seems like second nature in today's online world. Customized search results enhance user experience, but they bring the eternal debate of privacy vs. personalization into focus. Users are usually unaware of the implications of disclosing sensitive personal information in their web searches; those who are aware may take measures to protect their privacy. We hypothesize that in spite of technological advancement and increased internet usage, the average Indian consumer is unlikely to be aware of the privacy and personalization implications of web searches. We also argue that an educated consumer, armed with awareness, will change his or her privacy and personalization perceptions with respect to web searches. The results of a study conducted with 660 participants support most of the proposed hypotheses. The results indicate that while a very low percentage of Indian consumers are Fully Privacy Aware (11%), a moderate number are Fully Customization/Personalization Aware (55%). The percentage of consumers who dislike being tracked online is 37%, and 25% took some action to protect their online privacy during web searches. Also, of the 102 participants who took part in group discussions, 56% changed their privacy and personalization perceptions slightly or significantly after increased awareness of this trade-off in online searches.

SSCC-03: SSCC-03:-System and Network Security/SSCC - Application Security

Room: 504
Chair: Chirag Modi (NIT Goa, India)
SSCC-03.1 Detection and Mitigation of Android Malware through Hybrid Approach
Kanubhai Patel (Charotar University of Science and Technology (CHARUSAT), India); Bharat Buddhadev (Malaviya National Institute of Technology, India)

A large number of Android applications are available in markets on the Internet, and a good number of them are low-quality apps (or malware), making it difficult for Android users to decide at installation time whether a particular application is malware or benign. In this paper, we propose the design of a system to classify Android applications into two classes, malware or benign. We use a hybrid approach that combines application analysis and machine learning to classify the applications. Application analysis is performed by both static and live analysis techniques, and a genetic-algorithm-based machine learning technique is used to generate rules for the system's rule base. The system was tested with applications collected from various markets on the Internet and with two datasets, and achieved a detection rate of 96.43%.

SSCC-03.2 Framework for Live Forensics of a System by Extraction of Clipboard Data and Other Forensic Artefacts From RAM Image
Rohit Sharma (Defence Institute of Advanced Technology, India); Upasna Singh (Defense Institute of Advanced Technology, India)

In memory forensics, a variant of live forensics, an acquired dump of physical memory is analysed to find crucial artefacts from the suspect's system. These artefacts include details of running processes, network connections, clipboard data, users' Security Identifiers, the Master File Table, etc., which provide unprecedented visibility into the runtime state of the system. While extensive work has been done on capturing processes, event logs, registry information and network activity, attention to the clipboard as another memory store of evidential information has been limited. In this paper, a framework is proposed for carrying out live forensics of a system by extracting clipboard data and other forensic artefacts from a RAM image.

SSCC-03.3 Malicious Circuit Detection for Improved Hardware Security
Bharath R, Arun Sabari G, Dhinesh Ravi Krishna, Arun S Prasathe and Harish K (Amrita University, India); Mohankumar N (Amrita School of Engineering Coimbatore & Amrita Vishwa Vidyapeetham, India); Nirmala Devi (Amrita University, India)

Hardware Trojans have become a major threat faced by most VLSI fabrication houses. This paper proposes a non-destructive method for hardware Trojan detection. Since in many cases a golden chip is unavailable, a voting technique is adopted; this paper is an extension of weighted voting. A microcontroller-based standalone embedded system that identifies malicious activity in a circuit was designed and implemented; it performs weighted voting to detect any malicious activity. The system was tested using all ISCAS 85 benchmark circuits, implemented on an FPGA, and achieved 93.93% accuracy.
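The voting idea can be sketched as weighted majority voting over replicated circuit outputs, with any disagreement flagging a suspect instance (the outputs and weights below are illustrative, not the paper's data):

```python
def weighted_vote(outputs, weights):
    """Combine replicated circuit outputs (0/1) by weighted majority;
    instances disagreeing with the voted value are flagged as suspect."""
    score = sum(w for o, w in zip(outputs, weights) if o == 1)
    total = sum(weights)
    voted = 1 if score * 2 >= total else 0
    suspects = [i for i, o in enumerate(outputs) if o != voted]
    return voted, suspects

# Three replicated instances; a Trojan-infected one flips its output.
outputs = [1, 1, 0]
weights = [1.0, 1.0, 1.0]
voted, suspects = weighted_vote(outputs, weights)
```

Without a golden chip, the voted value stands in for the trusted reference, so a persistently outvoted instance becomes the Trojan candidate.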

SSCC-03.4 A Low Overhead Prevention of Android WebView Abuse Attacks
Jamsheed K and Praveen K (Amrita Vishwa Vidyapeetham, India)

WebView, an Android component for loading and displaying web content, has become a center of attraction for attackers, as its use grows with the increasing trend of hybrid application development. Attackers mainly concentrate on abusing the JavaScript interface and accessing native code. Since most developers do not use HTTPS secure connections, in order to reduce processing overhead, injection attacks become easy. The attacker looks for JavaScript interface implementations in well-known libraries, such as ad-provider libraries or hybrid application wrapper libraries, and tries to inject code that uses them. This paper presents a low-overhead solution that uses public key cryptography to ensure the integrity of transferred data and thus prevent such attacks.

SSCC-03.5 Anomaly Detection through Comparison of Heterogeneous Machine Learning Classifiers Vs KPCA
Goverdhan Reddy Jidiga (Jawaharlal Nehru Technological University, India); Porika Sammulal (JNTUH University, India)

Anomaly detection is applicable to a wide range of critical infrastructure elements, because anomaly occurrences change frequently and all identified threats must be avoided regularly. From this perspective, we identify abnormal patterns in applications and model them using machine learning classifiers. In this paper we investigate performance by comparing heterogeneous machine learning classifiers: ICA (Independent Component Analysis), LDA (Linear Discriminant Analysis), PCA (Principal Component Analysis), Kernel PCA and other learning classifiers. Kernel PCA (KPCA) is a non-linear extension of PCA used to classify data and detect anomalies through an orthogonal transformation of the input space into a (usually high-dimensional) feature space. KPCA uses the kernel trick to extract the principal components from the corresponding eigenvectors, with the kernel width as a performance parameter that determines the classification rate. KPCA is implemented on two UCI machine learning repository datasets and one real bank dataset, using the classic Gaussian kernel internally. Finally, KPCA performance is compared with projection methods (ICA, LDA, PLSDA and PCA), a kernel technique (SVM-K) and non-kernel techniques (ID3, C4.5, Rule C4.5, k-NN and NB) applied to the same datasets using training and test set combinations.
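
The KPCA machinery described above (Gaussian kernel, kernel centring, projection onto the leading eigenvectors) can be sketched as follows. Using the feature-space variance not captured by the retained components as an anomaly score is a common convention and an assumption here, not necessarily the paper's exact criterion.

```python
import numpy as np

def gaussian_kernel_matrix(X, sigma):
    """Gram matrix of the Gaussian (RBF) kernel."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

def kpca_scores(X, sigma, n_components=2):
    """Project training points onto the leading kernel principal components."""
    K = gaussian_kernel_matrix(X, sigma)
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one  # centre in feature space
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]  # largest eigenvalues first
    vals, vecs = vals[idx], vecs[:, idx]
    alphas = vecs / np.sqrt(np.maximum(vals, 1e-12))  # normalise in feature space
    return Kc @ alphas

def reconstruction_error(X, sigma, n_components=2):
    """Anomaly score: feature-space variance not captured by the
    retained components (larger = more anomalous)."""
    K = gaussian_kernel_matrix(X, sigma)
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one
    Z = kpca_scores(X, sigma, n_components)
    return np.diag(Kc) - (Z ** 2).sum(1)
```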

SSCC-03.6 A Pareto Survivor Function Based Cluster Head Selection Mechanism (PSFCHSM) to Mitigate Selfish Nodes in Wireless Sensor Networks
Rajarajeswari Palaniappan (Anna University Chennai & Sri Krishna College of Technology, India)

In Wireless Sensor Networks (WSNs), co-operation among sensor nodes plays a significant role in reliable data delivery by prolonging the lifetime of the network. Taking this into account, a Pareto Survivor Function based Cluster Head Selection Mechanism (PSFCHSM) is proposed for electing a new cluster head under selfish attack. In this approach, selfish attacks are detected through a conditional probabilistic approach that monitors events depending only on continuous network parameters. The proposed strategy not only identifies selfish attacks in sensor networks but also elects a new sensor node as a rehabilitative Cluster Head (CH) based on the Pareto Survivor Function (PSF). The pre-eminence of the proposed approach is evaluated through parameters such as Packet Delivery Ratio (PDR), energy consumption rate and throughput, varying the number of sensor nodes and the transmission range. Further, the incorporated conditional survivability coefficient mechanism detects and mitigates selfish nodes 32% faster than the benchmark mechanisms considered for comparison, Fuzzy Ant Colony Optimization Routing (FACOR) and Genetic Algorithm Inspired Routing Protocol (GROUP). The proposed approach not only isolates selfish nodes that cause Denial of Service (DoS) attacks but also reduces the cost incurred in communication. Furthermore, the simulation results show that PSFCHSM outperforms GROUP and FACOR, enhancing the network lifetime by 28%.
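
The Pareto survivor function itself is standard: S(x) = (x_m / x)^alpha for x >= x_m, and 1 otherwise. A minimal sketch of using it to rank candidate cluster heads follows; the choice of per-node metric (forwarding delay) and the parameter values are hypothetical, since the paper's exact formulation is not reproduced here.

```python
def pareto_sf(x, x_m=1.0, alpha=2.0):
    """Pareto survivor (reliability) function P(X > x)."""
    if x < x_m:
        return 1.0
    return (x_m / x) ** alpha

def elect_cluster_head(nodes, selfish, x_m=1.0, alpha=2.0):
    """nodes: {node_id: forwarding_delay}; a lower delay yields a higher
    survivor value. Detected selfish nodes are excluded before election."""
    candidates = {n: d for n, d in nodes.items() if n not in selfish}
    return max(candidates, key=lambda n: pareto_sf(candidates[n], x_m, alpha))
```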

SSCC-03.7 A Secret Common Information Duality for Tripartite Noisy Correlations
Pradeep Kr. Banerjee (Max Planck Institute for Mathematics in the Sciences, Germany)

We explore the duality between the simulation and extraction of secret correlations in light of a similar well-known operational duality between the two notions of common information due to Wyner, and Gacs and Korner. For the inverse problem of simulating a tripartite noisy correlation from noiseless secret key and unlimited public communication, we show that Winter's (2005) result for the key cost in terms of a conditional version of Wyner's common information can be simply reexpressed in terms of the existence of a bipartite protocol monotone. For the forward problem of key distillation from noisy correlations, we construct simple distributions for which the conditional Gacs and Korner common information achieves a tight bound on the secret key rate. We conjecture that this holds in general for non-communicative key agreement models. We also comment on the interconvertibility of secret correlations under local operations and public communication.

SSCC-03.8 Analysis of Accountability Property in Payment Systems Using Strand Space Model
Venkatasamy Sureshkumar, Ramalingam Anitha and S Anandhi (PSG College of Technology, India)

Nowadays payment protocols face difficulties in their implementation, as the parties involved in an execution may deny their performed actions. In this context, there should be a provision to link an action to the concerned party, which is addressed by the accountability property. In this paper, a new framework in the strand space model is proposed for the analysis of the accountability property, overcoming the drawbacks of existing models. The strand space model is extended so that the accountability property of a payment protocol designed under a symmetric key cryptosystem can be analysed. As a test example, the symmetric-key based BPAC, a bill payment protocol ensuring accountability, is used. A fine-grained analysis of the accountability property of the BPAC protocol is carried out, and its correctness is proved using the automated support provided by CPSA along with a rigorous mathematical proof.

SSCC-03.9 Exploiting Domination in Attack Graph for Enterprise Network Hardening
Ghanshyam Bopche (Institute for Development and Research in Banking Technology (IDRBT) & School of Computer and Information Sciences (SCIS), University of Hyderabad, India); Babu Mehtre (IDRBT - Institute for Development and Research in Banking Technology & Reserve Bank of India, India)

The attack graph has proved to be a tool of great value to an administrator analyzing security vulnerabilities in a networked environment. It shows all possible attack scenarios in an enterprise network. Even though attack graphs can be generated efficiently, their size and complexity prevent an administrator from fully understanding the information portrayed. While an administrator may quickly perceive a possible attack scenario, it is typically hard to know which vulnerabilities are vital to the success of an adversary. An administrator has to identify such vulnerabilities and the associated enabling preconditions, which really matter in preventing an adversary from successfully compromising the enterprise network. Extraction of such meaningful information aids the administrator in efficiently allocating scarce security resources. In this paper, we apply the well-known concept of domination in directed graphs to the exploit-dependency attack graph generated for a synthetic network. The minimal dominating set (MDS) computed over the generated attack graph gives the set of initial preconditions that covers all exploits in the attack graph. We model the computation of the MDS as a set cover problem (SCP). We present a small case study to demonstrate the effectiveness and relevance of the proposed approach. Initial results show that our minimal dominating set based approach is capable of finding sets with a minimal number of initial conditions that need to be disabled for improved network security.
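
Modelling the MDS computation as set cover admits the classical greedy approximation: repeatedly pick the initial condition that covers the most still-uncovered exploits. A minimal sketch, with an assumed data layout:

```python
def greedy_set_cover(universe, subsets):
    """subsets: {initial_condition: set of exploits it enables}.
    Returns an approximately minimal list of initial conditions whose
    disabling covers every exploit in the attack graph."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Pick the condition covering the most uncovered exploits.
        best = max(subsets, key=lambda c: len(subsets[c] & uncovered))
        if not subsets[best] & uncovered:
            raise ValueError("universe not coverable by the given subsets")
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen
```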

SSCC-03.10 Human Activity Recognition Based on Motion Projection Profile Features in Surveillance Videos Using Support Vector Machines and Gaussian Mixture Models
Arunnehru J (SRM Institute of Science and Technology, India); M. KalaiselviGeetha (Annamalai University, India)

Human Activity Recognition (HAR) is an active research area in computer vision and pattern recognition. In human activity recognition, attention consistently focuses on changes in the scene of a subject over time, since motion information can sensibly depict the activity. This paper describes a novel framework for activity recognition based on Motion Projection Profile (MPP) features of the difference image, representing various levels of a person's interaction. The motion projection profile features consist of the number of moving pixels in each row, column and diagonal (left and right) of the difference image, and they give adequate motion information to recognize the instantaneous posture of the person. The experiments are carried out on the UT-Interaction dataset (Set 1 and Set 2), considering six activities (handshake, hug, kick, point, punch, push); the extracted features are modeled by Support Vector Machines (SVM) with an RBF kernel and by Gaussian Mixture Models (GMM) for recognizing human activities. In the experimental results, GMM demonstrates the effectiveness of the proposed method with overall accuracy rates of 93.01% and 90.81% for Set 1 and Set 2 respectively, outperforming the SVM classifier.
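
The motion projection profile of a difference image reduces to row, column and diagonal counts of moving pixels. A minimal sketch, with an assumed motion threshold:

```python
import numpy as np

def motion_projection_profile(prev_frame, curr_frame, thresh=20):
    """Row, column and diagonal (left and right) counts of moving pixels
    in the thresholded difference image, concatenated into one vector."""
    diff = np.abs(curr_frame.astype(int) - prev_frame.astype(int)) > thresh
    rows = diff.sum(axis=1)
    cols = diff.sum(axis=0)
    h, w = diff.shape
    # Sum each diagonal of the motion mask (and of its mirror image).
    left_diag = np.array([np.trace(diff, offset=k) for k in range(-h + 1, w)])
    right_diag = np.array([np.trace(diff[:, ::-1], offset=k) for k in range(-h + 1, w)])
    return np.concatenate([rows, cols, left_diag, right_diag])
```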

SSCC-03.11 Modeling Dependencies of ISO/IEC 27002:2013 Security Controls
Anirban Sengupta (Jadavpur University, India)

Security controls like policies, procedures, laws and regulations, or security tools and techniques help in mitigating risks to enterprise information systems. There are several security standards that provide guidance on the implementation of security controls. ISO/IEC 27002:2013 is one of the most widely accepted security standards; it has been adopted by the Indian government for implementation in critical sector enterprises. The controls of ISO/IEC 27002:2013 are inter-dependent and they consist of several types of implementation-specific tasks. Lack of proper research on these aspects makes it extremely difficult for enterprises to implement a comprehensive and correct control implementation programme. The present study analyses the controls of ISO/IEC 27002:2013, categorizes the implementation tasks and details the dependencies among controls and relationships among categories of tasks.

SSCC-03.12 Occlusion Detection Based on Fractal Texture Analysis in Surveillance Videos Using Tree-based Classifiers
Arunnehru J (SRM Institute of Science and Technology, India); M. KalaiselviGeetha and Nanthini T (Annamalai University, India)

Occlusion detection in video has been an active research area for decades. This interest is motivated by numerous applications, such as visual surveillance, human-computer interaction, and sports event analysis. In this paper, an occlusion detection approach based on fractal texture analysis is proposed. Texture features are extracted from the segmented images using the Segmentation-based Fractal Texture Analysis (SFTA) algorithm. The experiments are carried out using the PNNL Parking Lot dataset, and various tree-based classifiers such as random forest, random tree, decision tree (J48), and REP tree are used for classification. In the experimental results, the random forest classifier showed the best performance, with overall accuracy rates of 98.3% for SET-1, 98.2% for SET-2, and 83.7% for SET-3, outperforming the other algorithms.

SSCC-03.13 Secure Cluster Based Routing Scheme (SCBRS) for Wireless Sensor Networks
Sohini Roy (Arizona State University, USA)

A wireless sensor network consists of micro-electromechanical sensor nodes operated by battery power. Sensor networks have emerged as an important supplement to modern wireless communication systems due to their wide range of applications, and modern researchers are tackling the various issues concerning them. However, offering security to an energy-constrained network of sensor nodes is still a challenge. The proposed scheme, named Secure Cluster Based Routing Scheme (SCBRS) for Wireless Sensor Networks, adopts a hierarchical network model coupled with lightweight security mechanisms in order to conserve energy as well as secure the network. Elliptic Curve Cryptography (ECC) based public key cryptography, the Elliptic-Curve Diffie-Hellman (ECDH) key exchange scheme, nested hash-based message authentication codes (HMAC) and the RC5 symmetric cipher are the lightweight security methods used by the proposed scheme. Simulation results show that SCBRS performs better than the Tree Based Protocol for Key Management in Wireless Sensor Networks in terms of energy efficiency and security. SCBRS also performs better than the Secure Hierarchical Routing Protocol (SHRP) in settings where security is the most important issue.
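
Of the listed primitives, the nested HMAC is easy to sketch with the standard library. The key names and the two-level structure below are illustrative assumptions, not the protocol's exact construction:

```python
import hashlib
import hmac

def nested_hmac(cluster_key, node_key, message):
    """Two-level (nested) HMAC: the inner tag is computed with the node's
    key and the outer tag with the cluster key, so both hops can verify
    authenticity without any public-key operation."""
    inner = hmac.new(node_key, message, hashlib.sha256).digest()
    return hmac.new(cluster_key, inner, hashlib.sha256).hexdigest()

def verify(cluster_key, node_key, message, tag):
    """Constant-time comparison against a freshly recomputed tag."""
    return hmac.compare_digest(nested_hmac(cluster_key, node_key, message), tag)
```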

SSCC-03.14 A modus operandi for social networking security solutions based on varied usages
Kamatchi R (Amity University, Mumbai, India); Kanika Minocha (KJSIMSR, India)

The popularity of social networking sites is increasing beyond belief. There is no disagreement about the effectiveness of sites such as Facebook, LinkedIn and Twitter. These sites can be used for professional networking and job searches, as a tool to keep the public informed about safety and other issues, as a means to increase sales revenue, or as a way to reconnect with friends. There is a rapid increase in the number of social media users. However, with this increase comes an increase in the security threats affecting users' privacy, personal data, identity and confidentiality. In this paper we aim at categorizing security and privacy threats based on the kind of usage of social media. We also present an algorithm to find the appropriate solution to security- and privacy-related issues for each usage category. This paper helps in improving the security and privacy of SNS (Social Networking Site) users without compromising the benefits of sharing information through SNSs.

SSCC-03.15 Technical Aspects of Cyber Kill Chain
Tarun Yadav (Scientific Analysis Group, Defence Research & Development Organisation, Ministry of Defence, GOI, India); Arvind Mallari Rao (Defense Research & Development Organisation, Ministry of Defence, GOI, India)

Recent trends in targeted cyber-attacks have increased the interest of research in the field of cyber security. Such attacks have massive disruptive effects on organizations, enterprises and governments. The cyber kill chain is a model that describes cyber-attacks so as to develop incident response and analysis capabilities. In simple terms, the cyber kill chain is an attack chain: the path an intruder takes to penetrate information systems over time to execute an attack on the target. This paper broadly categorizes the methodologies, techniques and tools involved in cyber-attacks, and is intended to help a cyber security researcher realize the options available to an attacker at every stage of a cyber-attack.

WCI-04: Data Engineering and Data Mining

Room: 506
Chair: V Renumol (School Of Engineering, CUSAT, Kochi, India)
WCI-04.1 A Novel Data Mining Approach for Detecting Spam Emails using Robust Chi-Square Features
Mugdha Sharma (Bhagwan Parshuram Institute of Technology, India); Jasmeen Kaur (Rukmini Devi Institute of Advanced Studies, India)

In spam filtering techniques, the classification of emails is performed on the basis of a collection of words extracted from the training set. The accuracy and performance of the classifier depend highly on the features and the length of the feature space. Feature selection methods are used in such scenarios to select the best features for classification. In an attempt to develop a strong spam filtering model, we rank the features using the Chi-Square feature ranking method and also investigate the effect of feature length on classification accuracy. The results are promising, and the proposed feature ranking method is more effective than other methods referred to in the literature.
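
The Chi-Square score for a term is computed from its 2x2 term/class contingency table. A minimal sketch of ranking features by this statistic (the counts in the usage example are illustrative):

```python
def chi_square(n_11, n_10, n_01, n_00):
    """Chi-square statistic for a term/class 2x2 contingency table:
    n_11 = spam mails containing the term, n_10 = spam without it,
    n_01 = ham mails containing it,       n_00 = ham without it."""
    n = n_11 + n_10 + n_01 + n_00
    num = n * (n_11 * n_00 - n_10 * n_01) ** 2
    den = (n_11 + n_01) * (n_11 + n_10) * (n_10 + n_00) * (n_01 + n_00)
    return num / den if den else 0.0

def rank_features(tables):
    """tables: {term: (n_11, n_10, n_01, n_00)}; highest score first."""
    return sorted(tables, key=lambda t: chi_square(*tables[t]), reverse=True)
```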

WCI-04.2 Automatic Book Spine Extraction and Recognition for Library Inventory Management
Nevetha M P and Baskar A (Amrita Vishwa Vidyapeetham, India)

Manual inventory management in a library is by far arduous. Automation of book inspection can be achieved using a simple camera-based system that can recognize book spines on a book shelf. The book spines contain printed information such as the title, author and publisher name, which can be extracted and verified against the library's database. Book spines can be segmented by detecting their rectangular boundaries, which appear as straight lines. Line detection using the Hough transform and a line segment detector may result in spurious boundaries due to the presence of long titles or graphics on the book spine. In this paper, we propose a technique to improve book spine border detection by devising a set of constraints based on structural properties that filter the detected line segments so as to obtain book spine boundaries. The segmented book spines are binarized to extract the printed information such as the title, author and publisher name. The text is recognized using the Tesseract Optical Character Recognition engine. The proposed algorithm was tested successfully on book shelf images with vertically oriented, uniformly inclined and multi-oriented book spines.

WCI-04.3 Building Web Personalization System with Time-Driven Web Usage Mining
Ramya P t (Amrita School of Engineering, India); G. p. Sajeev (Amrita University)

Web personalization is a powerful tool for tailoring Websites to their users. A personalization system aims at suggesting Web pages to users based on their navigational patterns. The use of attributes such as time and the popularity of Web objects makes the model more efficient. This paper proposes a novel Web personalization model which utilizes time attributes, such as duration of visit, inter-visit time and burst of visits, together with the user's navigational pattern. Test results indicate that the proposed model captures users' behavior and interests.

WCI-04.4 Correlation based Feature Selection for Diagnosis of Acute Lymphoblastic Leukemia
Vanika Singhal (Manipal University Jaipur, India); Preety Singh (LNM Institute of Information Technology, India)

Acute Lymphoblastic Leukemia (ALL) is a type of cancer characterized by an increase in abnormal white blood cells in the blood or bone marrow. This paper presents a methodology to detect ALL automatically using shape features of the lymphocyte cell extracted from its image. We apply the Correlation based Feature Selection technique to find a prominent set of features which can be used to classify a lymphocyte cell as normal or blast. The experiments are performed on 260 blood microscopic images of lymphocytes, and an accuracy of 92.30% is obtained with a set of sixteen features.

WCI-04.5 Distributed Multi Class SVM for Large Data Sets
Aruna Govada and Bhavul Gauri (BITS-Pilani KK Birla Goa Campus, India); Sanjay K. Sahay (BITS Pilani, India)

Data mining algorithms were originally designed by assuming the data is available at one centralized site. These algorithms also assume that the whole dataset fits into main memory while the algorithm runs. But in today's scenario the data to be handled is distributed, even geographically. Bringing the data to a centralized site is a bottleneck in terms of bandwidth when compared with the size of the data. In this paper we propose an algorithm for multi-class SVM which builds a global SVM model by merging the local SVMs using a distributed approach (DSVM). The global SVM is then communicated to each site and made available for further classification. The experimental analysis has shown promising results, with better accuracy when compared with both the centralized and the ensemble method. The time complexity is also reduced drastically because of the parallel construction of the local SVMs. The experiments are conducted on datasets of sizes ranging from hundreds to hundreds of hundreds of instances, which also addresses the issue of scalability.
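
The distributed pattern (train locally at each site, merge into a global model, redistribute) can be sketched without an SVM library by deliberately swapping in a nearest-class-mean classifier, whose sufficient statistics merge exactly. This is an analogue of the merge step only, not the paper's DSVM algorithm:

```python
def local_model(X, y):
    """Per-site sufficient statistics: class -> (feature sums, count)."""
    stats = {}
    for x, label in zip(X, y):
        s, c = stats.get(label, ([0.0] * len(x), 0))
        stats[label] = ([a + b for a, b in zip(s, x)], c + 1)
    return stats

def merge_models(models):
    """Merge per-site statistics into global class centroids."""
    merged = {}
    for stats in models:
        for label, (s, c) in stats.items():
            ms, mc = merged.get(label, ([0.0] * len(s), 0))
            merged[label] = ([a + b for a, b in zip(ms, s)], mc + c)
    return {lbl: [v / c for v in s] for lbl, (s, c) in merged.items()}

def predict(centroids, x):
    """Classify by the nearest global class centroid."""
    return min(centroids,
               key=lambda l: sum((a - b) ** 2 for a, b in zip(centroids[l], x)))
```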

WCI-04.6 Fast Modular Artificial Neural Network for the Classification of Breast Cancer data
Hosahalli Doreswamy and Umme Salma M (Mangalore University, India)

Medical data mining is both an interesting and a competitive field, where research is carried out to invent new algorithms and techniques which can aid in the efficient classification and prediction of various diseases. Breast cancer is among the deadly diseases killing a large population of women around the world. Many techniques have been proposed for the classification of breast cancer data, but the Artificial Neural Network ranks first in accuracy. In this paper we propose a Fast Modular Artificial Neural Network (FMANN), where a feature selection step followed by data normalization (to the range -1 to 1), dataset division and attribute division leads to a fine refinement of the inputs, making them more suitable for classification. The modular neural network is built using four different types of Feedforward Neural Network (FNN), and the refined inputs are sent to each module, which carries out its task distinctly. The final result of the model is the probabilistic sum of the results obtained from all modules. FMANN produces the highest classification accuracy compared to other networks. We tested our model on two different benchmark datasets and obtained good results: the accuracy on the Wisconsin Breast Cancer Diagnostic Data (WBCD) is found to be 99.8%, and the accuracy on the KDD Cup 2008 breast cancer data is found to be 99.96%.

WCI-04.7 Integrating Apriori with paired k-means for Cluster fixed mixed data
Veena Nair (Amrita Vishwa Vidyapeetham, India); Haripriya H (Amrita Vishwa Vidyapeetham & Amrita CREATE, Amrita University, India); Amrutha Shaji and Prema Nedungadi (Amrita Vishwa Vidyapeetham, India)

The field of data mining is concerned with finding interesting patterns in unstructured data. A simple, popular and efficient clustering technique for data analysis is k-means. But the classical k-means algorithm can only be applied to numerical data, where k is a user-given value. The data generated in a wide variety of domains is of mixed form, and it is difficult to rely on a user-given value for k. So our objective is to use an association rule mining algorithm that can automatically compute the number of clusters, together with a pairwise distance measure for calculating distances over mixed data. We have conducted experiments with real mixed data taken from the UCI repository.
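
A Gower-style pairwise distance over mixed records (range-normalised differences for numeric attributes, 0/1 mismatch for categorical ones) is one standard choice for such a measure; whether it matches the paper's exact metric is an assumption.

```python
def mixed_distance(a, b, numeric_ranges):
    """Pairwise distance over mixed records. numeric_ranges maps the
    index of each numeric attribute to its observed value range;
    all other attributes are treated as categorical."""
    d = 0.0
    for i, (x, y) in enumerate(zip(a, b)):
        if i in numeric_ranges:
            rng = numeric_ranges[i]
            d += abs(x - y) / rng if rng else 0.0  # range-normalised difference
        else:
            d += 0.0 if x == y else 1.0            # categorical mismatch
    return d / len(a)
```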

WCI-04.8 Multi-view Ensemble Learning: A Supervised Feature Set Partitioning for High Dimensional Data Classification
Vipin Kumar and Sonajharia Minz (Jawaharlal Nehru University, India)

Partitioning the feature set into non-empty subsets of features is the generalized task of feature subset selection. The subsets of features (called views of the dataset) are jointly more useful than a single useful subset of features (a single view). Multi-view ensemble learning (MEL) exploits the views of the dataset to enhance classification using consensus and complementary information. The way the feature set is partitioned affects the classification performance of MEL. Therefore, a supervised feature set partitioning (SFSP) method is proposed. This method partitions the features into a number of blocks such that the total feature relevance of each block is equivalent to that of the other blocks. The SFSP method is compared with the random feature set partitioning (RFSP) method. Experiments have been performed on seven high-dimensional datasets. The results and their statistical analysis show that the SFSP method is better than the RFSP method for MEL.
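
Making the total relevance of each block roughly equal is a balanced-partition problem; a simple greedy sketch (assigning the next most relevant feature to the currently lightest view) illustrates the idea, though the paper's exact procedure may differ.

```python
def balanced_partition(relevance, k):
    """relevance: {feature: relevance score}; k: number of views.
    Greedily assigns features (most relevant first) to the view with the
    smallest total relevance so far, balancing relevance across views."""
    views = [[] for _ in range(k)]
    loads = [0.0] * k
    for feat in sorted(relevance, key=relevance.get, reverse=True):
        i = loads.index(min(loads))  # lightest view receives the feature
        views[i].append(feat)
        loads[i] += relevance[feat]
    return views, loads
```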

WCI-04.9 Prognosis and Disclosure of Functional Modules from Protein-Protein Interaction Network
Manali Modi (Gujarat Technological University & Marwadi Education Foundation Group of Institution, India); Merry K P (Gujarat Technological University & Marwadi University, India)

Bioinformatics is an integrated area of data mining, statistics and computational biology. Proteins are the building blocks of living organisms and play a predominant role in carrying out the biological processes of life forms. A Protein-Protein Interaction (PPI) network is a network that captures a fundamental part of biological activity; PPIs are the cornerstone of all biological processes occurring in organisms. In a PPI network, proteins interact with one another to form larger molecules. Functional module detection refers to finding sets of proteins which participate in the same biological processes. Clustering in the PPI context groups proteins which share a larger number of interactions. The results of clustering can explain the structure of the PPI network and suggest possible functions for previously uncharacterized modules. Functional module prediction and discovery using clustering methods helps in better understanding the biological mechanisms of an organism, which in turn contributes towards remedies for various diseases.

WCI-05: Pattern Recognition, Signal and Image Processing-I

Room: 507
Chair: Bharathi Pilar (Mangalore University & University College Mangalore, India)
WCI-05.1 Distributed Binary Decision-Based Collaborative Spectrum Sensing in Infrastructure-less Cognitive Radio Network
Roshni Rajkumari (NERIST, Arunachal Pradesh & Deemed University, Under MHRD, Govt. of India, India); Ningrinla Marchang (North Eastern Regional Institute of Science and Technology, Arunachal Pradesh, India)

Individual scanning of a channel for the presence or absence of a primary user does not give accurate sensing results in a cognitive radio network. Thus, collaborative spectrum sensing is employed to obtain a more accurate spectrum sensing result. This paper proposes distributed binary decision-based collaborative spectrum sensing, in which a node uses the binary decisions of its h-hop neighbors for making its own decision. The proposed scheme is for an infrastructure-less cognitive radio network where secondary users collaborate based on locally exchanged information, without requiring a fusion center. Instead of using a consensus algorithm, this paper focuses on message passing, i.e., exchanging local sensing decisions. Simulation results show that the proposed scheme can give a significantly good detection rate with a low false detection rate.
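
The fusion step reduces to a majority vote over the node's own decision and those of its h-hop neighbourhood. A minimal sketch, with an assumed adjacency-list representation (the majority rule here is one plausible combining rule, not necessarily the paper's):

```python
def h_hop_neighbors(adj, node, h):
    """Breadth-first expansion up to h hops (excluding the node itself)."""
    frontier, seen = {node}, {node}
    for _ in range(h):
        frontier = {m for n in frontier for m in adj[n]} - seen
        seen |= frontier
    return seen - {node}

def fuse_decision(adj, decisions, node, h):
    """Majority fusion of the node's own binary sensing decision with
    those received from its h-hop neighbourhood."""
    votes = [decisions[node]] + [decisions[m] for m in h_hop_neighbors(adj, node, h)]
    return 1 if sum(votes) * 2 > len(votes) else 0
```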

WCI-05.2 CT image denoising based on complex wavelet transform using local adaptive thresholding and Bilateral filtering
Manoj Diwakar (Graphic Era University Dehradun, India); Sonam Gautam (BabaSaheb Bhimrao Ambedkar University, Lucknow, India); Manoj Kumar (Babasaheb Bhimrao Ambedkar University, Lucknow, India.)

Computed Tomography (CT) is one of the most widespread radiological tools in medical diagnostics. Achieving good quality CT images at a low radiation dose has drawn a lot of attention from researchers; hence, post-processing of CT images has become a major concern in medical image processing. This paper presents a novel edge-preserving image denoising scheme based on the Dual-tree Complex Wavelet Transform (DT-CWT), Bilateral filtering and a locally adaptive thresholding method. The noisy image is decomposed into complex wavelet coefficients through the DT-CWT. Low-pass subbands are modified using a Bilateral filter. High-pass subbands are modified using locally adaptive thresholding based on interscale statistical dependency, where the noise variance of the noisy wavelet coefficients is estimated using a robust median estimator. The denoised image is retrieved using the inverse DT-CWT. The proposed scheme is compared with existing methods, and its performance is observed to be superior in terms of visual quality, Image Quality Index (IQI) and PSNR.

WCI-05.3 Analog BIST for Capacitive MEMS Sensor using PLL
Favoureen Swer (Visvesvaraya Technological University & UTL Technologies Limited, Bangalore, India); Pradeep V (VTU, India); Siva Yellampalli (Visvesvaraya Technological University, India)

In this work, a capacitive MEMS readout circuit using a PLL has been designed. Since the simulation of a MEMS sensor in the Cadence environment is still to be explored, a method is proposed wherein a variable capacitor mimics the MEMS sensor; the MOS capacitor in Cadence Virtuoso serves this purpose. When the capacitance of the sensor changes, the phase of the capacitance output changes, which can be detected by the PFD. To bring this output back in phase with the reference, the control voltage changes, which changes the VCO frequency. The idea is extended to fault coverage in the MEMS device via a linear relationship between the MEMS sensor and the VCO output frequency. The proposed system serves as an analog BIST unit for the MEMS sensor. The simulation has been carried out in Cadence Virtuoso 180nm technology with a supply voltage of 1.8V. The PFD used produces a dead zone of 7ps. The total power consumed by the system is 7.574mW.

WCI-05.4 Anomalies in Landsat Imagery and Imputation
Soumya Goswami (Amrita School of Engineering, Bangalore, India); Sangeeta K (Amrita Vishwa Vidyapeetham & Amrita School of Engineering, India)

In remote sensing applications the data and imagery are often incomplete, so some geographical coordinates have no measured values for the variables of interest. There are several reasons for image gaps: instrumentation error, loss of image data during transmission, or cloud coverage. These missing values degrade the representativeness of the true dataset and cause difficulties in several applications, so the recovery of the original image from a partial or incomplete image is of key importance. In this paper, spatial imputation based on machine learning algorithms is used to fill the unknown pixels in a satellite image. Missing value estimation is better when the imputation algorithm considers the correlation among existing values of the dataset. The types of data loss dealt with in this paper are dropped scans and shutter synchronization anomalies in satellite images. Variations of the k-NN approach are used to treat the anomalies that occur in Landsat data for various reasons.
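
A spatial k-NN imputation of gap pixels can be sketched in a few lines: each missing pixel takes the mean of its k nearest observed pixels. This is a naive O(n²) illustration of the general approach, not the paper's tuned variant:

```python
def knn_impute(grid, missing=-1, k=4):
    """Fill gap pixels (marked with the `missing` sentinel) with the mean
    of the k spatially nearest observed pixels."""
    h, w = len(grid), len(grid[0])
    known = [(r, c, grid[r][c]) for r in range(h) for c in range(w)
             if grid[r][c] != missing]
    out = [row[:] for row in grid]
    for r in range(h):
        for c in range(w):
            if grid[r][c] == missing:
                # k nearest observed pixels by squared Euclidean distance.
                near = sorted(known, key=lambda p: (p[0] - r) ** 2 + (p[1] - c) ** 2)[:k]
                out[r][c] = sum(v for _, _, v in near) / len(near)
    return out
```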

WCI-05.5 Gabor Moments based Shot Boundary Detection
B H Shekar and K P Uma (Mangalore University, India)

In this paper, we present a shot boundary detection method based on Gabor-transformed first-order moments. Unlike other approaches, where gray-scale images are processed with a huge feature space, we work in different colour spaces and find that first-order moments alone are good enough to capture shot boundaries. In the proposed method, each frame of a given video is convolved with six differently oriented Gabor filters in each channel of a colour space, and the first-order moments are computed and used for the shot boundary detection task. A subset of the TRECVID 2001 dataset is used to evaluate the performance of the proposed method. A comparative analysis with some existing algorithms is also presented. In addition, we conduct experiments in different colour spaces to identify the colour space best suited to the Gabor-transformed domain for shot boundary detection.

WCI-05.6 Gaussian filter based a trous algorithm for image fusion
Susmitha Vekkot (Amrita School of Engineering, India); Pancham Shukla (London Metropolitan University, United Kingdom (Great Britain))

Image fusion integrates complementary information from various perspectives in order to provide a meaningful interpretation of useful features and textures in multisource images. Here, we present a multiresolution algorithm based on the Stationary Wavelet Transform (SWT) for the fusion of two test images of the same size. The algorithm applies Gaussian low-pass filtering to the high-frequency subbands of the SWT decomposition. The new approach gives sharper edges and better structural enhancement than region-based approaches involving the calculation of energy around salient features. The key feature of Gaussian filtering is the flexibility of using filters with different standard deviations depending on the application and the range of detail necessary for processing.

WCI-05.7 High performance SIFT features clustering of VHR satellite images for disaster management
Ujwala Bhangale (IIT Bombay, India); Surya Durbha (IIT Bombay, India)

Disaster management applications require real-time responses to handle time-critical situations. High-performance computing technologies such as the Graphics Processing Unit (GPU) can serve this purpose: although true real-time responses are not achievable, near-real-time responses are certainly helpful for these kinds of applications. This work focuses on a high-performance computing methodology to speed up the analysis of very high resolution remote sensing data of an earthquake. Agglomerative Hierarchical Clustering (AHC), one of the well-known hierarchical clustering approaches, is applied to robust scale-invariant feature transform (SIFT) features with 128 dimensions. Agglomerative clustering with a complete-linkage strategy is implemented on a GPU using the Compute Unified Device Architecture (CUDA), on a Fermi-architecture Tesla C2075 NVIDIA GPU with 448 cores. The major operations of AHC are observed to provide substantial speedups over a single core of an Intel i7 CPU: 580X for proximity matrix calculation and 18X for minimum value extraction.
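
As a CPU-side reference for what the GPU kernels compute, complete-linkage AHC over high-dimensional descriptors can be sketched with SciPy; the data below is a synthetic stand-in for 128-D SIFT features, not the satellite imagery used in the paper:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy stand-in for 128-D SIFT descriptors: two well-separated groups.
rng = np.random.default_rng(42)
features = np.vstack([rng.normal(0.0, 0.05, (20, 128)),
                      rng.normal(1.0, 0.05, (20, 128))])

# Complete-linkage agglomerative clustering (the CPU analogue of the
# proximity-matrix and minimum-extraction steps run on the GPU).
Z = linkage(features, method='complete', metric='euclidean')
labels = fcluster(Z, t=2, criterion='maxclust')
```

With two well-separated groups, cutting the dendrogram at two clusters recovers the original grouping.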

WCI-05.8 Primary User Authentication in Cognitive Radio Network using Signal Properties
Suditi Choudhary and Muzzammil Hussain (Central University of Rajasthan, India)

Cognitive Radio (CR) is an emerging radio mechanism that dynamically adjusts its radio parameters to utilize unused spectrum bands (spectrum holes) and thereby overcome the spectrum shortage caused by the enormous growth in wireless devices. In CR, the available free spectrum is allocated to unlicensed users (Secondary Users) without disrupting the communications of licensed users (Primary Users). Hence, it is essential to distinguish the signals of Primary Users (PUs) from those of Secondary Users (SUs) at the physical layer, without interpreting the data in the signal. In this paper we propose an algorithm to authenticate PUs at the physical layer in a Cognitive Radio Network (CRN) using signal properties such as the Angle of Arrival (AoA) and distance. The proposed algorithm is simulated in MATLAB, and its performance is found to be efficient in the simulated environment.
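
A minimal sketch of the kind of physical-layer check the paper proposes might look as follows. The profile values and tolerances are hypothetical, and a real system would estimate AoA and distance from the received signal rather than receive them directly:

```python
# Registered physical-layer profile of the licensed transmitter
# (hypothetical values, acting as the reference for authentication).
PU_PROFILE = {'aoa_deg': 60.0, 'distance_m': 500.0}

def authenticate_pu(aoa_deg, distance_m, aoa_tol=5.0, dist_tol=25.0):
    """Accept a signal as the Primary User only if its measured Angle of
    Arrival and estimated distance both fall within tolerance of the
    registered profile; otherwise treat it as a Secondary User/attacker."""
    angle_err = abs((aoa_deg - PU_PROFILE['aoa_deg'] + 180) % 360 - 180)
    dist_err = abs(distance_m - PU_PROFILE['distance_m'])
    return angle_err <= aoa_tol and dist_err <= dist_tol
```

A transmitter matching the profile on both properties is accepted; a mismatch on either is rejected.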

WCI-05.9 Seeded Watershed Segmentation Based Proteomics for 2D-Gel Electrophoresis Images
Shrinivas Desai (K L E Technological University, India); Savitha Desai (B V B College of Engineering & Technology, India)

Proteome analysis is most frequently accomplished by a combination of two-dimensional gel electrophoresis (2DGE), to separate and visualize proteins, and mass spectrometry (MS), for protein identification. Even though this technique is powerful, mature, and sensitive, questions remain regarding its ability to characterize all the elements of a proteome. Screening for proteins is laborious, and protein pattern differences between gel images can be very subtle and tedious to detect by the naked eye; hence there is a tremendous need for automatic detection of proteins by computer-based tools, especially since proteomics has become an important part of the life sciences after the completion of human genome sequencing. In this paper we propose a software tool, Protein Image Registration (PIR), based on watershed segmentation and point matching, as a promising method for protein detection. The proposed tool detects and presents proteins along with their properties, such as name, molecular mass and pH score.

WCI-05.10 Shadow detection and removal from real images: State of art
Remya K Sasi (National Institute of Technology, Calicut, India); Govindan V. k (NIT Calicut, India)

Shadow detection and removal is used in various image processing applications such as video surveillance, scene interpretation and object recognition. Ignoring the existence of shadows in images may cause serious problems such as object merging, object loss, misinterpretation and alteration of object shape in visual processing tasks like segmentation, scene analysis and tracking. Many algorithms have been proposed in the literature that deal with shadow detection and removal from images as well as videos. A comparative and empirical evaluation of the existing approaches for video has already been reported, but a similar one is lacking for still images. This paper presents a comprehensive survey of existing shadow detection and removal algorithms for still images. The evaluation metrics involved in shadow detection and removal are reviewed, and the inadequacy of conventional metrics such as accuracy and F-score in the detection phase is explored. Quantitative and qualitative evaluations of selected methods are also discussed. To the best of our knowledge, this is the first article that exclusively discusses shadow detection and removal methodologies for images.

WCI-05.11 Shape Representation and Classification Based on Discrete Cosine Transformation and IDSC
B H Shekar (Mangalore University, India); Bharathi Pilar (University College Mangalore)

In this paper, we propose a combined classifier model based on the two-dimensional discrete cosine transform (2D-DCT) and the Inner Distance Shape Context (IDSC) to classify shapes accurately. DCT captures region information, while the inner distance is insensitive to shape articulations; we integrate the two techniques for accurate shape classification. The Euclidean distance metric (for DCT) and Dynamic Programming (for IDSC) are employed to obtain similarity values, which are fused to classify a given query shape based on the minimum similarity value. Experiments are conducted on the publicly available shape datasets MPEG-7, Kimia-99, Kimia-216, Myth and Tools-2D, and the results are presented by means of the Bull's eye score and precision-recall metrics. A comparative study with well-known approaches is also provided to exhibit the retrieval accuracy of the proposed approach. The experimental results demonstrate that the proposed approach yields significant improvements over baseline shape matching algorithms.
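
One way to read the fusion step is as a minimum rule over the two dissimilarity scores. The sketch below shows a truncated 2D-DCT region descriptor and that fusion rule; the IDSC distance is assumed to be computed separately by dynamic programming, and the block size is an illustrative choice:

```python
import numpy as np
from scipy.fft import dctn

def dct_descriptor(shape_img, k=8):
    # Keep the top-left k x k block of 2D-DCT coefficients as a compact
    # region descriptor (low frequencies carry most of the region info).
    return dctn(shape_img, norm='ortho')[:k, :k].ravel()

def fused_similarity(dct_dist, idsc_dist):
    # Min-rule fusion of the two (already normalised) dissimilarities:
    # the query is assigned to the class with the smallest fused value.
    return min(dct_dist, idsc_dist)
```

Identical shapes yield a zero DCT-descriptor distance, and the fused score is simply the smaller of the two channel distances.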

WCI-05.12 Steganography using Cuckoo Optimized Wavelet Coefficients
Anuradha Singhal and Punam Bedi (University of Delhi, India)

Secret data hiding has become a center of information security research with the enormous growth of the internet. Many researchers are actively working towards increasing the capacity of hidden content, i.e., hiding a bigger data/message in a cover without it being perceivable. This paper proposes to use the Cuckoo Search optimization algorithm to find the best coefficients in the wavelet transform domain of an image for embedding secret data to obtain a stego image. Huffman encoding is applied to the secret data to give an encoded bit stream, which is embedded in the subbands obtained from the wavelet transform of the cover to form the stego object. Huffman coding is a lossless compression scheme and therefore increases the embedding capacity. The algorithm is implemented in MATLAB, and the results are compared with existing LSB steganography and PSO-based wavelet-domain steganography in terms of PSNR and SSIM.
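
The Huffman stage is standard and can be sketched compactly. This is a generic illustration rather than the paper's implementation, and the wavelet embedding step is omitted:

```python
import heapq
from collections import Counter

def huffman_codes(data: bytes):
    """Build a prefix-free Huffman code for the symbols in `data`."""
    freq = Counter(data)
    if len(freq) == 1:                      # degenerate single-symbol input
        return {next(iter(freq)): '0'}
    # Heap entries: (frequency, tie-break counter, {symbol: code-so-far}).
    heap = [(f, i, {s: ''}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)     # two lowest-frequency subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: '0' + code for s, code in c1.items()}
        merged.update({s: '1' + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

def huffman_encode(data: bytes) -> str:
    codes = huffman_codes(data)
    return ''.join(codes[b] for b in data)
```

For `b'aaaabbc'` the frequent symbol gets a one-bit code and the stream shrinks from 56 raw bits to 10, which is exactly the effect that raises embedding capacity.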

WCI-05.13 StegTrack: Tracking images with hidden content
Veenu Bhasin, Punam Bedi, Aakarshi Goel and Sukanya Gupta (University of Delhi, India)

This paper presents the design and implementation of StegTrack, a novel proactive steganalysis tool. StegTrack is an antivirus-like tool that tracks steganograms among the images on a computer; to the best of our knowledge, no such tool exists in the literature. Once installed on a machine, StegTrack remains active, monitoring the user's entire file system and detecting the arrival of new images, each of which is tested for steganography. Steganalysis, the process of discriminating between stego-objects and non-stego-objects without prior knowledge of the steganography method employed to hide the data, has two major components: feature extraction and classification. StegTrack gives the user the flexibility of choosing both the feature extractor and the classifier, although defaults are provided for both. The tool offers various feature extraction options, including features based on Markov models, the co-occurrence matrix, neighboring joint density probabilities, the run-length matrix, SPAM and statistical features. The default classifier is ELM, chosen to provide results in real time. The tool also provides a new feature, cleaning of the stego-image, in which the image is rendered unfit for extracting hidden material. A prototype of the tool was implemented in MATLAB and Java.

WCI-05.14 RosyRecommends: Collaborative filter analysis of listening behavior using user similarity metrics based on timbral clusters
Jasmine Hsu and Adarsh Jois (New York University, USA)

The purpose of this project is to build a large-scale music recommendation system. The Million Song Dataset is dense and rich, with a great deal of information in the form of metadata and spectral properties of the audio signal; the Taste Profile dataset contains sparse play-count triplets for each user. Both have close to one million data points. To build a scalable collaborative filtering system that can be used directly on Taste Profile subsets, we use Apache Mahout, which provides the user-based models that most modern recommendation systems employ. Our primary intent is to leverage both the dense and the sparse data to build a model that uses certain user- and item-based features to generate recommendations. We use kNN over the dense dataset to map the music into clusters and then generate a user-cluster mapping from it.

Tuesday, August 11 14:30 - 17:30 (Asia/Kolkata)

T5: Tutorial -5: Energy Efficient Cooperative Cognitive Radio Network with Challenges in Security

Dr. Santi Prasad Maity (Indian Institute of Engineering Science and Technology, Shibpur)
Room: LT1

Cognitive radio (CR) has emerged as a promising solution to the spectrum scarcity problem; the concept can increase spectrum efficiency through its ability to detect, and thereby opportunistically access, vacant spectrum slots. Cooperative networks have also been recognized as a key technology for improving bandwidth utilization. A Cooperative Cognitive Radio Network (CCRN) achieves this through fast and reliable spectrum sensing of the primary user (PU) band and consequent throughput improvement via power-saving operation of the cooperative nodes, which also reduces interference in the PUs' network. Two important issues in CCRN are energy and security. Energy detection is the simplest, most widely used and optimal sensing technique when no prior knowledge of the PU signalling scheme is available. Energy has a direct impact on the optimal transmission power and on the optimal sensing and transmission slots in frame-based system design. Furthermore, the nodes in a CCRN are assumed to be battery-driven and power-limited; hence energy-efficient system design is in high demand for enhanced network lifetime. However, cooperative/collaborative spectrum sensing is vulnerable to attacks, for example the spectrum sensing data falsification (SSDF) attack, which demands that the fusion strategy be secured, or that reliable sensing be performed in the presence of primary user emulation attacks (PUEA). This tutorial will present an in-depth discussion of energy-efficient CCRN design, its issues and challenges, and security in collaboration, ranging from theory to practice, from architecture to algorithms, and from policy to implementation.
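
The energy detector mentioned above reduces to a one-line decision rule. The sketch below uses synthetic samples with an assumed noise power of 1 and an illustrative threshold; real detectors set the threshold from a target false-alarm probability:

```python
import numpy as np

def energy_detect(samples, threshold):
    """Classical energy detector: declare the PU band occupied when the
    average sample energy exceeds a threshold. No knowledge of the PU
    signalling scheme is needed."""
    return np.mean(np.abs(samples) ** 2) > threshold

rng = np.random.default_rng(7)
t = np.arange(4096)
noise_only = rng.normal(0, 1, 4096)                       # power ~ 1
pu_signal = 2.0 * np.sin(2 * np.pi * 0.01 * t) + rng.normal(0, 1, 4096)
```

With noise power 1 and a PU tone of power 2, a threshold of 2.0 cleanly separates the two hypotheses in this toy setting.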

T6: Tutorial -6: A Novel Strategy for Development of Low Cost Big Data Enabled Scalable Sensor Network

Dr. Ankita Kalra (Indian Institute of Technology (BHU), Varanasi); Dr. N. S. Rajput (Indian Institute of Technology (BHU), Varanasi)
Room: LT2

Introduction to the Wired Sensor Network and Big Data connection
Preparation of Sensor Array with suitable analog and digital interface
Introduction to Single Board Controllers and Processors
Acquisition of Sensor Array Data using Controller Boards
Introduction to Apache Hadoop, its features and architecture
Introduction to Apache Flume, its features and architecture
Creation of Remote Client Node using Hadoop and Flume on Raspberry Pi
Creation of Multi-Node Hadoop Master Node (Cluster)
Integration of Flume Source and Avro Sink Agent with Hadoop Master
Acquisition of Sensor Data from Remote Client(s) and its HDFS storage
Analysis of Sensor Data using MapReduce and Machine Learning Library
Discussion on this Framework for Big Data Acquisition & Processing

T4: Tutorial -4: Internet of Things: Grand Vision, Major Challenges, and Strategic Research Agenda

Dr. San Murugesan (University of Western Sydney, Australia)
Room: MSH

Besides connecting billions of computers, mobile devices and people, the Internet is now poised to connect 'things' such as cars, sensors, controllers, TVs, machinery, and electrical appliances, creating the Internet of Things (IoT), also called the Internet of Everything or the Internet of Anything. IoT, considered the next big thing in IT, facilitates the deployment of many new applications and services that were unimaginable until recently. It is bound to reengineer and transform everything, from business, industry and healthcare to personal and social life, just as computers, mobile devices and the Internet have done. This tutorial will provide a brief, holistic introductory overview of IoT and present a panoramic view of its emerging grand vision. We will identify and discuss the challenges, issues and barriers facing the realization of this vision. Finally, we will outline a comprehensive research agenda that highlights several areas and problems that require further study and development to help realize IoT's vision and fuller potential.

Tuesday, August 11 14:59 - 16:00 (Asia/Kolkata)

ICACCI Poster - 01: ICACCI Poster Session - I

ICACCI Poster - 01.1 Hybrid Approach to Crime Prediction using Deep learning
Jazeem Azeez (Hindustan University, India); D. John Aravindhar (Hindustan Institute of Technology and Science, India)

Prevention is better than cure: preventing a crime from occurring is better than investigating what happened or how. Just as a child is vaccinated to prevent disease, today's high crime rates and brutal crimes make it necessary to have systems that 'vaccinate' society against crime, through methods such as educating people, creating awareness, increasing efficiency, proactive policing and other deterrent techniques. This work is inspired by two existing approaches to crime prediction. The first presents a visual analytics approach that provides decision makers with a proactive and predictive environment to assist them in making effective resource allocation and deployment decisions; its crime incident prediction depends mainly on historical crime records and various geospatial and demographic information [1]. Although promising, it does not take into account the rich and rapidly expanding social and web media context that surrounds incidents of interest. The second approach is based on semantic analysis and natural language processing of Twitter posts via latent Dirichlet allocation, topic detection and sentiment analysis [3][4]. Both techniques, however, face inherent limitations. Crimes today have key characteristics such as repeating in a periodic fashion, occurring as a result of some other activity, or being pre-indicated by other information.

ICACCI Poster - 01.2 Analyzing Impact of Image Scaling Algorithms on Viola-Jones Face Detection Framework
Himanshu Sharma (Academy of Scientific and Innovative Research, India); Sumeet Saurav (CEERI Pilani, India); Sanjay Singh (CSIR - Central Electronics Engineering Research Institute (CSIR-CEERI) & Academy of Scientific & Innovative Research (AcSIR), India); Ravi Saini (CSIR-CEERI Pilani, India); Anil Saini (CEERI & ACSIR, India)

In today's world of automation, real-time face detection with high performance is becoming necessary for a wide range of computer vision and image processing applications. Existing software-based systems for face detection use the state-of-the-art Viola-Jones face detection framework. This detector uses an image scaling approach to detect faces of different dimensions, so the performance of the image scaler plays an important role in the accuracy of the detector: a low-quality image scaling algorithm results in the loss of features, which directly affects detection performance. Therefore, in this paper we analyze the effect of different image scaling algorithms from the literature on the performance of the Viola-Jones face detection framework and try to find the algorithm with the best performance. The algorithms analyzed are: Nearest Neighbor, Bilinear, Bicubic, Extended Linear and Piece-wise Extended Linear. All of these have been integrated with the Viola-Jones face detection code available in the OpenCV library and tested on well-known databases of frontal faces.
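
Of the scalers compared, nearest-neighbour is the simplest and makes the feature-loss argument concrete: each output pixel just copies the closest source pixel, so fine detail is dropped rather than interpolated. A minimal sketch (not the OpenCV implementation):

```python
import numpy as np

def scale_nearest(img, new_h, new_w):
    """Nearest-neighbour image scaling: each output pixel copies the
    closest source pixel. Fast, but the blockiness it introduces can
    cost the detector accuracy, which is why smoother kernels
    (bilinear, bicubic) are also compared."""
    h, w = img.shape[:2]
    rows = np.arange(new_h) * h // new_h   # source row for each output row
    cols = np.arange(new_w) * w // new_w   # source column for each output column
    return img[rows[:, None], cols]
```

Upscaling a 2x2 image to 4x4 simply replicates each source pixel into a 2x2 block.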

ICACCI Poster - 01.3 Consensus based ensemble model for spam detection
Paritosh Pantola (Thapar University, Patiala, India); Anju Bala (Thapar University, India)

In machine learning, an ensemble model combines two or more models to obtain better prediction, accuracy and robustness than each individual model separately. Before building the ensemble, we first fit our training dataset with different models and then select the models best suited to the data. In this work we explored six evaluation measures for the dataset: accuracy, the Receiver Operating Characteristic (ROC) curve, the confusion matrix, sensitivity, specificity and the kappa value. We then applied k-fold cross-validation to our best five models. The data set used in this study available in
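
Four of the six evaluation measures listed follow directly from the 2x2 confusion matrix; a small illustrative sketch of those formulas (not tied to the paper's dataset):

```python
def binary_metrics(tp, fn, fp, tn):
    """Accuracy, sensitivity, specificity and Cohen's kappa computed
    from a binary confusion matrix."""
    total = tp + fn + fp + tn
    accuracy = (tp + tn) / total
    sensitivity = tp / (tp + fn)          # true positive rate
    specificity = tn / (tn + fp)          # true negative rate
    # Cohen's kappa: observed agreement corrected for chance agreement p_e.
    p_e = ((tp + fn) * (tp + fp) + (tn + fp) * (tn + fn)) / total ** 2
    kappa = (accuracy - p_e) / (1 - p_e)
    return accuracy, sensitivity, specificity, kappa
```

For example, a matrix with TP=40, FN=10, FP=5, TN=45 gives accuracy 0.85, sensitivity 0.8, specificity 0.9 and kappa 0.7.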

ICACCI Poster - 01.4 A Hierarchical Approach to Adaptive Distributed Scheduling in Cloud
Sonu Goel, Prashant Mishra, Nishant Narang and Shashank Srivastava (MNNIT Allahabad, India); Prasenjit Maity (Motilal Nehru National Institute of Technology, India); Aditya Saxena (IIITA, India)

In today's era, cloud computing is emerging as a solution to data storage problems. It provides a platform to request computational resources under an "on-demand, pay-per-use" policy [1], opening the gates to virtually unlimited resources with minimal hardware and software at the client's end. This paper aims at the development of a cloud service provisioning framework through a dynamic-priority job scheduler and load balancer for the cloud. The main motive is to provide a means of managing job requests in a flexible and cost-effective way, both for the customer and for the cloud service provider. To make the cloud scalable and adaptable to changing needs and a growing number of users, proper and judicious allocation of resources is essential. A load balancer plays a critical role in scheduling services onto the cluster of virtual machines inside the cloud and ensures optimum utilization of the processing power of the various virtual systems. The paper presents a hierarchical approach to give a scalable model for task scheduling: it describes a primitive three-tier and an improved four-tier architecture mitigating provisioning concerns such as optimal utilization of resources, handling large numbers of requests and providing reliable, cost-effective services. An adaptive dynamic-priority scheduling scheme is implemented in the scheduler module. The scheduler framework takes into account the client-cloud interface and the inner mobility of the job request, and the hierarchy addresses the two important concerns in cloud provisioning: task scheduling and virtual resource allocation.
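
The scheduler-cum-load-balancer idea can be sketched with two heaps: one ordering jobs by priority, one tracking virtual machine load. This is an illustrative simplification of the paper's adaptive scheme, with hypothetical job tuples of (name, priority, cost):

```python
import heapq

def schedule(jobs, vm_count):
    """Dynamic-priority scheduling with load balancing: jobs are served
    highest-priority first, and each is placed on the currently
    least-loaded virtual machine."""
    job_heap = [(-priority, name, cost) for name, priority, cost in jobs]
    heapq.heapify(job_heap)                         # max-priority first
    vm_heap = [(0, vm) for vm in range(vm_count)]   # (current load, vm id)
    placement = {}
    while job_heap:
        _, name, cost = heapq.heappop(job_heap)
        load, vm = heapq.heappop(vm_heap)           # least-loaded VM
        placement[name] = vm
        heapq.heappush(vm_heap, (load + cost, vm))
    return placement
```

The highest-priority job lands on VM 0, the next on the now-less-loaded VM 1, and so on.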

ICACCI Poster - 01.5 Object Oriented Accountability Approach in Cloud for Data Sharing with Patchy Image Encryption
Swapnil Taru (K J College Of Engineering and Management Research Pune, India); Vikas Balasaheb Maral (University of Pune & KJEI's KJCOEMR PUNE 48, India)

The global, widespread use of cloud computing presents a new approach for the delivery and consumption of different internet-based IT services. Cloud computing provides highly scalable, virtualized resources as a service on an on-demand basis, and offers the flexibility to deploy applications at lower cost while increasing business agility. A central feature of cloud services is that users' data are often processed on remote machines unknown to the user. As users neither own nor operate these remote machines, they can lose control of their own confidential data. Despite all the advantages of the cloud, this remains a challenge and a barrier to large-scale cloud adoption. To address this problem, we present a novel, highly decentralized information accountability framework called CIA (Cloud Information Accountability) that protects users' data and monitors data flow in the cloud. We propose an object-oriented approach that performs automated logging to ensure that any access to a user's data triggers authentication, using the programmable capabilities of JAR (Java Archive) files to create a dynamic travelling object containing the user's data. To strengthen distributed data security, we use a chaos-based image encryption technique specific to image files: a patchy image encryption based on pixel shuffling, in which the randomness of the chaotic map is used to scramble the pixel positions of the image.
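
Chaos-based pixel shuffling of the kind described can be sketched with the logistic map: sorting the chaotic trajectory yields a key-dependent permutation of pixel positions. The map parameters and key value below are illustrative, not the paper's:

```python
import numpy as np

def logistic_permutation(n, x0=0.3567, r=3.99):
    """Derive a pixel permutation from the chaotic logistic map
    x_{k+1} = r * x_k * (1 - x_k); the initial value x0 acts as the
    secret key, and argsort of the trajectory gives the shuffle."""
    x = np.empty(n)
    for k in range(n):
        x0 = r * x0 * (1 - x0)
        x[k] = x0
    return np.argsort(x)

def shuffle_pixels(img, perm):
    return img.reshape(-1)[perm].reshape(img.shape)

def unshuffle_pixels(img, perm):
    inv = np.empty_like(perm)
    inv[perm] = np.arange(perm.size)   # inverse permutation
    return img.reshape(-1)[inv].reshape(img.shape)
```

With the same key, unshuffling exactly restores the original image; without it, the permutation cannot be reproduced.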

ICACCI Poster - 01.6 Fragile Video Watermarking for Tampering Detection and Localization
Rupali Patil and Shilpa Metkar (College of Engineering, Pune, India)

Authentication is required to decide the originality of a video signal. In this paper we propose an effective fragile video watermarking technique to embed and extract watermarks in the DCT domain with high capacity and transparency. Two watermarks are embedded into each frame: the first is the bits of the digital signature of the frame's hash value in the frequency domain, and the second is the bits of the macro-block numbers and frame numbers. The watermarks are embedded frame by frame into the highest non-zero quantized DCT coefficient. The first watermark is used to detect tampering, and the second to localize the tampered area. Because the bits are embedded into the highest-frequency coefficients, the technique causes significantly smaller video distortion. The embedded watermark is extracted and verified using a public key. The block and frame numbers make it possible to detect intra-frame and inter-frame tampering such as addition or removal of content within frames, frame reordering, and dropping or insertion of extra frames. If the video has been tampered with, one watermark may still be extracted correctly while the other is destroyed; as a result, tamper detection of the watermarked digital video is more reliable.

ICACCI Poster - 01.7 VLSI Implementation of Bit Serial Architecture based Multiplier in Floating Point Arithmetic
Jitesh Ramdas Shinde (Jitesh R Shinde); Suresh Salankar (G H Raisoni College of Engineering Nagpur, India)

VLSI implementations of neural network processing and digital signal processing applications comprise a large number of multiplication operations. A key design issue in such applications is therefore the efficient realization of the multiplier block, which involves a trade-off between precision, dynamic range, area, speed and power consumption. This paper investigates the performance of a VLSI implementation of a bit-serial-architecture multiplier (Type III) in floating-point arithmetic (IEEE 754 single-precision format). Results of implementing a 32x32-bit multiplier on an FPGA as well as in a back-end VLSI design tool indicate that the bit-serial multiplier design provides a good trade-off in terms of area, speed, power and precision over the array multiplier and other multiplier approaches proposed over the last decade. In other words, the bit-serial-architecture multiplier (Type III) may provide a good multi-objective solution for VLSI circuits.
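
Functionally, a bit-serial multiplier is shift-and-add with one multiplier bit consumed per clock cycle, trading latency for area. A behavioural software sketch (hardware details such as pipelining and the floating-point wrapper are omitted, and non-negative operands are assumed):

```python
def bit_serial_multiply(a: int, b: int, width: int = 32) -> int:
    """Shift-and-add multiplication processing one multiplier bit per
    'clock cycle', as a bit-serial hardware multiplier would."""
    product = 0
    for cycle in range(width):
        if (b >> cycle) & 1:          # serial bit of the multiplier
            product += a << cycle     # add the shifted multiplicand
    return product & ((1 << (2 * width)) - 1)   # 2*width-bit result register
```

Each iteration models one cycle of the serial datapath, so a 32x32-bit multiply takes 32 cycles but needs only a single adder.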

ICACCI Poster - 01.8 Multi-objective Optimization for VLSI Implementation of Artificial Neural Network
Jitesh Ramdas Shinde (Vaagdevi College of Engineering & TMVLSI, India); Suresh Salankar (G H Raisoni College of Engineering Nagpur, India)

The neural network's capability to mimic the structures and operating principles found in the information processing systems of humans and other living creatures has made the Artificial Neural Network (ANN) a technical folk legend. The main hurdle in the VLSI implementation of a neural network (NN) is that a design can be area-efficient, power-efficient or speed-efficient, but not all three simultaneously: optimizing one parameter affects the others. At the same time, an NN demands a high degree of precision and dynamic range, which makes multi-objective optimization of its VLSI implementation a complex goal. In this paper, a multi-objective optimization approach for the VLSI implementation of a feed-forward neural network is suggested. Simulation results with 45 nm and 90 nm technology files in the Synopsys Design Vision tool, Aldec's Active-HDL, Altera's Quartus and MATLAB show that a bit-serial-architecture (Type III) multiplier implementation and the use of floating-point arithmetic (IEEE 754 single-precision format) in the ANN realization may provide a good multi-objective solution for the VLSI implementation of ANNs.

ICACCI Poster - 01.9 Ensemble Approach to Detect Profile Injection Attack in Recommender System
Ashish Kumar (Maharishi Dayanand University, Rohtak, India); Deepak Garg (Bennett University, Greater Noida, India); Prashant Singh Rana (Thapar University Patiala Punjab, India)

Recommender systems apply knowledge discovery techniques to the specific problem of making personalized recommendations of products or services to users. The huge growth in information and in the number of visitors to web sites, especially in e-commerce, over the last few years creates challenges for recommender systems. E-commerce recommender systems are highly vulnerable to profile injection attacks, which insert fake profiles into the system to influence the recommendations made to users. Prior work has shown that even a small number of malicious profiles can bias the system significantly. In this paper, we describe an ensemble approach to the detection of profile injection attacks.
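
At its simplest, an ensemble detector combines the verdicts of several base detectors by majority vote; the toy sketch below illustrates that idea (the paper's actual base models and combination rule may differ):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine the attack/genuine verdicts of several base detectors;
    a profile is flagged as an attack only when most ensemble members
    agree that it is one."""
    return Counter(predictions).most_common(1)[0][0]
```

A single misfiring base detector is outvoted, which is the robustness an ensemble buys over any individual classifier.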

ICACCI Poster - 01.10 Games as nonliterary narratives: A temporal view of Aarseth's ontic dimensions
Anjana Anil and Chitra Ramakrishnan (Amrita Vishwa Vidyapeetham, India)

Games have redefined the way humans interact with computers and challenged the traditional definitions applied to narratives. Narratologists and ludologists have used a variety of approaches taken from their respective fields to study game narratives or texts. The current study creates a measure of narrativity based on existing literature to identify a game and applies the ludo-narratological ontic dimensions (World, events, objects and agents) created by Aarseth to the analysis of the game. Using temporality as a tool, the study attempts to identify the hierarchy embedded within the ontic dimensions. It is proposed that while such temporal hierarchies differ according to the narrativity measure of each game, the emerging common patterns will create interesting ludo-narrative templates for game design.

ICACCI Poster - 01.11 Virtual Group Study Rooms for MOOCS
Divya Mahadevan (Amrita Vishwa Vidyapeetham University, India); Ashwini Kumar (Amrita E-Learning Research Lab, India); Kamal Bijlani (Amrita E-Learning Research Lab, India)

E-Learning has grown exponentially in the past few years due to the advent of MOOCs, but collaborative modes of study, which are complementary to the online learning experience, are still in their infancy. In this paper, we propose a framework for collaborative learning using online group study, which can complement existing methods for collaboration and interaction in MOOCs such as forums and meetups. The framework can be seamlessly integrated with MOOCs; students can create small groups and share ideas to work collaboratively, just as in the physical group study sessions that students often hold with classmates before exams. We also present a comparative study of the features of existing collaboration tools to identify those best suited to improved collaboration in E-Learning platforms, along with the specifications of the proposed framework, which we developed and integrated with the Open edX platform to measure the effectiveness of group study in a private MOOC instance.

ICACCI Poster - 01.12 Influence of Learners' Motivation on Responses to Facebook Promotions of Online Courses
Rahul Ramesh (Amrita School of Business, India); Deepak Gupta (Amrita School of Business & ASB, India)

The purpose of this study is to develop and validate a model exploring the relationship between motivation for online learning and responses to Facebook promotions for online courses. A conceptual model was created and empirically tested using data from a survey of 151 Indian undergraduate and postgraduate university students; quota sampling was used to identify the sample, and the analysis used binary logistic regression. The study indicated limited support for the influence of learning motivation factors on the propensity to respond to Facebook promotions for online courses. It also revealed that while gender made a significant difference, with females being more influenced by Facebook promotions to take up online courses, age and educational qualifications were not significant differentiators. What did make a difference was the relative importance students placed on course-related factors: Facebook promotions were significantly more effective for those for whom the course name and peer reviews were important, but significantly less effective for those for whom certifications and placements were important.

ICACCI Poster - 01.13 Telemedicine for Emergency Care Management using WebRTC
Vidul Ayakulangara Panickan (University of Massachusetts Amherst, USA); Shibin Hari, Pranave Kp and Vysakh Kj (Amrita School Of Engineering, India); Archana KR (Amrita University, India)

Implementing telemedicine for emergency care management in rural India faces major challenges, including lack of infrastructure, high initial investment cost, sophisticated technology, lack of properly trained personnel, software interoperability issues, and limited funding. To overcome these issues, we put forward a new emergency telemedicine system based on WebRTC, the browser real-time communication standard [4]. This system is far cheaper and less sophisticated than available telemedicine systems and can easily be adopted by rural health centers and emergency medical services, giving rural areas access to a high-standard emergency healthcare delivery system.

ICACCI Poster - 01.14 Disaster Analysis Through Tweets
Himanshu Shekhar (B. V. B. College of Engineering and Technology, India); Shankar Gangisetty (KLE Technological University, Hubballi, India)

Social networks offer a wealth of information for capturing people's behavior, trends, opinions and emotions during human-affecting events such as natural disasters. During a disaster, social media provides a plethora of information, including the nature of the disaster, affected people's emotions and relief efforts. In this paper we propose a natural-disaster analysis interface that solely makes use of tweets generated by Twitter users during a natural disaster. We collect streaming tweets relating to disasters and build a sentiment classifier in order to categorize users' emotions during disasters based on their levels of distress. Various analysis techniques are applied to the collected tweets, and the results are presented as detailed graphical analyses demonstrating users' emotions during a disaster, the frequency distribution of various disasters and their geographical distribution. We observe that our analysis of social media data provides a viable, economical, uncensored and real-time alternative to traditional methods for analyzing disasters and the perception of the affected population towards them.
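The categorization step described above can be sketched with a minimal multinomial Naive Bayes classifier. The paper does not publish its classifier or training data, so the distress labels and example tweets below are illustrative assumptions:

```python
from collections import Counter, defaultdict
import math

# Minimal multinomial Naive Bayes sketch for distress-level tweet
# classification (illustrative labels and data, not the paper's).
class NaiveBayes:
    def __init__(self):
        self.class_counts = Counter()
        self.word_counts = defaultdict(Counter)
        self.vocab = set()

    def train(self, tweets):
        for text, label in tweets:
            self.class_counts[label] += 1
            for w in text.lower().split():
                self.word_counts[label][w] += 1
                self.vocab.add(w)

    def predict(self, text):
        total = sum(self.class_counts.values())
        best, best_lp = None, float("-inf")
        for c, n in self.class_counts.items():
            lp = math.log(n / total)  # class prior
            denom = sum(self.word_counts[c].values()) + len(self.vocab)
            for w in text.lower().split():
                # Laplace-smoothed word likelihood
                lp += math.log((self.word_counts[c][w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = c, lp
        return best

nb = NaiveBayes()
nb.train([("trapped need rescue help", "high_distress"),
          ("roads flooded stay safe", "moderate"),
          ("donated to relief fund today", "low_distress")])
print(nb.predict("please help we are trapped"))  # -> high_distress
```

A production system would train on a large labeled tweet corpus and add tokenization and stop-word handling, but the scoring logic is the same.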

ICACCI Poster - 01.15 Is Viral Marketing an effective and reliable method of advertising and branding? A perspective of Gen-Y of India
Rajiv Prasad (Amrita Vishwa Vidyapeetham University & Amrita School of Business, India); Uditi Rawat (Amrit Vishwa Vidyapeetham University, India)

The world of marketing has been revolutionized in recent times, and many new methods of marketing are being experimented with. Marketing is imperative for any business, and businesses today focus on gaining more attention from customers, rigorously trying to find effective and cost-saving techniques of reaching them. Viral marketing is one such technique gaining popularity. Done effectively, viral marketing is a cheap and fast means of marketing; primarily it works on the principle of "electronic word of mouth". Viral marketing is still in its nascent stage and has huge potential if researched extensively, but there is high uncertainty and unpredictability about whether a marketing campaign will go viral. Hence, the purpose of this study is to explore whether viral marketing can really help in effective advertising and branding, and which elements are essential for making content go exponentially popular on the internet. The study is specifically focussed on the perception of Gen-Y, i.e., people born between 1977 and 1994, as this population segment contains the most active internet users. Using a structured questionnaire administered to 128 respondents within India, we analysed their responses and perspectives on viral marketing in order to develop a deeper understanding of the topic. This study has generated many new insights in the area of viral marketing, especially about the most important elements that enable an ad campaign to go viral, and it may help marketers come up with campaigns that have a better chance of reaching consumers.

ICACCI Poster - 01.16 Advanced Algorithm for gender prediction with Image Quality Assessment
Anusree Bhaskar (CUSAT, India); Aneesh Raghavan Parameswaran (College of Engineering Thiruvananthapuram, India)

Forged biometric samples are a crucial obstacle in the field of biometrics. Fake biometric identifiers allow one person to impersonate another by falsifying data and thereby gaining an illegitimate advantage, achieved with either fake self-manufactured synthetic samples or reconstructed samples. Gender classification has become an essential part of most human-computer interactions, especially in high-security areas where gender restrictions apply. In this paper, a software-based multi-biometric system that classifies real and fake face samples, together with a gender classification scheme, is presented. The main objective of the paper is to improve biometric detection in a fast, non-intrusive way while maintaining the generality that is lacking in other anti-spoofing methods. The proposed method incorporates liveness detection: it extracts 30 general image quality measures from the input image and then classifies the input as a real or fake sample. The gender classification algorithm is developed in accordance with facial features, which fall into two categories: i) appearance-based and ii) geometry-based. The image quality assessment algorithm is developed and tested with the ATVS database, and the gender classification with image quality assessment is developed and tested with a database of medical students.

ICACCI Poster - 01.17 Embedded Computer Vision: Problems Faced Running OpenCV programs on the Raspberry Pi and Possible Solutions
Abhijeet Suryatali and Vidhyadhar Dharmadhikari (Shivaji University, India)

For any embedded system product development, the hardware platform is very important, and the Raspberry Pi used in this paper is one of the best hardware platforms for embedded Linux based product development. This paper gives a brief introduction to the fascinating field of embedded computer vision and offers insight into running video content analysis algorithms on the Raspberry Pi, including the use of a USB webcam and the Pi's new camera board. The Raspberry Pi is a very powerful tool with a wide range of applications. This paper is written especially for students doing Computer Vision projects with the OpenCV library on the Raspberry Pi, so that they can benefit from the solutions presented here.

Tuesday, August 11 16:00 - 17:50 (Asia/Kolkata)

WCI-Poster-01: WCI Communication Networks and Distributed Systems

Chair: Kamatchi R (Amity University, Mumbai, India)
WCI-Poster-01.1 Fuzzy based Estimation of Received Signal Strength in a Wireless Sensor Network
Meenalochani M (Kings College of Engineering, Pudukkottai, India); Selvaraj Sudha (National Institute of Technology, Tiruchirappalli, India)

The main challenge faced by any Wireless Sensor Network is network reliability, and the Received Signal Strength Indicator (RSSI) is a parameter that plays a major role in addressing this issue. RSSI can be obtained from the transceiver and is an indicative measure of the quality of the link established by a Wireless Sensor Network. It depends on many factors, including the distance between the nodes, atmospheric variations, interference from other signal sources and transmission power. Motes designed by manufacturers provide built-in code that displays this value; however, these motes are costly and application-specific, so if sensor nodes are to be designed for other specific applications, measurement of Received Signal Strength becomes questionable. Hence, an attempt is made to estimate it with a Fuzzy Logic Controller. The proposed Received Signal Strength estimation is based on the distance between the nodes, temperature, humidity and the transmitted power, and the fuzzy logic estimator is implemented in MATLAB. To show the effectiveness of this approach, experiments are conducted in a real environment using motes, and the Received Signal Strength values perceived by these motes are compared with the simulated results. The results confirm the efficiency of the proposal and also ascertain the flexibility and usability of this technique in sensor node design.
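The kind of fuzzy inference the abstract describes can be sketched as a zero-order Sugeno-style estimator with triangular memberships. This reduced sketch uses a single input (inter-node distance); the actual controller also uses temperature, humidity and transmit power, and the membership ranges and rule outputs below are illustrative assumptions:

```python
# Reduced sketch of a fuzzy RSSI estimator (illustrative parameters).
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def estimate_rssi(distance):
    # Rule base: NEAR -> strong (-50 dBm), MID -> medium (-70), FAR -> weak (-90)
    rules = [
        (tri(distance, -1, 0, 20), -50.0),   # NEAR
        (tri(distance, 10, 30, 50), -70.0),  # MID
        (tri(distance, 40, 60, 200), -90.0), # FAR
    ]
    # Weighted average of rule outputs (Sugeno defuzzification)
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else -90.0

print(estimate_rssi(5))   # close nodes -> -50.0 (strong)
print(estimate_rssi(55))  # distant nodes -> -90.0 (weak)
```

With all four inputs, the same pattern extends to a rule table over tuples of memberships, which is what a MATLAB Fuzzy Logic Controller encodes.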

WCI-Poster-01.2 A Theoretical Model for Big Data Analytics using Machine Learning Algorithms
Ananthi Sheshasaayee (Research Supervisor, India); J V N Lakshmi (SCSVMV University, India)

Big Data processing is becoming increasingly important in the modern era due to the continuous growth of the amount of data generated in various fields. Architectures for Big Data usually range across multiple machines and clusters consisting of various subsystems. To speed up processing, a unified machine learning approach is applied on top of the MapReduce framework: MapReduce, a broadly applicable programming model, is applied to different learning algorithms from the machine learning family to support business decisions. This paper presents parallel implementations of various machine learning algorithms, including K-Means and Logistic Regression, on top of the MapReduce model.
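One K-Means iteration expressed in MapReduce style can be sketched as follows; plain Python stands in for the cluster runtime, and a real deployment would run the same two functions as Hadoop/Spark map and reduce tasks:

```python
from collections import defaultdict

def mapper(point, centroids):
    """Map: emit (nearest-centroid-id, point) for a 2D point."""
    cid = min(range(len(centroids)),
              key=lambda i: (point[0] - centroids[i][0]) ** 2
                          + (point[1] - centroids[i][1]) ** 2)
    return cid, point

def reducer(points):
    """Reduce: new centroid = mean of the points assigned to it."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def kmeans_iteration(points, centroids):
    groups = defaultdict(list)      # shuffle: group by centroid id
    for p in points:
        cid, p = mapper(p, centroids)
        groups[cid].append(p)
    return {cid: reducer(pts) for cid, pts in groups.items()}

pts = [(1, 1), (1, 2), (8, 8), (9, 8)]
print(kmeans_iteration(pts, [(0, 0), (10, 10)]))
# -> {0: (1.0, 1.5), 1: (8.5, 8.0)}
```

The iteration is repeated until the centroids stop moving; the mapper is embarrassingly parallel, which is what makes the MapReduce formulation attractive for large data.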

WCI-Poster-01.3 Smart and Secure Monitoring of Industrial Environments using IoT
Shruthi Puranik and Jayashree Mohan (National Institute of Technology Karnataka, India); Chandra Sekaran (National Instittute of Technology Karnataka, India)

The Internet of Things (IoT) paradigm is giving rise to complex smart systems in which the everyday objects we encounter can be made to interact and exchange information over a wireless network. The steep surge in industrialization and the poor strategies used to control industrial pollution have degraded the quality of the environment around us, and negligence of leakage within an industry can result in massive hazards like the Bhopal Gas Tragedy. This paper proposes a secure IoT framework to smartly connect industrial surroundings. Our framework helps monitor the levels of pollutants, particulate matter and effluents released into the environment, notifying the concerned authorities whenever permissible levels are surpassed. We also smartly connect the houses in close vicinity, so that precautionary measures can be taken to evacuate people in case of unexpected leakages. The paper also discusses the various technologies and the security assessments carried out to make it a completely secure system.

WCI-Poster-01.4 A Multi-Objective Initial Virtual Machine Allocation in Clouds using Divided KD Tree
Monica Gahlawat (Gujarat Technological University, India); Priyanka Sharma (Raksha Shakti University)

Recently, increased demand for computational power has resulted in the establishment of large-scale data centers. Developments in virtualization technology have increased resource utilization across data centers, but energy-efficient resource usage remains a challenge: it has been estimated that by 2015 infrastructure and energy costs would contribute about 75%, and IT just 25%, of the overall cost of operating a data center. Various algorithms have been developed to optimize resource utilization, but as the popularity of cloud computing grows there is a need for new ideas to reduce overall power consumption and thereby minimize overall operational cost. This paper focuses on a novel multi-objective approach that tries to minimize virtual machine scheduling delay and the number of active servers, switching off the remainder for energy-efficient computing, using a multidimensional data structure, the Divided KD Tree.
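The multidimensional lookup at the heart of such an allocator can be illustrated with an ordinary k-d tree that finds the host whose free (CPU, RAM) capacity is closest to a VM request. The "Divided" partitioning scheme and the multi-objective weighting of the paper are not reproduced; this is a generic sketch with made-up host capacities:

```python
# Generic 2D k-d tree: build over (cpu, ram) points, query nearest.
def build(points, depth=0):
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid], build(points[:mid], depth + 1),
            build(points[mid + 1:], depth + 1))

def nearest(node, target, depth=0, best=None):
    if node is None:
        return best
    point, left, right = node
    d = (point[0] - target[0]) ** 2 + (point[1] - target[1]) ** 2
    if best is None or d < best[1]:
        best = (point, d)
    axis = depth % 2
    near, far = (left, right) if target[axis] < point[axis] else (right, left)
    best = nearest(near, target, depth + 1, best)
    # Only descend the far side if the splitting plane is close enough
    if (target[axis] - point[axis]) ** 2 < best[1]:
        best = nearest(far, target, depth + 1, best)
    return best

# Free (CPU cores, RAM GB) on each active host (illustrative)
hosts = [(2, 4), (8, 16), (4, 8), (16, 32)]
tree = build(hosts)
print(nearest(tree, (5, 7))[0])  # closest-fitting host: (4, 8)
```

The point of the tree is that each placement query costs O(log n) on average instead of scanning every host, which matters at data-center scale.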

WCI-Poster-01.5 Smart Mom: An architecture to monitor children at home
Sohini Roy (Arizona State University, USA); Uma Bhattacharya (Bengal Engineering & Science University, India)

Pervasive computing has emerged as a technological renaissance in which anything in our surroundings can be made to act as a computing device. With the advent of the Internet of Things (IoT), computation has extended its horizon from traditional computing devices to everything in the material world. For a long time it has been a serious problem, especially for working mothers, to take care of their children at home. Even if they leave their children at home with a governess, they are not free from concerns: whether the child is properly taken care of, whether the child is physically fit, whether there is any risk around the child at the current moment, whether the child is afraid, and so on. This paper addresses this problem and aims at minimizing the concerns of a working mother who leaves her child at home. We propose a smart home architecture named Smart Mom that can give pervasive care to children at home aged between five and fourteen years. The overall modular architecture is given here, and each module is described in detail.

WCI-Poster-01.6 Cognitive Radio for Smart Home Environment
Sudha T (APJ Abdul Kalam Technological University, Kerala, India); Kiran Selvan and Anand V (University of Calicut, India); Krishna Anilkumar (IIT Hyderabad, India); Kavya VG and Anagha Madhav KP (University of Calicut, India)

A plethora of smart home devices are currently used to enhance the resident experience. With these devices increasingly communication-enabled, there is a need for a channel to carry that communication. With Cognitive Radio technology as the backbone, spectral congestion can be avoided in the smart home environment, providing a green alternative to the existing smart home. This paper systematically investigates the novel idea of applying cognitive radio to a smart home environment and evolves a prototype to implement it.

WCI-Poster-01.7 Effect of Spam Filter on SPOT Algorithm
Anamika Anamika (Amrita Vishwa Vidyapeetham, India); Konda Padmini and Guduru Vinela (Amrita Vishwa Vidyapeetham, Bengaluru, India); Sangeeta K (Amrita Vishwa Vidyapeetham & Amrita School of Engineering, India)

A compromised machine is any computing resource whose availability, confidentiality or integrity has been negatively impacted, intentionally or unintentionally, by an untrusted source. These machines are often used to launch various security attacks such as DDoS (an attempt to make a machine resource unavailable to its intended users), spamming and identity theft (the practice of using another person's name and personal information). The most important of these attacks is spamming, wherein compromised machines are used to send large volumes of unsolicited mail called spam. Spam filters are used to detect these unwanted and unsolicited emails and prevent them from reaching the user's inbox, while the SPOT algorithm has been designed to detect whether a machine is compromised. In this paper, we compare the effect of different types of spam filters on the SPOT algorithm's ability to find compromised machines.
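In the literature, SPOT-style detection is built on a sequential probability ratio test (SPRT): each outgoing message is run through the spam filter, and a log-likelihood ratio is updated until one of two thresholds is crossed. A minimal sketch, assuming illustrative per-message spam probabilities and error bounds (the abstract does not state the paper's parameters):

```python
import math

def sprt(observations, theta0=0.2, theta1=0.8, alpha=0.01, beta=0.01):
    """observations: 1 if the spam filter flags the message, else 0.
    theta0/theta1: assumed spam probability for clean/compromised hosts."""
    upper = math.log((1 - beta) / alpha)   # cross -> accept H1: compromised
    lower = math.log(beta / (1 - alpha))   # cross -> accept H0: clean
    llr = 0.0
    for x in observations:
        if x:
            llr += math.log(theta1 / theta0)
        else:
            llr += math.log((1 - theta1) / (1 - theta0))
        if llr >= upper:
            return "compromised"
        if llr <= lower:
            return "clean"
    return "undecided"

print(sprt([1, 1, 1, 1, 1]))  # run of spam verdicts -> compromised
print(sprt([0, 0, 0, 0, 0]))  # run of ham verdicts  -> clean
```

The comparison the paper performs amounts to feeding the same message stream through different spam filters and measuring how the resulting 0/1 sequences change the test's decisions and decision times.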

WCI-Poster-01.8 Two Phase Static Analysis Technique for Android Malware Detection
Priyadarshani Kate and Sunita Dhavale (Defence Institute of Advanced Technology, India)

The growing popularity of Android-based smart phones has greatly fuelled the spread of Android malware. Further, these malwares are evolving rapidly to escape traditional signature-based detection methods; hence, there is a serious need for effective Android malware detection techniques. In this paper, we propose a two-phase static Android malware analysis scheme using Bloom filters. Phase I involves two different Bloom filters that classify a given sample into the malware or benign class based on the permission feature set only. Malicious samples that evade Phase I are further analyzed in Phase II, consisting of a Naïve Bayes classifier using a mixed permission and code-based feature set. Inclusion of the Phase I classification makes the technique computationally less intensive, while the Phase II classification improves the overall accuracy of the proposed model. Experimental results indicate both the detection accuracy and the computational efficiency of the proposed technique.
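The cheap Phase I membership check can be sketched with a minimal Bloom filter. The hash scheme, filter size and the example permission strings below are illustrative; the paper's actual feature encoding is not reproduced:

```python
import hashlib

class BloomFilter:
    def __init__(self, size=1024, k=3):
        self.size, self.k, self.bits = size, k, bytearray(size)

    def _positions(self, item):
        # k independent positions derived from salted SHA-256 digests
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item):
        return all(self.bits[p] for p in self._positions(item))

malware_sigs = BloomFilter()
malware_sigs.add("SEND_SMS|READ_CONTACTS|INTERNET")   # known-bad permission set
print("SEND_SMS|READ_CONTACTS|INTERNET" in malware_sigs)  # True
print("CAMERA|INTERNET" in malware_sigs)  # almost certainly False (false positives possible, false negatives never)
```

A Bloom filter lookup is a handful of hashes and bit tests, which is why putting it in front of the Naïve Bayes phase keeps the overall scheme computationally light.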

WCI-Poster-01.9 A Web Metric Collection and Reporting System
Ruchika Malhotra (Delhi Technological University, India); Anjali Sharma (National Physical Laboratory, India)

Web genre classification distinguishes between pages by means of features such as style and presentation layout rather than topic, improving the search results returned to the user by providing the genre class of a web page in addition to its topic. Hence, if a user is able to specify the genre of a search, like 'Help', 'FAQ' or 'Wikipedia', the chances of getting results matching his interest are high. Classifying web pages into genres is challenging because the information is semi-structured, heterogeneous and dynamic; it is therefore necessary to find appropriate features that describe a web page in the context of genre, to improve genre classification and the accuracy of search results. In this paper we propose a Metric Collection and Reporting System (MCRS) for Web Applications, an automated tool designed to collect 126 significant attributes of web pages for genre identification. MCRS collects and reports important style, presentation layout, form, linguistic, lexical and meta-content features of a web page. It collects 88 HTML tag metrics clustered in five groups, namely text formatting tags, document structure tags, external object tags, instruction tags and navigation tags, and reports thirty-eight text metrics, including punctuation metrics, to describe the lexical attributes of the web page. An NLP module has also been integrated into the system to identify linguistic properties of the web content. MCRS can be used in parallel with topic search to increase the quality of information retrieval through web genre identification.
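The tag-counting core of such a collector can be sketched with the standard-library HTMLParser. The five-group clustering below is an illustrative subset, not the tool's actual 126-attribute report:

```python
from collections import Counter
from html.parser import HTMLParser

# Illustrative subset of tag groups (the real tool clusters 88 tags).
GROUPS = {
    "text_formatting": {"b", "i", "em", "strong", "u"},
    "document_structure": {"html", "head", "body", "div", "p"},
    "navigation": {"a", "nav"},
}

class TagMetricParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.counts = Counter()

    def handle_starttag(self, tag, attrs):
        self.counts[tag] += 1          # tag names arrive lowercased

    def group_metrics(self):
        return {g: sum(self.counts[t] for t in tags)
                for g, tags in GROUPS.items()}

parser = TagMetricParser()
parser.feed("<html><body><p>Hi <b>there</b></p><a href='/faq'>FAQ</a></body></html>")
print(parser.group_metrics())
# -> {'text_formatting': 1, 'document_structure': 3, 'navigation': 1}
```

The resulting per-group counts become one slice of the feature vector handed to a genre classifier, alongside the text and linguistic metrics.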

WCI-Poster-01.10 Automatic Generation of Templates using Ontology
S Thenmalar and Geetha T. V. (Anna University, India)

Generally, Information Extraction (IE) methods use predefined templates to determine the slot fillers for obtaining relevant information; the slots define the important information necessary for the particular template. However, the effectiveness of the extracted information is decided by the predefined template. In this paper, we mine templates from a domain corpus to act as the predefined templates for IE, automatically generating them using domain ontology to identify the slots of each template. The performance of the proposed work is compared with an existing automatic template generation system and evaluated using the precision metric: automatic generation of templates using ontology produces a precision of 0.82.

WCI-Poster-01.11 Secure E-ticketing System based on Mutual Authentication using RFID
Lekshmi S Nair, Arun VS and Sijo Joseph (Amrita Viswa Vidyapeetham, India)

An e-ticketing system based on contactless smart cards such as RFID tags is an excellent alternative to paper ticketing and makes public transportation easier to handle in terms of both accuracy and efficiency. The proposed system addresses three major issues of existing systems: mutual authentication of the e-ticket with the bus terminal, combined with creation of travel records without any digital trace of the user (privacy preservation); the need for buses to maintain a permanent internet connection to the backend, which is practically difficult in real-world transportation because of network unavailability in most rural areas; and blacklisting users without contacting the backend database. Existing systems do not consider these issues, which directly influence the efficiency and security of a transportation system, and RFID tags are vulnerable because anyone can read them without authorization. We have implemented a certificate-based mutual authentication algorithm and establish a secure channel before transmitting confidential data between the RFID tag and the bus terminal; the system also offers a travel history view even though it handles users by pseudonym. We implemented this advanced e-ticketing system for next-generation public transportation using low-cost devices, a Raspberry Pi and an RFID module, making it affordable for both transportation agents and users, with prime importance given to security and preserving the privacy of the user.

Tuesday, August 11 18:00 - 19:00 (Asia/Kolkata)

PT: Plenary Talk: Recent Advances in 5G Mobile Network Technology

Prof. Dr. Zoran S. Bojkovic, University of Belgrade, Serbia
Room: MSH

While mobile traffic is growing, the need for more sophisticated broadband services is going to push the limits of current standards to provide integration between wireless technologies and higher speeds. Thus, a new generation of mobile communications, the fifth generation (5G), becomes a necessity. Future 5G wireless networks will be a combination of different enabling technologies, while the biggest challenge will be to make them all work together. In spite of the fact that the standardization of 5G specifications in standards bodies such as the Third Generation Partnership Project (3GPP), and the ratification of 5G standards by the International Telecommunication Union (ITU), are still several years away, many share the vision of targeting 2020 for the initial commercialization of 5G cellular, with drastically enhanced user experiences in several aspects including Gbps data rate support. The key enabling technologies for 5G will meet much more diversified requirements than the previous generations. For example, in contrast to 4G networks, a 5G network should achieve 1000 times the system capacity, 10 times the spectral efficiency, higher data rates (i.e., 10 Gb/s for cell-center users and 5 Gb/s for cell-edge users), 25 times the average cell throughput, a 5 times reduction in end-to-end latency, and support for 100 times more connected devices with 10 times longer battery life for low-power devices. In this talk, the concept of the road to 5G with a heterogeneous network (HetNet) architecture will be pointed out. Macro and small cells may be connected to each other, resulting in different levels of coordination across the network for mobility and interference management. The second part deals with mobility for 5G networks, with emphasis on IP mobility management based on a centralized data path. The third part covers the main drivers in research for 5G applications.
New traffic types and data services are emerging, especially machine-to-machine communication to support concepts such as the smart grid, smart homes and cities, as well as e-health; the Internet of Things (IoT), Gigabit wireless connectivity and the Tactile Internet should be mentioned, too. These applications have very diverse communication requirements. Finally, standards activities conclude the talk.

Tuesday, August 11 19:00 - 20:30 (Asia/Kolkata)

Banquet: Cultural Programme and Banquet

Wednesday, August 12

Wednesday, August 12 9:00 - 14:00 (Asia/Kolkata)

R3: Registration Starts

Room: Atrium

Wednesday, August 12 9:30 - 10:30 (Asia/Kolkata)

K6: Main Track Keynote-6: Multicore Computing: Trends and Perspectives

Dr. Alex Aravind, University of Northern British Columbia (UNBC), Canada
Room: MSH

Computer architecture is undergoing a revolutionary change, and in the future nearly all computers are expected to be "multicore". As the multicore processors are becoming the norm, concurrent programming is expected to emerge as the mainstream software development approach. This new trend poses several challenges including performance, power management, system utilization, and fair and predictable response. Such a demand is hard to meet without the cooperation from hardware, compilers, operating system, and applications. Particularly, an efficient scheduling of shared resources to the application threads is fundamentally important in assuring the above mentioned system performance. Among the problems, scheduling of cores to threads (scheduling) and coordinating threads in accessing shared resources (synchronization) are most important. In this talk, after briefly reviewing the state of the art, I will discuss some of our work relevant to multicore scheduling and synchronization.

Wednesday, August 12 10:30 - 11:30 (Asia/Kolkata)

WCI-K3: WCI Keynote -3: Graph Theoretical Modeling for Engineering Problems and Systems

Dr. Arati M. Dixit, Pune University
Room: 308

Mathematical modeling plays a significant role in solving any problem or designing a system. Some of the problems we see around can be represented as a Graph Theoretical Model, which facilitates solving them by simply proposing methodology to solve a 'graph'! As a case study Wireless and Mobile Sensor Network Reliability of Unmanned Ground Vehicles (UGV) is discussed. Taking into account the amount of data being generated to be processed, reconfigurable computing significantly improves the performance of these systems. This talk also gives insight into tools and techniques supporting reconfigurable computing.

K7: Main Track Keynote-7: Signal Analytics for Multimedia and Biomedical Applications

Dr. Sri Krishnan, Ryerson University, Toronto, Canada
Room: MSH

Most of the real world signals possess non-stationary and non-linear characteristics. Information processing and feature extraction from these signals is a challenging task. This talk will focus on four generations of signal processing algorithms developed for analysis and interpretation of signals. Recent advances in using sparse signal representation and compressive sensing of long-term signals will also be covered. The application of the extraction and classification of features from audio and speech signals, and biomedical signals (cardiac electrograms, pathological speech signals and gait rhythms) will be discussed in detail.

Wednesday, August 12 11:50 - 12:40 (Asia/Kolkata)

WCI-K4: WCI Keynote -4:

Dr. Jharna Majumdar, NMIT, Bangalore, India
Room: 308

K8: Main Track Keynote-8: Knowledge Graph Analytics

Dr. Sameep Mehta, Senior Researcher and Manager, Graph and Text Analytics, IBM Research
Room: MSH

Wednesday, August 12 12:40 - 13:30 (Asia/Kolkata)

WCI-K5: WCI Keynote-5: Bridging cyber and physical worlds - the opportunities and challenges of IoT technology

Ms. Tanuja Ganu, Co-Founder, DataGlen Inc
Room: 308

Tanuja is a Co-founder at DataGlen Inc., an early-stage startup that focuses on building cutting-edge Big Data technologies for the Internet of Things (IoT). In the past, she worked as a Research Software Engineer at IBM Research, India, where her work focused on applying innovative machine learning and optimization approaches to energy-related problems such as demand-side management techniques and embedded analytics research. Her research interests are in machine learning, data mining, embedded analytics and optimization. She has been awarded the MIT TR35 2014 global young innovators award and is serving on the judges committee for MIT TR35 2015. She has won the IBM Eminence and Excellence award for embedded analytics energy initiatives and the IBM first invention plateau award. Prior to joining IBM Research India, she led the SharePoint Center-of-Excellence team at Tata Consultancy Services Ltd. She pursued her Masters in Computer Science at the Indian Institute of Science (IISc), Bangalore, and her undergraduate studies in Computer Science at Walchand College of Engineering, Sangli.

K9: Main Track Keynote- 9: Internet of Things and Their Enabling Services: Observations, Challenges and Possible Solutions

Dr. Dhananjay Singh, Hankuk (Korea) University of Foreign Studies, Korea
Room: MSH

This talk concerns the development of IoT and the computational mechanisms that allow smart and scalable networks to share computing resources without pre-existing network infrastructure. The main reason for the current issues is the linkage to, and heavy use of, the TCP/IP protocol stack, and research is being directed towards formulating relationships between the Future Internet and distributed systems. I will therefore share my experience in constructing a high-performance network architecture (a Future Internet architecture) for IoT applications and services, which requires the utilization of distributed computing. IoT applications are expected to have a huge impact on us in the very near future, and a key question, given the different requirements of applications, is how to design a system architecture that provides both batch processing and real-time processing capacity; real-time processing handles continuous unbounded streams of data with low latency. We need to efficiently and effectively manage various IoT applications, and we should carefully examine the data generated by IoT devices to understand environment states and monitor IoT services. Among the several applications IoT researchers are considering, this talk will discuss global healthcare monitoring systems, smart transportation systems, intelligent building monitoring, the battlefield, and IoT applications for smart cities.

Wednesday, August 12 14:15 - 19:10 (Asia/Kolkata)

VisionNet-01: Communication and Signal Processing (VisionNet/SIPR)-I

Room: 501
Chairs: Mallesham Dasari (Stony Brook University, USA), T Rama Rao (SRM IST, India)
VisionNet-01.1 Low Cost and Power Software Defined Radio using Raspberry Pi for Disaster Effected Regions
Vijendra Tomar (IIT INDORE, India); Vimal Bhatia (Indian Institute of Technology Indore, India)

Radio communication is extremely critical for public safety, national safety, and emergency communication systems. During emergency situations like natural disasters, terrorist attacks and plane crashes, interoperability problems are faced by military and civil safety officials when different aid agencies have radios working in different frequency bands, waveforms and protocols. At the same time, these radios need to be low-cost, small, portable, and low-power. Software defined radio (SDR) technology helps solve these problems by implementing radios that can operate on multiple frequency bands and multiple protocols under software control. In this paper, we describe a novel SDR system that we have developed using a Raspberry Pi and a low-cost front-end solution. The system is low-cost, small, low-power and portable, and hence can be quickly deployed in disaster-affected areas to save lives and resources.

VisionNet-01.2 Sign Language Recognition using Facial Expression
Siddhartha Pratim Das (Gauhati University, India); Anjan Kumar Talukdar (Guahati University, India); Kandarpa Kumar Sarma (Gauhati University & Indian Institute of Technology Guwahati, India)

Vision-based approaches to recognition of sign languages have made spectacular advances in the last few years, including many works in the area of speech processing to convert speech to text. A vision-based approach to classifying facial gestures (lip movement, eyebrow patterns, etc.) for communication, designed especially for differently abled persons, is a less explored area. In our work, we explore approaches to classifying facial gestures to enhance effectiveness, and incorporate them into sign language and vision-based gesture recognition for precise decision making. We have designed a real-time system to detect alphabets by recognizing the lip pattern based on texture and shape. The system takes live video input and processes it in real time; the object detector of the Computer Vision Toolbox is used to locate the lips in frames extracted from the video input. Five consecutive frames are extracted so as to trace the movements made while speaking a particular syllable, and the histogram of oriented gradients (HOG) of the extracted lip image is used as the feature for recognition. The recognizer is designed using an Artificial Neural Network (ANN) to recognize four classes, viz. the lip movements formed for the four alphabets 'A', 'B', 'C' and 'D'. The entire system is modelled and tested for real-time performance with video at 10 frames per second. Experimental results show that the system provides satisfactory performance, with a recognition rate as high as 90.67%.

VisionNet-01.3 Self-Adaptive Interface for Comprehensive Authoring
Ramya Ravi (Tata Consultancy Services, India); Kamal Bijlani (Amrita E-Learning Research Lab, India); T R Sharika (Amrita Vishwa Vidyapeetham, India); Ashwini Kumar (Amrita E-Learning Research Lab, India)

The advent of MOOCs gives teachers with widely varying computer proficiency the opportunity to create online courses. However, providing an authoring environment that suits all types of teachers is a challenging task. We address this issue by categorizing teachers into four broad classes based on their computer proficiency and providing each with a customized authoring experience. Our system achieves this by collecting data about the teacher's performance during the initial authoring process, which helps the system identify the teacher's category and adjust its interface accordingly. The system also adheres to the ADDIE instructional design model. Experiments show that the proposed system performs better among teachers than the existing authoring environment.

VisionNet-01.4 Authoring in MOOCs with Wizard Based Recommendations for Improving Learner Engagement
T R Sharika (Amrita Vishwa Vidyapeetham, India); Ashwini Kumar (Amrita E-Learning Research Lab, India); Kamal Bijlani (Amrita E-Learning Research Lab, India); Ramya Ravi (Tata Consultancy Services, India)

The increasing popularity of Massive Open Online Courses has raised the demand for effective authoring tools. In this paper we propose an authoring model that helps novice and intermediate teachers create online courses with less effort and improved effectiveness for learners. The model enables teachers to author pedagogically rich online courses in limited time, and to evaluate them. The underlying pedagogical model combines Bloom's Taxonomy with the ARCS motivational design to help create motivating and engaging courses. Pedagogical compliance of the created courses is ensured through a pedagogical evaluation, making them effective for students. This model thus helps novice teachers author engaging and attention-holding courses for MOOC learners.

VisionNet-01.5 A Comparative Study of Several Array Geometries for 2D DOA Estimation
Sharareh Kiani Harchegani (University of Ontario Institute of Technology, Canada); Amir Mansour Pezeshk (Sharif University of Technology, Iran)

In this paper, a comparison between several array geometries, including planar and volume arrays, for two-dimensional Direction of Arrival (DOA) estimation using Multiple Signal Classification (MUSIC) is presented. For each geometry, various criteria are taken into consideration and a comparative study of performance is carried out. The geometries, together with their ultimate direction-finding performance, are compared on the basis of Root Mean Square Error (RMSE), ambiguity functions, and Cramer-Rao Bounds (CRB). Furthermore, the effects of phase and amplitude variations of the array elements' radiation patterns, namely Vivaldi and monopole antennas, on DOA estimation performance are studied. The advantages and drawbacks of each geometry vis-à-vis the employed DOA estimation technique are shown through a numerical comparison.
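For readers unfamiliar with MUSIC, a minimal one-dimensional sketch conveys the core noise-subspace idea the paper applies in two dimensions; the uniform linear array setup and parameters below are illustrative, not the geometries compared in the paper:

```python
import numpy as np

def music_spectrum(X, n_src, d=0.5, angles=np.linspace(-90, 90, 361)):
    """1-D MUSIC pseudospectrum for a uniform linear array (ULA).

    X: (sensors, snapshots) complex data; d: spacing in wavelengths.
    The peak of P marks the estimated direction of arrival.
    """
    M = X.shape[0]
    R = X @ X.conj().T / X.shape[1]               # sample covariance
    w, V = np.linalg.eigh(R)                      # ascending eigenvalues
    En = V[:, :M - n_src]                         # noise subspace
    P = np.empty(angles.size)
    for i, th in enumerate(np.radians(angles)):
        a = np.exp(-2j * np.pi * d * np.arange(M) * np.sin(th))
        P[i] = 1.0 / (np.linalg.norm(En.conj().T @ a) ** 2)
    return angles, P

# One source at +20 degrees, 8-element ULA, 200 snapshots, light noise
rng = np.random.default_rng(0)
M, N, theta = 8, 200, np.radians(20.0)
a = np.exp(-2j * np.pi * 0.5 * np.arange(M) * np.sin(theta))
s = rng.standard_normal(N) + 1j * rng.standard_normal(N)
X = np.outer(a, s) + 0.01 * (rng.standard_normal((M, N))
                             + 1j * rng.standard_normal((M, N)))
ang, P = music_spectrum(X, n_src=1)
est = float(ang[np.argmax(P)])
```

The paper's comparison then asks how this estimator behaves, per geometry, in terms of RMSE, ambiguity, and the CRB.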

VisionNet-01.6 Classification of Heart Sound Signals Using Multi-modal Features
Mandeep Singh (Thapar Institute of Engineering & Technology, India); Simarjot Kaur Randhawa (Dr BR Ambedkar National Institute of Technology Jalandhar, India)

Cardiac auscultation is the technique of listening to heart sounds. Any abnormality in the heart sound may indicate some problem in the heart. In this paper, the phonocardiogram (PCG) signal, i.e. the digital recording of the heart sounds, has been studied and classified into three classes, namely normal signal, systolic murmur signal, and diastolic murmur signal. A total of 28 features from different domains were extracted and then reduced to the 7 most significant features using a feature reduction technique. The selected features were used to classify the PCG signal into the three classes using state-of-the-art classifiers. The classifiers used in this study were k-NN (k Nearest Neighbour), fuzzy k-NN, and Artificial Neural Network (ANN). The highest accuracy of 99.6% was achieved using both k-NN and fuzzy k-NN as classifiers.
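Of the three classifiers compared, plain k-NN is the simplest; a minimal majority-vote sketch (with toy two-dimensional features standing in for the seven selected PCG features) might look like:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Plain k-NN majority vote over Euclidean distances.

    A minimal numpy sketch; the paper also uses fuzzy k-NN and an ANN.
    """
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)    # distances to all points
        nearest = y_train[np.argsort(d)[:k]]       # labels of the k closest
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])      # majority vote
    return np.array(preds)

# Toy 2-feature data: class 0 near the origin, class 1 near (5, 5)
Xtr = np.array([[0, 0], [0.2, 0.1], [5, 5], [5.1, 4.9], [0.1, 0.3], [4.8, 5.2]])
ytr = np.array([0, 0, 1, 1, 0, 1])
yhat = knn_predict(Xtr, ytr, np.array([[0.1, 0.1], [5.0, 5.1]]))
```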

VisionNet-01.7 Secure Communication over Trellis using Fundamental Cut-set and Fundamental Circuits
Selvakumar R (VIT University, India); Pavan Kumar Chandrahas (Indian Institute of Information Technology Dharwad, India); Raghunadh Bhattar (SAC, ISRO, Ahmedabad, India)

Trellis representation of codes helps in analyzing and understanding their nature. A trellis is a connected graph in which all paths from the 'root' vertex to the 'goal' vertex form the codewords. Efficient encoding and decoding algorithms exist for communication over a trellis. In a conventional communication system, the trellis is constructed for the encoded message at the sender, and an algorithm such as Viterbi is used to decode the message at the receiver. Any receiver with such a decoding mechanism can decode the message, which gives an intruder the chance to obtain it and makes the communication insecure. In this paper we propose a reliable and secure communication system that provides reliability through error correction techniques and security through a graph-based cryptosystem, built on fundamental cut-sets and fundamental circuits. With such a system, an intruder's access to the information can be prevented, and any errors that occur during transmission over a noisy channel can be corrected. We use kernel codes and their trellis representation to demonstrate the construction of a reliable and secure cryptosystem.

VisionNet-01.8 The Relationship between the WHT and the CIT
Rajesh C Roy (Adi Shankara Institute of Engineering and Technology, India)

The Walsh-Hadamard Transform is an important signal transform that converts real input to real output, and it has found applications in many signal processing tasks. The Cosine Integer Transform is a recently developed transform and it shares the real-to-real conversion property of the Walsh-Hadamard Transform. Although these two transforms have different kernels, there exist relations between the two transforms. This paper attempts to study and mathematically formalize these relations.
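The WHT side of this relationship is easy to make concrete; a minimal fast Walsh-Hadamard transform (unnormalized, natural ordering assumed) is:

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform (unnormalized, natural order).

    Demonstrates the real-to-real property discussed in the paper;
    input length must be a power of two.
    """
    x = np.asarray(x, dtype=float).copy()
    n = x.size
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b      # butterfly stage
        h *= 2
    return x

y = fwht([1.0, 0.0, 1.0, 0.0])
```

Applying the transform twice returns n times the input, reflecting its self-inverse (up to scale) character.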

VisionNet-01.9 A Robust Algorithm for Speech Polarity Detection Using Epochs and Hilbert Phase Information
Govind D (Koneru Lakshmaiah Education Foundations, India); Hisham PM and Pravena D (Amrita Vishwa Vidyapeetham, India)

The aim of the work presented in this paper is to determine speech polarity using knowledge of epochs and the cosine phase information derived from the complex analytic representation of the original speech signal. The work is motivated by the observation that the cosine phase of speech around Hilbert envelope (HE) peaks varies according to polarity changes. As the HE peaks represent approximate epoch locations, phase analysis is performed using algorithms that provide better resolution and accuracy of the estimated epochs. In the present work, accurate epoch locations are first estimated, and only significant HE peaks in the near vicinity of the epoch locations are selected for phase analysis. The cosine phase of the speech signal is then computed as the ratio of the signal to its HE. The trend in the cosine phase around the selected significant HE peaks is observed to vary according to the speech polarity. The proposed polarity detection algorithm shows better results than the state-of-the-art residual-skewness-based speech polarity detection (RESKEW) method. The improvement in the polarity detection rates confirms that significant polarity information is present in the excitation source characteristics around epoch locations in speech. The polarity detection rates are also found to be less affected at different levels of added noise, which indicates the effectiveness of the approach against noise. Moreover, based on an analysis of mean execution time, the proposed polarity detection algorithm is confirmed to be 10 times faster than the RESKEW algorithm.
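The cosine phase construction the paper relies on can be sketched as follows; the analytic signal is computed via the FFT (equivalent to a Hilbert transform), and the test signal is illustrative:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT (one-sided spectrum doubling)."""
    x = np.asarray(x, dtype=float)
    N = x.size
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    if N % 2 == 0:
        h[N // 2] = 1.0
        h[1:N // 2] = 2.0
    else:
        h[1:(N + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def cosine_phase(x):
    """Cosine phase = signal divided by its Hilbert envelope."""
    z = analytic_signal(x)
    he = np.abs(z) + 1e-12                         # Hilbert envelope
    return x / he

t = np.arange(0, 1, 1 / 1000.0)                    # 1 s at 1 kHz
x = np.sin(2 * np.pi * 10 * t)                     # toy "voiced" signal
cp = cosine_phase(x)
he = np.abs(analytic_signal(x))
```

For a pure sinusoid the envelope is flat, so the cosine phase reproduces the waveform itself; in speech, its trend around HE peaks carries the polarity cue the paper exploits.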

VisionNet-01.10 Preliminary Studies Towards Improving the Isolated Digit Recognition Performance of Dysarthric Speech by Prosodic Analysis
Vishakh A (Amrita Vishwa Vidyapeetham, India); Govind D (Koneru Lakshmaiah Education Foundations, India); Pravena D (Amrita Vishwa Vidyapeetham, India)

The objective of the present work is to improve the digit recognition performance for speech signals affected by dysarthria. The paper presents preliminary studies performed on the universal access dysarthric speech recognition (UADSR) database. The work is organized into three stages. First, the degradation in digit recognition performance is demonstrated by testing dysarthric digits against acoustic models built from digit samples spoken by control speakers. Second, prosodic analysis is performed on the dysarthric isolated digits available in the database. Finally, the prosodic parameters of the dysarthric speech are manipulated to match the normal speech used to build the acoustic models. Based on the experiments conducted, manipulation of duration parameters using the state-of-the-art time-domain pitch synchronous overlap-add (TD-PSOLA) method is observed to significantly improve the recognition rates, in contrast to the other prosodic parameters. The improvement in word recognition rates is also found to be in accordance with the intelligibility of the dysarthric speakers, which demonstrates the significance of using prosodic scaling factors customized to the intelligibility level of each subject.

VisionNet-01.11 Cooperative Spectrum Sensing Improvement Based on Fuzzy Logic System
Ammar Abdul-Hamed (University of Mosul, Iraq); Ahmed Reja (Jamia Millia Islamia, India); Arkan Hussein (Tikrit University & Jamia Millia Islamia, Iraq); Mirza Tariq Beg (Jamia Millia Islamia New Delhi, India); Mainuddin Mainuddin (Engineering, Jamia Millia Islamia New Delhi, India)

To attenuate channel fading effects, a spectrum sensing policy for coordinating cooperative sensing is proposed by adding a Hopping Sequence (HS) module to the detectors, based on a fuzzy logic system (FLS). The ability of a secondary user (SU) to utilize a frequency slot for transmission in an idle channel has a significant effect on spectrum efficiency; hence, the SU needs to sense the relevant spectrum in order to classify a licensed frequency band as occupied or vacant. To implement the HS module, two hopping methods are proposed: Random Hopping (RA-H) and Sequential Hopping (SE-H). An enhancement in detection probability over conventional cooperative sensing has been achieved.
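As a baseline for the kind of conventional hard-combining the FLS scheme improves on, OR-rule fusion of local detections can be sketched; the local detection probability and user count below are illustrative:

```python
def cooperative_or_rule(pd_local, n_users):
    """Detection probability when a fusion center ORs n local decisions.

    A conventional-cooperative-sensing baseline, not the paper's
    FLS + hopping scheme: the band is declared occupied if any of the
    n_users secondary users detects the primary signal.
    """
    return 1.0 - (1.0 - pd_local) ** n_users

q1 = cooperative_or_rule(0.6, 1)   # single user: just the local Pd
q5 = cooperative_or_rule(0.6, 5)   # five cooperating users
```

Cooperation raises the overall detection probability even when each local detector is weak, which is the effect the paper's fuzzy fusion further enhances.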

VisionNet-01.12 Comparative Study of Digital Holography Reconstruction Methods
Indranil Acharya (Vellore Institute of Technology, India)

Digital holograms recorded on CCD sensors are subjected to numerical reconstruction to calculate amplitude and phase using numerical reconstruction algorithms. This reconstruction process offers high flexibility and opens new possibilities in domains such as pattern recognition, 3D microscopy, and encryption. In this paper, a comparative analysis of different holography reconstruction methods is presented.

VisionNet-01.13 The Application of a Pulse Altimeter for Linear Reference Detection
Artem Sorokin, Vladimir Vazhenin and Lubov Lesnaya (Ural Federal University, Russia)

The algorithm for linear reference detection is based on statistical processing of the reflected pulse altimeter signal. It is assumed that a linear reference is formed by two underlying surfaces, and that the statistical distribution of pulse amplitude differs depending on the type of underlying surface. These differences allow us to decide which type of surface reflected the signal; the method of maximum a posteriori probability is used to make the decision. This paper shows how to determine the position of a linear reference and the accuracy of this process. The minimum of the maximum a posteriori probability is closely connected with the linear reference position, and we use this fact to detect it. The algorithm enables autonomous navigation. Doppler sharpening is used to increase the accuracy of linear reference position detection and to increase the number of discriminated surfaces.

VisionNet-01.14 Fitness function based sensor degradation estimation using H∞ filter
Sirshendu Arosh (Indian Institute of Technology, Bombay, India); Surya Prakash and Soumitra Keshari Nayak (Indian Institute of Technology Bombay, India)

In practical scenarios, sensors are used for health monitoring, condition monitoring of systems, weather, etc., but it is highly probable that a sensor itself degrades. This degradation causes erroneous true-negative results. Therefore, to improve reliability and eliminate incorrect data, it is important to gauge the health of a sensor and periodically determine its degradation level so that it can be replaced within the stipulated time. This paper introduces an efficient method to estimate the level of sensor degradation using a fitness function, which indicates how fit the sensor is for continued use, and to estimate the future values of that fitness function. H∞ filter based estimation theory is used to predict the future values of the sensors' fitness functions. To maintain the desired value and correct output, an Automatic Gain Controller circuit is described which helps suggest when to replace a degraded sensor. The results are presented along with the estimated fitness function.

VisionNet-01.15 Selection of Optimal Denoising Filter using Quality Assessment for Potentially Lethal Optical Wound images
Dhiraj Manohar Dhane and Maitreya Maity (Indian Institute of Technology Kharagpur, India); Arun Achar and Chittaranjan Bar (Midnapore Medical College Vidyasagar Road Paschim Medinipur, India); Chandan Chakraborty (IIT Kharagpur, India)

The objective of this paper is to select the best filter for pre-processing camera-captured digital wound images. A digital wound image gives the most essential information about the wound: its size, status, tissue composition, and rate of healing. These images are often corrupted by impulse or random noise during capture, degrading sharpness, chrominance, and luminance. Applying filtering schemes, both linear and non-linear, suppresses noise and improves image quality. In this paper, a comparative study of five filters, using mathematical morphology operations to remove impulse/random noise, has been performed. The five filters were applied to seventy-five randomly selected wound images from a purpose-built image database as well as an online chronic wound image database. To assess the quality of the filtered images, seven quality measures were applied. The local first order statistics (LFOS) filter proved the best and most efficient in terms of reduced mean square error (MSE) and high peak signal-to-noise ratio (PSNR) between the original and filtered images.
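Two of the quality measures named here, MSE and PSNR, can be stated compactly (an 8-bit peak value of 255 is assumed):

```python
import numpy as np

def mse(a, b):
    """Mean squared error between original and filtered images."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.mean((a - b) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means a better filter."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

orig = np.full((4, 4), 100.0)
filt = orig + 10.0                                  # uniform error of 10
```

A good denoising filter lowers MSE and raises PSNR relative to the reference image, which is how the LFOS filter is ranked best.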

Wednesday, August 12 14:30 - 19:00 (Asia/Kolkata)

S20: S20-ICACCI Industry Track

Room: 302
S20.1 Morphology Based Radon Processed Neural Network for Transmission Line Fault Detection
Vinayesh Sulochana (Coventry, India); Anish Francis (VIT, KSEB)

A novel method for classifying transmission line faults is presented in this paper. Mathematical morphology is applied along with the Radon transform to extract the features needed for fault classification, and the features are used to train a radial basis function network for fault detection. In the present work, an HVDC transmission line divided into three zones is taken as the test case. Detailed simulation results are shown. The methodology is fast, and the use of the morphology operator along with the Radon transform reduces the computational complexity compared with other conventional fault detection methods.

S20.2 Machine Learning for Seizure Prediction: A Revamped Approach
Saikumar Allaka and Lavi Nigam (Quadratyx); Deepthi Karnam and Sreerama Murthy (Quadratyx, India); Petro Fedorovych and Vasu Kalidindi (Nfinity, India)

The occurrence of multiple seizures is a common phenomenon in patients with epilepsy, a neurological disorder that affects approximately 50 million people worldwide. Seizure prediction is widely acknowledged as an important problem in the neurological domain, as it holds promise for improving the quality of life of patients with epilepsy. A noticeable number of clinical studies have shown evidence of symptoms (patterns) before seizures, and there is consequently substantial research on predicting them. However, very little existing literature systematically illustrates the steps of machine learning for seizure prediction; limited training data and class imbalance are among the challenges. In this paper, we propose a novel way to overcome these challenges and present improved results for various classification algorithms. An average improvement in accuracy of 21.71% is attained using our approach.
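One standard remedy for the class-imbalance challenge mentioned above is random oversampling of the minority class; the paper's actual technique is not specified here, so this is only a generic sketch:

```python
import numpy as np

def oversample_minority(X, y, seed=0):
    """Randomly oversample minority classes until all classes are balanced.

    A common baseline remedy for class imbalance (seizure segments are
    far rarer than non-seizure segments in EEG data).
    """
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    Xs, ys = [X], [y]
    for c, n in zip(classes, counts):
        if n < n_max:
            # draw (n_max - n) extra samples of class c, with replacement
            idx = rng.choice(np.flatnonzero(y == c), size=n_max - n, replace=True)
            Xs.append(X[idx])
            ys.append(y[idx])
    return np.concatenate(Xs), np.concatenate(ys)

X = np.arange(10).reshape(5, 2)
y = np.array([0, 0, 0, 0, 1])                       # 4:1 imbalance
Xb, yb = oversample_minority(X, y)
```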

S20.3 Acute Ischemic Stroke Detection Using Wavelet Based Fusion Of CT And MRI Images
Praveen R Mirajkar and Arun Vikas Singh (PES Institute of Technology, India); Kishan Bhagwat (SS Institute Of Medical Sciences, India); Ashalatha E (Bapuji Institute of Engineering & Technology, India)

Ischemic stroke is a condition that causes the death of brain cells due to lack of blood supply, and hence stops normal brain function. Owing to the limited academic study of ischemic stroke detection, the success rate of detecting stroke in its initial period using only CT images is low. Fusion of CT and DW-MRI images creates a composite image that provides more information than either input modality alone; diffusion-weighted MRI is widely used for the detection of acute ischemic stroke compared to CT. In this paper, images of both modalities, CT and DW-MRI, are used. The proposed algorithm consists of four phases: pre-processing of both images is the first phase; finding the CT image equivalent to an input pathological MRI image for fusion is the second; registration and fusion of the CT and MRI images is the third; and segmentation of the stroke lesion from the fused image is the fourth. The performance of the fusion is evaluated using Peak Signal-to-Noise Ratio (PSNR) and Root Mean Square Error (RMSE).
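A pixel-wise averaging rule makes the fusion idea concrete, though the paper performs fusion in the wavelet domain; RMSE, one of the paper's two metrics, is also shown (toy 2x2 images, with registration assumed already done):

```python
import numpy as np

def fuse_average(ct, mri):
    """Pixel-wise averaging fusion of two registered images.

    A spatial-domain baseline only; the paper fuses wavelet
    coefficients of the CT and DW-MRI images instead.
    """
    return (np.asarray(ct, float) + np.asarray(mri, float)) / 2.0

def rmse(a, b):
    """Root mean square error, one of the paper's evaluation metrics."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.sqrt(np.mean((a - b) ** 2)))

ct = np.array([[10.0, 20.0], [30.0, 40.0]])
mri = np.array([[20.0, 10.0], [40.0, 30.0]])
fused = fuse_average(ct, mri)
```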

S20.4 Quantitative Assessment of Applications for Cloud Bursting
Krishna Kumar Gopinathan, Raghu P Pushpakath and Sachin Kanoth Madakkara (US Technology International Pvt. Ltd.)

Cloud bursting is a deployment model in cloud computing that allows organizations to temporarily utilize additional computing resources from the public cloud to run certain heavy-duty applications. However, not all applications in an organization are suitable for cloud bursting, for several reasons: cloud bursting of applications is constrained by the way those applications are built and deployed inside an organization's internal IT infrastructure, and while certain applications may allow cloud bursting from an architectural perspective, they may not yield the anticipated business outcomes. Thus, without a formal assessment of applications, organizations cannot identify the ones that could produce the maximum business benefit from the cloud bursting model. In this paper, we discuss an assessment framework that can be used to evaluate applications for their suitability for cloud bursting. The framework takes a parameter-based approach in which an application's cloud bursting suitability is measured using a number of relevant parameters. We also present three case studies in which we applied this framework and scored the applications using a selected set of assessment parameters.
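A parameter-based score of this kind reduces to a weighted sum; the parameter names and weights below are hypothetical, not the framework's actual assessment set:

```python
def bursting_score(params, weights):
    """Weighted-sum suitability score, normalized to [0, 1].

    params: hypothetical per-parameter ratings in [0, 1];
    weights: hypothetical relative importances. The paper's framework
    defines its own parameters and scoring.
    """
    total = sum(weights.values())
    return sum(params[k] * w for k, w in weights.items()) / total

# Hypothetical assessment of one application
weights = {"statelessness": 3, "data_locality": 2, "licensing": 1}
app = {"statelessness": 0.9, "data_locality": 0.5, "licensing": 1.0}
score = bursting_score(app, weights)
```

Applications are then ranked by score, and only those above a chosen threshold would be considered for bursting.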

S20.5 Mobile Cloud Integration for Industrial data Interchange
Pinku Hazarika (Siemens Technology & Services Pvt. Ltd, India); Sanath Shenoy (Siemens, India); Seshubabu Tolety (Siemens Technology and Services Pvt. Ltd., India); Naresh Kalekar (Siemens, India)

Software with mobile-cloud integration is no longer a luxury but a necessity given modern technology trends. The industrial automation domain has been rather conservative in adopting new technologies; however, efforts are being made to connect industrial automation software with mobile and cloud platforms. An established protocol, OPC UA, is being experimented with for this purpose. OPC UA is a leading industrial M2M communication protocol that enables interoperable data exchange between applications from different vendors across different hardware platforms and operating systems. A systematic attempt to adapt the protocol to handheld devices such as mobile phones and tablets, and to cloud platforms, opens up new possibilities in the industrial automation domain and forms the first step towards IoT integration with industrial devices. The opportunities are not limited to controlling plants and visualizing data through smart devices: cloud platforms additionally facilitate the acquisition and processing of industrial data, and the cloud's infrastructure capabilities can enable seamless communication with OPC UA conformant devices and applications across different geographies. This paper explains a use case of successful mobile-cloud integration with HMI panels, and captures statistics on different behavioral characteristics observed after integration with mobile and cloud.

S20.6 Boiler Efficiency Estimation from Hydrogen Content in Fuel
Chaya Lakshmi (Basaveshwar Engineering College, Vishvesvsaraya Technological University, Belgaum, Karnataka, India); Ds Jangamshetti (Basaveshwar Engineering College, India); Savita Sonoli (Vishvesvsaraya Technological University, Belgaum, Karnataka, India)

More than 45% of the world's electricity is generated in thermal power plants using coal as fuel. Boilers and turbines are the most basic components of thermal power plants, and efficient utilization of the heat energy produced from the chemical composition of the fuel ensures enhanced electricity production. Performance degradation of a boiler is mainly due to losses. This paper presents an innovative method to predict boiler efficiency using the DASYLab software suite. The prediction is based on the coefficient of correlation between boiler efficiency and the boiler loss due to the hydrogen content of the fuel; polynomial regression is used to obtain the line of best fit. Since the loss due to hydrogen content in the fuel has a strong correlation with boiler efficiency, the method uses this loss and thereby simplifies the steps in finding boiler efficiency. The hydrogen content of the fuel, the flue gas temperature, the ambient temperature, and the gross calorific value of the fuel are used to find the boiler efficiency. The maximum error in predicting the boiler efficiency is 1.52%, which attests to the validity of the proposed method.
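The regression step can be sketched with numpy's polynomial fitting; the (hydrogen-loss, efficiency) pairs below are synthetic illustrations, not the paper's plant data:

```python
import numpy as np

# Synthetic (hydrogen-loss %, boiler-efficiency %) pairs, illustrating
# the negative correlation the paper exploits.
h_loss = np.array([3.0, 3.5, 4.0, 4.5, 5.0])
eff = np.array([88.0, 87.1, 86.0, 85.2, 84.1])

# Line of best fit via polynomial regression (degree 1 shown;
# the paper's chosen degree may differ)
coeffs = np.polyfit(h_loss, eff, deg=1)
predict = np.poly1d(coeffs)

# Predict efficiency for an unseen hydrogen-loss value
e_pred = float(predict(4.2))
```

Once the fit is established, only the hydrogen-loss term (derived from fuel hydrogen content, flue gas and ambient temperatures, and gross calorific value) is needed to estimate efficiency.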

S20.7 Globally Accessible Machine Automation Using Raspberry Pi
Vemula Sandeep, Kureti Lalith Gopal, Sithugari Naveen and Amudhan Arumugam (National Institute of Technology Puducherry, India); Lakshmi Sutha Kumar (National Institute of Technology Puducherry, Karaikal, India)

In the present world, many high-tech appliances in our homes make our lives easier, and it is often necessary to control these appliances remotely. Our proposed system enables a user to control the electronic appliances in their home with high mobility and security. A set of switches is controlled over the internet using a microcontroller board: a Raspberry Pi obtains user input from a website that is accessed with a username and password. The customized, user-friendly website has several buttons to control the appliances. The Raspberry Pi is located in a room and connected to all the electronic appliances in the home through electromagnetic relays; it can be controlled from any distant place with the help of the Weaved cloud service. The WebIOPi framework provides a platform to interact with the Raspberry Pi's General Purpose Input/Output pins. The Raspberry Pi then either passes or stops current through the electromagnetic relay connected to the intended switch, closing or opening the circuit so that the appliance runs or is switched off. The designed automation system thus allows appliances to be controlled from a distance in a user-friendly way.

S20.8 Bus Bandwidth Monitoring, Prediction and Control
Nitin Chaudhary (University of Southern California, Los Angeles, USA); Pallavi Thummala (Samsung Research Institute Bangalore & Samsung Electronics, India); Zaheer Sheriff (Samsung Research Institute Bangalore - India, India)

With the introduction of dynamic voltage/frequency scaling (DVFS), the supply voltage to the various components of a System-on-Chip (SoC) can be controlled depending on system load, tracking the power-performance curve for the best possible efficiency. Although dynamic power management solutions such as DVFS for the CPU and GPU are quite common and implemented in almost every embedded device, the same is unfortunately not true of the bus fabric interconnecting the different subsystems of an SoC. Due to the inherently shared nature of the bus and hardware complexities, it is either operated at a fixed frequency targeting the maximum-load scenario, or at a varied frequency using a static lookup table based on processor frequencies. In this paper we analyze the shortcomings of the latter approach and address them by proposing real-time bandwidth prediction for the bus connected to the processor cores, using Hardware Performance Counters (HPCs) and the block layer of the operating system.
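As one simple form such a predictor could take, an exponentially weighted moving average over per-interval counter readings is sketched below; the paper's actual prediction model built on HPCs is not specified here:

```python
def ewma_predict(samples, alpha=0.3):
    """Exponentially weighted moving average over bandwidth samples.

    samples: hypothetical per-interval byte counts read from hardware
    performance counters; the returned value is the prediction for the
    next interval. A generic sketch, not the paper's predictor.
    """
    est = samples[0]
    for s in samples[1:]:
        est = alpha * s + (1 - alpha) * est   # blend new sample into estimate
    return est

# Steady traffic followed by a burst: the estimate moves toward the burst
pred = ewma_predict([100.0, 100.0, 100.0, 200.0])
```

The predicted demand would then drive the bus frequency choice instead of a static lookup table.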

S20.9 Enhancement of Unambiguous DOA Estimation for Phase Comparison Monopulse Radar
Vandana R (VTU, India)

This paper describes a proposed method for enhancing the unambiguous DOA estimation range using the difference in angle between the delta and sum channels of Phase Comparison Monopulse (PCM) for broad beamwidths in radar applications. The proposed method requires neither hardware changes nor any complex computation or processing compared to the methods discussed in Section V. The paper also highlights the advantage of the proposed method over conventional DOA estimation, which uses the monopulse ratio or the phase difference between the two receive channels. The proposed method has been simulated for different baseline separations, different beamwidths, and different DOAs. It gives an unambiguous estimate of the direction of arrival of the target for baseline separations up to one wavelength, rather than the half wavelength that is often the limit for high-precision measurement. The same method can be applied to other antennas with minor modifications.
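The conventional phase-comparison relation the paper improves on is theta = arcsin(dphi * lambda / (2 * pi * d)), which is unambiguous only while |dphi| <= pi; a small numeric check of the conventional formula:

```python
import numpy as np

def phase_monopulse_doa(dphi, d, wavelength=1.0):
    """Conventional phase-comparison DOA from inter-channel phase (degrees).

    dphi: measured phase difference (rad), d: baseline in wavelengths.
    Unambiguous only for |dphi| <= pi, i.e. d <= lambda/2 over the full
    field of view, which is the limitation the paper's method relaxes.
    """
    return np.degrees(np.arcsin(dphi * wavelength / (2.0 * np.pi * d)))

# Half-wavelength baseline: a target at 30 deg gives dphi = pi * sin(30 deg)
dphi = np.pi * np.sin(np.radians(30.0))
theta = float(phase_monopulse_doa(dphi, d=0.5))
```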

S20.10 An Approach for Feature-Level Bug Prediction using Test Cases
Prateek Anand (Radisys Corporation and Member (Affiliate), IEEE Computer Society & Radisys Corporation, Bangalore, India)

Bug prediction approaches have traditionally generated a lot of interest, primarily due to potential savings in cost, manpower, and reputation. Consequently, a number of approaches have been suggested based on code metrics, process metrics, previous defects, testing metrics, and multivariate models. With respect to granularity, these approaches predict at the class, file, package, or binary level. This paper presents a novel approach to bug prediction that utilizes test case execution paths through the code in release i and ranks the software functionalities or features in decreasing order of expected future defects in release (i+1) due to code churn. The approach derives its importance from two facts: 1) the prediction is done at the feature level, instead of the class, file, package, or binary level, since it is an accepted fact in software systems that certain features are more critical than others and faulty behavior of these features can jeopardize the entire system; 2) the approach is non-intrusive, in the sense that it can be easily integrated into an existing software development life cycle without significant effort. Due to the unavailability of feature-based test cases and the relatively small number of features in open source projects, which are necessary requirements of the study, case studies were performed on twelve releases of four industrial projects. Additionally, the predictive accuracy was evaluated on eight releases of these four projects using normalized discounted cumulative gain. These studies indicate the validity of the approach and demonstrate an average normalized discounted cumulative gain of 0.684 for predicting the top 10 faulty features.
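The evaluation metric, normalized discounted cumulative gain, can be computed as follows (log2 discounts assumed, as is conventional):

```python
import numpy as np

def ndcg_at_k(relevances, k):
    """Normalized discounted cumulative gain at rank k.

    relevances: true gains (e.g. actual defect counts) listed in the
    order the model ranked the features. NDCG = DCG / ideal DCG, so a
    perfect ranking scores 1.0.
    """
    rel = np.asarray(relevances, dtype=float)[:k]
    discounts = 1.0 / np.log2(np.arange(2, rel.size + 2))   # 1/log2(rank+1)
    dcg = float(np.sum(rel * discounts))
    ideal = np.sort(np.asarray(relevances, dtype=float))[::-1][:k]
    idcg = float(np.sum(ideal * discounts))
    return dcg / idcg if idcg > 0 else 0.0

perfect = ndcg_at_k([3, 2, 1, 0], k=4)   # already sorted: ideal ranking
swapped = ndcg_at_k([2, 3, 1, 0], k=4)   # top two features swapped
```

Minor ranking mistakes near the top reduce NDCG only slightly, which makes it a natural metric for "top-N faulty features" prediction.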

S20.11 Design and Implementation of Sample and Hold Circuit in 180nm CMOS Technology
Prakruthi T G and Siva Yellampalli (Visvesvaraya Technological University, India)

This paper presents the design of a sample and hold circuit with reduced error in hold mode, as well as reduced acquisition time and aperture jitter. Charge injection and clock feedthrough are also considerably reduced, by up to 60%. The proposed architecture is designed in 180nm CMOS technology using Cadence Virtuoso.

S20.12 Hardware-Software Co-design of Elliptic Curve Digital Signature Algorithm over binary fields
Bhanu Panjwani and Deval Mehta (Indian Space Research Organization, India)

The elliptic curve digital signature algorithm (ECDSA) is the elliptic curve analogue of the digital signature algorithm. This paper presents an implementation of ECDSA on NIST-recommended elliptic curves over a binary field of size 163 bits. The work involved implementing the different modules of ECDSA on a reconfigurable hardware platform (Xilinx xc6vlx240T-1ff1156). Private key generation and binary weight calculation (used in scalar multiplication) are done in software on MicroBlaze (Xilinx's soft core). The private key, along with the other global parameters of ECDSA, is passed from MicroBlaze to the programmable logic of the FPGA, where the final signature generation and verification are performed. Two implementations have been produced, based on different word sizes in Montgomery multiplication over binary fields. The first requires 0.367 ms and 11040 slices for signature generation and 0.393 ms and 12846 slices for signature verification at a clock frequency of 100 MHz. The second requires 0.615 ms and 8773 slices for signature generation and 0.672 ms and 9967 slices for signature verification at the same clock frequency. These implementations are faster than other implementations reported in the literature for binary curves.
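The field arithmetic being accelerated is multiplication in GF(2^163) modulo the NIST B-163 polynomial x^163 + x^7 + x^6 + x^3 + 1; a bit-serial software sketch (the hardware uses word-level Montgomery multiplication instead) is:

```python
def gf2m_mul(a, b, m=163, r=(1 << 7) | (1 << 6) | (1 << 3) | 1):
    """Bit-serial (shift-and-add) multiplication in GF(2^m).

    r encodes the low-order terms of the reduction polynomial
    x^163 + x^7 + x^6 + x^3 + 1, so whenever a shift overflows past
    x^(m-1) we substitute x^m = x^7 + x^6 + x^3 + 1. This Python sketch
    shows only the arithmetic the paper's hardware accelerates.
    """
    top = 1 << m
    acc = 0
    while b:
        if b & 1:
            acc ^= a          # GF(2) addition is XOR
        b >>= 1
        a <<= 1               # multiply a by x
        if a & top:
            a = (a ^ top) ^ r # reduce modulo the field polynomial
    return acc

sq = gf2m_mul(2, 2)           # x * x = x^2
```

In the signature flow, this multiplication sits inside the point operations of scalar multiplication, which dominate both generation and verification time.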

S20.13 Interfacing ICRH DAC system with WEB
Ramesh Joshi (Institute for Plasma Research, India)

HTML5 [1] is a recent evolution in web technologies that enables users to create web-based control system user interfaces (UIs) that work across browsers and devices. Control System Studio (CSS) Operator Interfaces (OPI) allow users to create interfaces in a drag-and-drop fashion, and interfaces developed in CSS BOY [2] can be displayed seamlessly in web browsers, without any modification of the original OPI file, using WebOPI [3]. WebOPI was implemented by SNS as a web-based system using Ajax (asynchronous JavaScript and XML) with the Experimental Physics and Industrial Control System (EPICS) [4]; it uses generic Python/JavaScript and a generic communication mechanism between the web browser and web server. This interface uses the EPICS channel access gateway together with the OPI, which enables monitoring and control of EPICS process variables through different widgets. An Apache Tomcat web server has been used to deploy the application. A programmable logic controller (PLC) based data acquisition and control (DAC) system has been developed for the 45.6 MHz, 100 kW Ion Cyclotron Resonance Heating (ICRH) system using EPICS and MODBUS. It can monitor and control 32 analog inputs, 16 digital inputs, 16 analog outputs, and 16 digital outputs using the MODBUS protocol. Several Python scripts, both embedded and external, have been used in the design of the control system software. WebOPI provides a seamless interface between the local CSS OPI and EPICS process variables using the channel access gateway, and multiple web-browser-based clients can communicate with a single instance of the user interface simultaneously. This paper introduces WebOPI as a means of bringing control system UIs to the web.

S18: S18 - Signal and Image Processing- II

Room: 303
Chair: Supriya Hariharan (Cochin University of Science and Technology, India)
S18.1 Detection of Retinopathy of Prematurity using Multiple Instance Learning
Priya Rani and Elagiri Ramalingam Rajkumar (VIT University, India); Kumar Rajamani (Robert Bosch Engineering and Business Solutions Limited, India); Melih Kandemir (Heidelberg University, Germany); Digvijay Singh (Medanta-The Medicity, India)

This paper proposes a new method for detecting Retinopathy of Prematurity (ROP) using a multiple instance learning (MIL) approach on retinal images captured by RetCam, a digital retinal camera. In this work, a set of features with significant relevance for capturing ROP characteristics has been extracted, and the miGraph MIL method has been used as the classifier to learn from the extracted features. The diagnostic image is split into a grid of patches, and instances are constructed from each grid element by extracting a set of features from it. All the feature sets, or groups of instances, belonging to the same image are grouped into a bag. Labels are assigned to instances and to the bags as a whole. Finally, the bags along with their labels are fed into a MIL classifier for classification. A good performance of miGraph on the ROP retinal images is observed, and the initial experimental results are promising. To the best of our knowledge, detection of ROP using MIL has not been reported previously. Our results indicate that MIL offers an easy, yet effective, paradigm for ROP screening.
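The patch-to-bag construction described above can be sketched in a few lines. This is a minimal illustration with placeholder features (patch mean and maximum); the actual ROP-specific features and the miGraph classifier are not reproduced here.

```python
def image_to_bag(image, grid=4):
    """Split an image (2-D list of intensities) into a grid of patches and
    build a MIL 'bag': one feature vector (instance) per patch.  The
    features used here (patch mean and patch max) are hypothetical
    placeholders for the ROP-specific features described in the paper."""
    h, w = len(image), len(image[0])
    ph, pw = h // grid, w // grid
    bag = []
    for gi in range(grid):
        for gj in range(grid):
            patch = [image[y][x]
                     for y in range(gi * ph, (gi + 1) * ph)
                     for x in range(gj * pw, (gj + 1) * pw)]
            bag.append((sum(patch) / len(patch), max(patch)))  # one instance
    return bag  # the bag label (ROP / no ROP) applies to the whole image
```

A MIL classifier such as miGraph then learns from whole bags, so only image-level labels are needed at training time.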

S18.2 On the Classification of Sleep States By Means of Statistical and Spectral Features from Single Channel Electroencephalogram
Ahnaf Rashik Hassan (Goran, Dhaka & Bangladesh University of Engineering & Technology (BUET), Bangladesh); Syed Khairul Bashar and Mohammed Imamul Hassan Bhuiyan (Bangladesh University of Engineering and Technology, Bangladesh)

Traditional sleep scoring based on visual inspection of electroencephalogram (EEG) signals is onerous for sleep scorers because of the huge volume of data that has to be analyzed per examination. Computer-aided sleep staging can alleviate this burden. However, most existing works on automatic sleep staging are multichannel based, and multichannel sleep scoring is not practical for implementing a wearable, portable sleep quality evaluation device. Due to these factors, automatic sleep scoring based on single-channel EEG is garnering increasing attention from sleep researchers. In this work, we propose a single-channel solution to sleep scoring. First, we decompose the EEG signals into segments. We then compute various statistical and spectral features from the signal segments. After performing statistical analyses, we perform classification using an artificial neural network. Results of various experiments clearly show that the proposed scheme is superior to state-of-the-art ones in accuracy.

S18.3 Comparative Evaluation of Age Classification from Facial Images
Raunak M. Borwankar, Gaurav S. Pednekar, Saurabh A. Deshpande and Purva S. Sawant (Don Bosco Institute of Technology, Mumbai, India); Satishkumar Chavan (Don Bosco Institute of Technology, Mumbai & University of Mumbai, India)

Researchers have made efforts to achieve age classification using spatial and transform domain techniques with various classifiers. Spatial domain techniques are based on human perception and are susceptible to noise and image processing operations. Transform domain techniques provide high flexibility and robustness in the selection of features and better classification efficiency. This paper uses transform domain feature extraction techniques to achieve the maximum possible age classification efficiency. The transforms used to extract features are the Discrete Cosine Transform (DCT), Discrete Wavelet Transform (DWT), and Dual Tree Complex Wavelet Transform (DTCWT). The features extracted from facial images are classified into a range of age groups, viz. child, adolescent, young, middle aged and old aged, using variance, k-nearest neighbour (kNN) and hybrid variance as classifiers. The experimental results show that feature extraction using DTCWT with the hybrid variance classifier provides better classification efficiency than DCT and DWT.

S18.4 Phase Unwrapping with Kalman Filter based Denoising in Digital Holographic Interferometry
P. Ram Sukumar (ISRO, India); Rahul Waghmare, Rakesh Kumar Singh and Gorthi R K Sai Subrahmanyam (Indian Institute of Space Science and Technology, India); Deepak Mishra (IIST, India)

Phase information recovered through interferometric techniques is mathematically wrapped in the interval (-π, π]. Obtaining the original unwrapped phase is very important in numerous applications. This paper discusses a Fourier transform based phase unwrapping method. A Kalman filter is proposed for denoising in a post-processing step to restore the unwrapped phase without noise. The execution time is much lower than that of many methods available in the literature. The proposed method is highly robust to noise and performs well even at low SNR values (5 dB-10 dB) with a very low RMS error.
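The wrapping operation and the classical 1-D baseline for undoing it can be sketched as follows. This is Itoh's simple difference-integration method, not the paper's Fourier-transform-based approach; it assumes neighbouring samples differ by less than π, which is why noise handling (such as the proposed Kalman-filter post-processing) matters in practice.

```python
import math

def wrap(phi):
    """Wrap a phase value into (-pi, pi]."""
    return math.atan2(math.sin(phi), math.cos(phi))

def unwrap_1d(wrapped):
    """Itoh's classical 1-D unwrapping: accumulate the wrapped first
    differences of the wrapped signal.  Valid only when adjacent true
    phase samples differ by less than pi, so noisy data needs a
    denoising step (e.g. Kalman filtering) before or after this."""
    out = [wrapped[0]]
    for a, b in zip(wrapped, wrapped[1:]):
        out.append(out[-1] + wrap(b - a))   # re-wrap each difference
    return out
```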

S18.5 Noise Sensitivity of Teager-Kaiser Energy Operators and Their Ratios
Pradeep Kr. Banerjee (Max Planck Institute for Mathematics in the Sciences, Germany); Nirmal B. Chakrabarti (Indian Institute of Technology Kharagpur, India)

The Teager-Kaiser energy operator (TKO) belongs to a class of autocorrelators and their linear combinations that can track the instantaneous energy of a nonstationary sinusoidal signal source. TKO-based monocomponent AM-FM demodulation algorithms work under the basic assumption that the operator outputs are always positive. In the absence of noise, this is assured for pure sinusoidal inputs and the instantaneous property is also guaranteed. Noise invalidates both of these, particularly under small-signal conditions. Post-detection filtering and thresholding can be used to re-establish them, at the cost of some acquisition time. The key questions are: (a) how many samples must one use, and (b) how much noise power at the detector input can one tolerate. Results of a study of the role of delay and the limits imposed by additive Gaussian noise are presented, along with the computation of the cumulants and probability density functions of the individual quadratic forms and their ratios.

S18.6 An Improved Approach for Automatic Denoising and Binarization of Degraded Document Images Based on Region Localization
Garima Chutani (CDAC, India); Tushar Patnaik (CDAC, Noida, India); Vimal Dwivedi (Tallinn University of Technology, Estonia)

This paper presents a new binarization approach in which the binarization algorithm is neither applied to the whole image, as in global thresholding, nor to sub-images, as in local thresholding. In this approach, using the concept of bounding boxes and an edge detection method, region localization is done and the object of interest, i.e. the textual part, is separated from the background part of the image. After the extraction of the object of interest, local binarization is applied only to that region using overlapping windows. The result of this approach is then compared with the results of other local binarization algorithms such as mid-grey, Sauvola, Niblack and mean. Experiments were done on different data sets, including DIBCO 2009, DIBCO 2010, DIBCO 2011, DIBCO 2012 and DIBCO 2013. The basic evaluation parameters used for comparison are precision, recall, F-measure, mean square error (MSE), signal to noise ratio (SNR) and peak signal to noise ratio (PSNR).
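One of the local baselines compared against, Sauvola thresholding, can be sketched directly from its formula T = m * (1 + k*(s/R - 1)), where m and s are the local-window mean and standard deviation. The window size, k and R below are the commonly used defaults, not values taken from the paper, and the implementation is a naive per-pixel version for clarity.

```python
import math

def sauvola_binarize(img, window=15, k=0.5, R=128.0):
    """Naive per-pixel Sauvola thresholding on a 2-D list of grey values:
    a pixel is marked as ink (True) when it falls at or below the local
    threshold T = m * (1 + k*(s/R - 1)).  Parameters are typical defaults,
    assumed here rather than taken from the paper."""
    h, w = len(img), len(img[0])
    r = window // 2
    out = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            win = [img[yy][xx]
                   for yy in range(max(0, y - r), min(h, y + r + 1))
                   for xx in range(max(0, x - r), min(w, x + r + 1))]
            m = sum(win) / len(win)
            s = math.sqrt(sum((v - m) ** 2 for v in win) / len(win))
            t = m * (1 + k * (s / R - 1))
            out[y][x] = img[y][x] <= t   # True = dark (ink) pixel
    return out
```

Production implementations precompute integral images so the window mean and variance cost O(1) per pixel instead of O(window^2).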

S18.7 A Comparative Analysis of Various Image Enhancement Techniques for Facial Images
Neha Sharma (Mody University of Science and Tech, India); Sumeet Saurav (CEERI Pilani, India); Sanjay Singh (CSIR - Central Electronics Engineering Research Institute (CSIR-CEERI) & Academy of Scientific & Innovative Research (AcSIR), India); Ravi Saini (CSIR-CEERI Pilani, India); Anil Saini (CEERI & ACSIR, India)

Image enhancement is one of the most important pre-processing steps used in a number of computer vision applications. Its importance can be judged by the number of image enhancement algorithms that have been developed over time for different applications. All these algorithms differ from one another in terms of processing speed, computational complexity, quality of the resulting image, and so on. Therefore, in order to exploit the usefulness of these algorithms, it is necessary to have a good understanding of them. With this objective in mind, this paper presents a comparative analysis of six commonly used image enhancement algorithms. The performance of these algorithms has been measured both quantitatively and qualitatively for different test images. From our analysis, we found that Modified CLAHE outperforms all other techniques in terms of PSNR and AMBE, which shows its better contrast enhancement and brightness preservation capabilities.

S18.8 Detection of High-Risk Macular Edema using Texture Features and Classification using SVM Classifier
Aditya Kunwar, Shrey Magotra and Partha Mangipudi (Amity University, Noida, India)

In medical image processing, reliable texture feature detection around the macula region within a specified radius in digital retinal images is still an open issue. Diabetic macular edema is a complication caused by diabetic retinopathy and is a major cause of blindness and visual loss. In this paper, we present a computerized method for texture feature extraction within a specified radius, taking the macula as the centre. Using proper segmentation techniques, the region of one disc diameter (1 DD) around the macula centre is extracted. The extracted region contains a great number of abnormalities, such as microaneurysms, hard exudates and hemorrhages, so texture features vary greatly. Unlike other well-known machine learning classifier approaches, we propose a combination of texture feature extraction from the region of interest around the macula and classification using an SVM. The segmented regions containing abnormalities differ greatly in texture, and a promising accuracy of over 86% was obtained for the "normal" versus "abnormal" classification. The performance of the automated system was evaluated in terms of sensitivity, specificity and accuracy, with values of 91%, 75% and 86%, respectively.
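As a sketch of what "texture features" can mean here, the grey-level co-occurrence matrix (GLCM) with two classic Haralick-style statistics is shown below. The paper does not specify its exact feature set, so this is only an illustrative choice; the resulting feature vectors would then be fed to an SVM.

```python
def glcm_features(img, levels=8, max_val=256):
    """Grey-level co-occurrence matrix (horizontal neighbour at distance 1)
    plus two classic Haralick-style texture features.  A generic texture
    descriptor, assumed for illustration; the paper's actual features are
    not specified in the abstract."""
    q = [[int(v * levels / max_val) for v in row] for row in img]  # quantize
    glcm = [[0.0] * levels for _ in range(levels)]
    total = 0
    for row in q:
        for a, b in zip(row, row[1:]):
            glcm[a][b] += 1
            total += 1
    contrast = energy = 0.0
    for i in range(levels):
        for j in range(levels):
            p = glcm[i][j] / total
            contrast += (i - j) ** 2 * p   # local intensity variation
            energy += p * p                # texture uniformity
    return contrast, energy
```

A smooth (healthy) region gives low contrast and high energy; exudates and hemorrhages push contrast up and energy down.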

S18.9 A Study of F0 Estimation Based on RAPT Framework using Sustained Vowel
Prarthana Karunaimathi and Dennis Gladis (Presidency College, India); Usha Dalvi (SRM University, India)

Acoustic analysis plays a vital role in the study of the human voice. Acoustic parameters such as the fundamental frequency (F0) can be derived directly from voice recordings, and F0 is one of the most proven parameters for determining the normality or abnormality of voice in people with voice disorders. Statistical measures of fundamental frequency, such as the mean and standard deviation of F0, are suitable for clinical assessment of voices. In this study, these measures are computed using the RAPT and Dr. Speech algorithms. The most common error in a fundamental frequency estimation algorithm is the octave error. A post-processing method, namely the de-step filter, is applied to correct this error. This study uses Dr. Speech as the benchmark and evaluates the robustness of the RAPT algorithm on the original F0 contour and the de-step filtered contour. The results indicate that use of the filter significantly improves the performance of the RAPT algorithm, making it suitable for clinical evaluation of the fundamental frequency.

S18.10 A Novel Method for Action Recognition Based on Region and Speed of Skeletal Endpoints
Rohith P, Harsha MP, Gayathri Unni and Aparna Beena (Amrita Vishwa Vidyapeetham, India); M Geetha (Amrita University, Amritapuri, Clappana P. O., Kollam, India)

This paper proposes a method to recognize human actions from a video sequence. The actions include walking, running, jogging, hand waving, clapping and boxing, and are categorized after recognition using a decision tree. Unlike other algorithms, our proposed method recognizes single human actions by considering the speed, direction and percentage of endpoints, which is a novel approach. In addition to action recognition, this paper also proposes an error correction method for removing bifurcations. The system has been tested on various data sets and performed well.

S18.11 Gait Recognition Using Skeleton Data
Prathap C (SIT, Tumakuru, India); Sumanth Sakkara (PESIT, Bangalore, India)

Biometric systems are becoming important since they provide efficient and more reliable means of human identity verification. Gait recognition has attracted much interest in the computer vision community over the last few years. In this paper, we present a gait-based human identification system using skeleton data acquired with a Microsoft Kinect sensor. The sensor acts as a digital eye, capturing color information as well as depth information through an IR sensor. The static and dynamic features of each individual are extracted using the skeleton information. Classification is performed using two different algorithms: the Levenberg-Marquardt back-propagation algorithm and a correlation algorithm. A 90% recognition rate is achieved with the correlation algorithm, whereas with the Levenberg-Marquardt back-propagation algorithm the proposed system achieves a recognition rate of 94% for 5 persons with a fixed Kinect sensor setup.

S18.12 Activity Detection with Dendrite Threshold Model
Alex James (IIITMK, India); Anuar Dorzhigulov and Daniyar Bakir (Nazarbayev University, Kazakhstan); Swathikiran Sudhakaran (Fondazione Bruno Kessler, Italy)

This paper presents an activity detection system using dendrite threshold logic neuron models. The method generates a dendrite weight matrix from the background image and detects changes in subsequent images through the trained neuron outputs. Using only one layer of dendrite neuron cells with simple threshold logic cells, an accuracy of 98% is reported under realistic imaging conditions. A real-time implementation of the system is done using OpenCV libraries for deployment on a Raspberry Pi platform.

S18.13 An Improved Content Aware Image Resizing Algorithm based on a Novel Adaptive Seam Detection Technique
Binita Saha (Techno India University, India); Tanmoy Dasgupta (Jadavpur University & Techno India University, India); Samar Bhattacharya (Jadavpur University, India)

An image can be considered a combination of significant (foreground) objects and less significant (background) objects. The content aware image resizing (CAIR) algorithm uses different edge detection methods to segregate the useful objects from the background. When applied to an image, CAIR can resize the image to a very different aspect ratio without destroying the aspect ratio of the useful objects in the image. However, this method fails when the useful objects in the image are situated very close together. To address this, this paper proposes and develops a modified version of the algorithm. Instead of merely finding the edges, the important objects are detected by drawing contours around them with the help of the level set based Chan-Vese image segmentation algorithm and a constant convergence rate Modified Delta Bar Delta learning algorithm. Then the seam carving algorithm, which uses dynamic programming, is applied. A seam, an 8-connected curved path from top to bottom (vertical seam) or left to right (horizontal seam), is drawn over the unnoticeable pixels of the less significant portions by the process of seam carving, which helps resize the image to a new size. The optimum path of the seam is defined by an image energy function which protects the content of the image. By repeatedly removing or inserting seams, the size of an image can be contracted or expanded, respectively, in both directions.
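The dynamic-programming core of seam carving can be sketched as follows. This finds the minimum-energy vertical seam given any per-pixel energy map (e.g. gradient magnitude, or one biased by the segmentation contours described above); the energy function itself is left abstract here.

```python
def min_vertical_seam(energy):
    """Dynamic-programming search for the minimum-energy vertical seam:
    an 8-connected path with one pixel per row, top to bottom.  `energy`
    is a 2-D list giving any per-pixel image-energy value."""
    h, w = len(energy), len(energy[0])
    cost = [list(energy[0])]
    for y in range(1, h):
        prev, row = cost[-1], []
        for x in range(w):
            best = min(prev[max(0, x - 1):min(w, x + 2)])  # 3 parents above
            row.append(energy[y][x] + best)
        cost.append(row)
    # backtrack from the cheapest bottom pixel
    x = min(range(w), key=lambda i: cost[-1][i])
    seam = [x]
    for y in range(h - 2, -1, -1):
        lo = max(0, x - 1)
        x = min(range(lo, min(w, x + 2)), key=lambda i: cost[y][i])
        seam.append(x)
    return seam[::-1]   # column index of the seam pixel in each row
```

Removing the returned column index from each row shrinks the image by one pixel in width; duplicating it grows the image, which is how repeated seam removal/insertion contracts or expands the picture.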

S18.14 Spectral-Spatial Hyperspectral Image Compression based on Measures of Central Tendency
Gayatri Deore (College of Engineering Pune, India); Srividya Rajaraman (College of Engineering, Pune, India); Rujuta Awate (Deloitte Consulting, Mumbai, India); Saili Bakare (Whirlpool of India Limited, Pune, India)

Hyperspectral images have become an active research topic due to the higher spectral resolution provided by dense spectral sampling at each pixel over a number of narrow and contiguous wavelength bands. In this paper, we propose a lossy compression approach that uses a novel technique of applying measures of central tendency to exploit the inherent spectral correlation in consecutive bands of hyperspectral images, and uses vector quantization on transform coefficients to exploit spatial correlation, in order to achieve higher compression. It is generally perceived that the use of compressed hyperspectral images may affect the results of post-processing stages such as classification and unmixing. This possible adverse effect is accounted for in the algorithm by using a spectral distortion measure, the Spectral Angle Mapper (SAM), along with the conventional peak signal to noise ratio and compression ratio to evaluate the algorithm's performance.

S18.15 Boosting Retrieval Efficiency with Image Replacement Based Relevance Feedback
Vimina E R (Rajagiri College of Social Sciences, India); Poulose Jacob K (Cochin University of Science and Technology, India); Navya Nandakumar (Rajagiri College of Social Sciences, India)

Relevance feedback has been employed in content based image retrieval (CBIR) systems to bridge the semantic gap between the low level features and the high level semantics of an image. This paper proposes a short-term-learning relevance feedback algorithm that utilizes the statistical features of the feedback images for determining the relevance of each candidate image in the next iteration and for achieving improved precision. The similarity of a candidate image to the feedback image set is determined by computing the cumulative sum of the displacements of the feedback image centroid caused by replacing each element in the feedback image set with the candidate image from the database. Experimental results show that using the proposed image replacement algorithm, a precision improvement of 8% can be achieved even with a single image given as feedback by the user. It is also seen that the optimum number of feedback images needed for improved performance is 2-10.
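The cumulative centroid-displacement measure described above can be sketched directly. Assumptions: feature vectors are plain Euclidean vectors (the actual features are whatever the CBIR system extracts), and a smaller total displacement means the candidate sits closer to the feedback set.

```python
def replacement_score(feedback, candidate):
    """Cumulative centroid displacement: substitute the candidate's
    feature vector for each feedback image in turn and add up how far
    the feedback-set centroid moves.  Replacing feedback[i] with the
    candidate shifts the centroid by (candidate - f_i) / n, so the
    score is effectively the average distance to the feedback images."""
    n, dim = len(feedback), len(feedback[0])
    centroid = [sum(f[d] for f in feedback) / n for d in range(dim)]
    total = 0.0
    for i in range(n):
        shifted = [centroid[d] + (candidate[d] - feedback[i][d]) / n
                   for d in range(dim)]
        total += sum((shifted[d] - centroid[d]) ** 2
                     for d in range(dim)) ** 0.5
    return total
```

Candidates would then be re-ranked in ascending order of this score for the next retrieval iteration.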

S18.16 A Novel Dimensionality Reduction Method for Cancer Dataset using PCA and Feature Ranking
Nitika Sharma (CDAC, Noida, India); Kriti Saroha (CDAC, India)

In data mining, the well-known problem of the "curse of dimensionality" occurs due to the presence of a large number of dimensions in a dataset. This problem leads to reduced accuracy of machine learning classifiers because of the many insignificant and irrelevant dimensions or features in the dataset. Data mining applications such as bioinformatics, risk management and forensics generally involve very high dimensionality. Many methods are used to reduce dimensionality and find the critical dimensions that represent the complete dataset using fewer dimensions. This paper introduces a novel method to reduce dimensionality using Principal Component Analysis and feature ranking. For analysis of the proposed method, dimensionality reduction of the Breast Cancer dataset has been performed. The results indicate that for the chosen dataset, the proposed method can effectively reduce dimensionality without compromising classification accuracy or computation cost.
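One plausible way of coupling PCA with feature ranking is sketched below: run PCA on the centred data, then score each original feature by the variance-weighted magnitude of its loadings on the top components. The abstract does not give the paper's exact ranking criterion, so this scoring rule is an assumption for illustration.

```python
import numpy as np

def rank_features_by_pca(X, n_components=2):
    """Rank original features using PCA loadings: compute the SVD of the
    centred data, weight each feature's absolute loading by the variance
    of its component, and sum over the top components.  The scoring rule
    is a hypothetical stand-in for the paper's criterion."""
    Xc = X - X.mean(axis=0)                       # centre the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = S[:n_components] ** 2                   # variance per component
    loadings = np.abs(Vt[:n_components])          # |loading| per feature
    scores = (var[:, None] * loadings).sum(axis=0)
    return list(np.argsort(scores)[::-1])         # feature indices, best first
```

Keeping only the top-ranked original features (rather than the abstract principal components) preserves the interpretability of the reduced dataset.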

S19: S19-Sensor Networks, MANETs and VANETs- I

Room: 305
Chair: Santhosh Kumar G (Cochin University of Science and Technology, India)
S19.1 A Novel Routing Protocol using Heterogeneous Zigbee Modules for Mobile Sensor Network
Rupam Some (Indian Institute of Engineering Science & Technology, India); Indrajit Banerjee (Indian Institute of Engineering Science and Technology, Shibpur, India)

A wireless sensor network comprises tiny, low cost sensors, either stationary or mobile, deployed over a widespread geographic region. Mobility of sensors poses research challenges, specifically in the fields of routing and energy consumption. This paper proposes a routing protocol, using the Zigbee modules ProFlex01 and CC2025, that exploits all cluster member sensor nodes to route through a network path already used for sending acknowledgements to the cluster head (CH), thus minimizing the cost of transmission. The proposed protocol also aims to minimize the overlapping cluster area for efficient inter-cluster communication. The novelty of the protocol lies in curtailing the steps for cluster head selection and for selection of the network path for data transmission.

S19.2 Detection of Black Hole Node in Wireless Sensor Networks using DSR protocol
Navjyot Kaur (Lovely Professional University, India); Renu Dhir (National Institute of Technology, India)

The proposed system detects black hole nodes in wireless sensor networks where dynamic source routing (DSR) is used as the routing protocol. NS2 is used to simulate the network and translate physical activities into events, which are processed in order of their scheduled occurrence and time. Simulation is performed using different scenarios by varying the network size from small through intermediate and moderate to large. In every network size the behaviour of the black hole node is similar: the malicious node affects the topology by decreasing the throughput of the network, the sent/received ratio falls precipitously, the number of dropped packets rises abruptly, and packet generation is amplified because of the fake packets generated by malicious nodes. The performance evaluation parameters show that these factors can detect the malicious node along with potential black hole nodes that are behaving aberrantly. Using NS2, sensor nodes are first deployed; afterwards, the role of malicious node is assigned to a few nodes. Communication among the sensor nodes starts after the roles are assigned, and is carried out using the on-demand dynamic source routing protocol. When the communication is over, each message transfer entry is maintained in the simulation trace file. From the entries in the trace file, awk scripts extract various parameters such as the numbers of packets dropped, sent and received. The results help in interpreting the behaviour of malicious and potentially suspicious nodes. The detection system uses these results to help save the topology and the entire network from the data loss that would occur if the malicious nodes were not detected in time. The results collected compare the behaviour of the wireless sensor network in the presence and absence of black hole nodes. The graphs give a clear indication that black hole nodes are very harmful to the network in terms of the number of packets dropped, decreased throughput, and variation in the packets sent and received.
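The bookkeeping that the awk scripts perform on the trace file can be sketched in Python. This assumes the common NS2 trace format in which the first whitespace-separated field is the event type ('s' sent, 'r' received, 'd' dropped); the sample lines in the test are fabricated for illustration.

```python
from collections import Counter

def trace_stats(lines):
    """Count sent/received/dropped events in an NS2-style trace and derive
    the delivery ratio -- the same statistics the paper extracts with awk.
    Assumes the event type is the first field of each trace line."""
    events = Counter(line.split()[0] for line in lines if line.strip())
    sent, received, dropped = events['s'], events['r'], events['d']
    return {'sent': sent, 'received': received, 'dropped': dropped,
            'delivery_ratio': received / sent if sent else 0.0}
```

A sharp drop in delivery_ratio together with a spike in dropped counts is exactly the black-hole signature the paper's graphs show.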

S19.3 SNR Based Master-Slave Dynamic Device to Device Communication in Underlay Cellular Networks
Giriraja C V C V (Amrita Vishwa Vidyapeetham, India); Telugu Kuppu Shetty Ramesh (Amrita University, India)

In cellular networks, Device to Device (D2D) communication is used to improve resource utilization and throughput. An SNR Based Master-Slave Dynamic D2D Communication Algorithm (SMSDCA) is proposed to improve the resource utilization of the cellular network and the Quality of Service (QoS). This is achieved by designating one User Equipment (UE) in each cluster as master, based on SNR and energy; the master communicates with the Base Station (BS) and with the other UEs using D2D. The other UEs requesting data in that cluster are made slaves. SMSDCA uses dynamic management of clusters and devices, handling the needs of both static and dynamic users. For non-data requests it allocates the channel as in the existing scheme, but for data requests SMSDCA is used, with the D2D channel allocated by the BS. The energy of a device is computed based on the activities of the user, i.e., voice calls, their duration and the number of data packets transferred. The master can be changed dynamically based on SNR, energy and movement of the device within the cluster limit. In this way, new users requesting data and users moving in from neighboring clusters are accommodated accordingly. MATLAB simulation results show that, using this algorithm, the throughput and the number of users that can be served are increased in comparison with the interference-aware graph-based resource sharing scheme for D2D communication [3].

S19.4 Cross Layer Routing and Rate Adaptation for Video Transmission in Multi-Radio Wireless Mesh Networks
Kiran Patil (BVBCET, India); Narayan D. G. (BVB College of Engineering and Technology, Hubli. Karnataka, India); Uma Mudenagudi (B. V Bhoomaraddi College of Engineering and Technology, Hubli, India); Jyoti Amboji (Bvb, India)

In this paper, we propose a cross layer optimization technique for video transmission in multi-radio Wireless Mesh Networks (WMNs). WMNs are used as a backhaul to connect various networks such as Wi-Fi, WiMAX etc. to the Internet. The presence of multiple radios in these networks increases capacity but introduces interference. Thus, designing a better routing metric to find an optimal path is an important issue. Further, as routing metric and rate adaptation decisions are strongly related, a joint approach is needed to improve the performance of the network. In this work, we propose a cross layer optimization technique by designing a routing metric which considers link quality parameters from various layers and uses some of these parameters for rate adaptation, to improve the QoS parameters of the network. We implement our technique using the AODV protocol in NS2. The results reveal that the joint approach improves QoS parameters such as throughput, packet delivery fraction (PDF), peak signal to noise ratio (PSNR) and frame delay compared to existing approaches.

S19.5 A preemptive approach to reduce average queue length in VANETs
Punam Bedi, Vinita Jindal, Ruchika Garg and Heena Dhankani (University of Delhi, India)

This paper proposes a novel preemptive algorithm for optimization of traffic signals in VANETs to reduce large queuing at crossroads by allowing smooth movement of traffic on the roads without much waiting. The proposed algorithm selects green light timings according to real time vehicular density and can select any phase out of predefined order depending on the traffic density on that phase to reduce the congestion. It also ensures that no vehicle halts on the crossroad for more than a threshold limit. Implementation of the algorithm is done using the open source tools: Open Street Map (OSM), Simulation of Urban MObility (SUMO), MObility model generator for VEhicular networks (MOVE), Traffic Control Interface (TraCI) and python scripts. The accuracy of the proposed algorithm is validated both graphically and statistically using hypothesis testing by comparing it with non-preemptive adaptive traffic signal control and the pre-timed approach. The comparisons are performed by simulation on three types of networks: simple simulated network, complex simulated network and Real Time University of Delhi map. Results illustrate the efficiency of the proposed approach in all the three networks with reduced average queue length at the crossroads with 95% confidence.
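The phase-selection logic described above can be sketched as follows. The data structures (per-phase queue lengths and head-of-queue waiting times) and the 120 s threshold are illustrative assumptions, not values from the paper.

```python
def next_phase(queue_lengths, head_waits, max_wait=120):
    """Preemptive signal-phase selection in the spirit of the paper:
    normally give green to the phase with the longest queue, choosing
    any phase out of the predefined order, but pre-empt in favour of a
    phase whose head vehicle has waited past a threshold so that no
    vehicle halts at the crossroad beyond the limit.  Inputs and the
    threshold are hypothetical stand-ins."""
    overdue = [i for i, w in enumerate(head_waits) if w >= max_wait]
    if overdue:                                    # fairness guarantee first
        return max(overdue, key=lambda i: head_waits[i])
    return max(range(len(queue_lengths)), key=lambda i: queue_lengths[i])
```

In the simulation pipeline, a TraCI script would call such a function each cycle with densities read from SUMO and set the green phase accordingly.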

S19.6 Increasing Traffic Safety during Single Lane Passing using Wireless Sensor Networks
Unnikrishnan H and Vishnu Narayanan B (Amrita University, India); Alin Devassy Ananyase (Robert Bosch(RBEI) & Amrita Center for Wireless Networks and Applications, India); Anand Ramachandran (Amrita University, India)

Overtaking traffic on undivided roadways is one of the major causes of accidents that result in fatalities and severe injuries. This is especially true in developing countries such as India, where roads are narrow yet traffic is heavy and fast. The primary causes of such accidents are the inability of drivers to gauge the speed and distance of oncoming traffic, as well as impaired vision due to lack of adequate light at night, the presence of large vehicles in front, and sharp turns and curves in the road. We present a novel concept that uses wireless sensor networks to assist a driver in making a safe decision about overtaking a vehicle in the presence of oncoming traffic on an undivided two-lane roadway. Our system broadcasts a request for information from oncoming vehicles. The oncoming vehicle responds with position and speed information, allowing us to automatically make a go/no-go decision on whether to overtake, taking into account the aforementioned information as well as the capability of our own vehicle. In this concept paper, we present the solution approach and hardware prototype design, as well as the test-bed used to test the prototype. We postulate that this system, when deployed, would significantly reduce driver stress and decrease the number of accidents during single lane passing on undivided two-lane roadways.

S19.7 An overhearing based routing scheme for Wireless Sensor Networks
Riddhiman Sett and Indrajit Banerjee (Indian Institute of Engineering Science and Technology, Shibpur, India)

Congestion control in Wireless Sensor Networks (WSN) has always been a major challenge as contention for medium and limited storage space in nodes lead to loss of packets through collisions and buffer overflows. The congestion problem is tougher for multi-hop mesh networks, required when the deployment area is large compared to the communication range of individual nodes and the network is Ad-Hoc in nature. The objective is to design a routing scheme that minimizes congestion in such type of networks without increasing communication overheads. Often these networks use medium access control protocols like Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) that require sensor nodes to listen to the medium using an overhearing technique. This paper proposes an energy efficient routing scheme for multi-hop WSNs having mesh topologies. Here, a sensor node takes advantage of overhearing to predict the onset of congestion in the neighbouring nodes by estimating their respective buffer occupancies. This information is utilized directly in routing. The advantage of this approach is a significant reduction in congestion in the medium leading to reduction in number of packets being dropped and consequently minimization of average transmission delay. Also there is no additional communication overhead to implement the scheme. Software simulation demonstrates its effectiveness.
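The core overhearing idea, estimating a neighbour's buffer occupancy purely from packets heard on the shared medium, can be sketched as follows. The class name, the one-packet-per-event model and the 80% congestion threshold are illustrative assumptions, not the paper's exact estimator.

```python
class NeighbourBufferEstimator:
    """Passive buffer-occupancy estimate for one neighbour: increment
    when a packet addressed *to* the neighbour is overheard (it will be
    enqueued there), decrement when a packet forwarded *from* it is
    overheard (it was dequeued).  No extra control traffic is needed."""

    def __init__(self, neighbour_id, capacity):
        self.neighbour = neighbour_id
        self.capacity = capacity
        self.estimate = 0

    def overhear(self, src, dst):
        if dst == self.neighbour:                    # packet enqueued
            self.estimate = min(self.capacity, self.estimate + 1)
        elif src == self.neighbour:                  # packet dequeued
            self.estimate = max(0, self.estimate - 1)

    def likely_congested(self, threshold=0.8):
        return self.estimate >= threshold * self.capacity
```

A routing layer would then steer packets toward next hops whose estimators report likely_congested() as False, avoiding drops before they happen.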

S19.8 A Grouping based Prioritized Sensor data fusion Algorithm for deciding the State of a Coal Mine
Anibroto Sarkar (Indian Institute of Engineering, Science & Technology, Shibpur, India); Indrajit Banerjee (Indian Institute of Engineering Science and Technology, Shibpur, India)

The main constraints of a wireless sensor network are its limited battery power and short lifetime. One of the main sources of energy consumption is data transmission: each node senses data and sends it to the base station. Sensor data fusion reduces the volume of message transmission and makes the network energy efficient. In this paper, we present a data fusion algorithm which minimizes computation and communication costs, thereby reducing energy consumption. The algorithm derives the state of the network using the concept of sensor priority. The proposed algorithm gives a better false alarm rate than the existing data fusion algorithm used in coal mines.

S19.9 Bandwidth Efficient Hybrid Synchronization for Wireless Sensor Network
Dnyanesh S Mantri (Pune University & Sinhgad Institute of Technology, Lonavala, India); Neeli Rashmi Prasad (ITU, Center for TeleInFrastructure (CTIF), USA); Ramjee Prasad (Aalborg University, Denmark)

Data collection and transmission are the fundamental operations of Wireless Sensor Networks (WSNs). A key challenge in effective data collection and transmission is to schedule and synchronize the activities of the nodes with the global clock. This paper proposes the Bandwidth Efficient Hybrid Synchronization Data Aggregation Algorithm (BESDA) using a spanning tree (SPT) mechanism. It uses a static sink and mobile nodes in the network. BESDA considers the synchronization of a node's local clock with the global clock of the network. In the initial stage, the algorithm establishes a hierarchical structure in the network and then performs pair-wise synchronization. With node mobility, the structure changes frequently, causing an increase in energy consumption. To mitigate this problem, BESDA aggregates data with the notion of a global timescale throughout the network and uses schedule-based time-division multiple access (TDMA) as the MAC-layer protocol, which reduces packet collisions. Simulation results show that BESDA is energy efficient, with increased throughput and less delay compared with the state-of-the-art.

S19.10 Composite Interference Mapping Model for Interference Fault-free Transmission in WSN
Beneyaz Ara Begum (Academy of Scientific and Innovative Research (AcSIR) & CSIR-Indian Institute of Chemical Technology, India); Satyanarayana V Nandury (CSIR-Indian Institute of Chemical Technology & Academy of Scientific & Innovative Research, India)

Most approaches to model interference in Wireless Sensor Networks (WSNs) focus primarily on establishing the basic conditions for successful communication between two nodes. However, no attempt has been made to devise models that arrest interference-laden transmissions right at the source. As a consequence, faulty transmissions pervade the WSN, and the nodes are forced to receive and decode the faulty data before deciding to reject it, if and when the fault is detected. This results in considerable cost overheads in terms of processing time, throughput and energy consumption. To tide over these problems a new holistic framework for stemming the flow of faulty transmissions has been developed. The "Composite Interference Mapping" (CIM) model introduced in this paper, maps the potential interference faults for all active links in the network. A new "Time Division Multiple Frequency" (TDMF) approach has been established to determine an Interference Fault-free Transmission (IFFT) schedule for all active links. Composite Interference Transmission Scheduling (CITS) algorithm based on CIM and TDMF is developed to maximize network throughput. The completeness of the CIM model in providing a comprehensive interference map of the WSN is analytically proved. To support the concepts introduced, analytical validation of TDMF is presented in the paper. The CIM model may well pioneer a new thought process and trigger the development of a host of applications.

S19.11 A Vehicular Dynamics Based Technique for Efficient Traffic Management
Shagun Aggarwal (Chandigarh University, India); Rasmeet S Bali (Chandigarh University, Mohali, India)

The passive control of traffic lights, especially in urban scenarios, can adversely affect the movement of vehicles. The uneven distribution of vehicles across lanes due to statically timed traffic lights also severely affects vehicle waiting time, fuel consumption and the environment. This paper proposes a Vehicle-to-Roadside data dissemination scheme that periodically considers vehicular density, average vehicle velocity and the current traffic scenario (current time and day) within the range of a particular Road Side Unit to dynamically manage traffic light durations and mitigate congestion. These three values are also used to categorize lanes into three classes, i.e. densely, medium or sparsely populated, which serves as the basis for vehicles commuting through that lane in future. The Road Side Units then broadcast this information to all the traffic lights in the region, and the proposed system accordingly computes the optimum time durations for all the traffic lights. By combining traffic lights with the lane classes, the proposed scheme improves overall vehicular traffic throughput. Simulation results exhibit the superior performance of the scheme in comparison to a conventional, statically controlled traffic light timing system.
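The three-way lane classification and density-driven green-time computation described in this abstract might be sketched as follows; the density thresholds, base duration and scaling constants are illustrative assumptions, not values from the paper.

```python
def classify_lane(density, dmin=20, dmax=60):
    """Map a lane's vehicle density (vehicles per km, illustrative units)
    to one of the three classes used to bias green-time allocation."""
    if density >= dmax:
        return "dense"
    if density >= dmin:
        return "medium"
    return "sparse"


def green_duration(density, base=30, gain=0.5, cap=90):
    """Scale a base green time (seconds) with observed density, capped
    so one lane cannot starve the others."""
    return min(base + gain * density, cap)
```

A Road Side Unit observing 70 vehicles/km would classify the lane as dense and stretch its green phase toward the cap, while a sparse lane keeps roughly the base duration.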

S19.12 Performance analysis of erasure coding based data transfer in Underwater Acoustic Sensor Networks
Geethu K S and Babu A V (National Institute of Technology Calicut, India)

This paper investigates the performance of two popular packet-level erasure coding approaches, end-to-end and hop-by-hop, for improving data transmission reliability in Underwater Acoustic Sensor Networks (UWASNs). In the end-to-end approach, encoding and decoding are performed by the source node and sink node, respectively, while the intermediate nodes just forward the data packets without any processing. In this case, the number of redundant packets to be transmitted by the source (i.e., the code rate) is chosen based on the packet success probability along the path. On the other hand, hop-by-hop erasure coding is used to achieve reliability over each hop along the path. In this case, apart from the source node, all the intermediate nodes perform the encoding process, and each node adaptively computes the number of redundant packets for the next hop based on the quality of the corresponding hop. We present an analytical model to find the communication overhead, energy consumption and average delay of the multi-hop UWASN under the above erasure coding approaches. The results show that hop-by-hop erasure coding outperforms end-to-end erasure coding.
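The code-rate choice in packet-level erasure coding can be illustrated with a simple binomial model: send the smallest n ≥ k packets such that at least k of them survive with a target probability. This is a generic sketch assuming i.i.d. packet losses, not the paper's analytical model.

```python
from math import comb


def packets_needed(k, p, target=0.99, n_max=200):
    """Smallest n >= k such that P(at least k of n packets survive) >= target,
    with independent per-packet success probability p."""
    for n in range(k, n_max + 1):
        ok = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
        if ok >= target:
            return n
    return None

# End-to-end coding uses the whole-path probability p_path = product of
# per-hop probabilities, so the source must send many more packets than a
# hop-by-hop scheme that sets the code rate per hop from that hop's quality.
```

For k = 4 data packets and per-hop success 0.9, a single hop needs 7 packets, while an end-to-end code over three such hops (p_path = 0.9³ ≈ 0.729) needs strictly more, mirroring the paper's conclusion that hop-by-hop coding is more efficient.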

S19.13 A Switched Beam Antenna Array with Butler Matrix Network Using Substrate Integrated Waveguide Technology for 60 GHz Communications
Nishesh Tiwari (SRM University, India); T Rama Rao (SRM IST, India)

A switched beam antenna array based on substrate integrated waveguide (SIW) technology is designed and simulated for 60 GHz communications. The antenna array is fed by a 4x4 planar Butler matrix network in order to achieve the switched beam characteristic. The use of SIW technology offers a low-cost and planar design. Each of the components is designed and verified through simulations in the electromagnetic field simulation tool CST MWS. The components are later integrated to form the switched beam antenna array. The return losses and isolations are lower than -10 dB from 57 GHz to 64 GHz for all of the input ports. The peak gain of the switched beam antenna array is 16 dBi at 60 GHz.

S19.14 LoENA: Low-overhead Encryption based Node Authentication in WSN
Tanusree Chatterjee (Indian Institute of Engineering Science & Technology & Techno International NewTown, Kolkata, West Bengal, India); Pritam Banerjee (Indian Institute of Engineering Science & Technology, Shibpur, India); Sipra DasBit (Indian Institute of Engineering Science & Technology, Shibpur)

Nodes in a wireless sensor network (WSN) are susceptible to various attacks, primarily due to their nature of deployment and unguarded communication. Therefore, providing security in such networks is of utmost importance. The main challenge is to make the security solution lightweight, so that it is feasible to implement in the resource-constrained nodes of a WSN. So far, data authentication has drawn more attention than node authentication in WSN. A robust security solution for such networks must also facilitate node authentication. In this paper, a low-overhead encryption-based security solution is proposed for node authentication. The proposed node authentication scheme at the sender side consists of three modules: dynamic key generation, encryption, and embedding of a key hint. Performance of the scheme is primarily analyzed using two suitably chosen parameters, cracking probability and cracking time. This evaluation guides us in fixing the size of a node's unique id so that the scheme incurs low overhead while achieving acceptable robustness. The performance is also compared with a couple of recent works in terms of computation and communication overheads, which confirms our scheme's superiority over competing schemes in both metrics.
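The two evaluation parameters named in this abstract, cracking probability and cracking time, can be illustrated with a uniform brute-force guessing model over the id space; the formulas below are a generic sketch of how id size trades off against robustness, not the paper's analysis.

```python
def cracking_probability(id_bits, attempts):
    """Probability that a brute-force attacker hits the right value in
    `attempts` uniform guesses over a space of 2**id_bits ids."""
    return min(attempts / 2**id_bits, 1.0)


def expected_cracking_time(id_bits, guesses_per_sec):
    """Expected seconds to find the right value: on average an attacker
    searches half the space before succeeding."""
    return 2**(id_bits - 1) / guesses_per_sec
```

Doubling the id size squares the search space, so a designer can pick the smallest id for which the expected cracking time is acceptably long, which is the kind of trade-off the paper's evaluation is meant to guide.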

S19.15 Secure Multipoint Relay Node Selection in Mobile Ad Hoc Networks
Jeril Kuriakose (St. John College of Engineering & Manipal University Jaipur, India); Amruth V (Bearys Institute Of Technology, India); Vikram Raju R (Manipal University Jaipur, India)

Mobile Ad Hoc Networks (MANETs) have become a broad area in wireless networks because of their reduced deployment cost, ease of use, and freedom from wires. MANETs use a decentralized, infrastructure-less network, making nodes route their messages/data with the help of intermediate nodes. Routing in MANETs is carried out with the help of broadcasting schemes, among which multipoint relay (MPR) is found to be the most effectual and uncomplicated. The MPR scheme broadcasts its messages only to the selected MPR nodes. The selected MPR node may be any node in the network, and there are no necessary and sufficient conditions that provide assurance about the selected node's integrity. In this paper, we propose a novel approach to MPR node selection by adding a security feature prior to the selection. We have verified the time constraints and efficiency with the help of localization techniques. Future work is also discussed.

S24: Computer Architecture and VLSI - I

Room: 306
Chair: Jonathan Joshi (Vanmat Technologies. Pvt. Ltd, India)
S24.1 Effective Design of Logic Gates and Circuit using Quantum Cellular Automata (QCA)
Libi Balakrishnan (MGR Educational and Research Institute, India); Thiagarajan Godhavari (Dr MGR Educational and Research Institute University, India); Sujatha Kesavan (Prof & NA, India)

The quantum cellular automaton (QCA) is an efficient and emerging nanotechnology for creating quantum computing devices. It is a polarization-based digital logic architecture. The QCA cell is the basic unit for building logic gates and devices in the quantum domain. This paper proposes an effective design of logic gates and an arithmetic circuit using QCA. Here the gates and circuits are designed using a minimum number of QCA cells and with no crossovers, so these designs can be used to construct complex circuits. The simulations of the present work have been carried out using the QCADesigner tool. The simulation results help to implement digital circuits in the nanoscale range.

Shruti Murgai (Amity University, India); Ashutosh Gupta (Amity University Uttar Pradesh, India); Gayathri Muthukrishnan (Amity University, India)

Arithmetic Logic Units are among the most important units in general-purpose processors and a major source of power dissipation. In this paper we present an optimized Arithmetic and Logic Unit built around an optimized carry select adder. Carry select adders are considered the best in their category in terms of power and delay. In this context, a full adder optimized for power has been used in synthesizing the carry select adder. Combined with the new adder structure, there is a substantial improvement in terms of power and delay: the total device power and hierarchy power have been reduced to 12.5% and 53.39%, respectively, and a 3% reduction in total completion time has also been observed. The circuit has been synthesized on a Kintex FPGA through Xilinx 14.3 using 28 nm technology in Verilog HDL, and the results have been simulated on ModelSim 10.3c. The design is verified using SystemVerilog on QuestaSim in a UVM environment.

S24.3 FSMD RTL Design Manipulation for Clock Interface Abstraction
Syed Saif Abrar (IBM, India); Maksim Jenihhin and Jaan Raik (Tallinn University of Technology, Estonia)

The rapid rise in embedded systems design complexity and size has emphasized the importance of high-performance simulation models. This has resulted in the emergence of design methodologies at higher levels of abstraction, such as Electronic System Level (ESL) and Transaction Level Modelling (TLM), with the SystemC language as the main instrument. In practice, system architects and system integrators often have access to a library of legacy Register Transfer Level (RTL) HW IP cores or obtain new ones from IP design houses. To address simulation performance, such RTL IP cores are manually recreated at more abstract levels, which entails significant, tedious and error-prone effort. The current paper addresses this problem by proposing a novel approach for automated FSMD RTL design manipulation for clock interface abstraction. The approach takes as input an FSMD (Finite State Machine with Datapath) RTL design in VHDL and transforms it into an equivalent Algorithmic State Machine (ASM) representation in SystemC with explicit separation of design functionality by states. Finally, the clock interface is abstracted to optimize simulation performance. The manipulation details are demonstrated on a case-study design, and the first experimental results show simulation speed-up and prove the feasibility of the proposed approach.

S24.4 Optimizing Response Time and Synchronization in User Virtualization Via Adaptive Memory Compression
Mayank Kulkarni, Rushikesh Ghatpande, Nikunj Karnani and Jibi Abraham (College of Engineering, Pune, India)

In this paper, we address the problem of prolonged latency during user log-in and log-off in User Virtualization using virtual pooled desktops. These latency periods are caused by the large chunks of user profile data transferred over the network in order to achieve User Virtualization. We therefore require efficient and quick synchronization of data from the client to the server. Data transfer time and file characteristics are factors that help determine how synchronization can be improved; file type and file size are the key characteristics used to determine which compression algorithms are applicable. Our contribution in this paper is a new system for synchronization, using various compression algorithms and a 'learning'-based model. The learning model helps determine whether or not to compress a file and plays an integral part in building a ranking system of compression algorithms, based on their performance on a specific type and size of file.
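A minimal sketch of such a ranking system: record the compression ratio observed for each algorithm per (file type, size bucket) profile, then pick the best-scoring algorithm for new files, skipping compression when past benefit was marginal. The class, the two size buckets and the 1.1 worth-it cut-off are all hypothetical stand-ins for the paper's learning model.

```python
from collections import defaultdict


class CompressionRanker:
    """Rank compression algorithms by their average observed ratio for
    a given (file type, size bucket) profile."""

    def __init__(self):
        self.stats = defaultdict(lambda: defaultdict(list))

    @staticmethod
    def bucket(size_bytes):
        # two illustrative buckets split at 1 MiB
        return "small" if size_bytes < 1 << 20 else "large"

    def record(self, ftype, size_bytes, algo, ratio):
        """Log one observed compression ratio (original/compressed size)."""
        self.stats[(ftype, self.bucket(size_bytes))][algo].append(ratio)

    def best(self, ftype, size_bytes, min_ratio=1.1):
        """Best-performing algorithm for this profile, or None when past
        ratios suggest compression is not worth the CPU time."""
        algos = self.stats.get((ftype, self.bucket(size_bytes)), {})
        ranked = sorted(algos.items(),
                        key=lambda kv: sum(kv[1]) / len(kv[1]), reverse=True)
        if not ranked or sum(ranked[0][1]) / len(ranked[0][1]) < min_ratio:
            return None  # the 'learning' step says: send uncompressed
        return ranked[0][0]
```

Already-compressed formats such as JPEG naturally accumulate ratios near 1.0 and get excluded, which matches the paper's point that the model decides whether to compress at all.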

S24.5 Efficient Don't-Care Filling Method to Achieve Reduction in Test Power
Sinduja V, Siddharth Raghav and Anita Jp (Amrita Vishwa Vidyapeetham, India)

The issue of increasing power consumption is a problem which researchers from different fields are trying to solve. Since VLSI technology has become ubiquitous in today's world, this field is a prime candidate for power reduction. Nowadays we see tremendous growth in chip density and reduction in dimensions, which contributes to an escalation in clock rate. This leads to the prevalence of a category of faults called transition faults, which arise due to time delays in the propagation of signals through different paths. The delays may be caused by different factors such as the resistance of the path, the number of branches in it, and so on. Transition faults are becoming more significant in ICs as timing margins are reduced to maximize performance and minimize power. This paper proposes a way to achieve power reduction by the method of X-filling. The method ensures that the power values during testing are optimal and as close to the functional power as possible. In this paper, ISCAS'89 benchmark circuits have been used with an industrial 90 nm technology. The tools used were Synopsys TetraMAX and Synopsys Design Compiler. Experimental results show a considerable reduction in average shift power and average capture power.
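As background, a common baseline X-filling strategy is adjacent (minimum-transition) fill, where each don't-care bit copies the previously assigned bit so that shifting the pattern through the scan chain toggles as few cells as possible. The sketch below illustrates that baseline idea only; it is not necessarily the filling method the paper proposes.

```python
def adjacent_fill(pattern):
    """Fill each don't-care ('X') in a scan pattern with the last
    assigned care bit, minimizing bit transitions along the pattern."""
    out, last = [], '0'  # assume '0' before the first care bit
    for bit in pattern:
        last = bit if bit in '01' else last
        out.append(last)
    return ''.join(out)


def transitions(pattern):
    """Count adjacent bit transitions, a simple proxy for shift power."""
    return sum(a != b for a, b in zip(pattern, pattern[1:]))
```

For the pattern "1X1X1", adjacent fill yields "11111" with zero transitions, whereas naive 0-filling would yield "10101" with four, which is why fill choice matters for shift power.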

S24.6 Gating techniques for 6T SRAM cell using different modes of FinFET
Deeksha Anandani and Anurag Kumar (VIT University, India); V S Kanchana Bhaaskaran (VIT Chennai, India)

The proposed idea is to incorporate fine-grain and coarse-grain power gating techniques for the SRAM cell and SRAM array, respectively. Independent-gate FinFET, tied-gate FinFET and independent-gate FinFET with pass-gate feedback are employed for power gating the 6T SRAM cell. Leakage power, stability (SNM) and delay comparisons have been made. The simulations are carried out using Cadence Virtuoso tools, employing the 32 nm Predictive Technology Model (PTM) files for the MOS devices and 32 nm BPTM files for the FinFETs. The results validate the advantage of using power gating for reduced static power dissipation. The independent-gate FinFET based 6T SRAM cell with pass-gate feedback incurs 2.23 µW of power during the hold operation, as against a comparatively negligible power dissipation of 27.67 pW with power gating.

S24.7 Network on Chip Based Multi-function Image Processing System using FPGA
Zalak Dave (Eduvance, India); Shivank Dhote (Vidyalankar Institute Of Technology, India); Ganesh Gore (Eduvance, India); Jonathan Joshi (Vanmat Technologies. Pvt. Ltd, India); Abhay Tambe (Reanu Microelectronics Pvt. Ltd., India); Sachin Ratikant Gengaje (Walchand Institute of Technology, India)

Multifunction parallel image processing systems use standard buses for inter-core communication. Faster and more scalable approaches are needed to improve system throughput, and data-heavy applications like image processing (IP) algorithms need constant data transfer between different functional blocks on chip. The solution would be either hardwired buses or controlled communication. Networks-on-Chip (NoC) present a systematic solution and can succeed hardwired bus solutions in a scalable form. This paper presents a multi-function image processing system prototyped on a single reconfigurable platform. The different IP cores have been implemented keeping in mind on-the-fly processing times and frame rates. The different modules are interconnected using a torus-architecture NoC with an information-heavy packet structure, capable of addressing multiple nodes simultaneously. The implementation was done using a low-cost Spartan-6 FPGA. Frame rates for standard sizes and chip utilization have been reported.

S24.8 FPGA Implementation of Reconfigurable Modulation System
Mangala Joshi (PES Institute of Technology, India); Manikandan J (PES University (PESU), India)

Communication systems are extensively used in a large number of applications such as radar, aerospace, naval/maritime communication, underwater communication, mobile communication and many more. The most important module in designing a communication system is the design of modulators, and different applications demand different types of modulators. Reconfigurable computing is considered a state-of-the-art approach to system design, wherein the same hardware can reconfigure itself to perform different functionalities, and it is hence employed for various applications. Design and implementation of reconfigurable modulators on a Virtex-5 FPGA is proposed in this paper, wherein the type of modulation can be dynamically reconfigured on-the-fly based on the requirement at any particular instance. The modulators used for reconfiguration in this work include Amplitude Modulation (AM), Frequency Modulation (FM), Frequency Shift Keying (FSK), Phase Shift Keying (PSK), and Amplitude Shift Keying (ASK). Different approaches to triggering employed for the proposed reconfigurable modulator design are reported. The proposed design is also implemented using single and two reconfigurable blocks, and the results are reported. It is observed that 10.20-91.43% of hardware resources and 76.38% of power are saved using the proposed reconfigurable modulator over the conventional non-reconfigurable modulator design.

S24.9 Scaling number of cores in GPGPU: A Comparative Performance Analysis
Winnie Thomas (Veermata Jijabai Technological Institute, India); Rohin D. Daruwala (V. J. Technological Institute, India)

Graphics Processing Units (GPUs), based on the Single Instruction Multiple Thread (SIMT) architecture, are emerging as more efficient than Multiple Instruction Multiple Data (MIMD) architectures in exploiting parallelism. A GPU has numerous shader cores and thousands of simultaneous fine-grained active threads. These threads are grouped into Cooperative Thread Arrays (CTAs), and all the threads within a CTA are further grouped into warps. Though warps within a CTA are scheduled for execution on the same core, only one warp is executed at a time due to hardware constraints. The other way a GPU exploits parallelism is by employing multiple shader cores to execute multiple warps simultaneously. We explore this latter way of exploiting parallelism by increasing the number of cores and studying its impact on different types of applications. We first categorize a number of general-purpose GPU workloads into those that consume less DRAM bandwidth (type-L) and those whose bandwidth requirement is heavier (type-H). We observed that type-L workloads get a performance boost as the number of cores increases, whereas type-H workloads experience performance degradation. The maximum performance gain in terms of instructions per cycle (IPC) is 2.03x for type-L workloads. We then observed the impact of scaling on the percentage of good cycles for all workloads. Our results show that the additional pressure on bandwidth caused by scaling the number of shader cores is detrimental for type-H workloads and a boost to type-L workloads, at the cost of a reduction in the percentage of good cycles in both types.
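The L/H categorization could be sketched as a simple threshold on measured DRAM bandwidth utilization; the 0.5 cut-off and the workload names below are illustrative assumptions, not the paper's criterion.

```python
def classify_workloads(bw_util, threshold=0.5):
    """Label each workload 'H' (bandwidth-heavy) or 'L' (bandwidth-light)
    from its measured DRAM bandwidth utilization in [0, 1]."""
    return {name: ('H' if util >= threshold else 'L')
            for name, util in bw_util.items()}
```

Under such a split, adding shader cores would be expected to help the 'L' group (more warps in flight) while worsening contention for the 'H' group, matching the trend the paper reports.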

S24.10 Memory array with complementary resistive switch with memristive characteristics
Sneha Patil (VIT University Chennai Campus); Srs Prabaharan (VIT University Chennai Campus, India)

Emerging solid-state memory devices based on different materials and volatility, such as NVRAMs (or memristors), have been widely acknowledged. The evolution of new solid-state ionic conductors, and the memristor in particular, has brought impetus to the creation of a new domain of larger storage capabilities for future electronic systems. The achievements of these emerging technologies are encouraging when compared with existing memory types. However, the new architecture called Resistive Random Access Memory (ReRAM) faces challenges like sneak-path current flowing through neighbouring cells, which limits array size. One way to deal with this issue is to build the crossbar array using complementary resistive switches (CRS). The CRS has recently been proclaimed a great successor to conventional charge-based memories, but the nanoscale advantage of these devices poses new challenges in designing such memories as well. In this paper, our purpose is to introduce the memristor principle, provide a preliminary note on various understandings of the memristor, and present a novel non-linear memristive complementary resistive switch memory model for effective simulation and analysis. The CRS consists of two memristors connected anti-serially, and its four different states significantly reduce sneak-path current compared to a memristor-based architecture. Here, CRSs can be viewed as the primary logic building block of the array, and the two resistance modes of the CRS store the information. Thus, our aim is to elucidate how the CRS is beneficial for reducing sneak-path current.

Wednesday, August 12 14:30 - 18:30 (Asia/Kolkata)

ISTA-01: ISTA - Ad-hoc and Wireless Sensor Networks

Room: 308
Chairs: Ratnadeep Deshmukh (Babasaheb Ambedkar Marathwada University, India), Dnyanesh S Mantri (Pune University & Sinhgad Institute of Technology, Lonavala, India)
ISTA-01.1 An Empirical Study of OSER Evaluation with SNR for Two-Way Relaying Scheme
Akshay Pratap Singh (S. G. S. I. T. S. Indore, M. P., India); Anjana Jain (Shri G. S. Institute of Technology & Science, India); Shekhar Sharma (S. G. S. I. T. S. Indore, India)

The two-way relaying scheme is a network coding scheme which provides efficient utilization of spectrum. The performance of a two-way relaying scheme based on analog network coding (ANC) can be evaluated in the context of many parameters, such as overall symbol error rate (OSER), overall outage probability (OOP) and ergodic sum rate (ESR). In this paper we analyse the OSER over Nakagami-m fading channels for the analog network coding scheme. The variation of OSER with respect to signal-to-noise ratio (SNR) has been observed for various factors such as the fading parameter, the modulation scheme used and the number of antennas at the source and destination terminals. The threshold range of SNR for a tolerable OSER of 0.001 is computed as 20 to 25 dB. The results demonstrate the necessity of high SNR in two-way relaying schemes for tolerable OSER. Further, the analysis may be useful for selecting various physical-layer parameters in the design of communication networks.

ISTA-01.2 An Encryption Technique to Thwart Android Binder Exploits
Yadu Kaladharan (Amrita Vishwa Vidyapeetham, India); Prabhaker Mateti (Wright State University, USA); Jevitha K P (Amrita Vishwa Vidyapeetham, India)

Binder handles interprocess communication in Android. Whether the communication is between components of the same application or of different applications, it happens through Binder. Hence capturing it can expose all the communications. Man-in-the-Binder is one such exploit that can subvert the Binder mechanism. In this paper, we propose an encryption mechanism that can provide confidentiality to app communications to prevent such exploits.

ISTA-01.3 Analysis of Communication Delay and Packet Loss During Localization Among Mobile Robots
Madhevan Balasundaram (IIITDM Kancheepuram, India)

Wheeled mobile robots moving in an unknown environment face many obstacles while navigating a planned or unplanned trajectory to reach their destination. However, no information is available regarding the failure of a group's leader robot in unknown and uncertain environments and the subsequent course of action by the follower robots. If the leader fails, one of the follower robots within the group can be assigned as the new leader so as to accomplish the planned trajectory. The present experimental work is carried out by a team of robots comprising a leader robot and three follower robots; if the present leader fails, a new leader is selected from the group using a leader-follower approach. However, localization among multiple mobile robots is subject to communication delay and packet loss. The problem of data loss is analyzed, and we show that it can be modeled as a feedback system with dual-mode observers. An algorithm has been developed to compensate for this packet loss during communication between controller and server in a Wi-Fi-based robotics environment. Further, the simulation results show that the developed algorithm is more efficient than single-mode observers, in both unknown and uncertain environments with multiple robots.

ISTA-01.4 Android Smudge Attack Prevention Techniques
Amruth M D and Praveen K (Amrita Vishwa Vidyapeetham, India)

Graphical patterns are widely used for authentication in touch-screen phones. When a user enters a pattern on a touch screen, the epidermal oils of the skin leave oily residues on the screen, called smudge. Attackers can forensically retrieve this smudge, which can help them deduce the unlock pattern. In this paper we analyze some existing techniques and propose new techniques to prevent this attack. We propose Split pattern, Temporal lock, Random PIN lock and Wheel lock to reduce or prevent smudge attacks. Usability and shoulder-surfing resistance were also considered while designing these techniques. This paper explains how the proposed techniques are effective against smudge attacks.

ISTA-01.5 Distributed Air Indexing Scheme for Full-Text Search on Multiple Wireless Channel
Vikas Goel (KIET Group of Institutions, Ghaziabad, India)

Wireless data broadcast is the most popular method for disseminating frequently requested data efficiently to a large number of mobile devices. Full-text search is a popular query type used in document retrieval systems. Many research efforts have focused on how to apply full-text search to wireless broadcast. Increasing the number of broadcast channels is a logical way to minimize energy consumption and access latency. In this paper, we extend the problem of generating a broadcast sequence of data items to facilitate energy-efficient full-text search on multiple wireless channels. To support our proposed indexing scheme, we propose a data access algorithm and a data broadcast model for a full-text search indexing scheme on multiple channels. Since the energy of portable devices is limited, minimizing energy consumption and access latency for broadcasting are important issues. The performance of the proposed scheme is analyzed and compared with existing full-text search indexing schemes. The results show the efficiency of our approach with respect to energy consumption and access latency.

ISTA-01.6 Network Optimization Using Femtocell Deployment At Macrocell Edge in Cognitive Environment
Joydev Ghosh (National Research Tomsk Polytechnic University, Russia); Subham Bachhar (B C Roy Engineering College, India); Uttam Kumar and Ajit Rai (The New Horizons Institute of Technology, India); Sanjay Dhar Roy (National Institute of Technology Durgapur, India)

This research focuses on the problem of cell-edge users' coverage in the context of femtocell networks operating within the locality of the macrocell border, where pathloss, shadowing and Rayleigh fading have been included in the environment. As macrocell edge users are located far away from the macro base station (MBS), the underprivileged users (cell-edge users) are assisted by the cognitive femto base station (FBS) to provide consistent quality of service (QoS). Considering various environment factors such as wall structure, number of walls, distance between MBS and users, and interference effects (i.e., co-tier and cross-tier), we not only compute the downlink (DL) throughput of a femto user (FU) for a single-input single-output (SISO) system over a particular sub-channel, but also analyze the performance of the two-tier network based on spectrum allocation and power adaptation, with network coverage as the performance metric. Finally, the effectiveness of the scheme is verified by extensive MATLAB simulation.

ISTA-01.7 Power Budgeting and Cost Estimation for the Investment Decisions in Wireless Sensor Network Using the Energy Management Framework Aatral with the Case Study of Smart City Planning
Anuradha Subramanian (Madurai Kamaraj University, India); Thangaraj Muthuraman (Madurai Kamaraj University, India)

Energy engineering in the field of Wireless Sensor Networks (WSN) has attracted many researchers in the last decade. This growing interest has contributed a variety of energy optimization solutions for WSN. There is a need to consolidate all these energy-efficiency initiatives, at the hardware, software, protocol, algorithmic and architectural levels, and publish them as services for energy management. The challenge is how such an independent energy management framework can monitor, optimize and coordinate with the energy harvesting units of a typical WSN application bed and facilitate the entire energy management. One step further: can the energy management framework keep track of energy-usage benchmarks for a typical WSN profile and help record the operating cost of the WSN application bed in terms of energy? This quest is behind the framework Aatral. The independent energy framework Aatral not only helps in managing the energy auditing, optimization and harvesting associated with the Wireless Sensor Network, but also keeps track of operating costs and cost estimations and helps in deciding investments through its special module called the Energy Economics Calculator. This paper explains the architecture and design principles of the energy management framework and its functionality of power budgeting, cost estimation and investment decisions, with a use case of smart city planning involving building depreciation sensors, traffic sensors, temperature sensors, intruder detection sensors, monitoring sensors and current leakage sensors.

ISTA-01.8 Real Time CO2 Monitoring and Alert System Based on Wireless Sensor Networks
Parvathy Pillai (Amrita Vishwa Vidyapeedam, India); Supriya M (Amrita Vishwa Vidyapeetham & Amrita School of Engineering, India)

Carbon dioxide (CO2) is an inevitable part of the atmosphere and essential for the existence of life on Earth, but beyond a certain limit it is harmful, as it is the major cause of global warming. The only effective way to fight this is to store CO2 away from the atmosphere for a long time, which is achieved using Carbon Dioxide Capture and Storage (CCS). One major concern when CO2 is stored in large quantities, however, is its leakage from the reservoir. An online CO2 emission monitoring and alert system is developed to monitor CO2 leakage in CCS in real time, reduce the leakage to a controllable level and alert the concerned person if the leakage goes beyond an uncontrollable limit. The data is stored in real time for further analysis, and the advantages of wireless sensor networks are exploited to sense the CO2 concentration precisely.

ISTA-01.9 Real Time Water Utility Model Using GIS: A Case Study in Coimbatore District
Praveen Kumar (Amrita Vishwa Vidyapeetham University, India); P Geetha (Amrita University, India); GA Shanmugha Sundaram (Amrita Vishwa Vidyapeetham University & SIERS Research Laboratory, India)

Water has become the eternal wonder of the 21st century, with the rapid increase in population and expansion of city limits causing the demand for water to grow exponentially. A water distribution network needs efficient modeling for operation and maintenance, with minimal errors, to cater to people's needs with an equitable amount of water throughout the year. We create a simulation model of a real-time water distribution network that accounts for pressure and elevation to analyze the flow distribution between nodes and the demand in the network. A geographical information system is an effective tool for decision support, here using ArcGIS and WaterGEMS software, and we characterize the pipes by the different diameters used in the network. The simulation results show drastic changes in demand, resulting in consequences such as back-flow, high-pressure zones and negative pressure, which lead to pipe leakage and greater investment in installation and maintenance. The main aim of this research is to carry out hydraulic modelling of the water distribution network using GIS and to reduce leakage in the pressure zones, saving time and minimizing maintenance expenditure, thereby creating an equity model for the water distribution network that fulfils the minimal demand required across the city.

ISTA-01.10 Wind Farm Potential Assessment Using GIS
B Bhavya (Amrita Vishwa Vidyapeetham, India); P Geetha and K P Soman (Amrita University, India)

Wind energy harvesting depends mainly on sites where the trapping of wind energy is efficient, along with a self-sustained transmission grid. Wind can then become a complete source of renewable energy, and a proper Geographic Information System (GIS) is required for mapping. This paper employs such techniques for the identification of wind-potential areas, along with the routing of the transmission grid, in Avinashi taluk of Tamil Nadu. This helps in identifying potential wind farm installation sites, thereby easing the energy crisis in Avinashi.

ISTA-01.11 An Optimization to Routing Approach under WBAN Architectural Constraints
Aarti Sangwan (Mody University of Science and Technology, Laxmangarh, India); Partha Bhattacharya (Mody University of Science and Technology, Laxmangarh)

WBAN has significance in various application areas, but its most critical dedicated application is monitoring medical patients. A WBAN is a specialized network with data-level and node-level criticalities. In this work, an optimization of the routing approach under architectural constraints is defined. The work analyzes various node-level, data-level and communication-level criticalities and defines a route optimization algorithm under these critical vectors. The approach is analyzed in terms of network lifetime and energy consumption over the network. The experimentation is performed with real-time constraints, and the obtained results show that the presented work improves the network lifetime.

ISTA-01.12 M-SEP: A variant of SEP for WSNs
Tenzin Jinpa (GGSIP University, India); B V R Reddy (GGSIP University, India)

The energy efficiency of a protocol is one of the deciding factors when considering its suitability for a WSN (Wireless Sensor Network). In this paper we present an improved version of the SEP protocol called M-SEP, or Modified Stable Election Protocol. M-SEP inherits some properties of SEP while introducing multilevel power transmission into the protocol, thereby extending the lifetime of the network. In a nutshell, the idea is to acknowledge the different minimum energy requirements for transmitting data packets in a WSN: intra-cluster transmission of packets requires lower energy than inter-cluster transmission or transmission from the cluster head to the base station. By implementing multilevel power transmission in SEP we improve its efficiency, as shown by the simulation results.
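The intuition behind multilevel power transmission can be sketched with the standard first-order radio energy model (an illustrative assumption, not the authors' code; all constants, function names and distances below are hypothetical):

```python
# Illustrative first-order radio energy model showing why multilevel
# transmit power helps. All parameter values are hypothetical.

E_ELEC = 50e-9      # electronics energy per bit (J/bit)
EPS_FS = 10e-12     # free-space amplifier energy (J/bit/m^2)

def tx_energy(bits, distance):
    """Energy to transmit `bits` over `distance` (free-space model)."""
    return bits * E_ELEC + bits * EPS_FS * distance ** 2

def round_energy(bits, intra_d, inter_d, multilevel):
    """Energy for one member -> cluster head -> base station round.

    With multilevel=False the node uses the worst-case (inter-cluster)
    power level even for the short intra-cluster hop, as a single-level
    scheme would.
    """
    if multilevel:
        return tx_energy(bits, intra_d) + tx_energy(bits, inter_d)
    return tx_energy(bits, inter_d) + tx_energy(bits, inter_d)

single = round_energy(4000, intra_d=20, inter_d=100, multilevel=False)
multi = round_energy(4000, intra_d=20, inter_d=100, multilevel=True)
assert multi < single   # adapting power to hop length saves energy
```

Matching the amplifier level to the short intra-cluster hop, instead of always using the worst-case inter-cluster level, is the energy saving that stretches network lifetime.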

ISTA-01.13 Network Monitoring & Internet Traffic Surveillance System: Issues & Challenges in India
Rajan Gupta and Sunil K Muttoo (University of Delhi, India)

NETRA, or Network Traffic Analysis, is an Internet traffic surveillance system developed by the Indian Government and designed by various intelligence agencies of India. Its conceptualization, monitoring and development have been done by a group consisting of the Centre for Artificial Intelligence and Robotics, the Defense Research & Development Organization and the National Technical Research Organization. The prime purpose of this application is to analyze Internet traffic and draw inferences about various suspected activities in the country. However, a design-scheme and implementation-level analysis of the system shows several weaknesses, such as limited memory options, limited channels for monitoring, preset filters, ignoring big data demands, security concerns, breaches of social values and neglect of ethical issues. These can be addressed through alternate options which can improve the existing system. The paper reviews the architectural framework and existing scheme of the NETRA system and suggests improvements for the weak areas. The existing framework of NETRA has been compared to similar international-level surveillance systems such as Dishfire, PRISM and Echelon; the similarities and differences among these systems are identified and recommendations provided on that basis. The analysis of the surveillance system will help in developing other mini spy-cum-monitoring models which can be further customized for various applications and communication channels in India.

ISTA-01.14 Power Efficient Routing by Load Balancing in Mobile Ad Hoc Networks
Ravi G (Sona College of Technology, India); Kishana Ram Kashwan (Sona College of Technology (An Autonomous Institution), India)

In a Mobile Ad hoc Network (MANET), energy-efficient routing in the hybrid domain based on residual energy provides many benefits. An energy-efficient routing approach based on load balancing can spread out data traffic while keeping nodes free from being overburdened. This can be done by creating many alternatives for efficient utilization of available resources, which keeps the network alive and attempts to maximize the network lifetime. The hybrid routing approach combines the good features of the reactive and proactive routing approaches in MANETs. An energy-efficient routing protocol, the Reliable Zone Routing Protocol (RZRP), is proposed to fruitfully accomplish these various challenges. It is designed to minimize energy consumption in the proactive routing component and to use mobility prediction in the reactive routing component. RZRP satisfies the basic requirements for an efficient energy consumption technique; it is reliable and can increase the network lifetime of a MANET. The results are analyzed and compared with the Zone Routing Protocol (ZRP).

ISTA-01.15 Active and Entire Candidate Sector Channel Utilization Based Close Loop Antenna Array Amplitude Control Technique for UMTS and CDMA Networks to Counter Non-Uniform Cell Breathing
Archiman Lahiry (School of Electronics Engineering, KIIT University, Bhubaneshwar); Amlan Datta and Sushanta Tripathy (KIIT University, India)

This paper introduces a self-optimized closed-loop antenna array amplitude control system to counter the effect of non-uniform cell-size breathing in UMTS and CDMA networks by automatically avoiding overshooting cells in the entire network. The proposed antenna array amplitude control system considers the active sector's channel utilization as well as the relative channel utilizations of all the candidate sectors. We introduce a feedback system to detect physical parameters of all the cell-site sector antennas, such as antenna heights above ground level, antenna azimuths, antenna mechanical tilts, and antenna latitudes and longitudes, to avoid the chance of overshooting cells when the antenna array amplitudes are controlled remotely from the OMC-R. Remote mechanical antenna tilts are used to eliminate overshooting cells. The contribution of the proposed work is an overshooting-resistant antenna array amplitude control system that counters non-uniform cell-size breathing.

Wednesday, August 12 14:30 - 19:00 (Asia/Kolkata)

ISTA-02: ISTA - Business Intelligence and Big Data Analytics

Room: 308
Chair: Philip Samuel (Cochin University of Science& Technology, India)
ISTA-02.1 Data Integration of Heterogeneous Data Sources Using QR Decomposition
Sandhya Harikumar (Amrita Vishwa Vidyapeetham & Amritapuri, India); Mekha Meriam Roy (Amrita Vishwa Vidyapeetham, India)

Integration of data residing at different sites, and providing users with a unified view of these data, is being extensively studied for commercial and scientific purposes. Among the various concerns of integration, semantic integration is the most challenging problem; it addresses the resolution of semantic conflicts between heterogeneous data sources. Even if the data sources belong to a similar domain, the lack of commonality between database schemas and database instances can make the unified result of the integration inaccurate and difficult to validate. So, identifying the most significant or independent attributes of each data source and then providing a unified view of them is a challenge in the realm of heterogeneity, and demands proper analysis of each data source in order to obtain a comprehensive picture of its meaning and structure. The contribution of this paper is the realization of semantic integration of heterogeneous sources from a similar domain using QR decomposition, together with a bridging knowledge base. The independent attributes of each data source are found and integrated based on the similarity or correlation among them, forming a global view of all the data sources with the aid of a knowledge base. In the case of an incomplete knowledge base, we also formulate a recommendation strategy for the integration of the possible set of attributes. Experimental results show the feasibility of this approach on data sources from the same domain.
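As a rough illustration of the rank-revealing role QR decomposition can play here, the Gram-Schmidt sketch below (an assumed interpretation, not the paper's implementation; the function names and toy data are hypothetical) keeps only the attribute columns that contribute a new independent direction:

```python
# Gram-Schmidt orthogonalisation -- the core of QR decomposition -- used to
# pick out the (nearly) linearly independent attribute columns of a source.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def independent_columns(columns, tol=1e-8):
    """Return indices of columns that add a new direction (rank revealing)."""
    basis, kept = [], []
    for j, col in enumerate(columns):
        r = list(col)
        for q in basis:                      # subtract projections onto basis
            c = dot(r, q)
            r = [ri - c * qi for ri, qi in zip(r, q)]
        norm = dot(r, r) ** 0.5
        if norm > tol:                       # residual left => independent
            basis.append([ri / norm for ri in r])
            kept.append(j)
    return kept

# The third attribute is the sum of the first two, hence redundant:
cols = [[1, 0, 2], [0, 1, 3], [1, 1, 5]]
assert independent_columns(cols) == [0, 1]
```

A practical system would use a pivoted QR routine from a linear algebra library, but the redundancy test is the same: a column whose residual vanishes after projection carries no new information for the unified view.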

ISTA-02.2 Discovering Context Using Contextual Positional Regions Based on Chains of Frequent Terms in Text Documents
Anagha R Kulkarni (Cummins College of Engineering for Women, India); Vrinda Tokekar (Devi Ahilya University, India); Parag Kulkarni (College of Engineering Pune, India)

While assigning importance to terms in the Vector Space Model (VSM), weights are most of the time assigned to terms straightaway. This way of assigning importance fails to capture the positional influence of terms in the document. To capture this positional influence, the paper proposes an algorithm to create Contextual Positional Regions (CPRs), called Dynamic Partitioning of Text Documents with Chains of Frequent Terms (DynaPart-CFT). Based on the CPRs, a Contextual Positional Influence (CPI) is calculated, which helps improve the F-measure during text categorization. This novel way of assigning importance to terms is evaluated using three standard text datasets. The performance improvement comes at the expense of a small additional storage cost.

ISTA-02.3 Document Classification with Hierarchically Structured Dictionaries
Remya R.K. Menon (Amrita Vishwa Vidyapeetham, Amrita University & Amrita School of Engineering, Amritapuri, India); Aswathi P (Amrita Vishwa Vidyapeetham, India)

Classifying and clustering documents, detecting novel documents, and detecting emerging topics in a fast and efficient way are of high relevance these days, as the volume of online generated documents increases rapidly. Experiments have resulted in innovative algorithms, methods and frameworks to address these problems; one such method is dictionary learning. We introduce a new two-level hierarchical dictionary structure for classification, in which the dictionary at the higher level is used to classify the K classes of documents. The results show around 85% recall during the classification phase. This model can be extended to a distributed environment, where the higher-level dictionary is maintained at the master node and the lower-level ones are kept at the worker nodes.

ISTA-02.4 Ensemble Prefetching Through Classification Using Support Vector Machine
Chithra Gracia (National Institute of Technology & Tiruchirapalli, India); Selvaraj Sudha (National Institute of Technology, Tiruchirappalli, India)

Owing to the steadfast growth of Internet web objects and their multiple types, the latency incurred by clients retrieving a web document is perceived to be high. Web prefetching is a challenging yet achievable technique to reduce this perceived latency: it anticipates the objects that may be requested in the future, based on certain features, and fetches them into the cache before the actual request is made. To achieve a higher cache hit rate, group prefetching is preferable; accordingly, web objects are classified into groups using features such as relative popularity and time of request. Classification uses the Support Vector Machine learning approach, whose high classification rate yields effective grouping; once objects are classified, prefetching is performed. Experiments are carried out to study the prefetching performance of the Markov model, ART1, linear SVM and multiclass SVM approaches. Compared to the other techniques, maximum hit rates of 93.39% and 94.11% are attained with the OAO and OAA SVM multiclass approaches, respectively. The higher hit rate exhibited by the multiclass Support Vector Machine demonstrates the efficacy of the proposal.
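The one-against-all (OAA) decision rule behind the multiclass step can be illustrated with a toy sketch (the group names and feature choices are assumptions, and the linear weights are hand-set placeholders, not trained SVM parameters):

```python
# One-against-all (OAA) multiclass decision: one scoring function per
# object group, highest score wins. Weights below are placeholders.

def oaa_predict(classifiers, features):
    """classifiers: {label: (weights, bias)}; returns the argmax-score label."""
    score = lambda w, b: sum(wi * xi for wi, xi in zip(w, features)) + b
    return max(classifiers, key=lambda lbl: score(*classifiers[lbl]))

groups = {
    "hot":  ([2.0, 1.0], -1.0),    # popular, recently requested objects
    "warm": ([1.0, 0.5],  0.0),
    "cold": ([-2.0, -1.0], 1.0),
}
# features: (relative popularity, recency of request), both scaled to [0, 1]
assert oaa_predict(groups, [0.9, 0.8]) == "hot"
assert oaa_predict(groups, [0.05, 0.1]) == "cold"
```

With OAA, k binary classifiers cover k groups; the OAO variant instead trains one classifier per pair of groups and takes a majority vote, which is why the two schemes can reach slightly different hit rates.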

ISTA-02.5 Formal Architecture Based Design Analysis for Certifying SWS RTOS
Yalamati Ramoji Rao (Jawaharlal Nehru Technological University & National Aerospace Laboratories, India); Manju Nanda (Principal Scientist, India)

In recent times, formal techniques have been strongly recommended in the engineering life-cycle of safety-critical systems. The Architecture Analysis & Design Language (AADL) is a widely accepted architecture modeling language that can be combined with formal modeling techniques; it proficiently helps in the design of a safety-critical system and provides various analytical features for modeling hardware and software architectures against the requirements, per the guidelines set out in RTCA DO-178C (DO-333, Formal Methods Supplement). This paper discusses the use of an architecture modeling language along with formal techniques for the analysis of an RTOS architecture, which is important for the correct implementation of the given requirements. The architecture of the RTOS is expressed and analyzed using AADL. A suitable case study, the Stall Warning System/Aircraft Interface Computer (SWS/AIC) RTOS scheduler, is modeled and analyzed. The analysis results are mapped to the workflow prescribed in RTCA DO-178C for generating the certification artifacts and establishing the effectiveness of architecture-based design analysis in the software engineering process.

ISTA-02.6 Intelligent Distributed Economic Dispatch in Smart Grids
Meheli Basu and Raghuraman Mudumbai (University of Iowa, USA); Soura Dasgupta (The University of Iowa, USA)

This paper considers the optimal economic dispatch of power generators in a smart electric grid: allocating power between generators to meet load requirements at minimum total cost. We present a decentralized algorithm in which each generator independently adjusts its power output using only a measurement of the frequency deviation of the grid and minimal information exchange with its neighbors. Existing algorithms assume that the frequency deviation is proportional to the load imbalance; in practice this is seldom exactly correct. We assume here that the only thing known about this relationship is that it is an unknown, odd, strictly increasing function. We provide a proof of convergence and simulations verifying the efficacy of the algorithm.

ISTA-02.7 Requirement of New Media Features for Enhancing Online Shopping Experience of Smartphone Users
Anuja Koli (MIT Institute of Design, India); Anirban Chowdhury (UPES School of Design (SoD) & University of Petroleum and Energy Studies (UPES), India); Debayan Dhar (MIT Institute of Design, India)

Nowadays consumers use different social media (e.g. Facebook, WhatsApp) to share product quality information and take feedback about a product from their friends, family members, colleagues and others. The aim of this study was to find out the feasibility of implementing different media features in an e-retailing platform for taking feedback about a product, to enhance consumers' experience of online purchase. With this intention, a user survey was conducted using a standardized questionnaire covering users' demographic information, likeliness to share product information to get online feedback, probable acceptance of a future online purchase system having new media features, willingness to use the proposed online purchase system with new media features, and priority of specific media features for sharing product choice information online. The results of the present study suggest that people would rather share a product comparison screen than use the screen-share and voice-chat options on e-commerce websites. Users also preferred to get personalized reviews through the e-retailer's website itself. These preferred media features may therefore be integrated to create a better user experience on online purchase platforms, which in turn helps in quick product decisions during online purchase.

ISTA-02.8 Research and Development of Knowledge Based Intelligent Design System for Bearings Library Construction Using SOLIDWORKS API
Jayakiran Reddy Esanakula (Sri Padmavati Mahila Visvavidyalayam & Tirupati, India); Cnv Sridhar (Annamacharya Institute of Technology and Sciences, Rajampet, India); V Pandu Rangadu (JNTUA College of Engineering, India)

The traditional method of bearing design is mainly a manual process that involves numerous calculations. A small change in the shape or size of an assembly component can cause a massive chain reaction, such as revision of the blueprint, because of the many interrelated design issues; the bearing design then needs to be changed to match the altered component. Advanced design methods such as CAD/CAM provide solutions for these issues through parametric modeling techniques. This paper presents a typical knowledge-based engineering system for rapid design and modeling of bearings based on operating conditions, integrating the commercially available CAD package SolidWorks with Microsoft Access. An inference engine and a proper user interface were developed to assist engineering designers with bearing design. The developed system proved itself a better engineering application by enabling the reuse of design knowledge.

ISTA-02.9 Towards Development of National Health Data Warehouse for Knowledge Discovery
Shahidul Khan and Abu Sayed Md. Latiful Hoque (Bangladesh University of Engineering & Technology (BUET), Bangladesh)

Availability of accurate data on time is essential for medical decision making. Healthcare organizations own large amounts of data in various systems, and researchers, health care providers and patients will not be able to utilize the knowledge in these different stores unless the information from the disparate sources is integrated. Developing a health data warehouse is a complex process that consumes a significant amount of time, but it is essential for delivering quality health services. In this paper, the architecture of a data warehouse model and a development process suitable for integrating data from different healthcare sources are presented. We have developed a star schema suitable for a large data warehouse. Integrating health data requires rigorous preprocessing, and we have completed the preprocessing of national health data by applying efficient transformation techniques. Finally, the knowledge discovery potential of the data warehouse is presented with relevant examples.

ISTA-02.10 Gender Profiling From PhD Theses Using k-Nearest Neighbour and Sequential Minimal Optimisation
Hoshiladevi Ramnial and Shireen Panchoo (University of Technology, Mauritius); Sameerchand Pudaruth (University of Mauritius, Mauritius)

Author profiling is a subfield of text categorisation in which the aim is to predict some characteristics of a writer. In this paper, our objective is to determine the gender of an author based on their writings. Our corpus consists of 10 PhD theses, which were split into equal-sized segments of 1000, 5000 and 10000 words. From this corpus, a total of 446 features were extracted. Some new features, such as combined words, new word endings and new POS tags, were used in this study; the features were not separated into categories. Two machine learning classifiers, namely the k-nearest neighbour and support vector machine classifiers, were used to assess the practicability and utility of our study. We were able to achieve 100% accuracy using the sequential minimal optimisation (SMO) algorithm with 40 document parts. Surprisingly, the simple and lazy k-nearest neighbour (kNN) classifier, which is often discarded in gender profiling studies, achieved 98% accuracy on the same group of documents. Furthermore, 5-NN and 7-NN even outperformed SMO when using 400 document parts of 1000 words each. These values are much higher than those obtained in previous studies; however, ours is a new dataset, and the results are therefore not directly comparable. Thus, our experiments provide further evidence that it is possible to infer the gender of an author using a computational linguistic approach.
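The kNN step can be sketched in a few lines (a toy illustration only: the two features and the training vectors below are hypothetical stand-ins, not the paper's 446 stylometric features):

```python
# k-nearest-neighbour classification of document segments represented as
# small stylometric vectors, e.g. (average word length, function-word rate).

from collections import Counter

def knn_predict(train, query, k=3):
    """Majority label among the k nearest training vectors (Euclidean)."""
    dist = lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5
    nearest = sorted(train, key=lambda ex: dist(ex[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

train = [([4.1, 0.52], "female"), ([4.0, 0.55], "female"),
         ([4.8, 0.40], "male"), ([4.9, 0.38], "male"), ([4.7, 0.41], "male")]
assert knn_predict(train, [4.75, 0.39], k=3) == "male"
```

The "lazy" label comes from the absence of a training phase: all work happens at query time, which is also why kNN scales poorly but remains a strong baseline on small corpora like this one.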

ISTA-02.11 An Intelligent Model for Privacy Preserving Data Mining: Study on Health Care Data
Jisha Jose Panackal (Vidya Academy of Science & Technology, India); Anitha S Pillai (Hindustan University, Chennai, India)

A critical challenge in developing a privacy protection mechanism is to preserve maximum information, because protection mechanisms normally impact the quality of the data and thus do not serve data utility well. Practical solutions addressing various socio-economic needs with special emphasis on the utility of data have not yet been devised. To publish maximum information while protecting privacy, we propose an intelligent mechanism; this paper includes a comprehensive study and explores how effectively the privacy of individuals can be protected with minimum information loss. Empirical evaluations on original health care data related to the Indian population show the effectiveness of the new approach, namely Adaptive Utility-based Anonymization (AUA).

ISTA-02.12 Bridging the gap between Users and Recommender Systems: A change in perspective to User Profiling
Monika Singh (Jamia Millia Islamia, India); Monica Mehrotra (Jamia Millia Islamia, Delhi, India)

One of the prevalent research challenges in the field of recommender systems is better user profiling, and some advanced user profiling techniques are found in the literature to achieve it. User profiling aims to understand the user well and, as a result, recommend the most relevant items to the user, where relevant means items returned by intelligent techniques from various fields, mainly data mining. This work is an attempt to answer the question 'who understands a user the most?' The three obvious answers are the recommender system's high-end approaches (e.g. data mining and statistical approaches), the neighbors of the user, or the user herself. The correct answer is the last one: a user knows herself best. In this direction, we propose to make users empowered and responsible for registering their preferences and sharing them at their discretion. More personalized solutions can be offered when a user states what she prefers and can contribute explicitly to the generation of the recommender system's results. When a user is given the means to communicate her preferences to the recommender system, more personalized recommendations can be given, which not only are relevant (as tested by sophisticated evaluation metrics for recommender systems) but also work wonders for users' satisfaction.

ISTA-02.13 Efficient User Profiling in Twitter Social Network Using Traditional Classifiers
Raghuram M A, Akshay K and Chandrasekaran K (National Institute of Technology Karnataka, India)

Any discussion on social media can be fruitful if the people involved in the discussion are related to the field. Similarly, to advertise an event we must find the users who are interested in its content. Since social networks like Twitter contain a large number of users, categorizing those users based on their interests helps this cause. In this paper we present an efficient supervised machine learning approach which categorizes Twitter users, based on three important kinds of features (tweet-based, user-based and time-series based), into six categories: politics, entertainment, entrepreneurship, journalism, science & technology and healthcare. We compare the proposed feature set with different traditional classifiers such as Support Vector Machines and Naive Bayes and obtain up to 89.82% accuracy in classification. We also propose a design for a real-time system for Twitter user profiling, along with a prototype implementation.

ISTA-02.14 Fuzzy Differential Evolution based Gateway Placements in WMN for Cost Optimization
Merlin Sheeba (Sathyabama University, India)

The mesh node placement problem is one of the major design issues in Wireless Mesh Networks (WMNs). Mesh networking is a cost-effective solution for broadband Internet connectivity, with the gateway being one of the active devices in the backbone network supplying Internet service to users; multiple gateways are needed for high-density networks. The budget and the time to set up these networks are important parameters to consider. Given the number of gateways and routers, along with the number of clients in the service area, an optimization problem is formulated such that the installation cost is minimized while satisfying the QoS constraints. In this paper a traffic-weight algorithm is used for the placement of gateways based on traffic demand. A cost minimization model is proposed and evaluated using three global optimization search algorithms: Simulated Annealing (SA), Differential Evolution (DE) and Fuzzy DE (FDE). The simulation results show that the FDE method achieves the best minimum compared with the other two algorithms.
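A generic DE/rand/1/bin loop of the kind FDE builds on can be sketched as follows (the paper's fuzzy parameter control and installation-cost model are not reproduced; the sphere function below is a stand-in objective, and all parameter values are illustrative):

```python
# Differential Evolution (DE/rand/1/bin) minimizing a stand-in cost function.

import random

def differential_evolution(cost, bounds, pop_size=20, F=0.6, CR=0.9,
                           generations=200, seed=1):
    rng = random.Random(seed)
    dim = len(bounds)
    clip = lambda x, lo, hi: min(max(x, lo), hi)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        for i in range(pop_size):
            # mutation: perturb a random base vector with a scaled difference
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            jr = rng.randrange(dim)          # dimension with forced crossover
            trial = [clip(a[d] + F * (b[d] - c[d]), *bounds[d])
                     if (rng.random() < CR or d == jr) else pop[i][d]
                     for d in range(dim)]
            if cost(trial) <= cost(pop[i]):  # greedy one-to-one selection
                pop[i] = trial
    return min(pop, key=cost)

sphere = lambda x: sum(v * v for v in x)     # placeholder installation cost
best = differential_evolution(sphere, [(-5.0, 5.0)] * 3)
assert sphere(best) < 1e-2
```

The fuzzy variant typically replaces the fixed F and CR with values adapted per generation by a fuzzy controller, which is one plausible reason FDE can reach a lower minimum than plain DE on the same cost model.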

ISTA-02.15 Prediction of Urban Sprawl Using Remote Sensing, GIS and Multi-layer Perceptron for the City Jaipur
Pushpendra Singh Sisodia (Poornima College of Engineering, India); Vivekanand Tiwari and Anil Kumar Dahiya (Manipal University Jaipur, India)

The population of India rapidly increased from 68.33 crore (683.3 million) in 1981 to 121.01 crore (1,210.1 million) in 2011, and it is estimated that by the year 2028 India will hold the largest population in the world. This prompt upsurge of the Indian population will force people to migrate from rural areas to the mega cities to avail themselves of basic amenities. The enormous migration will increase the demand for more living space in the mega cities and will lead to unauthorized, unplanned, uncoordinated and uncontrolled growth, a condition called urban sprawl. The key challenge for a planner is to achieve sustainable development and to predict future urban sprawl in the city. Unfortunately, conventional techniques that predict urban sprawl are expensive and time consuming. In this paper, we propose a novel technique to predict future urban sprawl, using an integrated approach of remote sensing, GIS and a multilayer perceptron to predict the urban sprawl of the city of Jaipur up to 2021. We compared our results with existing techniques such as linear regression and Gaussian processes and found that the multilayer perceptron gives better results.

ISTA-02.16 Hyperspectral Image Denoising Using Legendre-Fenchel Transform for Improved Sparsity Based Classification
Nikhila Haridas and Aswathy C (Amrita Vishwa Vidyapeetham, India); V Sowmya (Amrita Vishwavidyapeetham, India); Soman K P (Amrita Vishwa Vidyapeetham, India)

A significant challenge in hyperspectral remote sensing image analysis is the presence of noise, which has a negative impact on data analysis methods such as image classification, target detection and unmixing. To address this issue, hyperspectral image denoising is used as a preprocessing step prior to classification. This paper presents an effective, fast and reliable method for denoising hyperspectral images, followed by classification based on a sparse representation of the hyperspectral data. The use of the Legendre-Fenchel transform for denoising is an effective spatial preprocessing step to improve classification accuracy; its main advantage is that it removes the noise in the image while preserving sharp edges. The sparsity-based algorithm Orthogonal Matching Pursuit (OMP) is used for classification. The experiment is performed on the Indian Pines data set acquired by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor. It is inferred that denoising hyperspectral images before classification improves the overall classification accuracy, and a statistical comparison of the accuracies obtained on standard hyperspectral data before and after denoising is analysed to show the effectiveness of the proposed method. The experimental result analysis shows that, for a 10% training set, the proposed method improves the Overall Accuracy from 83.18% to 91.06%, the Average Accuracy from 86.17% to 92.78% and the Kappa coefficient from 0.8079 to 0.8981.

S23: S23-Security, Trust and Privacy- III/Wireless Communication-I

Room: 309
Chair: Monish Chatterjee (Asansol Engineering College & West Bengal University of Technology, India)
S23.1 Ciphertext Policy-Hiding Attribute-Based Encryption
Umesh Yadav (Nit Kurukshetra, India); Syed Taqi Ali (Visvesvaraya National Institute of Technology, Nagpur, India)

Ciphertext-policy attribute-based encryption (CP-ABE) is becoming very important in distributed computing environments, because it makes it easier to protect, broadcast and control access to information, especially on a cloud server. In CP-ABE, every plaintext is encrypted under an access structure defined on the users' attributes, and users are given private keys in advance by a trusted and reliable authority. Only if a user's attributes satisfy the access structure can the user decrypt the ciphertext using his/her private keys. However, there is a privacy issue in the available CP-ABE schemes: the owner sends the access structure along with the ciphertext, so everyone can learn the access policy. Sometimes this access structure violates the privacy of the ciphertext, i.e. it reveals the type of information the ciphertext contains, and sometimes it reveals partial information about the decryptor's identity. It is desirable to hide the access structure associated with the ciphertext, especially in medical-database cloud-server applications. Previous CP-ABE schemes hide only the plaintext, through encryption; the few CP-ABE schemes which also hide the access structure (policy) are not efficient in terms of computation. We propose a secure CP-ABE scheme using composite-order bilinear groups which hides the access policy.

S23.2 A Novel 3-4 Image Secret Sharing Scheme
Shyamalendu Kandar (Indian Institute of Engineering Science and Technology); Rituraj Roy and Sayantani Bandyopadhyay (Indian Institute of Engineering Science and Technology, India); Bibhas Chandra Dhara (Jadavpur University, India)

Image secret sharing divides a secret image into a number (say n) of pieces called shadow images in such a way that any threshold number (say k) of shares can reconstruct the original image. In this paper we propose a (3, 4) image secret sharing scheme where shares are generated using the concept of visual cryptography with 2 × 2 pixel-grid construction. The pixel grids are scrambled using a pseudo-random sequence for better security. The proposed method requires less mathematical calculation than some existing techniques and does not need to take the size of the original image into account in the reconstruction phase.

S23.3 Channel Coding Performance of Optical MIMO Indoor Visible Light Communication
Mahesh Kumar Jha (CMR Institute of Technology, Bengaluru, India); Anusha Addanki (Amrita Vishwa Vidyapeetham (University), India); Lakshmi Yamujala (Centre for Development of Telematics, India); Navin Kumar (Amrita University & School of Engineering, India)

In this paper, visible light communication (VLC) in an indoor environment based on a MIMO system is investigated and analyzed. An attempt is made to optimize the placement of light-emitting diode (LED) VLC emitters in a 4 × 4 MIMO configuration at the ceiling, with the receiver at floor level, to obtain a uniform signal-to-noise ratio (SNR) for the desired bit error rate (BER) over the given area. The system is also optimized for the transmit power needed from the LED emitters for the maximum data rate. Two channel coding techniques, repetition code (RC) and spatial modulation (SM), are analyzed, and their BER and data rate performance results are presented for different receiver spacings as the receiver moves over the entire area. The results show that RC gives an almost constant SNR, whereas SM offers better SNR for larger spacing between receivers. In the case of SM, only one transmitter transmits at a time, so the SNR is very low at the other corners of the room.

S23.4 A Mixed Strategy Game Theoretic Approach to Dynamic Load Balancing in Cellular Networks
Anshul Mittal and Manish Kumar Sharma (Chandigarh University, India)

The amalgamation of cellular networks provided by different service providers will help provide high-end connectivity to mobile users and increase overall QoS. The unification of different mobile networks will also give rise to problems such as network congestion, so load balancing will become a crucial process to avoid congestion and performance degradation in future cellular networks. The proposed work studies the dynamics of network selection in heterogeneous cellular networks using game theory. The competition among users to share the limited available bandwidth, and between service providers to generate maximum revenue by providing the appropriate QoS to each user as per their requirement, is modeled as a game; Nash equilibrium is taken as the solution to this game, with user satisfaction as the primary motive. Simulation results show that the proposed technique reduces the cumulative load and the resources used across the whole network, which enables the service provider to serve more users efficiently.
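
To illustrate the solution concept used above: for a 2 × 2 bimatrix game, an interior mixed-strategy Nash equilibrium follows directly from the indifference conditions (each player mixes so the opponent is indifferent between actions). This is a textbook illustration, not the paper's network-selection game.

```python
def mixed_equilibrium_2x2(A, B):
    """Interior mixed-strategy Nash equilibrium of a 2x2 bimatrix game.
    A is the row player's payoff matrix, B the column player's.
    Row plays action 0 with probability p chosen so that the column
    player's two actions give equal expected payoff, and vice versa."""
    # p * B[0][0] + (1-p) * B[1][0] == p * B[0][1] + (1-p) * B[1][1]
    p = (B[1][1] - B[1][0]) / (B[0][0] - B[0][1] - B[1][0] + B[1][1])
    # q * A[0][0] + (1-q) * A[0][1] == q * A[1][0] + (1-q) * A[1][1]
    q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
    return p, q
```

For matching pennies this yields the familiar (0.5, 0.5) equilibrium.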

S23.5 Design of Robust Adaptive Controller for a Four Wheel Omnidirectional Mobile Robot
Veer Alakshendra (Visvesvaraya National Institute of Technology, Nagpur, India); Shital Chiddarwar (Visvesvaraya National Institute of Technology Nagpur, India)

This paper presents a methodology for designing an adaptive and robust sliding mode controller for a four-wheel omnidirectional mobile robot (FWOMR). First, the kinematic and dynamic equations are derived, considering friction forces and viscous damping effects. In the presence of unknown uncertainties, the FWOMR is unable to track the desired trajectory. To tackle this problem, an adaptive integral sliding mode controller is designed. The controller consists of two control laws: one to make the system robust and another to estimate the unknown uncertainties. Simulations are conducted with varying magnitudes of uncertainty to show the effectiveness of the proposed controller on the FWOMR.

S23.6 A High-Gain 2-Element Microstrip Array Antenna with Circular Polarisation for RFID Applications
Bhushan Bhimrao Dhengale and Deepak C. Karia (University of Mumbai, India)

Radio Frequency Identification (RFID) is a wireless technology used for tracking a tag attached to an object and uniquely identifying it. In the proposed work, a compact microstrip antenna with high gain, good return loss (S11), good impedance bandwidth and very good circular polarisation, working in the 2.4 GHz ISM band for RFID applications, has been designed. The antenna has been designed and simulated with fire-resistant FR4 epoxy as its substrate. The antenna is an array of 2 elements, has overall dimensions of 120 x 73 x 1.5 mm3, and has a microstrip line feed matched to 50 Ω. The impedance bandwidth achieved by the design is about 92 MHz (2.44-2.53 GHz) in the 2.4 GHz ISM band, with an S11 of about -16.10 dB. The gain obtained is around 7.5 dB. The final design has very good circular polarisation (CP), with an axial ratio of 0.8 dB and an axial-ratio bandwidth of 47 MHz (2.430-2.477 GHz). The antenna has a beamwidth of 60°, a broadside radiation pattern, and is highly directional. The antenna has been designed using the Ansoft High Frequency Structure Simulator (HFSS).

S23.7 Resource Aware Traffic Grooming with Shared Protection at Connection in WDM Mesh Networks
Asima Bhattacharya (Mara-Ison Information Services Pvt. Ltd., India); Marichi Agarwal (TCS Research, India); Sana Tabassum (Asansol Engineering College, India); Monish Chatterjee (Asansol Engineering College & West Bengal University of Technology, India)

Traffic grooming schemes have attracted considerable research attention for addressing the bandwidth mismatch between low data-rate user connection requests and high-capacity lightpaths. Survivability of individual user connections is also highly important. The problem of survivable traffic grooming is known to be NP-Complete. In this paper we propose heuristics that solve the problem in WDM mesh networks efficiently in polynomial time. For survivability, we employ shared protection at the connection level. We propose three resource-aware static traffic grooming schemes: MLWH (Minimizing Lightpaths, Wavelengths and traffic Hops), DLCR (Decreasing Link Capacity Required) and ESCU (Efficient Spare Capacity Utilization); the second and third are enhancements of the first. To the best of our knowledge, no other recent paper has addressed, together and in polynomial time, the issues of minimizing the use of lightpaths, wavelengths and traffic hops, decreasing the required link capacity, and improving the spare capacity utilization of the existing logical topology for a given set of connection requests. Performance comparisons show that our strategies can provide significant benefits.

S23.8 Resource sharing in D2D Communication Underlaying Cellular LTE-A Networks
Ajay Pratap (Indian Institute of Technology (BHU) Varanasi, India); Rajiv Misra (Indian Institute of Technology Patna & IIT Patna, India)

Device-to-Device (D2D) communication not only magnifies system capacity but also exploits the physical proximity of communicating devices to support services such as proximity services and traffic offloading from the eNB (evolved Node-B). D2D communication enhances resource efficiency and reduces the traffic load on the eNB in LTE-A networks. With D2D users and conventional cellular users coexisting in the network, efficient resource sharing among devices becomes a challenging task; the optimal resource allocation problem is NP-hard. To maximize the total utility and resource efficiency of devices, we use an interference graph model to capture the interference scenario among devices in cellular LTE-A networks: devices are represented as vertices, and edges are formed between interfering devices. We then apply a graph coloring based model to solve the resource allocation problem in D2D communication underlaying cellular LTE-A networks. The proposed algorithmic model gives better results than a greedy orthogonal resource assignment scheme and achieves performance approaching that of the optimal resource assignment scheme, with a feasible complexity of O(KN^4), where K is the total number of RBs and N is the number of devices in the cellular LTE-A network.
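
The coloring step described above can be illustrated with a simple greedy coloring of the interference graph, where interfering devices (edges) must receive different resource blocks. This is a sketch of the general idea only; the paper's actual coloring heuristic and utility maximization are not specified in the abstract.

```python
def assign_resource_blocks(n_devices, interference_edges, n_rbs):
    """Greedy graph coloring: each device gets the lowest-numbered
    resource block (color) not used by any interfering neighbor."""
    adj = {v: set() for v in range(n_devices)}
    for u, v in interference_edges:
        adj[u].add(v)
        adj[v].add(u)
    rb = {}
    for v in range(n_devices):  # ordering by degree would tighten the coloring
        used = {rb[u] for u in adj[v] if u in rb}
        free = next((c for c in range(n_rbs) if c not in used), None)
        if free is None:
            return None  # not enough RBs to avoid all interference
        rb[v] = free
    return rb
```

Three mutually interfering devices need three distinct resource blocks; with only two available, the greedy assignment fails.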

S23.9 A Survey based Analysis of Propagation Models Over the Sea
Sumayya Balkees (Amrita Viswa Vidyapeetham, India); Kalyan Sasidhar (DAIICT, India); Sethuraman N Rao (Amrita Vishwa Vidyapeetham, India)

The design of a communication architecture over a sea environment is challenging, since it requires careful analysis of the different factors that affect a signal, such as multipath propagation, antenna characteristics and mobility. Although existing work has addressed a few of these factors, no work provides a complete analysis of path loss incorporating such factors and how they affect the received signal level. To get a clear picture of the work proposed in the literature, we present a survey article that gathers information about the various propagation models used and proposed for the maritime environment. We analyze each work based on its simulation setup, the number of factors considered, the error between simulated and experimental data, and so on. We believe this article will help identify the current state of simulation and analysis of communication links over the sea environment.

S23.10 Novel UWB Swastika slot antenna with concentric circular slots and modified ground plane with inverted L-shaped slots
Seera Dileep Raju (Indian Institute of Technology Hyderabad & IBM, India); Bojja Haranath and Lakhan Panwar (IIT Hyderabad, India)

Ultra Wide Band (UWB) technology has become a key enabler for high-rate, short-range information transfer. In this paper, we analyze different slots on a microstrip patch antenna and propose four modified ultra-wideband Swastika slot antennas. Both the ground plane and the radiating patch are modified in our proposed antennas. In the first three designs (antenna designs 1, 2 and 3), the radiating patch is modified with concentric circular slots of different dimensions, while in antenna design 4, two inverted L-shaped slots on the ground plane are used to achieve enhanced bandwidth and reduced return loss. All the proposed antennas are of compact size, with dimensions of 24 mm x 24 mm, and work in the UWB range (3.1 GHz to 10.6 GHz). Antenna parameters such as bandwidth, return loss, radiation pattern and impedance are analyzed and discussed. The simulations are done using the High Frequency Structure Simulator (HFSS) tool.

S23.11 A Comparative Study of different notches for WLAN rejection in a planar UWB Antenna
Divyanshu Upadhyay and Indranil Acharya (Vellore Institute of Technology, India)

In this paper, a planar ultra-wideband (UWB) antenna is analyzed, and different methods of creating notches are adopted for efficient WLAN frequency rejection. The antenna is mounted on a dielectric substrate with a dielectric constant of 3.38. The antenna has a bandwidth of 7.22 GHz and a gain of 1.13 dB. Three different novel approaches to creating notches are presented, followed by a comparative study. Efficient stop-bands can be obtained in the 5-6 GHz frequency range, thereby rejecting the WLAN frequency. The antenna exhibits stable radiation patterns. All the analysis is done in HFSS 2013.

S23.12 An Experimental Evaluation of Impact of Synchronization on GSM Network Performance
Manjunath RK (Visvesvaraya Technological University & RV College of Engineering, India); Nagabhushan Raju K (Sri Krishnadevaraya University, India)

Many articles have been published on TDM SDH backhaul synchronization and a few on GSM network synchronization, but very few report experimental trials conducted to show the impact of clock quality and synchronization on GSM network performance, specifically with regard to network hygiene and call quality (call drops and handovers). This paper presents an experimental trial conducted in India in a GSM network managed by a service provider, to assess the impact of the sync clock quality of a BSC (Base Station Controller) on GSM network performance. Various methods that can be adopted for evaluating GSM TDM network synchronization performance, such as clock quality, alarms and DAC word, are also presented. Finally, the impact of clock synchronization on some GSM network RF (radio frequency) KPIs (key performance indicators), such as Handover Success Rate (HOSR), Traffic Channel Success Rate (TCHSR) and Standalone Dedicated Control Channel Completion Rate (SDCCHCR), and on the reduction of synchronization failure alarms to improve network hygiene, is presented. In addition, various issues regarding GSM network synchronization are discussed.

S23.13 Anonymizing Classification Data for Preserving Privacy
Sarat Chettri (Assam Don Bosco University, India); B Borah (Tezpur University, India)

Classification of data with privacy preservation is a fundamental problem in privacy-preserving data mining. The privacy goal requires concealing sensitive information that may identify certain individuals, breaching their privacy, whereas the classification goal requires classifying the data accurately. One way to achieve both is to anonymize the dataset containing the sensitive information of individuals before releasing it for data analysis. Microaggregation is an efficient privacy preservation technique used by both the statistical disclosure control community and the data mining community to anonymize a dataset. It naturally satisfies k-anonymity without resorting to generalisation or suppression of data. In this paper we propose a new method named Microaggregation based Classification Tree (MiCT). In the MiCT method, data are perturbed prior to classification, and tree properties are used to achieve privacy-preserving classification. To evaluate the effectiveness of the proposed method we conducted extensive experiments on real-life data and show that our method provides improved classification accuracy while preserving privacy.
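
Microaggregation itself, the anonymization step MiCT builds on, is easy to sketch: sort records, group them k at a time, and replace each record with its group centroid, so every released record coincides with at least k-1 others. This is a generic sort-by-first-attribute sketch, not the MiCT algorithm.

```python
import numpy as np

def microaggregate(X, k):
    """Microaggregation sketch: sort records by their first attribute,
    form groups of k consecutive records, and replace each record with
    its group centroid. Each released value then occurs >= k times,
    which is what gives k-anonymity on these attributes."""
    X = np.asarray(X, dtype=float)
    order = np.argsort(X[:, 0])
    out = np.empty_like(X)
    for start in range(0, len(X), k):
        grp = order[start:start + k]
        if len(grp) < k:          # fold a short tail into the previous group
            grp = order[start - k:]
        out[grp] = X[grp].mean(axis=0)
    return out
```

Note that the overall mean is preserved while individual values are blurred, which is why microaggregated data remain usable for classification.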

Wednesday, August 12 14:30 - 19:10 (Asia/Kolkata)

S22: S22-Cloud, Cluster, Grid and P2P Computing

Room: 310
Chairs: Sudheep Elayidom M (Cochin University (CUSAT), India), Aaradhana Arvind Deshmukh (Aalborg University Denamrk & University of Pune India, India)
S22.1 Cascket: A Binary protocol based C client-driver for Apache Cassandra
Sarang Karpate (College of Engineering, Pune, India); Abhishek Joshi (Carnegie Mellon University, USA); Javed Dosani and Jibi Abraham (College of Engineering, Pune, India)

Few can deny the diverse applications of NoSQL database technology today. The last decade has witnessed an exponential rise not only in data generation but also in the manipulation and management of that data. NoSQL technologies have provided a new platform for large enterprises and a wide vista for data manipulation, and existing technologies have been augmented with the unique features this technology offers. Apache Cassandra is one of the leading NoSQL databases today, and its rise has given rise to corresponding high-level client drivers. Several drivers exist today for interfacing Cassandra with languages such as PHP, Java and C++. However, to the best of our knowledge, no such driver for C exists. This paper proposes such a driver, 'Cascket', which allows C programs to communicate with Cassandra directly, without middleware.

S22.2 Pre-processing of Evidence from Cloud Components for Effective Forensic Analysis
Saibharath S (BITS Pilani Hyderabad Campus, India); Geethakumari G (BITS-Pilani, Hyderabad Campus, India)

Business organizations are migrating from capital-expenditure models to the pay-per-use model of cloud computing, avoiding infrastructural costs. Since cloud systems are prone to attacks, cyber forensic mechanisms are needed. Traditional digital forensics models and solutions cannot be applied directly to the cloud platform due to its distinct features such as multi-tenancy, virtualization, rapid elasticity and the segregation of duties among cloud actors. Several technical challenges exist in cloud forensics concerning the variability of architectures, data collection, analysis and anti-forensics. In this paper, we first propose a cloud forensic clustering model across multiple virtual machine instances, where every virtual machine consists of a virtual machine disk and its corresponding RAM image. This forensic clustering solution reduces the search space, enables multi-drive correlation and forms a social network of virtual machine instances. Secondly, addressing the variability of cloud architectures, the open-source cloud platforms OpenNebula and OpenStack are compared with respect to the location of evidence artifacts. An acquisition approach with a pre-processing engine to handle different architectures is designed and implemented.

S22.3 Co-operation Based Game Theoretic Approach for Resource Bargaining in Cloud Computing Environment
Gopal Shyam (REVA ITM, India); Sunilkumar S. Manvi (REVA Institute of Technology and Mgmt, India)

In cloud computing, a service provider (SP) must manage its resources fairly to ensure that the cost of providing resources is low, energy consumption is minimized and resource utilization is maximized. Service users (SUs) and SPs in the cloud have potentially conflicting interests: the SU prefers reliable resources at minimum cost for job execution, whereas the SP prefers efficient utilization of resources with maximum profit and optimal energy consumption. Hence a resource bargaining mechanism is needed that assures job execution and satisfies the needs of both SUs and SPs. This paper proposes a cooperation-based resource bargaining scheme using a game-theoretic approach to manage cost, energy consumption and resource utilization. The bargaining process is applied through different strategies based on the status of the SU and SP. The scheme is simulated to evaluate parameters such as job submission, bargaining steps, expected payoffs and job execution. We observe that the proposed scheme performs better than its non-cooperative version.

S22.4 Client-Side Verifiable Accounting in Infrastructure Cloud
Varun Bhardwaj (The LNMIIT, India); Anamika Sharma (Capgemini India Pvt. Ltd., India); Gaurav Somani (Central University of Rajasthan, India)

Cloud computing is a novel computing paradigm: a pay-as-you-go model with on-demand resource provisioning. Infrastructure clouds provide many important features such as elasticity, scalability, better hardware utilization and, most importantly, better return on investment (ROI). Resource metering and accounting play an important role in infrastructure cloud deployment models. So far, all metering and accounting solutions for clouds are either purely provider-side or based upon third-party verification. Moreover, the available solutions cannot be considered pure pay-as-you-go, since the consumer is billed hourly for statically allocated resources. In this work, we have implemented a novel resource accounting model which addresses these two problems through two novel features. First, our model implements client-side verifiable accounting, which meters resources on the consumer-side virtual machine to verify the provider's resource accounting. Second, our model implements pure pay-as-you-go accounting, based on CPU program counters with low overhead and on the proc filesystem for I/O and network accounting.

S22.5 A Stackelberg game to incentivize Cooperation in BitTorrent
Chintan Doshi and Chandrasekaran K (National Institute of Technology Karnataka, India)

This paper presents a Stackelberg game between the tracker and the peers of a torrent in a BitTorrent community, which incentivizes cooperation among peers. We propose a change in the tracker's allocation of peers to a peer, making the allocation algorithm selective rather than completely random. With this change, the role of the tracker in a BitTorrent community is promoted from a mere point of contact among peers to a moderator of cooperation among connected peers. As leechers in BitTorrent face a conflict between their eagerness to download and their unwillingness to upload, we mitigate this selfish behavior by rewarding peers with a high upload-download ratio with more peers to connect to, and punishing selfish peers who do not contribute more than a threshold value by limiting the number of peers allocated to them. We use a game-theoretic model to prove that a dominant strategy equilibrium exists in such a game and that the strategy achieving this equilibrium is for each peer to cooperate. We further simulate the suggested incentive mechanism using Network Simulator 2.29 and demonstrate the effectiveness of our results.
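
The tracker-side incentive can be sketched as a simple allocation rule that grows with a peer's upload/download ratio and caps peers below the contribution threshold. All parameter names and values here (base, threshold, bonus, cap) are hypothetical; the abstract does not give the paper's exact allocation function.

```python
def peers_to_allocate(ratio, base=10, threshold=0.5, bonus=2, cap=50):
    """Hypothetical tracker allocation rule: peers below the
    upload/download threshold get a reduced allocation, peers above it
    get a bonus proportional to how far they exceed it, up to a cap."""
    if ratio < threshold:
        # Punish selfish peers: allocation shrinks with the ratio.
        return max(1, int(base * ratio / threshold))
    # Reward cooperative peers: more connections, capped at `cap`.
    return min(cap, base + int(bonus * (ratio - threshold) * base))
```

Under any such monotone rule, uploading more strictly increases a peer's future download opportunities, which is the mechanism that makes cooperation the dominant strategy.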

S22.6 CAB: Cloudlets as Agents of Cloud Brokers
Chhabi Rani Panigrahi (Rama Devi Women's University, Bhubaneswar, India); Mayank Tiwary (C. V. Raman College of Engineering, India); Bibudhendu Pati (Rama Devi Women's University, India); Rachita Misra (C. V. Raman College of Engineering, Bhubaneswar, India)

Cloud computing provides a variety of services to users and charges them based on usage. The cyber foraging concept allows users to offload computationally intensive services to the nearest data centers for instant results, but Internet WAN latency is a major constraint on obtaining such results. The development of cloudlets in recent years allows cloud users to offload computationally intensive operations to cloudlets, which connect to users through LAN or Wi-Fi and are generally deployed based on geographical area and user density. Cloud brokers act as an intermediate layer between the cloud and the users: they select the best set of services from different cloud providers and offer these services to the users. In this work, we propose a cloudlet broker architecture and assume that cloud brokers focus on providing, through cloudlets, computationally intensive services or services with strict time deadlines specified in the Service Level Agreement (SLA). The cloud brokers deploy cloudlets based on user requirements for computationally intensive services or strict SLAs. A pricing scheme and an algorithm to optimize the profit for cloud brokers are also proposed. Simulation results indicate that the brokers' profit increases when the number of cloudlets in an area increases or when the processing and memory resources of a cloudlet increase, which also yields faster execution of the services requested by users.

S22.7 A Neural Data Security Model: Ensure High Confidentiality and Security in Cloud Data Storage Environment
Jegadeeswari S, Dinadayalan P and Gnanambigai N (Pondicherry University, India)

Cloud computing is a computing paradigm which provides a dynamic environment for end users and guarantees Quality of Service (QoS), particularly confidentiality of outsourced data. Confidentiality concerns accessing a set of information from a cloud database at a high security level. This research proposes a new cloud data security model, the Neural Data Security Model, to ensure high confidentiality and security in a cloud data storage environment. The model comprises a Dynamic Hashing Fragmented Component and a Feedback Neural Data Security Component. The data security component encrypts sensitive data using the RSA algorithm to increase the confidentiality level, and the fragmented sensitive data are stored using dynamic hashing. The Feedback Neural Data Security Component encrypts and decrypts the sensitive data using a feedback neural network, deployed with the RSA security algorithm. This work is efficient and effective for all kinds of user queries. The performance of this work is better than conventional cloud data security models, as it achieves a high data confidentiality level.

S22.8 A New Hybrid Scheduling in Cloud Environment
Komal Goyal and Arvinder Kaur (Chandigarh University, India)

Cloud computing is a technology that provides anytime-anywhere access to user applications irrespective of location, including access from machines other than the native machine or other remotely connected devices. The cloud computing environment handles server costs and manages software updates. Every user has specific Quality of Service requirements in terms of cost and response time. In the cloud, available resources may be limited while incoming user requests are unlimited, which makes resource scheduling a challenging process. In this paper, we propose a combinational scheduling technique based on the Haizea and Condor scheduling algorithms. The proposed technique attempts to improve the scheduling parameters of both schedulers to maximize resource utilization and improve job turnaround times. Simulation results show the effectiveness of the proposed technique compared to existing schemes.

S22.9 Shamir's Key based Confidentiality on Cloud Data Storage
Kamal Raj, Bala Murugan, Jegadeeswari S and Sugumaran M (Pondicherry University, India)

Cloud computing is a flexible, cost-effective and proven delivery platform for providing business or consumer services over the Internet. Cloud computing supports distributed services over the Internet as a service-oriented, multi-user, multi-domain administrative infrastructure; hence, it is more easily affected by security threats and vulnerabilities. Cloud computing is a paradigm that provides a dynamic environment for end users and guarantees Quality of Service (QoS) with respect to data confidentiality. To generate efficient privacy-preserving query plans on cloud data, a Shamir's Key Distribution based Confidentiality (SKDC) scheme is presented. The key distribution achieves a higher level of confidentiality by encoding the cloud data with polynomial interpolation. The SKDC scheme creates a polynomial with the secret as the first coefficient and the remaining coefficients picked at random, improving the privacy-preserving level of the cloud infrastructure; the parameter k is hidden from public cloud users to improve the confidentiality rate. Shamir's key distribution also supports batch auditing, where multiple user requests for data auditing are handled concurrently at a high confidentiality rate.
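
The polynomial construction underlying SKDC is the standard Shamir (k, n) threshold scheme: the secret is the constant term of a random degree-(k-1) polynomial over a prime field, shares are points on it, and any k shares recover the secret by Lagrange interpolation at x = 0. A minimal sketch (illustrative only, not the SKDC scheme itself):

```python
import random

def make_shares(secret, k, n, prime=2**61 - 1):
    """Shamir's (k, n) scheme: the secret is the constant term of a
    random degree-(k-1) polynomial over GF(prime); shares are points."""
    coeffs = [secret] + [random.randrange(prime) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, prime) for i, c in enumerate(coeffs)) % prime
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares, prime=2**61 - 1):
    """Lagrange interpolation at x = 0 from any k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % prime
                den = den * (xi - xj) % prime
        # pow(den, -1, prime) is the modular inverse (Python 3.8+).
        secret = (secret + yi * num * pow(den, -1, prime)) % prime
    return secret
```

Any k of the n shares reconstruct the secret, while any k-1 reveal nothing about it, which is what supplies the confidentiality guarantee.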

S22.10 A Novel Secure Cloud Storage Architecture Combining Proof of Retrievability and Revocation
Deepnarayan Tiwari (Institute for Development and Research in Banking Technology & Central University of Hyderabad, India); Gr Gangadharan (IDRBT, India)

The outsourcing of data to the cloud inherently requires a mechanism to control the access capabilities of the users and the cloud providers. This mechanism requires efficient cryptographic primitives to achieve fine-grained access control of data, proof of storage, and revocation of authorization. In this paper, we present a secure cloud data storage architecture with the features of dynamic user construction, revocation of authorization, and proof of storage. In the proposed architecture, we use attribute-based broadcast encryption, attribute-based access control, and proxy re-encryption to achieve an efficient solution.

S22.11 Comparative Analysis of Scheduling Algorithms for Grid Computing
Shyna Sharma, Amit Chhabra and Sandeep Sharma (Guru Nanak Dev University, India)

Grid computing offers a powerful and dynamic framework in which numerous resources, distributed CPU loads and the amount of idle memory are continually changing. Efficient schedulers are required to schedule jobs in such a dynamic environment. This paper presents a comparative study of various well-known grid computing scheduling techniques, taking into account the implementation environment and the metrics used, with the aim of determining the effectiveness of each existing optimization technique. The study shows that Ant Colony Optimization scheduling yields significantly better results than the other available techniques; however, due to its slow convergence rate, it can also become a bottleneck for optimistic scheduling. The paper ends with suitable future directions for enhancing the existing scheduling techniques.

S22.12 Cloud Based Dual Auction (DA) and A* and IDA* Searching models using BH - Strategy for Resource Allocation in E-Markets
Mohammed Nisar Faruk (QIS College of Engineering and Technology, India); Koyi Lakshmi Prasad (Bharath University, India); Pamidi Srinivasulu (DVR & Dr HS MIC College of Technology, India)

The cloud platform has become the most promising model for consumers hiring shared resources offered by a variety of Cloud Service Providers (CSPs). Cloud users differ slightly from typical Internet users: their demands are dynamic in nature, and they rely entirely on CSPs for the storage and computing resources that fulfill their needs. On the other side, CSPs try to attract consumers with favorable terms. In such competitive cloud e-markets, pricing is a vital factor in overall market effectiveness. CSPs frequently post their policies and prices on the cloud market based on the total resources they can offer. In this article we elaborate an e-auction based proposal for the cloud platform using a Dual Auction (DA) mechanism with A* and IDA* search models, designed to distribute requirements and assist the trading process depending on the type of resources being utilized. Various assessment criteria are adopted to analyse the efficacy of trading markets and their strategies. Furthermore, since the choice of auction strategy has a significant impact on each consumer's ability to maximize revenue, we developed a novel bidding strategy for DA and a two-phase game-based BH-strategy. At the evaluation stage we designed three simulation models to estimate the performance of our approach against other established auction strategies, and verified that the BH-strategy improves surpluses, successful contracts and e-market effectiveness; additionally, our dynamic DA mechanism is practicable for resource allocation in the cloud environment.

S22.13 Secured Fast Prediction of Cloud Data Stream with Balanced Load Factor Using Ensemble Tree Classification
Bala Murugan, Kamal Raj, Jegadeeswari S and Sugumaran M (Pondicherry University, India)

Cloud computing uses virtualized processing and storage resources in conjunction with modern technologies to deliver conceptual, scalable platforms and applications as data services. The cloud infrastructure stores very large amounts of data and must cope with fluctuations in computational load. To reduce computation, cloud infrastructures can predict data streams together with the load factors of ensemble models; data stream processes on the cloud infrastructure run continuously under varying load factors. In this paper, we propose an architecture with a load-balancing secured framework for cloud infrastructure and a formal definition of the Ensemble Tree Metric Space Indexing (E-tree MSI) technique. We introduce three fundamental components for constructing the E-tree MSI technique: a Fast Predictive Look-ahead Scheduling (FPLS) approach, which schedules spatio-temporal data stream files; Parallel Ensemble Tree Classification (PETC), which performs classification operations on the cloud data stream; and a mapping process which efficiently constructs the load-balancing query processing approach.

S22.14 A New Hybrid Approach for Overlay Construction in P2P Live Streaming
Kunwar Pal (Malaviya National Institute of Technology, India); Mahesh Chandra Govil (Malaviya National Institute of Technology, Jaipur INDIA, India); Mushtaq Ahmed (Malaviya National Institute of Technology Jaipur INDIA, India)

Peer-to-peer (P2P) is a decentralized communications model that was originally used for file sharing and, more recently, for real-time communications and media streaming. Owing to its efficient utilization of resources, demand for P2P networking is increasing rapidly, and today a significant part of Internet traffic is generated by P2P applications. Live video streaming in P2P networks has two main issues: overlay construction and data scheduling. In this paper we first briefly discuss the existing overlay architectures in live-streaming P2P networks, namely the tree and mesh architectures, and compare their advantages, disadvantages and other relevant concepts; we then propose a novel hybrid tree/mesh design that leverages both overlays. The tree approach serves P2P networks well in terms of latency and resource utilization, but it becomes quite complex when devices are mobile. The mesh architecture maximizes resilience and resource utilization, but suffers from latency and control overhead. Our hybrid tree/mesh design provides better resource utilization and lower delay between peers compared to previous approaches.

S22.15 MR-VSM: Map Reduce based Vector Space Model for User Profiling-An Empirical Study on News Data
Anjali Gautam and Punam Bedi (University of Delhi, India)

The velocity of data generation has increased over the past decade and is expected to grow exponentially with time. To mine useful nuggets of information that satisfy a large community of users, it is preferable to capture each user's interests, i.e., to create a user profile, and then filter content according to his or her taste. A user may traverse a large number of documents, so a user profiling technique must scale with the growing number of documents. This paper proposes a novel user profiling technique, the Map Reduce based Vector Space Model (MR-VSM), for settings where the user interacts with data rich in text and volume. MR-VSM uses Map Reduce, a parallel programming paradigm, to increase computational efficiency and support scalability over documents. It works by parallelizing the creation of the term-document matrix of the VSM, using TF-IDF to build the term vectors. For the experimental study, this paper uses a News dataset that is rich in text and volume, collected from the web using RSS feeds. The proposed system creates a user profile by considering the news items read by the user and creating a term vector for each item; the resulting profile is the set of top-n terms. To test the computational efficiency and scalability of MR-VSM for a growing number of news items read by the user, it was run on a Hadoop cluster for 12,000, 24,000 and 48,000 news items. We observe that both the profiling time and the scalability over news items improve as the number of nodes in the Hadoop cluster increases.
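The heart of MR-VSM, building TF-IDF term vectors through map and reduce phases, can be sketched in plain Python. The three-document mini-corpus is an invented stand-in for a user's news reading history, and the shuffle step is simulated with a dict; a real deployment would run the same mapper/reducer logic as Hadoop jobs.

```python
import math
from collections import Counter, defaultdict

# Hypothetical mini-corpus standing in for news items read by one user.
docs = {
    "d1": "cricket india match win score",
    "d2": "election vote india parliament",
    "d3": "cricket score test match",
}

def mapper(doc_id, text):
    """Map phase: emit (term, (doc_id, count)) pairs for one document."""
    for term, count in Counter(text.split()).items():
        yield term, (doc_id, count)

def reducer(term, values, n_docs):
    """Reduce phase: turn per-document counts into TF-IDF weights."""
    df = len(values)                      # document frequency of the term
    idf = math.log(n_docs / df)
    return {doc_id: count * idf for doc_id, count in values}

# Shuffle: group mapper output by term, as the MapReduce runtime would.
grouped = defaultdict(list)
for doc_id, text in docs.items():
    for term, value in mapper(doc_id, text):
        grouped[term].append(value)

# Aggregate TF-IDF weight per term across the user's reading history.
profile = Counter()
for term, values in grouped.items():
    for doc_id, weight in reducer(term, values, len(docs)).items():
        profile[term] += weight

top_n = [term for term, _ in profile.most_common(3)]  # the user profile
```

Terms appearing in every document get zero IDF weight, while rarer terms such as "win" dominate the top-n profile, which is the filtering signal MR-VSM produces.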

S22.16 An Efficient Resource Allocation (ERA) Mechanism in Iaas Cloud
Rajalakshmi Shenbaga Moorthy (102/3 North Street, Athikulam, Srivilliputtur & St. Joseph's Institute of Technology, India)

Cloud computing allows consumers to use applications remotely over the Internet on a pay-per-use basis. Resource allocation is one of the most challenging tasks in the cloud, since it depends on the dynamicity of resource providers and on heterogeneous user requests. To allocate user requests to resources efficiently, an Efficient Resource Allocation (ERA) mechanism is proposed. In the proposed mechanism, JCC K-Means is used to cluster resources dynamically based on user requests, and an improved branch-and-bound based assignment algorithm is used to allocate cloud resources to user requests efficiently. The proposed work is simulated and compared with K-means and random allocation. The simulation is carried out in Java using the Eclipse IDE. The results show that the proposed work achieves better results in terms of resource discovery, makespan and user satisfaction.
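The clustering step can be illustrated with plain k-means on a single resource attribute. The paper's JCC K-Means variant is not specified here, so both the algorithm details and the MIPS figures below are generic stand-in assumptions.

```python
import random

random.seed(1)

def kmeans_1d(values, k, iters=20):
    """Plain k-means on one numeric resource attribute (e.g. MIPS)."""
    centroids = sorted(random.sample(values, k))   # random initial centers
    clusters = []
    for _ in range(iters):
        # Assignment step: each resource joins its nearest centroid.
        clusters = [[] for _ in range(k)]
        for v in values:
            clusters[min(range(k), key=lambda i: abs(v - centroids[i]))].append(v)
        # Update step: move each centroid to its cluster mean.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Hypothetical VM capacities in MIPS: one small group, one large group.
mips = [500, 520, 480, 510, 2000, 2100, 1950, 2050]
centroids, clusters = kmeans_1d(mips, k=2)
```

On this data the algorithm separates the small and large capacity groups, so an incoming request can be matched against the nearest cluster rather than every resource.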

Wednesday, August 12 14:30 - 19:00 (Asia/Kolkata)

S21: S21-Security, Trust and Privacy- II

Room: 311
Chairs: Chirag Modi (NIT Goa, India), Kester Quist-Aphetsi (University of Brest France, France)
S21.1 Saliency Based Image Steganography with Varying Base SDS and Multi-Objective Genetic Algorithm
Ram Sharma (University of Hyderabad, India); Rajarshi Pal (Institute for Development and Research in Banking Technology (IDRBT), Hyderabad, India)

Hiding a secret message in an image is known as image steganography. A good image steganography method aims to minimize visual degradation while maximizing embedding capacity. In this paper, visual saliency is used to determine the number of bits that can be embedded in a pixel. The relation between the saliency of a pixel and the number of embedded bits is obtained by solving a multi-objective optimization problem; a genetic algorithm has been developed that yields the optimum relation between saliency values and the number of embedded bits. A variant of the Single Digit Sum (SDS) based steganography scheme is used to embed the secret message bits, where the base for computing the SDS is estimated from the number of bits being embedded in a pixel. Experimental results reveal that the proposed technique offers good embedding capacity while controlling the visual degradation of the stego image.

S21.2 Exploiting Curse of Diversity for Improved Network Security
Ghanshyam Bopche (Institute for Development and Research in Banking Technology (IDRBT) & School of Computer and Information Sciences (SCIS), University of Hyderabad, India); Babu Mehtre (IDRBT - Institute for Development and Research in Banking Technology & Reserve Bank of India, India)

Higher species diversity in biological systems increases robustness against the spread of disease or infection. Computers, however, are remarkably less diverse, and this lack of diversity poses serious risks to today's homogeneous computer networks. An adversary learns from initial compromises and then applies that knowledge to compromise subsequent systems with less effort and time. An exploit engineered to take advantage of a particular vulnerability can be leveraged on many other systems to multiply the effect of an attack. The existence of the same vulnerability on multiple systems in an enterprise network greatly benefits the adversary, who can gain incremental access to enterprise resources with relative ease. In this paper, we propose a metric to identify all attack paths that are not truly diversified, i.e., paths in which one or more vulnerabilities could be exploited more than once, and to identify those vulnerabilities and the affected software and services. Based on the proposed heuristics, identical vulnerable services are identified and diversified with functionally equivalent alternatives, so that the adversary requires independent (i.e., additional or new) effort to exploit each vulnerability along every attack path. We present a small case study to demonstrate the efficacy and applicability of the proposed metric, and we propose an algorithm for diversifying attack paths to make enterprise networks more robust against 0-day attacks. Initial results show that our approach is capable of identifying the identical vulnerable software, applications and services that need to be diversified for increased network security.

S21.3 Data Hiding Scheme Based on Octagon Shaped Shell
Sujitha Kurup (Shah and Anchor Kutchhi College of Engineering & Mumbai University, India); Anjana Rodrigues (Mukesh Patel School of Technology Management and Engineering, India); Archana Bhise (Mukesh Patel School of Technology Management & Engineering, India)

Steganography is the art of hiding a covert message in unsuspected digital media and is generally used for secret communication between acknowledged parties. A good approach to steganography must provide good visual quality, high embedding capacity and security. In this paper, a new data hiding scheme based on octagon-shaped shells is proposed to obtain better image quality and higher embedding capacity, while security is achieved by scrambling the secret image using a magic square. In the proposed method, a secret digit is embedded into each cover pixel pair with the help of a reference matrix consisting of connected octagons. Experimental results show that the proposed scheme ensures higher embedding capacity and security compared with existing schemes, while also providing good visual quality and better visual perception of the stego image.

S21.4 Image Watermarking on Degraded Compressed Sensing Measurements
Anirban Bose (B. C. Roy Engineering College, Durgapur, India); Santi Prasad Maity (Indian Institute of Engineering Science and Technology, Shibpur, India); Seba Maity (CEMK, Kolaghat, India)

This paper proposes additive watermarking on the sparse or compressible coefficients of a host image in the presence of blurring and additive noise degradation. The sparse coefficients are obtained through basis pursuit (BP). Watermark recovery is done through deblurring, and performance is studied for the Wiener and fast total variation deconvolution (FTVD) techniques; the first needs the actual noise variance or an estimate of it, while the second is blind. Extensive simulations are performed on images for different CS measurements along with a wide range of noise variation. Simulation results show that FTVD with an optimum value of the regularization parameter enables extraction of the watermark image in visually recognizable form, while Wiener deconvolution restores neither the watermarked image nor the watermark when no knowledge of the noise is used.

S21.5 An Efficient Classification model for detecting Advanced Persistent Threat
Saranya Chandran (Amrita Vishwa Vidyapeetham, Amrita University, India); Hrudya P (Research Associate at Amrita Center for Cyber Security, India); Prabaharan Poornachandran (Amrita University, India)

Among the cyber attacks that occur, the most drastic are advanced persistent threats (APTs). APTs differ from other attacks in that they have multiple phases, often stay silent for long periods of time, and are launched by adamant, well-funded opponents. These targeted attacks mainly concentrate on government agencies and on organizations in industries such as international trade that hold sensitive data. APTs escape detection by antivirus solutions, intrusion detection and prevention systems, and firewalls. In this paper we propose a classification model for the detection of APTs that achieves 99.8% accuracy.

S21.6 XSSDM: Towards Detection and Mitigation of Cross-Site Scripting Vulnerabilities in Web Applications
Mukesh Kumar Gupta (Swami Keshvanand Institute of Technology, Mgt, Jaipur, India); Mahesh Chandra Govil (Malviya National Institute of Technology); Girdhari Singh (MNIT Malaviya National Institute of Technology, Jaipur, India); Priya Sharma (Swami Keshvanand Institute of Technology, India)

With the growth of the Internet, web applications have become very popular among user communities. However, the presence of security vulnerabilities in the source code of these applications is rapidly raising the cyber crime rate. These vulnerabilities need to be detected and mitigated before they are exploited in the execution environment. Recently, the Open Web Application Security Project (OWASP) and the Common Weakness Enumeration (CWE) reported Cross-Site Scripting (XSS) as one of the most serious vulnerabilities in web applications. Though many vulnerability detection approaches have been proposed in the past, existing approaches have limitations in terms of false positive and false negative results. This paper proposes a context-sensitive approach based on static taint analysis and pattern matching techniques to detect and mitigate XSS vulnerabilities in the source code of web applications. The proposed approach has been implemented in a prototype tool and evaluated on a public data set of 9408 samples. Experimental results show that the tool outperforms existing popular open-source tools in the detection of XSS vulnerabilities.
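A drastically simplified flavor of the source-sink-sanitizer reasoning behind static taint analysis can be shown with a line-level scanner. The regex patterns and the PHP snippet below are toy assumptions; they illustrate the idea only and have none of XSSDM's context sensitivity or flow tracking.

```python
import re

# Toy source/sink/sanitizer patterns for PHP (assumed for illustration).
SOURCE = re.compile(r"\$_(GET|POST|REQUEST|COOKIE)\b")       # tainted input
SINK = re.compile(r"\b(echo|print)\b")                       # HTML output
SANITIZER = re.compile(r"\b(htmlspecialchars|htmlentities)\s*\(")

def scan(code):
    """Flag lines where request input reaches an output sink without a
    recognized sanitizer on the same line (a single-line approximation)."""
    findings = []
    for lineno, line in enumerate(code.splitlines(), 1):
        if SINK.search(line) and SOURCE.search(line) \
                and not SANITIZER.search(line):
            findings.append(lineno)
    return findings

php = """<?php
echo htmlspecialchars($_GET['name']);
echo $_GET['q'];
print "hello";
"""
findings = scan(php)  # only the unsanitized echo is reported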

S21.7 Handling cold start problem in Recommender Systems by using Interaction based Social Proximity factor
Punam Bedi and Chhavi Sharma (University of Delhi, India); Pooja Vashisth (Delhi University, India); Deepika Goel and Muskan Dhanda (University of Delhi, India)

Recommender Systems (RS) help users find items and make choices that suit their taste and needs. A user's past behavior, taste and general buying trends may effectively be used by an RS to suggest items each time he or she enters an e-commerce website. However, if the user is new and has no awareness of the available choices (a cold start user), it becomes difficult for the system to offer recommendations. This limitation, known as the cold start problem, has been one of the most explored challenges in recommender systems research. In the current work, an attempt is made to handle the cold start problem by generating recommendations based on the social interactions between users on Facebook, a popular social networking website. The choices made by friends or acquaintances tend to influence a user's opinions and choices, and we incorporate this observation into our recommendations. We propose Interaction Based Social Proximity (IBSP), a social interaction factor, to overcome the cold start problem. A prototype of the system has been developed for the books domain using Java; the Facebook Graph API was used to extract information from the user's social graph.

S21.8 Variable Strength Interaction Test Set Generation Using Multi Objective Genetic Algorithms
Sangeeta Sabharwal (Delhi University, India); Manuj Aggarwal (NSIT, Delhi, India)

Combinatorial testing aims at identifying faults that are caused by interactions of a small number of input parameters. It provides a technique for selecting a subset of the exhaustive test cases that covers all t-way interactions without much loss of fault detection capability; the test set generated is for a fixed value of t. In this paper, an approach is proposed to generate test sets for a system in which some variables have a higher interaction strength among themselves than the system as a whole. Variable Strength Covering Arrays are used for testing such systems. We propose to generate Variable Strength Covering Arrays using multi-objective optimization (Multi-Objective Genetic Algorithms), attempting to reduce test set size while covering all base-level interactions of the system and the higher-strength interactions of its components. Experimental results indicate that the proposed approach generates results comparable to, and in some cases better than, those of existing approaches.
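A natural fitness component for such a GA is the fraction of required t-way interactions a candidate test set covers, which can be computed directly. The three-binary-parameter example is an assumed toy; the four tests shown happen to form a complete pairwise (t = 2) covering array.

```python
from itertools import combinations, product

def covered(tests, t):
    """t-way interactions actually hit by a test set, as
    (parameter positions, values) pairs."""
    seen = set()
    for test in tests:
        for params in combinations(range(len(test)), t):
            seen.add((params, tuple(test[p] for p in params)))
    return seen

def required(domains, t):
    """All t-way interactions a covering array must hit."""
    req = set()
    for params in combinations(range(len(domains)), t):
        for values in product(*(domains[p] for p in params)):
            req.add((params, values))
    return req

# Three binary parameters; candidate test set of four rows.
domains = [[0, 1], [0, 1], [0, 1]]
tests = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
fitness = len(covered(tests, 2) & required(domains, 2)) / len(required(domains, 2))
```

A GA for variable strength arrays would evaluate this coverage fraction once at the base strength for the whole system and again at the higher strength for the designated component, trading both off against test set size.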

S21.9 Identifying Metamorphic Virus Using n-grams And Hidden Markov Model
Shiva Prasad Thunga (University of Hyderabad, India); Raghu Kisore Neelisetti (Idrbt, India)

The computer virus is a rapidly evolving threat to the computing community. Viruses fall into different categories, and it is generally believed that metamorphic viruses are extremely difficult to detect. The first step in effectively combating a virus is to correctly classify its family, so that past experience can be applied to understand its functionality and choose the right mitigation strategy. In this paper we propose and test a Hidden Markov Model (HMM) based classifier that identifies the family to which a virus under study belongs. The proposed solution trains multiple HMMs, each representing a virus family, and then determines the family of the virus to be identified from the log-likelihood similarity scores obtained. Malware samples from the Malicia data set were used to evaluate the proposed technique.
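The classification rule, scoring a suspect sequence against one trained HMM per family and picking the highest log-likelihood, can be sketched with the scaled forward algorithm. The two tiny 2-state models and the binary "opcode" alphabet are invented for illustration; real models would be trained (e.g. via Baum-Welch) on n-gram sequences from disassembled family samples.

```python
import math

def log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | model) for a discrete HMM
    with initial probs pi, transition matrix A, emission matrix B."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    log_p = 0.0
    for t in range(len(obs)):
        if t > 0:
            alpha = [B[j][obs[t]] * sum(alpha[i] * A[i][j] for i in range(n))
                     for j in range(n)]
        scale = sum(alpha)
        log_p += math.log(scale)      # accumulate log of each scale factor
        alpha = [a / scale for a in alpha]
    return log_p

# One toy model per virus family over a 2-symbol opcode alphabet (assumed).
FAMILIES = {
    "familyA": dict(pi=[0.5, 0.5], A=[[0.7, 0.3], [0.4, 0.6]],
                    B=[[0.9, 0.1], [0.8, 0.2]]),   # mostly emits symbol 0
    "familyB": dict(pi=[0.5, 0.5], A=[[0.7, 0.3], [0.4, 0.6]],
                    B=[[0.1, 0.9], [0.2, 0.8]]),   # mostly emits symbol 1
}

def classify(obs):
    """Assign the sequence to the family whose HMM scores it highest."""
    scores = {name: log_likelihood(obs, **m) for name, m in FAMILIES.items()}
    return max(scores, key=scores.get)
```

Per-symbol rescaling keeps the forward variables from underflowing on the long opcode sequences typical of real binaries.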

S21.10 Network Intrusion Detection System Using J48 Decision Tree
Shailendra Sahu (University of Hyderabad & Institute of Development & Research in Banking Technology, India); Babu Mehtre (IDRBT - Institute for Development and Research in Banking Technology & Reserve Bank of India, India)

As the number of cyber attacks has increased, detecting intrusions in networks has become a very tough job. Many data mining and machine learning techniques are used for network intrusion detection systems (NIDS). However, for evaluation, most researchers have used the KDD Cup 99 data set, which has been widely criticized for not reflecting current network conditions. In this paper we use a newer labelled network dataset, the Kyoto 2006+ dataset, in which every instance is labelled as normal (no attack), attack (known attack) or unknown attack. We use the Decision Tree (J48) algorithm to classify network packets for use in a NIDS. For training and testing we used 134,665 network instances. The rules generated detect intrusions with 97.2% correctness.
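J48 is Weka's implementation of C4.5, whose core split criterion is information gain, and that criterion fits in a few lines. The four toy connection records below are assumed for illustration, not drawn from Kyoto 2006+.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Entropy reduction from splitting on one categorical attribute,
    the quantity C4.5/J48 maximizes when growing the tree."""
    partitions = {}
    for row, label in zip(rows, labels):
        partitions.setdefault(row[attr], []).append(label)
    remainder = sum(len(p) / len(labels) * entropy(p)
                    for p in partitions.values())
    return entropy(labels) - remainder

# Hypothetical connection records with two categorical attributes.
rows = [
    {"proto": "tcp", "flag": "SYN"},
    {"proto": "tcp", "flag": "SYN"},
    {"proto": "udp", "flag": "ACK"},
    {"proto": "tcp", "flag": "ACK"},
]
labels = ["attack", "attack", "normal", "normal"]
best = max(["proto", "flag"], key=lambda a: information_gain(rows, labels, a))
```

On this data the "flag" attribute separates the classes perfectly (gain of 1 bit), so a J48-style learner would split on it first; C4.5 additionally normalizes by gain ratio, omitted here for brevity.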

S21.11 Inter-domain Role Based Access Control using Ontology
Chandra Mouliswaran Subramanian (VIT University, India); Aswani Kumar Cherukuri (Vellore Institute of Technology, Vellore, India); C Chandrasekar (Periyar University Salem, Tamil Nadu, India)

Several access control models are available for multiple-domain environments. Applying role based access control to the inter-domain services of a multiple-domain environment meets challenges such as mapping the inter-domain role hierarchy and enforcing separation-of-duty constraints on role conflicts, service conflicts and location conflicts. In recent times, ontology based access control has been introduced for various domains of interest. The main purpose of this paper is to represent the inter-domain access permissions of multiple-domain environments using an ontology, i.e., a formal and explicit representation of a domain of interest through its concepts and their associations. To attain this objective, we propose a procedure to transform the access permission matrix of a multiple-domain environment into an inter-domain access control ontology. The implementation shows that it is possible to formalize an ontology for the access permissions of an inter-domain security policy without any conflicts among inter-domain roles, services and domains.

S21.12 Provable secure protected designated proxy signature with revocation
Deepnarayan Tiwari (Institute for Development and Research in Banking Technology & Central University of Hyderabad, India); Gr Gangadharan (IDRBT, India); Maode Ma (Qatar University, Qatar)

In this paper, we present a novel concept in proxy signatures by introducing a trusted proxy agent, called a mediator, alongside the proxy signer. The mediator enables efficient revocation of signing capability within the delegation period by controlling the proxy signer's signing capability both before and after the designated proxy signer generates a signature. We describe a secure designated proxy signature scheme with revocation based on the elliptic curve discrete logarithm problem. Further, we define a random oracle based security model to prove the security of the proposed scheme under an adaptive chosen-message attack and an adaptive chosen-warrant attack.

S21.13 A Secure Non-blind Block Based Digital Image Watermarking technique Using DWT and DCT
Ranjan Kumar Arya (Central University Of Rajasthan, India); Shalu Singh and Ravi Saharan (Central University of Rajasthan, India)

Digital image watermarking can be considered a good solution for protecting the authentication of images and their copyright. The purpose of the proposed method is to provide better security for the host image. The method is implemented using DWT and DCT: the host image is decomposed using DWT, an 8x8 block DCT is applied to the LL sub-band, and the watermark is embedded into the last pixel of each block, so that each affected host pixel value is modified on the basis of the corresponding watermark pixel value. A binary watermark image is used for the experiments. The extraction process is completely non-blind: DWT is applied to both the host image and the watermarked image, and the reverse of the embedding process is then applied to extract the exact watermark image.

S21.14 A Multipath Routing Protocol for Cognitive Radio AdHoc Networks (CRAHNs)
Nitul Dutta (Marwadi University, India); Hiren Kumar Deva Sarma (Gauhati University, India); Ashish Srivastava (Marwadi Education Foundation's Group of Institutions, India)

Cognitive Radio Networks (CRNs) increase spectrum utilization by opportunistically sharing licensed spectrum with cognitive-capable devices. Many routing algorithms have been proposed for CRNs so far. Most of these algorithms discover a single path from source to destination and consume a considerable amount of bandwidth during route discovery. On the other hand, the multipath algorithms designed for ad hoc networks show significant improvements in bandwidth utilization by discovering multiple paths in one route discovery process. It is therefore believed that multipath routing is substantially more resilient to channel dynamism, because multiple paths are discovered within a single round of path discovery. In this paper, a multipath routing protocol for Cognitive Radio Networks is proposed. The protocol considers channel stability for route formation and finds multiple disjoint paths between two nodes. The proposed protocol is compared with Cognitive AODV (CAODV). Results show that it performs better in terms of packet loss rate, route discovery latency, route discovery frequency and average delay.

S21.15 Analytical Study of Cognitive Radio Networks (CRNs): An exploration for optimum utilization of spectrum hole
Nitul Dutta (Marwadi University, India); Hiren Kumar Deva Sarma (Gauhati University, India)

This paper provides an analysis of Cognitive Radio Networks (CRNs) for optimized utilization of the spectrum hole. The focus of the paper is to control packet generation by Cognitive Users (CUs) in such a way that the available spectrum hole is optimally utilized. We emphasize determining the volume of data to be generated by CUs, based on the available spectrum hole, in order to achieve minimum packet loss and maximum throughput. For the analytical evaluation, packet arrival at a CU is modeled as a Poisson process and the availability of channels for CU devices is modeled as a random activity. An optimization function is designed to provide feedback to CR devices for controlling packet generation as the spectrum hole expands and contracts over time. An algorithm is also proposed to increase or decrease packet generation so as to maximize channel utilization and minimize packet loss. Various costs computed through numerical evaluation show that the cost of the CRN system can be significantly improved by adjusting packet generation to the size of the spectrum hole.
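The trade-off such an optimization function captures, packet loss when CU traffic overshoots the spectrum hole versus idle capacity when it undershoots, can be made concrete for Poisson arrivals. The cost weights and the 8-channel hole below are assumptions for illustration, not the paper's parameters.

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def expected_loss(lam, channels, kmax=60):
    """E[packets dropped] when Poisson(lam) arrivals share `channels` slots;
    the tail is truncated at kmax, which is ample for these rates."""
    return sum((k - channels) * poisson_pmf(k, lam)
               for k in range(channels + 1, kmax))

def expected_idle(lam, channels):
    """E[unused slots]: spectrum hole capacity left idle by the arrivals."""
    return sum((channels - k) * poisson_pmf(k, lam) for k in range(channels))

def cost(lam, channels, w_loss=1.0, w_idle=0.3):
    # Loss is penalized more heavily than idleness (assumed weights).
    return (w_loss * expected_loss(lam, channels)
            + w_idle * expected_idle(lam, channels))

# Sweep integer generation rates against an 8-channel spectrum hole.
best_rate = min(range(1, 15), key=lambda lam: cost(lam, channels=8))
```

Because loss is weighted more than idleness, the cost-minimizing generation rate settles below the hole's capacity; feeding that rate back to the CUs as the hole expands and contracts is the job of the paper's control algorithm.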

WCI-06: WCI-06: Privacy, Security and Trust

Room: 402
Chair: Vaishali Maheshkar III (CDAC, India)
WCI-06.1 A Robust Biometric-Based Authentication Scheme for Wireless Body Area Network Using Elliptic Curve Cryptosystem
Mrudula Sarvabhatla (NBKR IST, India); M Chandramouli Reddy (Veltech Technical University & Vaishnavi Institute of Technology for Women, India); Chandra Sekhar Vorugunti (Indian Institute of Information Technology- SriCity, India)

Advancements in mobile and semiconductor technology have resulted in a new area of Internet based patient monitoring called the Wireless Medical Sensor Network (WMSN). Transferring patient-sensitive data, aggregated by body sensors, over the insecure Internet demands preservation of patient identity and data privacy. In this context, many researchers have provided authentication schemes based on various factors; owing to their inherent advantages of light key weight and low computation cost, ECC based authentication systems are in great demand. Recently, Lu et al. provided an ECC based authentication and key agreement protocol between the WMSN server and the user. In this manuscript, we demonstrate that the scheme of Lu et al. fails to achieve fundamental requirements of a WMSN, namely preserving user anonymity and resisting impersonation attacks. We then propose a robust authentication scheme and demonstrate its security strengths both formally, using BAN logic, and informally. The formal security analysis confirms that the proposed scheme resists all relevant security attacks.

WCI-06.2 A Distributed Cooperative Approach to Detect Gray Hole Attack in MANETs
Bobby Sharma (Assam Don Bosco University, India)

Due to intrinsic properties of Mobile Ad hoc Networks (MANETs) such as openness, infrastructure-less operation, dynamic topology, node mobility, and the lack of a centralized monitoring system and of secure routing protocols, they always suffer from different kinds of attacks. There is no clear line of defense to keep malicious nodes out of a route. Moreover, nodes communicate with each other in a hop-by-hop fashion, which helps intruders sit in between and deliberately disrupt communication, degrading network performance at different levels. This paper presents a distributed cooperative approach to detect a network-layer active attack known as the gray hole attack. The efficiency of the detection methodology is shown in terms of detection rate and network throughput.

WCI-06.3 A Secure and Light Weight Authentication and Key Establishment Framework for Wireless Mesh Network
Mrudula Sarvabhatla (NBKR IST, India); M Chandramouli Reddy (Veltech Technical University & Vaishnavi Institute of Technology for Women, India); Chandra Sekhar Vorugunti (Indian Institute of Information Technology- SriCity, India)

The advancement of web and mobile technologies has resulted in the rapid augmentation of traditional enterprise data, IoT-generated data and social media data, producing petabytes and exabytes of structured and unstructured data across clusters of servers per day. Storing, processing, analyzing and securing this big data is becoming a serious concern for large and medium enterprises. Hadoop, with its distributed file system HDFS, is a cloud based platform for the storage and processing of voluminous amounts of data across clusters of servers.

WCI-06.4 A Secure Image-Based Authentication Scheme Employing DNA Crypto and Steganography
Mohammed Misbahuddin (Centre for Development of Advanced Computing, India); Sreeja Sukumaran (Christ University, India)

Authentication is considered one of the critical aspects of information security for ensuring identity. It is generally carried out using conventional methods such as text based passwords; however, given the increased usage of electronic services, a user has to remember many id-password pairs, which often leads to memorability issues. This inspires users to reuse passwords across e-services, a practice that is vulnerable to security attacks. To improve security strength, various authentication techniques have been proposed, including two-factor schemes based on smart cards, tokens, etc., and advanced biometric techniques. Graphical image based authentication systems have received considerable attention as they provide better usability by way of memorable image passwords, but the tradeoff between usability and security is a major concern when strengthening authentication. This paper proposes a novel two-way secure authentication scheme using DNA cryptography and steganography, considering both security and usability. The protocol uses a text password and an image password: the text password is converted into cipher text using DNA cryptography and embedded into the image password by applying steganography. A hash value of the generated stego image is calculated using SHA-256 and used during verification to authenticate the legitimate user.

WCI-06.5 Detection of Stealth Process using Hooking
Deepti Vidyarthi (Defence Institute of Advanced Technology, India)

Malware writers adopt multiple methods to make the malware detection process difficult; hiding the presence of malware during its execution is one of them. Malware written for espionage, data stealing or rootkits has this key characteristic of stealthiness, and one of the preferred ways to implement it is hooking. This paper proposes an effective method to identify the typical hooking mechanism implemented by a rootkit to conceal its presence in a system. The approach is useful in detecting any process hiding in the system via hooking, and is specifically useful in identifying user-level rootkits, as validated by experimentation.

WCI-06.6 Entropy based content filtering for Mobile Web Page Adaptation
Neetu Narwal (Banasthali Vidyapith, Rajasthan, India); Sanjay Kumar Sharma (Banasthali University, India); Amit Prakash Singh (Guru Gobind Singh Indraprastha University, India)

A global increase in the usage of mobile devices and the availability of Internet services on phones have increased Internet usage on these devices. However, these devices face the major challenges of limited screen space, low bandwidth and limited processing capability, while the majority of websites are designed to be viewed on PCs and desktops; hence mobile Internet users find it difficult to browse web pages. In this paper we present an approach to filter the informative content of a web page and rearrange it within the available screen space of a mobile device. The web page is segmented into visual blocks, and the entropy measure of each visual block is computed using content entropy and feature entropy. A neural network is trained on these measures to segregate main content from noise content.
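The intuition behind entropy-based filtering can be shown for content entropy alone: repetitive navigation text has a low-entropy word distribution, while article prose has a high one. The two blocks and the word-level measure below are assumptions for illustration; the paper combines content and feature entropy and a trained neural network rather than a simple maximum.

```python
import math
from collections import Counter

def content_entropy(text):
    """Shannon entropy (bits) of the word distribution within a block."""
    words = text.split()
    total = len(words)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(words).values())

# Two hypothetical visual blocks from a segmented web page.
blocks = {
    "nav":  "home home news news sports sports contact contact",
    "body": "the committee approved a new policy on urban transport funding "
            "after months of debate over budget priorities and timelines",
}
scores = {name: content_entropy(text) for name, text in blocks.items()}
main_block = max(scores, key=scores.get)  # the informative block wins
```

The repeated menu words give the navigation block exactly 2 bits of entropy, well below the prose block, so an entropy threshold (or a classifier fed such scores) can keep the main content and drop the boilerplate.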

WCI-06.7 Fusion Mechanism for Multimodal Biometric System-Palmprint and Fingerprint
Bhagyashri Kale (Savitribai Phule Pune University, India); Pravin Gawande (Pune University, India)

Security is a central issue everywhere today. This paper proposes a multimodal biometric system (MBS) for identity verification using two traits, fingerprint and palmprint. The proposed system is designed for authentication applications in which the training database contains fingerprint and palmprint images for each individual. The palmprint is chosen as a biometric trait because no two palmprints match unless they belong to the same person, and the palm has a good vascular pattern, making it a strong identifying factor compared to other biometric traits. The images captured by the designed hardware are preprocessed using image enhancement techniques, and features are extracted using a Gaussian kernel, Gabor filters and Principal Component Analysis. The feature vectors are fused at the feature level and later matched using Euclidean or Manhattan distance. Quality measures are also computed for the above modalities.

WCI-06.8 Malware Detection in Android files based on Multiple levels of Learning and Diverse Data Sources
Shina Sheen and Ramalingam Anitha (PSG College of Technology, India)

Smart mobile device usage has expanded at a very high rate all over the world. Mobile devices have experienced a rapid shift from pure telecommunication devices to small, ubiquitous computing platforms. They run sophisticated operating systems that must confront the same risks as desktop computers, with Android the most targeted platform for malware. Processing power is one of the factors that differentiates PCs from mobile phones: mobile phones are more compact, limited in memory, and dependent on limited battery power, and apps developed for these devices should take these factors into consideration. To improve the speed of detection, we design a multilevel detection mechanism using diverse data sources, balancing detection accuracy against the use of compute-intensive operations. In this work we analyze Android based malware and evaluate the mechanism on a collection of Android malware comprising different families; our results show that the proposed method is faster while maintaining good performance.

WCI-06.9 MoveFree: A ubiquitous system to provide women safety
Sohini Roy (Arizona State University, USA); Abhijit Sharma (National Institute of Technology Durgapur, India); Uma Bhattacharya (Bengal Engineering & Science University, India)

Given surveys of what is happening around us, it is beyond argument that women are not safe enough to move freely alone. It is high time for women to give a strong retort to society and raise their voice against the crimes that take place regularly. Pervasive computing, with its omnipresent nature, is emerging as a revolutionary approach to technological growth: anything and everything in the surroundings can be made to act as a computing device. This paper makes use of the ever-present nature of pervasive computing to provide security to women. The work aims at designing a pervasive system comprising a wearable computer that acts as a guardian angel for women in danger. The design makes use of body area sensors, Bluetooth communication, GPS, SMS and MMS, an Internet connection and a mobile database system. A machine learning approach based on supervised classification is used to determine whether a woman is in danger. A prototype of the system is implemented in Android.

WCI-06.10 Non-tree Based Group Key Management Scheme With Constant Rekeying and Storage Cost
Nishat Koti and Esha Keni (National Institute of Technology Goa, India); Kritika S (National Institute of Technology, Goa, India); Purushothama B R (National Institute of Technology Goa, India)

Designing a key management scheme for secure group communication is a challenging task. There are several tree-based and non-tree-based group key management schemes. In the existing non-tree-based schemes, the rekeying and storage costs are mostly linear in the number of members in the group. Likewise, in some of the efficient tree-based schemes, the rekeying and storage cost is a function of the number of group users. We propose a rekey-efficient centralized group key management scheme in which communication, storage and computation costs are constant. In the proposed scheme, each user has to store only two keys. We compare the proposed scheme with some of the existing tree-based and non-tree-based group key management schemes and show that it is rekey- and storage-efficient.

WCI-06.11 Secure Session Key Sharing Using Public Key Cryptography
Anjali Gupta and Muzzammil Hussain (Central University of Rajasthan, India)

In distributed systems, entities perform predefined tasks distributively and have to communicate quite frequently. Confidentiality is the primary security requirement in such systems, so there should be a secure mechanism whereby a pair of entities can communicate with each other securely. In most distributed system applications, computational and communication resources are limited, and any mechanism that ensures the security of data under transmission should respect these constraints. Symmetric key cryptography emerges as one of the effective mechanisms for securing data in distributed systems, and the security of a symmetric key mechanism depends purely on the secrecy of the session key. In this paper, we propose a protocol for securely sharing a session key between communicating entities. The proposed protocol secures the session key through an asymmetric key mechanism, and once the session key is securely shared, the entities can communicate using it.
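
The core idea, wrapping a symmetric session key under the receiver's public key, can be illustrated with textbook RSA. This is a toy sketch with tiny hardcoded primes, for illustration only; the paper's actual protocol is not reproduced, and any real deployment must use a vetted cryptographic library with proper padding.

```python
import secrets

# Toy textbook-RSA parameters (insecure, illustration only)
p, q = 61, 53
n = p * q                            # public modulus
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+)

def wrap_key(session_key_int):
    """Entity A encrypts the session key under B's public key (e, n)."""
    return pow(session_key_int, e, n)

def unwrap_key(ciphertext):
    """Entity B recovers the session key with its private exponent d."""
    return pow(ciphertext, d, n)

session_key = secrets.randbelow(n - 2) + 2   # random key, must be < n
wrapped = wrap_key(session_key)
print(unwrap_key(wrapped) == session_key)    # → True
```

Once both entities hold the recovered session key, all further traffic can be protected with a symmetric cipher, which is far cheaper than repeated public key operations.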

WCI-06.12 SQAMPS: Secure quorum architecture for mobile presence services with end-to-end encryption
Dipti Madankar (University of Pune, India); Vaishali Maheshkar III (CDAC, India); Aaradhana Arvind Deshmukh (Aalborg University, Denmark & University of Pune, India); Albena Mihovska (Aarhus University, Denmark)

Due to ubiquitous computing, the use of social networking applications has multiplied hugely in the last decade. Social networking applications offer a wide range of facilities for communicating over long distances, one of which is the mobile presence service. Presence-enabled applications allow users to publish and retrieve context information such as online/offline status, GPS location and updates made by buddies enlisted in a buddy list. If such presence updates occur frequently, they lead to a scalability problem for the backend servers. To deal with the scalability issue, we have chosen a quorum-based architecture that solves the scalability problem effectively with low search latency and cost. SQAMPS focuses on security problems that are still unresolved in this architecture. We considered end-user security, server authentication and media file security while designing SQAMPS. A hybrid end-to-end encryption scheme, a distributed trust system and distribution of access rights to media files are used, respectively, for these purposes.

WCI-07: WCI-07: Natural Language Processing/Computational Intelligence/Systems and Software Engineering

Room: 403
Chair: V Renumol (School Of Engineering, CUSAT, Kochi, India)
WCI-07.1 An Analogy of F0 Estimation Algorithms Using Sustained Vowel
Prarthana Karunaimathi and Dennis Gladis (Presidency College, India); Balakrishnan D (SRM University, India)

The pitch of the human voice plays a key role in determining the uniqueness of one's voice, and any abnormality in the pitch may result in a voice disorder. This can be detected by analyzing the fundamental frequency F0 of the voice, which corresponds to the subjective pitch and hence plays a major role in clinical voice research. This study examines five fundamental frequency estimation algorithms using recorded sustained vowel phonations. The statistical parameters, namely the mean and standard deviation of F0, are evaluated, and the results depict the robustness of the algorithms in achieving near-accurate F0 estimation.
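
As an illustration of one classical family of F0 estimators (the five algorithms compared in the paper are not named in the abstract), an autocorrelation-based estimate on a synthetic sustained vowel can be sketched as:

```python
import numpy as np

def estimate_f0(signal, fs, fmin=50.0, fmax=500.0):
    """Autocorrelation-based F0 estimate: pick the lag of the strongest
    autocorrelation peak within the plausible pitch range."""
    x = signal - signal.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # non-negative lags
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return fs / lag

fs = 8000
t = np.arange(int(0.2 * fs)) / fs
vowel = np.sin(2 * np.pi * 200.0 * t)   # synthetic sustained phonation at 200 Hz
print(estimate_f0(vowel, fs))           # → 200.0
```

On real phonations the mean and standard deviation of frame-wise F0 estimates, as used in the study, would be computed over many short overlapping windows rather than one whole recording.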

WCI-07.2 Classification and Prediction of Breast Cancer Data derived Using Natural Language Processing
Johanna Johnsi Rani G. (Madras Christian College (Autonomous), India); Dennis Gladis (Presidency College, India); Joy Mammen (Department of Transfusion Medicine & Immunohaematology, India)

Most traditional medical reports in India are in the form of descriptive natural language text. In recent times, medical data have been electronically acquired, indexed and stored in databases, resulting in easy retrieval and processing. The work presented here extends an automated system developed earlier that processed 150 de-identified breast cancer pathology reports applying Natural Language Processing (NLP) and Information Extraction (IE) techniques. The system classified the parameters Tumor (T), Lymph node (N) and Metastases (M), the components of the pTNM classification protocol of the American Joint Committee on Cancer (AJCC), and derived the stage of cancer S using T, N and M. The gold standard used to evaluate the work was obtained from the pathologists who manually reviewed the corpus. The current work processes the pathology report and extracts details regarding the presence/absence of ten medical conditions associated with breast cancer. The classifiers J48, Naïve Bayes and Random Forest are then applied, using the WEKA tool, to all the parameters extracted in the earlier and current work together with the gold standard values. A training set consisting of 53 rules was used to predict the stage of cancer on the test set. The automated system interfaces with WEKA and presents the results obtained. This work processes and analyzes regional breast cancer data and thus has practical applicability and relevance. The results of analysis by the system would assist medical experts in early diagnosis, decision making and understanding of the patient population in India.

WCI-07.3 Discourse Translation from English to Telugu
Suryakanthi Tangirala (Faculty of Business & University of Botswana, Botswana); Kamlesh Sharma (Manav Rachna International Institute of Research and Studies, India)

Discourses are texts above the sentence level. Translating discourses requires special attention, as the sentences should be translated keeping the context in mind; the first and second sentences should be interpreted as a whole and not as individual sentences. The present paper deals with translating two types of discourses, compound and complex sentences, from English to Telugu. The paper discusses the design of two algorithms to translate compound and complex sentences from English to Telugu, and the results obtained by implementing the two algorithms in a machine translation system on sample test suites.

WCI-07.4 Improving the Accuracy of Document Similarity Approach using Word Sense Disambiguation
Veena G (Amrita Vishwa Vidyapeetham, India); Umesha Veni, U B (AmritaVishwaVidyapeetham, India)

Branches of artificial intelligence and statistics such as text mining and data mining can provide solutions in the area of concept mining, offering powerful insights into the meaning and similarity of documents without exploiting the semantics of the terms or phrases in them. Our work determines the similarity of documents using semantic processing, namely Word Sense Disambiguation: the senses of the words in the documents are annotated, and the traditional PageRank algorithm is then run over them. The algorithm ranks the possible senses and finds the correct sense according to the context. Our paper proposes this method of disambiguating ambiguous words in order to determine document similarity. Moreover, it is compared with the cosine similarity approach, which is frequently used to determine the similarity between two documents, to demonstrate the accuracy of our work.
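
The cosine similarity baseline that the proposed approach is compared against can be sketched as follows: term-frequency vectors over raw words, with no sense annotation.

```python
import math
from collections import Counter

def cosine_similarity(doc_a, doc_b):
    """Cosine similarity over term-frequency vectors: the dot product
    of the two word-count vectors divided by their norms."""
    a, b = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

print(cosine_similarity("the bank approved the loan",
                        "the bank approved a loan"))
```

Note that this surface-form measure cannot tell a river "bank" from a financial "bank"; that ambiguity is exactly what the sense-annotation step in the proposed method addresses.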

WCI-07.5 Is Stack Overflow Overflowing With Questions and Tags
Ranjitha K (Manipal University, India); Sanjay Singh (Manipal Institute of Technology, India)

Programming question and answer (Q&A) websites, such as Quora, Stack Overflow, and Yahoo! Answers, help us to understand programming concepts easily and quickly in a way that has been tested and applied by many software developers. Stack Overflow is one of the most frequently used programming Q&A websites, where the posted questions and answers are presently analyzed manually, which requires a huge amount of time and resources. To save this effort, we present a topic modeling based technique that analyzes the words of the original texts to discover the themes that run through them. We also propose a method to automate the process of reviewing the quality of questions on the Stack Overflow dataset, in order to avoid ballooning Stack Overflow with insignificant questions. The proposed method also recommends appropriate tags for new posts, which averts the creation of unnecessary tags on Stack Overflow.
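
The tag-recommendation idea can be sketched minimally as below, using simple Jaccard word overlap as a hypothetical stand-in for the paper's topic-model similarity: a new post inherits tags from the most similar existing posts instead of minting near-duplicate tags.

```python
def jaccard(a, b):
    """Similarity of two word sets: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend_tags(new_post, tagged_posts, top_k=2):
    """Suggest tags for a new post from the most lexically
    similar already-tagged posts."""
    words = set(new_post.lower().split())
    scored = sorted(tagged_posts,
                    key=lambda p: jaccard(words, set(p["text"].lower().split())),
                    reverse=True)
    tags = []
    for post in scored:                  # collect tags, best matches first
        for tag in post["tags"]:
            if tag not in tags:
                tags.append(tag)
    return tags[:top_k]

posts = [
    {"text": "how to sort a list in python", "tags": ["python", "sorting"]},
    {"text": "segmentation fault in c pointer code", "tags": ["c", "pointers"]},
]
print(recommend_tags("python list sort order", posts))   # → ['python', 'sorting']
```

A topic model such as LDA, as used in the paper, would replace the raw word overlap with similarity in a learned topic space, which is far more robust to vocabulary mismatch.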

WCI-07.6 Learning to Grade Short Answers Using Machine Learning Techniques
Krithika R (Amrita Vishwa Vidyapeetham, Amritapuri, Kollam, India); Jayasree Narayanan (Amrita Vishwa Vidyapeetham, Amritapuri, India)

In this work, we attempt to grade short answers automatically, which can be efficient and helpful to both students and teachers. The approach uses a combination of semantic and graph alignment features and is implemented in Microsoft Azure Machine Learning using Two-class Averaged Perceptron, Linear Regression and Isotonic Regression. We also provide the first attempt to use graph alignment features at the sentence level. We compare the results of two machine learning algorithms, Two-class Averaged Perceptron and Two-class Support Vector Machine, for grading short answers. We have devised novel techniques applying the concept of Random Projection for grading 150 algorithmic answers to a coding question using our own domain-specific corpus, which gives precise classification of right and wrong answers.

WCI-07.7 Maestro Algorithm for Sentiment Evaluation
Deepali Virmani and Shweta Taneja (Guru Gobind Singh Indraprastha University, India); Pooja Bhatia (Guru Gobind Singh Indraprastha University & Bhagwan Parshuram Institute of Technology, India)

Opinion mining is a very powerful technique for information discovery in vast sets of structured and unstructured data. One extremely useful application is to extract the qualities of a student from Letters of Recommendation written by college professors. The extracted information is an instrument to judge the student's personality based on the student's different capabilities. To execute this application, an efficient algorithm is required that can analyze the syntax of a sentence and assign appropriate weights to each required piece of information. In this paper we propose the "Maestro Algorithm", designed after extensive research on the syntactic behavior of sentences in English grammar. The weights in the algorithm were assigned based on a survey of sixty teachers, conducted to evaluate their opinion on which attributes contribute to the personality of a student. The survey revealed five key categories along with percentages. Accordingly, the Maestro algorithm analyzes each sentence, extracts the required information, categorizes it, and forms the "estrella structure" after applying the equations proposed in the paper. The "estrella structure" is the final opinionated result of the algorithm.

WCI-07.8 Comparative Study of Personality Models in Software Engineering
Jayati Gulati and Priya Bhardwaj (Guru Gobind Singh Indraprastha University, India); Bharti Suri (Guru Gobind Singh Indraprastha University - India, India)

In this review, we discuss and compare previous studies of human factors in the field of software engineering. Human factors are of utmost importance when we focus on the qualities of a software engineer, as they can help predict various industry trends and improve the performance of the process as a whole. Various software engineering researchers have applied different theoretical models to comprehend software developers' personalities. These models prove beneficial in improving the performance of engineers and encourage effective teamwork by providing insight into the personality traits favourable for certain role types in the software industry. Hence, in this research, we look at the current body of information on software developers' personalities. Our work contrasts research on students in graduate/postgraduate programmes, team effectiveness, the software project manager's role, the software engineer's personality, and pair programming. The comparison shows MBTI (Myers-Briggs Type Indicator) and FFM (Five Factor Model) to be the most significantly used personality models. The rationale behind this paper is to identify the current level of published data on human factors relating to software engineering, discussing its significance and benefit to the industry.

WCI-07.9 Identifying Health Domain URLs using SVM
R Rajalakshmi (Vellore Institute of Technology, Chennai Campus, India)

The World Wide Web contains a large volume of information on various topics. In the health domain especially, people surf the net before consulting experts, but it is not guaranteed that only relevant health-related pages are retrieved. So there is a need for an automated system that can assist in identifying health-related web pages. In this paper, a novel URL-based approach is proposed to identify health domain URLs, which helps avoid fetching irrelevant web pages. One of the issues in URL-based topic classification is the difficulty of selecting suitable URL features. Statistical dictionary based methods have been reported in the literature, but construction of such a dictionary is not automatic. A machine learning technique to automatically learn a statistical dictionary of terms from the training URLs is proposed. To classify a web page as a health page or not, a binary classifier is designed with a dictionary of 4-grams derived from URLs as features. The benchmark ODP dataset has been used to evaluate the performance through various experiments. With the proposed approach, a classification performance of 87% in terms of precision is achieved, a significant improvement over existing techniques.
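
Character 4-gram extraction from URLs can be sketched as below. The dictionary shown is a hypothetical fragment, and the simple hit-ratio score is a toy stand-in for the trained SVM classifier described in the paper.

```python
from collections import Counter

def url_ngrams(url, n=4):
    """Character n-gram counts from a URL: the raw features from
    which a statistical dictionary can be learned."""
    s = url.lower()
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def health_score(url, health_dict):
    """Fraction of the URL's 4-grams found in a dictionary of terms
    learned from health-domain training URLs (toy scoring rule)."""
    grams = url_ngrams(url)
    hits = sum(c for g, c in grams.items() if g in health_dict)
    total = sum(grams.values())
    return hits / total if total else 0.0

# Hypothetical learned dictionary fragment
health_dict = {"heal", "ealt", "alth", "medi", "edic", "dica", "clin", "doct"}
print(health_score("www.healthsite.org/clinic", health_dict) >
      health_score("www.sportsnews.com/scores", health_dict))   # → True
```

In the actual approach these 4-gram features would feed an SVM trained on labelled URLs, rather than a fixed hand-written dictionary.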

WCI-07.10 Information Risk Analysis in a Distributed MOOC Based Software System Using an Optimized Artificial Neural Network
Nimisha Sharath and Shivani Parikh (National Institute of Technology Karnataka, Surathkal, India); Chandrasekaran K (National Institute of Technology Karnataka, India)

Information security is of utmost importance to any organization. With the increasing number of attacks on private data, understanding the risk involved in handling and maintaining it is relevant. Although there are various methods to determine the risk associated with an organization's data, there is also a need to speed up the computation of this risk. This paper discusses the usage of Artificial Neural Networks, which suit the nonlinear nature of the threat vectors affecting the risk involved in setting up a distributed MOOC-based software system. An optimization of the existing methods is proposed that makes use of the bio-inspired Cuckoo Search Algorithm. With the concepts of Lévy flights and random walks, this algorithm produces a much faster rate of convergence in calculating the importance to be given to each threat vector when assessing the security of the software system.

WCI-07.11 Post Release Versions based Code Change Quality Metrics
Meera Sharma and Madhu Kumari (Department of Computer Science, University of Delhi); Vir Singh (Delhi College of Arts and Commerce)

A software metric is a quantitative measure of the degree to which a system, component or process possesses a given attribute. Bug fixing, the introduction of new features (NFs), and feature improvements (IMPs) are the key factors in deciding the next version of a software product. To fix an issue (bug/new feature/feature improvement), many changes have to be incorporated into the source code of the software. These code changes need to be understood by software engineers and managers when performing their daily development and maintenance tasks. In this paper, we propose four new metrics, namely code change quality, code change density, file change quality and file change density, to understand the quality of code changes across the different versions of five open source Apache products: Avro, Pig, Hive, jUDDI and Whirr. Results show that all the products attain better code change quality over time. We have also observed that all five products follow a similar code change trend.

WCI-07.12 Prioritization of Classes for Refactoring: A Step towards Improvement in Software Quality
Ruchika Malhotra (Indiana - Purdue University Indianapolis, IN USA, India); Anuradha Chug and Priyanka Khosla (Guru Gobind Singh Indraprastha University, India)

Bad smells are certain structures in software that violate design principles and ruin software quality. Refactoring deals with bad smell treatment and improves design quality, but it is not possible to refactor every class of the software within deadlines. Prioritization of classes helps the developers involved in maintenance activity to identify the software portions requiring urgent refactoring. In the current study, we propose a framework to identify potential classes based on bad smells and their design metric characteristics. We evaluate our approach on the medium-sized open-source system ORDrumbox and four types of code smells: God Class, Long Method, Type Checking and Feature Envy. The well-known Chidamber and Kemerer metric suite is used to evaluate the object-oriented characteristics of the open source data set. These bad smells, along with the status of the design metrics, are then combined in a certain ratio to calculate the newly proposed metric Quality Depreciation Index Rule (QDIR). Classes are then ranked by their QDIR values, which helps identify the severely affected classes requiring urgent refactoring treatment. It works on the 80:20 principle: 80% of the code quality can be improved by providing refactoring treatment to just 20% of the severely affected code. The results of our study reflect that bad smells and design metrics can be used as an important source of information to quantify the flaws in classes, and are thus helpful to software maintainers in performing their tasks under strict time constraints while maintaining overall software quality.

WCI-07.13 Hallmarking Author Style from Short Texts by Multi-Classifier Using Enhanced Feature Set
Athira U (IIITM-K, India)

Authorship analysis is the process of discerning the author of a document by reckoning the stylistic details that subsist in the document. The analysis attains significance in the area of forensic linguistics, where identification of author of forensic documents can be crucial evidence. The proposed method aims at attributing authorship of short texts by eliciting the idiosyncrasy using psycholinguistic aspects, morphological annotations and lexical features as style emblem. The traits so obtained are subjected to multiple classification and output predictions of multiple classifiers are amalgamated to obtain final judgment regarding authorship. The investigation in this regard culminated in the development of a technique that yields better results for authorship analysis in short texts, hence making it suitable for analysis of forensic documents. The proposed work exhibits an improved result in comparison with the predictions made by the state of the art methods.

WCI-07.14 Is That Twitter Hashtag Worth Reading
Anusha A (Manipal University, India); Sanjay Singh (Manipal Institute of Technology, India)

Online social media such as Twitter, Facebook, Wikis and Linkedin have made a great impact on the way we consume information in our day to day life. Now it has become increasingly important that we come across appropriate content from the social media to avoid information explosion. In case of Twitter, popular information can be tracked using hashtags. Studying the characteristics of tweets containing hashtags becomes important for a number of tasks, such as breaking news detection, personalized message recommendation, friends recommendation, sentiment analysis among others.

In this paper, we have analysed Twitter data based on trending hashtags, which are widely used nowadays. We have used event-based hashtags to learn users' thoughts on those events and to decide whether the rest of the users might find them interesting. We have used topic modeling, which reveals the hidden thematic structure of the documents (tweets in this case), in addition to sentiment analysis, in exploring and summarizing the content of the documents. A technique to find the interestingness of an event-based Twitter hashtag and the associated sentiment has been proposed. The proposed technique helps Twitter followers to read relevant and interesting hashtags.

VisionNet-02: VisionNet-02: Image Enhancement and Restoration (VisionNet/SIPR)

Room: 502
Chairs: Priya S (Govt. Model Engineering College, Oman), Ashok Kumar T (Govt. Model Engineering College, India)
VisionNet-02.1 Background Modelling from a Moving Camera
Amitha Viswanath (Amrita Vishwa Vidyapeetham, India); Reena Kumari Behera (KPIT Cummins, India); Vinuchackravarthy Senthamilarasu (KPITCummins Infosystem ltd, India); Krishnan Kutty (KPIT Cummins Infosystems Ltd, India)

In video analytics based systems, an efficient method for segmenting foreground objects from video frames is the need of the hour. Currently, foreground segmentation is performed by modelling the background of the scene with statistical estimates and comparing them with the current scene. Such methods are not applicable for modelling the background of a moving scene, since the background changes between frames. The scope of this paper includes solving the problem of background modelling for applications involving a moving camera. The proposed method is a non-panoramic background modelling technique that models each pixel with a single spatio-temporal Gaussian. Experimentation on various videos shows that the proposed method can detect foreground objects in frames from a moving camera with negligible false alarms.
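
A plain single-Gaussian per-pixel background model for a static camera can be sketched as below; the paper's spatio-temporal variant that compensates for camera motion is not reproduced here, and the learning rate and threshold are illustrative.

```python
import numpy as np

def update_background(mean, var, frame, alpha=0.05):
    """Running update of a per-pixel Gaussian (mean, variance)
    background model; alpha is the learning rate."""
    diff = frame - mean
    mean = mean + alpha * diff
    var = (1 - alpha) * var + alpha * diff ** 2
    return mean, var

def foreground_mask(mean, var, frame, k=2.5):
    """A pixel is foreground if it deviates more than k standard
    deviations from the modelled background."""
    return np.abs(frame - mean) > k * np.sqrt(var)

# Train the model on a static scene, then test with a bright object
frames = [np.full((4, 4), 100.0) for _ in range(20)]
mean, var = frames[0].copy(), np.full((4, 4), 25.0)
for f in frames:
    mean, var = update_background(mean, var, f)

test = np.full((4, 4), 100.0)
test[1, 1] = 200.0                               # a moving object appears
print(bool(foreground_mask(mean, var, test)[1, 1]))   # → True
```

For a moving camera, as in the paper, the per-pixel statistics must additionally be warped or matched between frames before such an update can be applied.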

VisionNet-02.2 Object Detection and Tracking based on Trajectory in Broadcast Tennis Video
M Archana and M. KalaiselviGeetha (Annamalai University, India)

Ball and player detection and tracking in Broadcast Tennis Video (BTV) is a challenging task in tennis video semantic analysis. The challenges are due to camera motion and other causes such as the small size of the tennis ball, the many objects that resemble a ball, and the fact that the player (the human body together with the tennis racket) is not detected completely. This paper proposes an improved object tracking technique for BTV. To track the ball, a logical AND operation is applied between the created background and the image difference; ball candidates are then detected by applying threshold values and dilation, and finally the ball is tracked. Player detection is performed on the AND results by finding the biggest blob and filling the whole detected object while removing small ones, and the players are tracked based on the contour. The experimental results show that the proposed approach achieves higher accuracy in object identification and tracking. It achieves a high hit rate and a low failure rate for ball tracking, while player tracking is measured by Multiple Object Tracking Precision (MOTP).

VisionNet-02.3 A Reliable Method for Detecting Road Regions from a Single Image Based on Color Distribution and Vanishing Point Location
Neethu John (Amrita Vishwa Vidyapeetham, India); Anusha B (KPIT Technologies, India); Krishnan Kutty (KPIT Cummins Infosystems Ltd, India)

Numerous advanced driver assistance systems (ADAS) are gaining popularity even in the mid to low end segment cars. Vision based technologies assisted by the use of cameras cater to a lot of these ADAS systems. These systems enhance safety of the driver, passenger and pedestrians on the road. In a typical image taken from an on-board camera of a car, it is the road region that occupies most of the pixels. Therefore, an approach for detecting and eliminating road regions from an optical image is proposed which in turn speeds up computations for further object/ROI detection. Our proposed road detection algorithm works in two stages: i) vanishing point detection; ii) road region identification. Experimental results show that this approach performs better with real-road images of varying texture, color and shapes.

VisionNet-02.4 Image denoising using Variations of Perona-Malik Model with different edge stopping functions
Kamalaveni Vanjigounder and Anitha Rajalakshmi Rajendran (Amrita Vishwa Vidyapeetham, India); Narayanankutty Kotheneth K a (Amrita School of Engineering, India)

Anisotropic diffusion is used for both image enhancement and denoising. The Perona-Malik model makes use of anisotropic diffusion to filter out noise, with the rate of diffusion controlled by an edge stopping function. The drawback of the Perona-Malik model is that sharp edges and fine details are not preserved well in the denoised image; they can, however, be preserved using an appropriate edge stopping function. We have analysed the effect of different edge stopping functions in anisotropic diffusion in terms of how efficiently they preserve edges. We found that an edge stopping function which stops diffusion from low image gradients onwards preserves sharp edges and fine details well; this property also results in lower evolution in the case of level set methods. An edge stopping function which stops diffusion only from high image gradients onwards will not preserve sharp edges and fine details, since they are blurred by the diffusion. We also found that low values of the gradient threshold parameter used in the edge stopping function preserve sharp edges and fine details better than high values. By utilizing an edge stopping function which stops diffusion from low image gradients onwards, or which has zero or insignificant value at low image gradients, we can preserve sharp edges and fine details well in the denoised image.
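
The Perona-Malik iteration with one common edge stopping function, the exponential g(|∇I|) = exp(-(|∇I|/κ)²), can be sketched as follows; the iteration count, κ and step size are illustrative, and the paper's other edge stopping functions plug in the same way.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=15.0, lam=0.2):
    """Perona-Malik anisotropic diffusion: diffusion toward each
    neighbour is weighted by g(gradient), so it is suppressed where
    the gradient (an edge) is strong and smooths flat noisy regions."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # exponential edge stopper
    for _ in range(n_iter):
        # Differences toward the four neighbours (periodic border via roll)
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

rng = np.random.default_rng(0)
step = np.zeros((32, 32)); step[:, 16:] = 100.0    # sharp vertical edge
noisy = step + rng.normal(0, 5, step.shape)
den = perona_malik(noisy)
# Noise in the flat region shrinks while the strong edge survives
print(den[:, :12].std() < noisy[:, :12].std())     # → True
```

With κ = 15 the gradient across the 100-level edge gives g ≈ 0, so the edge is essentially untouched, while the small noise gradients still diffuse; raising κ would start to blur the edge, matching the abstract's observation about the gradient threshold parameter.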

VisionNet-02.5 Smart Farming: Pomegranate Disease Detection Using Image Processing
Manisha Bhange (Savitribai Phule Pune University & Jaywantrao Sawant College of Engineering Hadpsar Pune, India); H. A. Hingoliwala (Jaywantrao Sawant College of Engineering, Pune, India)

Crops are being affected by uneven climatic conditions, leading to decreased agricultural yield, which affects the global agricultural economy. Moreover, the situation becomes even worse when the crops are infected by disease. An increasing population also burdens farmers to increase yield. This is where modern agricultural techniques and systems are needed to detect and prevent crops from being affected by different diseases. In this paper, we propose a web-based tool that helps farmers identify fruit disease by uploading a fruit image to the system. The system has an already trained dataset of images for the pomegranate fruit. The input image given by the user undergoes several processing steps to detect the severity of disease by comparison with the trained dataset images. First the image is resized and its features are extracted on parameters such as colour, morphology and CCV, and clustering is done using the k-means algorithm. Next, SVM classification is used to classify the image as infected or non-infected. An intent search technique is also provided, which is very useful for finding the user's intention. Of the three features extracted, morphology gave the best results. Experimental evaluation shows the proposed approach is effective and 82% accurate in identifying pomegranate disease.

VisionNet-02.6 Denoising of DT- MR Images with an Iterative PCA
Sreelakshmi U (Amrita Viswa Vidyapeetham & Amrita School Of Engineering, India); Jyothisha J Nair (Amrita Vishwa Vidyapeetham, India)

Nowadays most clinical applications use Magnetic Resonance Images (MRI) for diagnosing neurological abnormalities. During MR image acquisition the emitted energy is converted to an image using mathematical models, which may introduce noise; the image therefore needs to be denoised. Currently many clinical applications use Diffusion Tensor MR (DT-MR) images for tracking neural fibres by extracting features from the images, and noise in DT-MR images makes fibre tracking and disease diagnosis harder. Our work aims to denoise DT-MR images with better visual quality. In this paper, we propose a denoising technique that uses the Structural Similarity Index Measure (SSIM) for grouping similar patches and performs iterative Principal Component Analysis on each group. By performing a weighted average on the principal components, we obtain the denoised DT-MR image. The iterative PCA technique is employed to obtain better visual quality in the denoised images.

VisionNet-02.7 Edge Detection Using Sparse Banded Filter Matrices
V Sowmya (Amrita Vishwavidyapeetham, India); Neethu Mohan and Soman K P (Amrita Vishwa Vidyapeetham, India)

Edges are intensity changes that occur on the boundary between two regions in an image. Edges can be used as feature descriptors of an object; hence, edge detection plays an important role in computer vision applications. This paper presents the application of sparse banded filter matrices to edge detection. The filter design is formulated in terms of banded matrices, and the sparsity of the designed filter leads to efficient computation. In our proposed method, we apply a sparse banded high-pass filter row-wise and column-wise to extract the vertical and horizontal edges of the image respectively. The proposed technique is tested on standard images and the results are compared with state-of-the-art methods. Visual comparison of the experimental results shows that the proposed approach for edge extraction based on sparse banded filter matrices produces results comparable to existing methods. The advantage of the proposed approach is that continuous edges are attained without any parameter tuning.
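
The row-wise/column-wise application of a banded high-pass filter can be illustrated with the simplest such band, a first-difference matrix; the paper's actual filter design is more elaborate, but the banded-matrix mechanics are the same.

```python
import numpy as np

def highpass_banded(n):
    """A sparse banded high-pass filter as an n x n matrix:
    a first-difference band (1 on the diagonal, -1 just above it)."""
    return np.eye(n) - np.eye(n, k=1)

img = np.zeros((8, 8)); img[:, 4:] = 1.0    # image with a vertical step edge
H = highpass_banded(8)
vertical_edges   = img @ H.T    # filter applied row-wise
horizontal_edges = H @ img      # filter applied column-wise
print(np.nonzero(vertical_edges[0])[0])     # → [3 7] (step column, plus a border term)
```

Because H is banded, each output pixel depends on only a couple of neighbours, which is the source of the computational efficiency the abstract highlights; here the vertical-step image produces responses only in the row-wise (vertical-edge) pass.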

VisionNet-02.8 Adaptive Pedestrian Detection in Infrared Images using Background Subtraction and Local Thresholding
Rajkumar Soundrapandiyan and Chandra Mouli Pvssr (VIT University, India)

Infrared (IR) imaging is the order of the day, with potential real-life applications such as surveillance, defence and other non-military uses. Low contrast, poor illumination due to capturing devices, and moderate to low environmental conditions are the general characteristics of IR images. In addition, the occlusion of objects makes detection more challenging. The objects considered in this paper are pedestrians. A simple and efficient pedestrian detection method is proposed. The two major tasks in the proposed method are background subtraction and local adaptive thresholding. The major contribution of the paper is the adaptive calculation of the required parameters based on the image characteristics. Experiments are conducted on the standard OSU thermal pedestrian database to show the robustness of the proposed method, which attains a 90% detection rate under various environmental conditions, superior to other existing methods.

VisionNet-02.9 Denoising Ultrasound Medical Images with Selective Fusion In Wavelet Domain
Mallika Kavuluru (MTech SP, India); Polurie Venkat Vijaya Kishore (K L University College of Engineering & K L University, India); K Narayana and M Prasad (K L University, India)

Ultrasound medical imaging is undoubtedly an incontestable tool that provides a view of the internal organs of the body. The non-ionizing radiation of ultrasound medical images makes them especially fitting for fetal imaging. The major snag ultrasound imaging encompasses is speckle noise, which results from constructive and destructive interference and degrades the quality of the image. This paper presents a twofold technique to remove this multiplicative speckle noise and to bring out the contrast between the object of interest and the rest of the image. The first fold applies block-based hard thresholding (BHT) and soft thresholding (BST) to pixels in the wavelet domain, in which the original ultrasound image is divided into non-overlapping blocks of sizes 8, 16, 32 and 64. The second fold restores the object boundaries and texture, lost to the blurring caused by the first fold, with adaptive wavelet fusion. Fusion of the wavelet coefficients of the original US image and the block-thresholded US images is used to restore the degraded object. The fusion rule and wavelet decomposition level are made adaptive for each block using gradient histograms with normalized differential mean (NDF), to introduce the highest level of contrast between the denoised pixels and the object pixels in the resultant image. The proposed twofold methods are thus named adaptive NDF block fusion with hard and soft thresholding (ANBF-HT and ANBF-ST). Visual quality through twofold processing improves to an interesting level. Peak signal-to-noise ratio (PSNR), normalized cross-correlation coefficient (NCC), edge strength (ES), image quality index (IQI) and structural similarity index (SSIM) measure the quantitative quality of the twofold processing technique. The proposed method is validated by comparison with anisotropic diffusion (AD), total variational filtering (TVF) and empirical mode decomposition (EMD) for enhancement of US images. The US images are provided by AMMA hospital radiology labs at Vijayawada, India.

VisionNet-02.10 Performance Evaluation of Fuzzy and Histogram Based Color Image Enhancement
Taranbir Kaur and Ravneet Kaur Sidhu (CT Institute of Technology and Research, India)

In computer vision applications, image enhancement plays an important role. The majority of existing enhancement techniques are based on transform-domain methods, which can introduce color artifacts and may also steadily reduce the intensity of the input image. To overcome this dilemma, a fuzzy based algorithm has been used. This approach can boost the contrast of digital images efficiently by utilizing a histogram based fuzzy image enhancement algorithm. The overall objective of this paper is to evaluate the effectiveness of histogram and fuzzy based image enhancement for various kinds of images, such as underwater, remote sensing and medical images. The fuzzy and histogram based enhancement has been designed and implemented in MATLAB using the Image Processing Toolbox. The results show the effectiveness of the fuzzy based enhancement over the existing techniques.

VisionNet-02.11 Improved gait recognition using Gradient Histogram Gaussian Image
Parul Arora (JIIT, India); Smriti Srivastava (Netaji Subhas Institute of Technology, India); Kunal Arora and Shreya Bareja (Netaji Subhas Institute of Technology, India)

In this paper, we propose incorporating HOG (Histogram of Oriented Gradients) into the Gait Gaussian Image for visibly improved gait recognition results. This new spatial-temporal representation is called the Gradient Histogram Gaussian Image (GHGI). It is similar to the Gait Energy Image (GEI), but the use of a Gaussian function and the further application of HOG considerably increase efficiency and reduce the accumulation of noise. In GEI, silhouettes are averaged, and hence only edge information at the boundaries is preserved. In contrast, our method takes the Gaussian distribution over a cycle and computes gradient histograms at all locations, so edge information inside the person's silhouette is also preserved. The features derived from GHGI are classified using the Nearest Neighbor classifier. The supporting simulations are performed on OU-ISIR databases A and B, commonly referred to as Treadmill databases A and B. The potency of our hypothesis is validated with comparative results.

VisionNet-02.12 Clustering of Web User Sessions to maintain occurrence of sequence in navigation pattern
Anupama S (Visvesvaraya Technological University, India); Sahana Gowda (BNM Institute of Technology, India)

Web log data available at the server side helps in identifying the most appropriate pages based on the user request. Analysis of web log data poses challenges, as it consists of abundant information about a web page. In this paper, a novel technique is proposed to pre-process web log data to extract the sequence of occurrence and the navigation patterns helpful for prediction. Each URL in the web log data is parsed into tokens based on the web structure. Tokens are uniquely identified for the classification of URLs. The sequence of URLs navigated by a user over a period of 30 minutes is treated as a session; a session represents the navigation pattern of a user. Sessions from multiple users are clustered using a hierarchical agglomerative clustering technique to analyze the occurrence of sequences in the navigation patterns. From each cluster, one session is identified as a representative, as it holds the most possible pages in the sequence; the other sessions in the cluster are subsets of the representative session. Session-representative navigation patterns are useful for predicting the most appropriate pages for the user request. The proposed model is tested on the web log files of NASA and enggresources.
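The 30-minute sessionization step can be sketched as follows. This is a minimal reading of the session definition; the entry format and the window-from-first-request rule are assumptions:

```python
from datetime import datetime, timedelta

SESSION_WINDOW = timedelta(minutes=30)

def sessionize(entries):
    """Split a user's (timestamp, url) log entries into sessions: each
    session is the sequence of URLs requested within a 30-minute window
    measured from the session's first request."""
    sessions, current, start = [], [], None
    for ts, url in sorted(entries):
        if start is None or ts - start > SESSION_WINDOW:
            if current:
                sessions.append(current)
            current, start = [], ts
        current.append(url)
    if current:
        sessions.append(current)
    return sessions

# Example: requests at 09:00, 09:10 and 09:45 split into two sessions.
t0 = datetime(2015, 8, 10, 9, 0)
sessions = sessionize([(t0, "/a"), (t0 + timedelta(minutes=10), "/b"),
                       (t0 + timedelta(minutes=45), "/c")])
# sessions == [["/a", "/b"], ["/c"]]
```

The resulting per-user session lists are what the hierarchical agglomerative clustering then operates on.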

VisionNet-02.13 Detection and Analysis of Emotion From Speech Signals
Assel Davletcharova (Nazarbayev University, Kazakhstan); Sherin Sugathan (University of Bergen, Norway); Bibia Abraham (Medical College Hospital, Kannur, India); Alex James (IIITMK, India)

Recognizing emotion from speech has become one of the active research themes in speech processing and in applications based on human-computer interaction. This paper conducts an experimental study on recognizing emotions from human speech. The emotions considered in the experiments are neutral, anger, joy and sadness. The distinguishability of emotional features in speech was studied first, followed by emotion classification on a custom dataset. The classification was performed with different classifiers. One of the main feature attributes considered in the prepared dataset was the peak-to-peak distance obtained from the graphical representation of the speech signals. After performing the classification tests on a dataset formed from 30 different subjects, it was found that, for better accuracy, one should consider data collected from one person rather than data from a group of people.
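The peak-to-peak feature mentioned above can be computed frame-wise as sketched below; this is one plausible reading of the feature, and the frame length is an assumption:

```python
import numpy as np

def peak_to_peak_features(signal, frame_len=256):
    """Frame-wise peak-to-peak distance (max minus min per frame) of a
    speech signal; trailing samples that do not fill a frame are dropped."""
    signal = np.asarray(signal, dtype=float)
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    return frames.max(axis=1) - frames.min(axis=1)

# A pure tone of amplitude 1 has a peak-to-peak distance of 2 in every frame.
t = np.arange(512)
feats = peak_to_peak_features(np.sin(2 * np.pi * t / 64))
```

The resulting per-frame values would then feed a classifier alongside the other attributes in the dataset.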

WCI-08: Pattern Recognition, Signal and Image Processing-II

Room: 502
WCI-08.1 A Discrete Wavelet Transform based Adaptive Steganography for Digital Images
Pooja Rai and Sandeep Gurung (Sikkim Manipal University, India); M. K. Ghose (Sikkim Manipal University)

Steganography is the scheme of surreptitious communication that hides data inside data, with the aim of concealing the presence of the secret data from inquisitive eyes. The pervasiveness and availability of redundant information in an image make it an alluring carrier medium. However, not all parts of an image can be used evenly to hide secret information. In this paper, an approach to image steganography is made through a method that exploits the standard deviation of the high-frequency components of the carrier image to identify potential regions for secret embedding. The Discrete Wavelet Transform (DWT) is used to segregate the high-frequency and low-frequency components of the image. The secret embedding is done in the three higher-frequency components with a non-uniform high-embedding-efficiency method: blocks with standard deviation lower than the mean value are embedded with relatively higher efficiency than the remaining blocks. The secret image is encrypted using a chaotic mapping, which creates diffusion and is thus safe even when an adversary has knowledge of the embedding technique used. The experimental results for the method show acceptable cover-stego structural similarity, stego image fidelity, embedding efficiency and statistical imperceptibility.

WCI-08.2 An Efficient Deblurring Algorithm on Foggy Images using Curvelet Transforms
Monika Verma and Vandana Kaushik (Harcourt Butler Technological Institute, India); Vinay Pathak (Vardhaman Mahaveer Open University, India)

The contrast and color of an image are degraded if the photograph is taken under poor weather conditions, e.g. rain and fog. In addition, if motion blur is present in the scene, then apart from the degradation of contrast and color, the complexity of the image is also increased by the blur. In this paper we compare certain deblurring algorithms and also implement a de-weathering algorithm with the help of curvelets, after rectifying the motion blur of the given scene. Experimental results presented in this paper show that the Cumulative Probability of Blur Detection (CPBD) values are better for the results obtained by deblurring with curvelets than by deblurring alone. Hence it can be concluded that the presented algorithm performs better than one that rectifies motion blur in a foggy condition without using curvelets.

WCI-08.3 An Efficient Way to Determine the Chromatic Number of a Graph Directly from its Input Realizable Sequence
Prantik Biswas (Jaypee Institute of Information Technology, Noida, Sector 62, India); Chumki Acharya, Nirupam Chakrabarti and Shreyasi Das (National Institute of Technology Agartala, India); Abhisek Paul (NITA, India); Paritosh Bhattacharya (NIT Agartala, India)

Spectral graph theory is a popular topic in modern applied mathematics. Spectral graph-theoretic techniques are widely used to extract a large variety of information about different properties of a graph from its adjacency matrix. A well-known property of a graph is its chromatic number. In this paper, we propose an efficient approach to determine the chromatic number of a graph directly from a realizable sequence. The method involves construction of the adjacency matrix corresponding to an input sequence, followed by calculation of its eigenvalues to determine bounds on the chromatic number and consequently the chromatic number itself.
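The eigenvalue bounds involved can be sketched using two standard spectral results, Wilf's upper bound and Hoffman's lower bound; whether the paper uses exactly these bounds is an assumption:

```python
import numpy as np

def chromatic_bounds(adj):
    """Spectral bounds on the chromatic number from the adjacency matrix:
    Hoffman's lower bound 1 + lmax/|lmin| and Wilf's upper bound 1 + lmax,
    where lmax and lmin are the extreme eigenvalues of the matrix."""
    eig = np.linalg.eigvalsh(np.asarray(adj, dtype=float))
    lmin, lmax = eig[0], eig[-1]
    return 1 + lmax / abs(lmin), 1 + lmax

# Complete graph K4 (eigenvalues 3 and -1): both bounds give exactly 4,
# the true chromatic number.
K4 = np.ones((4, 4)) - np.eye(4)
lower, upper = chromatic_bounds(K4)
```

When the two bounds pin down a narrow interval, the chromatic number can be read off directly, which is the spirit of the approach described above.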

WCI-08.4 Applying Image Processing for Detecting On-Shelf Availability and Product Positioning in Retail Stores
Rahul Moorthy, Swikriti Behera and Saurav Verma (Mukesh Patel School of Technology Management and Engineering, NMIMS University, India); Shreyas Bhargave (Capgemini Technology Services India, India); Prasad Ramanathan (IGATE Global Solutions, Mumbai, India)

Lack of availability of goods and/or improper positioning of products on the shelves of a retail store can result in lost sales for a retailer. Visual audits are undertaken by the retailer's staff and the staff of the FMCG product companies (whose products are stocked on the retail shelves) to discover out-of-stock and misplaced products on a retailer's shelf. In this paper, a method of automating this manual inspection process is described. The paper also demonstrates that by applying image processing techniques (available in MATLAB 2013a), it is possible to identify and count the front-facing products, as well as detect void spaces on the shelf. Images from a video stream (such as from a security camera) can also be analyzed to count the number of facings of a specific product on a shelf and identify whether they are placed face-up, as should be the case. The image processing approach proposed in the paper primarily enables proper positioning of products in the front row of the shelf. While that may seem a limitation for inventory counting, it is actually an important parameter for product manufacturers, who usually rent shelf space and positions at a premium and mandate the retailers to place specific products on specific shelves. The incremental change that the paper proposes is to extend the use of feature extraction in image processing to highlight incorrect placement and positioning of items on the shelves. The implemented solution does not require significant additional infrastructure costs.

WCI-08.5 Comparative study of visual attention models with human eye gaze in remote sensing images
Amudha J (Amrita Vishwa Vidyapeetham, India); Radha D (Amrita University & Amrita School of Engineering, India); Deepa AS (Amrita School of Engineering, India)

Computational visual attention models analogous to human eye behaviour are in great demand for applications in various fields. The study of visual attention yields information about a person's conscious processing while performing a task. This paper evaluates the behaviour of bottom-up visual attention models against an eye gaze data set using various performance metrics. The eye tracking data used for the study measures the gaze fixation points of human beings viewing remote sensing images. The evaluation of the models concludes that the Graph Based Visual Saliency model predicts better than the Itti-Koch model across all the performance measures, namely the Normalized Scanpath Saliency (NSS) score, the Area Under the Curve (AUC) score and the Linear Correlation Coefficient, for the remote sensing images.
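Of the metrics listed, the NSS score is simple to sketch: it averages the z-scored saliency map at the human fixation locations. The (row, col) coordinate convention here is an assumption:

```python
import numpy as np

def nss(saliency_map, fixations):
    """Normalized Scanpath Saliency: mean of the z-scored saliency map
    sampled at the fixation points, given as (row, col) pairs. Values
    well above 0 mean fixations land on salient regions."""
    s = np.asarray(saliency_map, dtype=float)
    z = (s - s.mean()) / s.std()
    rows, cols = zip(*fixations)
    return z[list(rows), list(cols)].mean()

# Fixations landing on the single salient spot score well above 0;
# fixations elsewhere score below 0.
smap = np.zeros((8, 8)); smap[3, 4] = 1.0
score = nss(smap, [(3, 4)])
```

AUC and the linear correlation coefficient are computed similarly from the map and a fixation density map, but over all pixels rather than only the fixated ones.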

WCI-08.6 Copy-Move forgery detection based on Harris Corner points and BRISK
Meera Isaac (University of Kerala, India); Wilscy M (SAINTGITS College of Engineering, India)

Copy-move image forgery is a simple and effective method of performing digital image forgeries. Localizing the forged region is an important task in detecting such forgeries. This paper proposes a keypoint-based passive image forensic technique to detect and localize forged regions in a manipulated image. Keypoints are extracted from the image using the Harris corner detector, and a region around each keypoint is used for feature extraction using BRISK. The Hamming distance metric is used to compute the distances among the obtained features to facilitate the measurement of similarity. Similarity among feature vectors is detected using the Nearest Neighbor Distance Ratio. Finally, outliers are removed using RANSAC. Experimental results demonstrate the robustness of the proposed method to general geometrical transformations and post-processing operations.

WCI-08.7 Gabor Filter Based Face Recognition Using Non-frontal Face Images
Atrayee Majumder, Srija Chowdhury and Jaya Sil (Indian Institute of Engineering Science and Technology, Shibpur, India)

Face recognition has immense real-world applications in the field of computer vision and is a challenging task, especially when frontal face images are not available to train the classifiers. In this paper, by regulating the scale and orientation parameters of Gabor filters, we obtain high-dimensional features from face images with different poses. To classify the images, we first partition them using the k-means clustering algorithm, where k varies from 6 to 8 for different databases, representing the pose variations of the input images. Based on the clustering, we assign class labels to the training data set for recognizing non-frontal face images with varying poses. To reduce the complexity of the system, different statistical properties of the features, such as variance, entropy and correlation coefficient, are analyzed to select only the significant features. Removal of irrelevant features effectively reduces the dimensionality of the feature space without sacrificing accuracy, which is 94.47%. The proposed approach performs better compared to the existing methods, with and without the feature selection algorithm.

WCI-08.8 Impedance Cardiography Used For Diagnosis of Diseases
Samrudhi Patil (Mumbai University, India); Pranali Choudhari (Fr. C. Rodrigues Institute of Technology & IIT Bombay, India)

In emergency medicine, the hemodynamic parameters of an extremely ill patient are difficult to evaluate. It is very difficult to confirm the true state of a patient's cardiovascular health only from parameters measured by checking blood pressure and heart rate. Though scientific knowledge and technology have advanced significantly in recent years, cardiovascular diseases present a growing and major public health concern. These diseases can be diagnosed by invasive and non-invasive methods. Non-invasive methods such as ECG are reliable, easy to use and well tolerated by patients. Though ECG monitoring is a widely used ambulatory technique, in many circumstances 24-hour ECG monitoring does not supply sufficient information, such as in studies of the heart's functioning under pharmacological intervention and in arrhythmia patients. To solve this problem, the ECG signal can be recorded simultaneously with another signal reflecting central hemodynamic activity. Impedance Cardiography (ICG) is one such inexpensive and non-invasive method of monitoring the electrical impedance change of the thorax caused by the periodic change of blood volume in the aorta. This paper proposes a simple, non-invasive and cost-effective method for diagnosing diseases using impedance cardiography; it measures impedance from blood flow at the wrist, avoiding the trauma and inconvenience of the thoracic technique, in which current passes through the heart, during emergencies or otherwise. The system proposed in this paper has been used to obtain the impedance waveform for twelve subjects with different ailments, and it successfully estimates a few hemodynamic parameters as well as discriminating between the diseases.

WCI-08.9 Integrating AI Techniques In SDLC: Design Phase Perspective
Shreta Sharma (St. Xavier's College, JAIPUR, India); Santosh Pandey (Ministry of Communication & IT, New Delhi, India)

The Software Development Life Cycle (SDLC) is a process consisting of various phases, such as requirements analysis, design, coding, testing, and implementation and maintenance of a software system, as well as the way in which these phases are implemented. Research studies reveal that the initial two phases, viz. requirements and design, are the skeleton of the entire development life cycle. Design has several sub-activities, such as Architectural, Function-Oriented and Object-Oriented design, which aim to transform the requirements into detailed specifications covering all aspects of the system in a proper way; at the same time, various related challenges exist. One of the foremost challenges is the minimal interaction between construction and design teams, causing numerous problems during design, such as production delays, incomplete designs, rework and change orders. Prior research studies reveal that Artificial Intelligence (AI) techniques may eliminate these problems by offering several tools and techniques to automate certain processes up to a certain extent. In this paper, our major aim is to identify the challenges in each of the stages of the design phase and the possibility of AI techniques overcoming these identified issues. In addition, the paper explores the relationship between these issues and their possible AI solutions through a Venn diagram. For some of the issues there exists more than one AI technique, but for others no AI technique has been found to overcome them; accordingly, those issues remain open for further research.

WCI-08.10 Off-line Handwritten Modi Numerals Recognition using Chain Code
Manisha Shankarrao Deshmukh (North Maharashtra University, Jalgaon & School of Computer Sciences, India); Manoj Patil and Satish Kolhe (North Maharashtra University, Jalgaon, India)

In this paper, a system for recognition of off-line handwritten Modi script numerals is presented. A chain code feature extraction technique with a non-overlapping blocking strategy is used to extract the features of handwritten Modi numerals. A correlation coefficient is used for Modi numeral recognition. Experimental results are evaluated using two strategies: different non-overlapping divisions of the numeral image and different sizes of data set. In experiments, a maximum recognition rate of 85.21% is achieved on a database of 30000 images. The recognition results show better performance for 5x5 grid divisions.
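The Freeman chain code underlying this kind of feature extraction can be sketched as follows; the paper's block-wise variant presumably histograms these direction codes per grid cell:

```python
# 8-connected Freeman chain code: (dx, dy) step -> direction code 0..7.
DIRS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
        (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(points):
    """Freeman 8-direction chain code of an ordered boundary point list:
    one code per step between consecutive points."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        codes.append(DIRS[(x1 - x0, y1 - y0)])
    return codes

# Three sides of a unit square: right (0), up (2), left (4).
codes = chain_code([(0, 0), (1, 0), (1, 1), (0, 1)])
# codes == [0, 2, 4]
```

A histogram of these codes over each non-overlapping block then gives a fixed-length feature vector per numeral image.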

WCI-08.11 Robust and Adaptive Approach for Human Action Recognition Based on Weighted Enhanced Dynamic Time Warping
Bharathram Chandrasekaran (College of Engineering - Guindy & Anna University, India); S Chitrakala (College Of Engineering Guindy, Anna University Chennai, India)

New applications of Human Action Recognition (HAR) require quicker and adaptive methods that resolve user actions, accommodate multiple users and learn new actions/actors. This paper proposes a robust and adaptive approach to HAR using Weighted Enhanced Dynamic Time Warping (WEDTW) that allows up to two users/actors to interact with the system. A kinematic model of the actor is constructed from the video input, and key poses are found using a suitable clustering method and saved into the repository. The poses thus found are tested for affected joints, and their positions are recovered by probabilistic estimation if found to be defective. Actual HAR in a given video is done in two phases. In the first phase, poses are weighted, and WEDTW finds two measures, bagwise similarity and posewise similarity, between key poses from the new input and those in the trained repository. The bagwise similarities are then tested for confidence; based on this, either the existing action label is given as output or the second phase is invoked. Adaptation runs in the second phase to resolve anomalies and learn the new actor/action, making the approach applicable to the dynamic needs of applications such as gaming and human-computer interaction.
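A plain dynamic time warping distance, the baseline that WEDTW extends with pose weights, can be sketched as:

```python
import numpy as np

def dtw(a, b, dist=lambda x, y: abs(x - y)):
    """Classic dynamic time warping distance between two sequences.
    This is the unweighted baseline; the paper's WEDTW adds per-pose
    weights on top of a formulation like this."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Each cell extends the cheapest of the three allowed moves.
            D[i, j] = dist(a[i - 1], b[j - 1]) + min(D[i - 1, j],
                                                     D[i, j - 1],
                                                     D[i - 1, j - 1])
    return D[n, m]
```

Because DTW allows repeats and skips along either sequence, two action sequences performed at different speeds can still match with low cost, e.g. `dtw([1, 2, 3], [1, 2, 2, 3])` is 0.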

WCI-08.12 Using PRUSS for Real-Time Applications on Beaglebone Black
Anjitha Anand (NIELIT Calicut, India); Balu Raveendran (RSET, India); Shoukath Cherukat (NIELIT Calicut, India); Shiyas Shahabudeen (Gadgeon Kochi, India)

This paper proposes the use of the Programmable Real-time Unit Sub System (PRUSS) in the Sitara ARM processor for real-time applications. The Sitara AM335x SoC on the BeagleBone Black is equipped with two 32-bit low-latency microcontrollers called Programmable Real-time Units (PRUs). Each can be individually programmed in assembly to do a specific task independent of the ARM processor. This has many advantages, such as fast operation with near real-time processing speed and offloading tasks from the ARM core.

WCI-08.13 An Integrated Digital Image Analysis System for Detection, Recognition and Diagnosis of Disease in Wheat Leaves
Diptesh Majumdar (IIT Guwahati, India); Dipak Kumar Kole (St. Thomas' College of Engineering. and Technology, India); Aruna Chakraborty (St. Thomas' College of Engineering & Technology, India); Dwijesh Majumder (Institute of Cybernetics Systems and Information Technology, India)

Wheat leaves need to be scouted routinely for early detection and recognition of rust diseases, which facilitates timely management decisions. In this paper, an integrated image processing and analysis system has been developed to automate the inspection of these leaves and the detection of any disease present in them. Disease features of wheat leaves are extracted using the fuzzy c-means clustering algorithm, and an algorithm for disease detection, recognition of its type and identification has been developed based on an artificial neural network (ANN). Through the use of an ANN, specifically a multilayer perceptron, detection of the presence of disease in wheat leaves was successful in 97% of the cases, after analysis of about 300 test images of wheat leaves. Identification of the type of disease, if present, in a wheat leaf was successful in 85% of the cases.

WCI-08.14 Medical Image Mining System:MIMS
Neethu Joseph. C and Aswathy Wilson (Calicut University, India)

Data mining is an emerging area of research, because a huge volume of electronic data is generated every second. Image mining is a new outgrowth of data mining in which the analysis of image data is carried out. In the case of medical images, mining is an important task. Increasingly large medical collections introduce big challenges in medical data management and retrieval. Medical images contain very crucial information, which is important in the characterization of diseases. Some medical information retrieval systems and some medical image retrieval systems already exist, but those systems have limitations and drawbacks. This paper proposes a novel Medical Image Mining System, MIMS, that performs the medical image retrieval task. The system extracts SURF features from the images. The KD-tree method is used to index the feature dataset, and a KNN classifier is used for image searching. This image retrieval system retrieves most of the similar images from the database. The performance measures show that the proposed system works efficiently.
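The KD-tree indexing and nearest-neighbour search steps can be sketched with SciPy; the random vectors below stand in for the SURF descriptors the system actually extracts:

```python
import numpy as np
from scipy.spatial import cKDTree

# Toy feature database: each row stands in for one image's descriptor
# (the real system would use SURF features here).
rng = np.random.default_rng(42)
db = rng.normal(size=(100, 16))

tree = cKDTree(db)                 # KD-tree index over the feature dataset

# Query with a slightly perturbed copy of image 7's descriptor:
query = db[7] + 0.01 * rng.normal(size=16)
dists, idx = tree.query(query, k=5)  # 5 nearest neighbours by Euclidean distance
```

The KD-tree makes each query logarithmic in the database size on average, which is what makes retrieval over a large image collection practical.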

VisionNet-03: Pattern Recognition, Clustering, and Classification - I (VisionNet/SIPR)

Room: 506
Chairs: Mahesh Chavan (KIT's College of Engineering, India), Deepthi P S (LBS Centre for Science and Technology, Trivandrum & Indian Institute of Information Technology and Management-Kerala, India)
VisionNet-03.1 A Hybrid Data Clustering Using Firefly Algorithm Based Improved Genetic Algorithm
Maheshwar Sharma (Bharati Vidyapeeth's College of Engineering, New Delhi, India); Keshav Kaushik and Vikram Arora (MVN University, Palwal, India)

Clustering is a data mining technique for grouping data into subsets to retrieve useful information from the data set. Clustering involves selecting k cluster centres randomly and grouping the data around those centres. Genetic algorithms are heuristic algorithms that have been applied to the clustering problem for optimization. Genetic algorithms follow the process of natural selection and work in an iterative manner, generating a new population from the old one. The initial population is randomly initialized, and the whole iterative process is influenced by the initial values selected at the start; so proper selection also affects the optimization problem. In this paper, we propose a firefly-based genetic algorithm (FAG) in which the initial population is selected from a pool of populations on the basis of the firefly algorithm. The firefly algorithm is also a biologically inspired algorithm used for optimization problems. The FAG algorithm is then applied to publicly available datasets from the UCI repository. The results obtained are satisfactory and competitive compared to the basic genetic and firefly algorithms.

VisionNet-03.2 Comparative Analysis of Scattering and Random Features in Hyperspectral Image Classification
Nikhila Haridas (Amrita Vishwa Vidyapeetham, India); V Sowmya (Amrita Vishwavidyapeetham, India); Soman K P (Amrita Vishwa Vidyapeetham, India)

Hyperspectral images (HSI) contain extremely rich spectral and spatial information that offers great potential to discriminate between various land cover classes. The inherent high dimensionality and insufficient training samples in such images introduce the Hughes phenomenon. To deal with this issue, several preprocessing techniques have been integrated into the HSI processing chain prior to classification. Supervised feature extraction is one such method, mitigating the curse of dimensionality induced by the Hughes effect. In recent years, new strategies for feature extraction based on the scattering transform and Random Kitchen Sinks have been introduced, which can be used in the context of hyperspectral image classification. This paper presents a comparative analysis of scattering and random features in hyperspectral image classification. The classification is performed using a simple linear classifier, Regularized Least Squares, accessed through the Grand Unified Regularized Least Squares (GURLS) library. The proposed approach is tested on two standard hyperspectral datasets, namely the Salinas-A and Indian Pines subset scenes captured by NASA's AVIRIS (Airborne Visible/Infrared Imaging Spectrometer) sensor. To show the effectiveness of the proposed method, a comparative analysis is performed based on feature dimension, classification accuracy measures and computational time. From the comparative assessment, it is evident that classification using random features achieves excellent classification results with less computation time, compared with raw pixels (without feature extraction) and scattering features, for both datasets.
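The Random Kitchen Sink idea (Rahimi and Recht's random Fourier features) can be sketched as follows: random projections followed by a cosine give features whose inner products approximate an RBF kernel, so a cheap linear classifier such as RLS behaves like a kernel machine. The exact feature count and kernel width used in the paper are not stated here:

```python
import numpy as np

def random_fourier_features(X, n_features=200, gamma=1.0, seed=0):
    """Random Kitchen Sink features z(x) = sqrt(2/D) * cos(Wx + b), whose
    inner products approximate the RBF kernel exp(-gamma * ||x - y||^2)
    in expectation; W ~ N(0, 2*gamma), b ~ U[0, 2*pi]."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, np.sqrt(2.0 * gamma), size=(X.shape[1], n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Each pixel's spectral vector maps to a D-dimensional random feature;
# a linear RLS classifier is then trained on Z instead of the raw pixels.
X = np.random.default_rng(1).normal(size=(5, 8))
Z = random_fourier_features(X, n_features=2000)
```

This mapping costs only a matrix multiply per pixel, which is consistent with the lower computation time the abstract reports for random features.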

VisionNet-03.3 Generic Feature Learning in Computer Vision
Kanishka Nithin (Amrita University, India); Palaniappan Bagavathi Sivakumar (Amrita Vishwa Vidyapeetham & School of Engineering, India)

The quality of a learning algorithm applied to computer vision tasks depends on the features engineered from the image. The premise is that different representations can disentangle and capture most of the explanatory factors responsible for variations in images, be they rigid, affine or projective; hence researchers pay the utmost attention to hand-engineering features that capture these variations. The problem is that by hand-engineering them, researchers miss the best representations, and learning algorithms never reach their full potential. In recent times there has been a shift from hand-crafting features to representation learning. The resulting features are not only optimal but also generic, in that they can be used as off-the-shelf features for visual recognition tasks. This paper reviews various end-to-end deep learning methods for hierarchical feature learning in computer vision.

VisionNet-03.4 Efficient Automatic Image Annotation using Optimized weighted Complementary feature fusion using Genetic Algorithm
Ajimi Ameer (Cochin University of Science and Technology, India); Sreekumar K (College of Engineering Poonjar, India)

Retrieval of images of a user's interest from a large database is complex, since the image content is defined by different features such as color, texture and shape, and these features can be combined to give a single feature vector that represents an image. In conventional methods, equal weights are taken for each feature; this increases the feature dimensionality, and certain image features will override the concept the user is really focusing on. To overcome these problems, in this paper, the weights of the different features are assigned appropriately using a genetic algorithm (GA), similar to human perception, which gives an optimized feature vector for each image.

VisionNet-03.5 Video Search Engine Optimization Technique Using Keyword and Feature Analysis
Krishna Choudhari and Vinod Bhalla (Thapar University, India)

The growth of the internet has touched every sphere of life, and business is no exception: more and more companies and individuals are bringing their business online. Nowadays, videos are used as a tool to promote businesses and advertisements. Enterprises upload relevant videos to video-sharing sites, and people use video search engines to extract the most relevant video content. Users normally choose the top-ranked and most-viewed results irrespective of usefulness, so the key is video search engine optimization (VSEO). This research proposes a method to optimize a video's rank by exploiting the search strategy of video search engines. A higher-ranked video may attract more views and, as a result, promote website visits. An experiment is conducted as a case study. To promote white-hat SEO, a technique is suggested for the usage of keyword tags in the title, description and transcript. Ranks of videos are analysed before and after VSEO. The results of the analysis emphasize a strategy for selecting appropriate keyword tags based on navigational, transactional and informational search queries.

VisionNet-03.6 Improved Edge Detection Algorithm for Brain Tumor Segmentation
Asra Aslam and Ekram Khan (Aligarh Muslim University, India); M. M. Sufyan Beg (J. M. I New Delhi, India)

Image segmentation is used to separate objects from the background, and has thus proved to be a powerful tool in bio-medical imaging. In this paper, an improved edge detection algorithm for brain-tumor segmentation is presented. It is based on Sobel edge detection: it combines the Sobel method with an image-dependent thresholding method and finds different regions using a closed-contour algorithm. Finally, tumors are extracted from the image using the intensity information within the closed contours. The algorithm is implemented and its performance is measured both objectively and subjectively. Simulation results show that the proposed algorithm gives superior performance over conventional Sobel edge detection, and various comparison methods are used to demonstrate this superiority.
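
The Sobel-plus-adaptive-threshold step can be sketched as below. The particular threshold rule (mean plus one standard deviation of the gradient magnitude) and the synthetic "tumour" image are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def sobel_edges(img):
    """Sobel gradient magnitude with an image-dependent threshold."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):                 # correlate with the 3x3 kernels
        for j in range(3):
            patch = img[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    mag = np.hypot(gx, gy)
    thr = mag.mean() + mag.std()       # image-dependent, not a fixed constant
    return mag > thr

# A synthetic "tumour": bright square on a dark background
img = np.zeros((32, 32))
img[10:22, 10:22] = 1.0
edges = sobel_edges(img)
print("edge pixels:", int(edges.sum()))
```

Deriving the threshold from each image's own gradient statistics is what lets the same detector work across scans with very different contrast, which a single fixed Sobel threshold cannot do.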

VisionNet-03.7 A Minutiae Count Based Method for Fake Fingerprint Detection
Kumar Abhishek and Ashok Yogi (Indian Institute of Technology Guwahati, India)

Fingerprint-based biometric systems are ubiquitous because they are relatively cheap to install and maintain while serving as a fairly accurate biometric trait. However, it has been shown in the past that spoofing attacks on many fingerprint scanners are possible using artificial fingerprints made from, but not limited to, gelatin, Play-Doh and silicone molds. In this paper, we propose a novel method based on the minutiae count for detecting fake fingerprints generated using these methods. The proposed algorithm has been tested on the standard FVC (Fingerprint Verification Competition) 2000-2006 datasets, and the accuracy was well above 85%. We also present a literature survey of previous algorithms for fake fingerprint detection.

VisionNet-03.8 Vibration feature extraction and analysis of industrial ball mill using MEMS accelerometer sensor and synchronized data analysis technique
Satish Mohanty and Karunesh K Gupta (BITS, India); Kota Solomon Raju (Scientist, India)

The use of advanced technologies such as micro-electromechanical system (MEMS) sensors and low-power wireless communication holds great promise for optimal performance of an industrial wet ball mill. Directly translating the natural processes of a batch-process mill in a lab setup to a continuous-process mill in industry is quite complex, given the differences in their intent and operating conditions. In this paper, the vibration signature of an industrial wet ball mill is analyzed using a MEMS accelerometer sensor. The signals are taken using two wireless accelerometer sensors, mounted at the feed and discharge ends of the ball mill, to validate the grinding status of the copper ore. The vibration spectra before and after the feed are compared to estimate the actual grinding status of the ore inside the mill. To obtain a fuller understanding of the mill phenomena, vibration signatures are estimated with the sensor's sampling rate synchronized to the r.p.m. of the ball mill. Finally, limiting threshold levels for the intensities are identified to monitor the desired grinding status of the ore. The high-frequency (ZigBee) transmission loss due to diffraction is also compensated by a novel arrangement of the sensor transceiver on the side wall of the mill.

VisionNet-03.9 Independent Component Analysis and Number of Independent Basis Vectors
Sushma Niket Borade (Babasaheb Ambedkar Marathwada University Aurangabad, India); Ratnadeep Deshmukh (Babasaheb Ambedkar Marathwada University, India); Pukhraj Shrishrimal (Computer Science & IT, India)

This paper addresses the use of Independent Component Analysis (ICA) for recognizing human faces, implemented using the InfoMax algorithm. Face recognition performance is evaluated using Architecture-I, which treats images as random variables and pixels as outcomes. We observe the sensitivity of ICA to the dimensionality of the final subspace. Experiments are carried out on the ORL face database, which consists of 400 face images. We present the recognition rate of the system as a function of the number of independent basis vectors, along with the energy retained in the corresponding eigenvectors of the underlying PCA subspace. Our results show that the performance of face recognition using ICA increases with the number of statistically independent basis vectors.
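
Architecture-I itself needs face data, but the InfoMax learning rule at its core can be sketched on synthetic sources. The Bell-Sejnowski natural-gradient update with a logistic nonlinearity is used below; the two Laplacian (super-Gaussian) sources, the mixing matrix and the learning schedule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent super-Gaussian (Laplacian) sources, linearly mixed
n = 2000
S = rng.laplace(size=(2, n))
A = np.array([[1.0, 0.6], [0.4, 1.0]])     # unknown mixing matrix
X = A @ S
X -= X.mean(axis=1, keepdims=True)

# InfoMax ICA: natural-gradient update with a logistic score function
W = np.eye(2)
lr = 0.01
for _ in range(500):
    U = W @ X
    g = 1.0 / (1.0 + np.exp(-U))           # logistic nonlinearity
    W += lr * (np.eye(2) + (1 - 2 * g) @ U.T / n) @ W

# Each recovered component should correlate strongly with one source
Y = W @ X
corr = np.abs(np.corrcoef(np.vstack([Y, S]))[:2, 2:])
print(corr.round(2))
```

In the face-recognition setting the same update is run on (PCA-reduced) image data, and the rows of the learned unmixing matrix give the statistically independent basis vectors whose number the paper varies.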

VisionNet-03.10 Visualization with Charting Library based on SVG for Amrita Dynamic Dashboard
Meenakshi R, Jayalekshmi G and Hariram S (Amrita Vishwa Vidyapeetham, Amritapuri, India); Shiju Sathyadevan (Amrita Vishwa Vidyapeetham, India); MG Thushara (Amrita School of Engineering, Amritapuri & Amrita Vishwa Vidyapeetham, Amrita University, India)

Data visualization is the representation of data in a graphical or pictorial format. Proper visualization is necessary for the effective communication of data to a user, and is essential for the user to understand the data easily. Data is visualized through various charts that represent its attributes. For web applications, there are many open-source JavaScript libraries that work on HTML5 (using SVG or CANVAS). The drawback of these libraries is that they do not provide much flexibility with respect to configuration, do not generalize across chart types, and do not support many data mining algorithms for visualization. This paper illustrates the building of a JavaScript charting library that ensures proper visualization of data and is flexible for user customization. The charting library supports different types of charts, from scatter charts to line charts to bar charts, used by various algorithms. The library is built on Object-Oriented JavaScript concepts to support web applications that run either on the internet or an intranet, so that extending it in the future is also possible.

VisionNet-03.11 Fuzzy C means Detection of Leukemia based on Morphological Contour Segmentation
P Viswanathan (Image Processing & VIT University, India)

The complex nature of blood smear images, and the fact that other disorders show similar signs, make leukemia difficult to detect; detection also takes considerable time and is susceptible to error. To solve these issues, Fuzzy C-Means cluster optimization of leukemia detection based on morphological contour segmentation is proposed in this paper. This paper introduces a new approach for leukemia detection consisting of (1) contrast enhancement to highlight the nuclei, (2) morphological contour segmentation, and (3) Fuzzy C-Means classification for leukemia detection. The contrast enhancement, done with simple addition and subtraction operations, separates the nuclei. The morphological contour segmentation detects the edges of the nuclei and eliminates the normal blood cells from the microscopic blood image. Then features such as texture, geometry, colour and statistical measures of the nuclei are evaluated to determine the various factors of the white blood cells. Finally, the feature row vector is classified into normal and leukemic white blood cells by Fuzzy C-Means clustering. The proposed algorithm provides better results in terms of accuracy and time consumption when compared to a hematologist's visual classification.

VisionNet-03.12 Plant leaf recognition using shape features and colour histogram with k-nearest neighbour classifiers
Trishen Munisami, Mahess Ramsurn, Somveer Kishnah and Sameerchand Pudaruth (University of Mauritius, Mauritius)

Automated systems for plant recognition can be used to classify plants into appropriate taxonomies. Such information can be useful for botanists, industrialists, food engineers and physicians. In this work, a recognition system capable of identifying plants by using the images of their leaves has been developed. A mobile application was also developed to allow a user to take pictures of leaves and upload them to a server. The server runs pre-processing and feature extraction techniques on the image before a pattern matcher compares the information from this image with the ones in the database in order to get potential matches. The different features that are extracted are the length and width of the leaf, the area of the leaf, the perimeter of the leaf, the hull area, the hull perimeter, a distance map along the vertical and horizontal axes, a colour histogram and a centroid-based radial distance map. A k-Nearest Neighbour classifier was implemented and tested on 640 leaves belonging to 32 different species of plants. An accuracy of 83.5% was obtained. The system was further enhanced by using information obtained from a colour histogram which increased the recognition accuracy to 87.3%. Furthermore, our system is simple to use, fast and highly scalable.
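
The shape-feature-plus-k-NN pipeline can be sketched on synthetic binary leaf masks. The three descriptors below (area, aspect ratio, a crude perimeter estimate) are a small subset of the paper's feature list, and the two rectangle "species" are illustrative stand-ins for real leaf images.

```python
import numpy as np

def shape_features(mask):
    """Area, bounding-box aspect ratio, and perimeter of a binary leaf mask."""
    ys, xs = np.nonzero(mask)
    area = mask.sum()
    length = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    # perimeter proxy: foreground pixels with a background 4-neighbour
    padded = np.pad(mask, 1).astype(bool)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    perim = (mask.astype(bool) & ~interior).sum()
    return np.array([area, length / width, perim], float)

def knn_predict(train_X, train_y, x, k=3):
    d = np.linalg.norm(train_X - x, axis=1)
    votes = train_y[np.argsort(d)[:k]]
    return np.bincount(votes).argmax()

# Two synthetic "species": roughly square leaves vs elongated leaves
rng = np.random.default_rng(0)
masks, labels = [], []
for _ in range(10):
    m = np.zeros((40, 40), int); m[5:5 + rng.integers(8, 12), 5:15] = 1
    masks.append(m); labels.append(0)
    m = np.zeros((40, 40), int); m[5:35, 5:5 + rng.integers(4, 7)] = 1
    masks.append(m); labels.append(1)

X = np.array([shape_features(m) for m in masks])
y = np.array(labels)
query = np.zeros((40, 40), int); query[5:33, 5:10] = 1   # elongated leaf
print("predicted species:", knn_predict(X, y, shape_features(query)))
```

In the actual system the feature vector is longer (hull area/perimeter, distance maps, colour histogram), but the matching step is the same k-NN vote over distances in feature space.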

VisionNet-03.13 Fingerprint Recognition Using Zone Based Linear Binary Patterns
Gowthami A T (Visvesvaraya Technological University & PES Institute of Technology and Management, India); Mamatha H R (P E S University & Visvesvaraya Technological University, India)

Many of the applications used to recognize humans are based on fingerprints, and fingerprint recognition is the most popular biometric technique widely used for person identification. This paper proposes a fingerprint recognition technique which uses local binary patterns for fingerprint representation and matching. An entire fingerprint image is divided into 9 equal-sized zones; in each zone, the linear binary patterns are identified and used for recognition. Neural network and nearest neighbour classifiers are used for classification. The proposed method is evaluated on eight databases comprising 3500 samples in total. On average, accuracies of 94.28% and 91.15% are obtained for the neural network and nearest neighbour classifiers respectively.
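
The zone-based LBP representation can be sketched as follows: compute a 3x3 LBP code per pixel, split the code image into a 3x3 grid of zones, and concatenate one 256-bin histogram per zone. The random test image and the exact neighbour ordering are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def lbp_image(img):
    """Basic 3x3 local binary pattern codes (one code per interior pixel)."""
    c = img[1:-1, 1:-1]
    neighbours = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:], img[1:-1, 2:],
                  img[2:, 2:], img[2:, 1:-1], img[2:, :-2], img[1:-1, :-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, nb in enumerate(neighbours):
        codes |= (nb >= c).astype(np.uint8) << bit
    return codes

def zoned_lbp_histogram(img, zones=3):
    """Concatenate one 256-bin LBP histogram per zone (9 zones for zones=3)."""
    codes = lbp_image(img)
    h, w = codes.shape
    feats = []
    for i in range(zones):
        for j in range(zones):
            z = codes[i * h // zones:(i + 1) * h // zones,
                      j * w // zones:(j + 1) * w // zones]
            feats.append(np.bincount(z.ravel(), minlength=256))
    return np.concatenate(feats)

img = np.random.default_rng(0).integers(0, 256, (33, 33))
feat = zoned_lbp_histogram(img)
print(feat.shape)   # 9 zones x 256 bins = (2304,)
```

Zoning is what preserves coarse spatial layout: two prints with similar global texture but differently located ridge patterns produce different concatenated histograms, which the neural network or nearest neighbour classifier then separates.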

VisionNet-03.14 Hardware Implementation of Palm Vein Biometric Modality for Access Control in Multilayered Security System
Sonal Shripad Athale, Dhiraj Patil and Pallavi Devendra Deshpande (Vishwakarma Institute of Information Technology, Pune, India); Yogesh H Dandawate (Vishwakarma Institute of Information Technology, India)

Among biometric modalities, palm veins are the most secure and difficult to duplicate. This palm vein verification system aims to recognize a person from his or her unique palm vein pattern, which cannot be forged easily since the veins are situated in the inner layers of the skin. Embedded devices are gaining increased attention in biometrics due to their reliability and cost efficiency, and an embedded palm vein recognition system is needed today in institutes, industries, secure facilities, etc. The aim of the proposed work is to implement a palm vein identification system on a hardware unit so that it can later be built into a single standalone unit, usable for the final level of a multilayered security system without any possibility of hacking. The hardware platform used is the Blackfin ADSP-561 processor, and the palm vein matching algorithms are implemented in C. The project focuses on storing images and running the matching algorithms on the hardware platform itself, so that no PC or laptop is needed for identification. Principal component analysis (PCA) is used as the palm vein verification algorithm. From the experimental results, it can be concluded that this approach can verify an individual with an average accuracy of 92%.

VisionNet-03.15 Automatic Identification of Licorice and Rhubarb by Microscopic Image Processing
Bhupendra D. Fataniya (Sarkhej Gandhinagar Highway & Nirma University, India); Meet Nirajbhai Joshi, Urmil Modi and Tanish Zaveri (Nirma University, India)

This paper presents a method for the automatic identification of the herbal plants licorice and rhubarb by microscopic image processing. The method is useful for identifying species from fragments or powders and for distinguishing species with similar morphological characteristics. In the first step, the desired region of the image is cropped by intensity-based segmentation. After that, Hu's moments and a compactness parameter, which are rotation, scaling and shift invariant, are calculated for the cropped part. In the final stage, an SVM classifier is used to classify the herbal plants as licorice or rhubarb. The area under the ROC curve is 0.7051 for licorice and 0.9487 for rhubarb.

VisionNet-03.16 Hyperspectral Image Denoising Using Low Pass Sparse Banded Filter Matrix for Improved Sparsity based Classification
Aswathy C (Amrita Vishwa Vidyapeetham, India); V Sowmya (Amrita Vishwavidyapeetham, India); Soman K P (Amrita Vishwa Vidyapeetham, India)

The recent advance in sensor technology is a boon for hyperspectral remote sensing. Though hyperspectral images (HSI) are captured using these advanced sensors, they are highly prone to issues such as noise, high data dimensionality and spectral mixing. Among these, noise is the major challenge affecting the quality of the captured image. To overcome this issue, hyperspectral images are subjected to spatial preprocessing (denoising) prior to image analysis (classification). In this paper, the authors discuss a sparsity-based denoising strategy which uses low-pass sparse banded filter matrices (AB filter) to effectively denoise each band of the HSI. Both subjective and objective evaluations are conducted to prove the efficiency of the proposed method: subjective evaluation involves visual interpretation, while objective evaluation deals with the computation of quality metrics such as Peak Signal to Noise Ratio (PSNR) and the Structural Similarity (SSIM) index at different noise variances. In addition, denoising is followed by a sparsity-based classification using Orthogonal Matching Pursuit (OMP) to evaluate the effect of the various denoising techniques on classification. Classification indices obtained with and without preprocessing are compared to highlight the potential of the proposed method. Using 10% of the data as the training set, a significant improvement in overall accuracy (84.21%) is obtained by the proposed method.
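
The OMP step used for the sparsity-based classification can be sketched as below. The random Gaussian dictionary and the 3-sparse synthetic signal are illustrative assumptions; in the paper the dictionary columns would be training spectra and the sparsity pattern indicates the class.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily pick k atoms of dictionary D
    that best explain y, re-fitting the coefficients at each step."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.abs(D.T @ residual).argmax())   # most correlated atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    return support, coef

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 128))
D /= np.linalg.norm(D, axis=0)                     # unit-norm atoms
true_support = [5, 40, 99]
y = D[:, true_support] @ np.array([1.0, -2.0, 1.5])
support, coef = omp(D, y, 3)
print(sorted(support))   # typically recovers the true support
```

Because OMP leans on correlations between the signal and dictionary atoms, its selections degrade quickly on noisy bands, which is why denoising before this step changes the classification indices the paper reports.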

Wednesday, August 12 14:30 - 17:30 (Asia/Kolkata)

T8: Tutorial -8: Disruptive Technologies and Platforms for transforming Indian Cities to Smart Cities

Dr. Pethuru Raj Chelliah (IBM India); Ms. Anupama Raman (IBM India)
Room: LT1

Cities across the world function with the help of multiple interconnected agencies with disparate domains, yet they need the ability to run as a single operational entity with a common need for the right information at the right time. This has led to the evolution of the smart city concept. Smart cities are being conceptualized in all their grandness and concretized optimally through the smart leverage of proven and potential information and communication technologies (ICT) and tools. Smart cities pursue the long-term goal of establishing and sustaining livable, lovable and sustainable cities that provide enhanced care, choice, comfort and convenience to citizens across the globe. A considerable number of inventions and improvisations are being realized in the people-enabling ICT domain.

T9: Tutorial -9: Data Protection and Privacy Preservation in the Internet of Things

Dr. Geethakumari G (BITS-Pilani, Hyderabad Campus, India)
Room: LT2

Recently, the concept of the Internet has changed from a set of connected computer devices to a set of connected things surrounding the human living space, such as home appliances, machines, transportation, business storage, goods, etc. The number of things in the living space is larger than the world's population. Research is ongoing into how to make these things communicate with each other the way computer devices communicate through the Internet; this communication among things is referred to as the Internet of Things (IoT). The Internet of Things helps us seamlessly gather and use information about objects of the real world during their entire lifecycle. Data collection, handling and mining are accomplished in IoT systems in a form completely different from what we know. Protecting the privacy of user data in the world of the Internet of Things is a major challenge: collaborative activities take place among un-trusted or semi-trusted parties that have never met each other before, hence the need to protect their privacy. In this work, we explore the concerns of data privacy as applied to the IoT domain. We review the existing approaches and solutions for privacy preservation in IoT and discuss our proposed approach.

Wednesday, August 12 14:30 - 16:30 (Asia/Kolkata)

T7: Tutorial -7: Issues and Challenges in TDMA based Wireless Mobile Ad hoc Networks (MANETs)

Mr. Mallesham Dasari (Uurmi Systems Pvt. Ltd, India)
Room: MSH

The time division multiple access (TDMA) based channel access scheme is fast becoming one of the most widely used mechanisms in mobile ad hoc networks (MANETs), with efficient channel utilization and collision-free transmissions at its core. Designing this architecture for a single carrier, where control and data communication are done through the same channel, is much more challenging. The tutorial discusses challenging issues such as time synchronization in a distributed, multi-hop environment with Quality of Service (QoS) provisioning for different types of traffic such as data, voice and video flows. The tutorial also describes a suitable frame structure design, which gives minimal scheduling delay and is also suitable for mobile environments. Concrete open research issues and solutions to these problems are discussed in the tutorial.

Wednesday, August 12 16:00 - 17:00 (Asia/Kolkata)

ICACCI Poster - 02: ICACCI Poster Session - II

Chairs: Nishant Doshi (PDPU, India), Dhiraj Sunehra (Jawaharlal Nehru Technological University Hyderabad, India)
ICACCI Poster - 02.1 CFS Performance Improvement using Binomial Heap
Shirish Singh (Columbia University, USA); Praveen Kumar (The LNMIIT, India)

The process scheduling algorithm plays a crucial role in operating system performance, and so does the data structure used for its implementation. A scheduler is designed to ensure that the distribution of resources among tasks is fair, along with maximizing CPU utilization. The Completely Fair Scheduler (CFS), the default scheduler of Linux (since kernel version 2.6.23), ensures equal opportunity among tasks. In this paper we discuss the CFS and propose an improved-performance implementation using a Binomial Heap as a replacement for the Red-Black Tree. Preliminary results from a simulation in C are very promising and show about a threefold improvement for insert operations.
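
The scheduling policy both data structures serve can be sketched as follows: the runqueue is keyed by virtual runtime, and the scheduler always runs the task with the smallest vruntime, advancing it inversely to the task's weight. Python's `heapq` binary heap is used here purely to illustrate the policy; it stands in for CFS's red-black tree (O(log n) insert) or the paper's binomial heap (O(1) amortized insert via merge), and the task names and weights are made up.

```python
import heapq

weights = {"editor": 2.0, "compiler": 1.0, "daemon": 1.0}
runqueue = []                                  # entries: (vruntime, task)

def enqueue(task, vruntime):
    heapq.heappush(runqueue, (vruntime, task))

def run_slice():
    """Run the leftmost task for one slice; vruntime advances inversely to
    the task's weight, so heavier tasks accumulate vruntime more slowly."""
    vruntime, task = heapq.heappop(runqueue)   # leftmost = minimum vruntime
    enqueue(task, vruntime + 1.0 / weights[task])
    return task

for t in weights:
    enqueue(t, 0.0)

history = [run_slice() for _ in range(8)]
print(history)   # "editor" (weight 2) gets roughly twice the slices
```

Since every scheduling decision is a delete-min followed by a re-insert, the paper's argument is that a heap with cheaper inserts directly reduces per-decision overhead relative to the red-black tree.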

ICACCI Poster - 02.2 An Optimized Hyper Kurtosis based Modified Duo-Histogram Equalization (HKMDHE) method for Contrast Enhancement Purpose of Low Contrast Human Brain CT scan images
Sabyasachi Mukhopadhyay (IISER Kolkata); Sawon Pratiher (Indian Institute Of Technology Kharagpur, India); Venkatesh M (IIT KANPUR, India); Ritwik Barman (IISER Kolkata, India); Soham Mandal and Satyasaran Changdar (IEM Kolkata, India); Prasanta Panigrahi and Nirmalya Ghosh (IISER Kolkata, India)

In our proposed hyper-kurtosis based modified duo-histogram equalization (HKMDHE) algorithm, contrast enhancement of low-contrast human brain CT scan images is driven by a hyper-kurtosis based modification of histogram equalization. The results of the proposed HKMDHE technique are very promising, with improved PSNR values and lower AMMBE values than classical techniques like CLAHE.
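
For context, the classical global histogram equalization that HKMDHE and CLAHE both modify can be sketched as below; this is the baseline family, not the paper's algorithm, and the low-contrast synthetic "CT slice" is an illustrative assumption.

```python
import numpy as np

def global_hist_eq(img):
    """Classical global histogram equalization on an 8-bit image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # Map the cumulative distribution onto the full [0, 255] range
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]

# Low-contrast synthetic "CT slice": values squeezed into [100, 140]
rng = np.random.default_rng(0)
img = rng.integers(100, 141, (64, 64)).astype(np.uint8)
out = global_hist_eq(img)
print(img.min(), img.max(), "->", out.min(), out.max())
```

Methods like HKMDHE modify how the histogram (here split and reshaped using kurtosis statistics) is built before the cumulative mapping, precisely to avoid the over-enhancement this plain global version can produce.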

ICACCI Poster - 02.3 Hybrid Intelligent Access Control Framework to Protect Data Privacy and Theft
Jignesh C Doshi (L J Institute of Management Studies, Ahmedabad, India); Bhushan H Trivedi (GLS Institute Of Computer Technology, India)

Web application usage has grown in the past few years: the web is used by more and more organizations for business, and as a result more and more transactions take place via the web. Attacks have grown in sophistication and number, and databases are a favorite target of attackers. The key focus of database attacks is the circumvention of authentication, data theft and data manipulation. Data theft must be handled carefully with new and emerging technologies like distributed databases, Database-as-a-Service in cloud computing, etc. In this paper, a Hybrid Intelligent Database Security framework is proposed. The focus of the framework is to protect against data theft and privacy violations both horizontally and vertically. Prototype test results show that the framework is able to prevent accidental or deliberate data breaches on the Oracle Database. The proposed framework provides platform-level and fine-grained security and can be used with emerging technologies.

ICACCI Poster - 02.4 Security Analysis of Tree and Non-tree Based Group Key Management Schemes Under Strong Active Outsider Attack Model
Purushothama B R and Nishat Koti (National Institute of Technology Goa, India)

Key management schemes for secure group communications should satisfy two basic security requirements, backward secrecy and forward secrecy. Most of the prominent secure group key management schemes are shown to satisfy the basic security requirements considering passive attack model. In this paper, we analyze secure group key management schemes under active outsider attack model. In active outsider attack model, an adversary can compromise a legitimate user of the group. We show that some of the efficient tree based, non-tree based and proxy re-encryption based group key management schemes are not secure under active attack model. We evaluate the cost involved in making these schemes secure under active attack model. Also, we construct a secure version of these schemes and show that the schemes are secure under active outsider attack model.

ICACCI Poster - 02.5 A Novel Multi Metric QoS Routing Protocol for MANET
Prakash Srivastava (Madan Mohan Malaviya University of Technology Gorakhpur, India); Rakesh Kumar (MMM University of Technology, India)

The network topology in mobile ad hoc networks (MANETs) changes frequently due to node mobility and power limitations; hence routing is an important issue in such networks. QoS-aware routing is needed to provide optimal routes as a function of parameters like bandwidth, delay and packet loss. Our approach focuses on a QoS-enabled route discovery procedure which involves estimating bandwidth and delay at every node. Hard real-time applications with stringent bandwidth and delay requirements can benefit from this approach. The existing link-failure strategy involves frequent route discoveries, which incur high routing overhead and increased end-to-end delay, and most routing protocols in this category use a single route and do not utilize multiple alternate paths. In this paper, an efficient link-failure strategy is also incorporated by estimating a link expiration metric with the help of the signal intensity level, to provide a prediction before a route breaks. The performance of the proposed approach is analyzed in terms of packet delivery ratio and throughput. Results show that our approach outperforms existing approaches.

ICACCI Poster - 02.6 Remote Monitoring and Control of a Mobile Robot System with Obstacle Avoidance Capability
Dhiraj Sunehra, Ayesha Bano and Shanthipriya Yandrathi (Jawaharlal Nehru Technological University Hyderabad, India)

With the advancements in technology, the field of robotics and automation has gained tremendous popularity. Mobile robots are widely used in a number of places, including production plants, warehouses, airports, agriculture, medicine, the military, and hazardous environments, to reduce human effort. In this paper, we present the design and implementation of a mobile robot system with obstacle avoidance capability for remote sensing and monitoring. The proposed system enables the user (base station) to send the necessary commands to the remote station (mobile robot) using Dual-Tone Multi-Frequency (DTMF) signals for robot teleoperation. Global Positioning System (GPS) and Global System for Mobile communication (GSM) technologies are used, which provide the user with the mobile robot's location in the form of a Google map link. The system also provides the user with real-time video monitoring of the remote area through an internet-enabled device. The user can also save the images and record the videos captured by the mobile robot's IP webcam at the remote location, which can be stored in a public cloud for later use.

ICACCI Poster - 02.7 Fingerprint Fuzzy Vault Using Hadamard Transformation
Deepika Bansal, Sanjeev Sofat and Manvjeet Kaur (PEC University of Technology, India)

A biometric system is a security model which cannot be easily cracked or penetrated to obtain information, compared to traditional methods like passwords or ID cards. However, storing a particular person's trait information in the database makes it vulnerable to template database attacks. The fuzzy vault is a popular biometric cryptosystem technique that provides security to biometric templates. The challenge faced by the fuzzy vault is that it lacks cancellability and diversity, which restricts its implementation in real applications. Through great effort, researchers have designed some approaches that provide cancellability and diversity in the fuzzy vault. This paper provides insight into such approaches and proposes a technique which aims to satisfy the requirements of a template protection scheme. The proposed technique is applied to the fingerprint trait.

ICACCI Poster - 02.8 Real-time Swara Recognition System in Indian Music using TMS320C6713
Sinith MS (Govt. Engineering College Thrissur, India); Shikha Tripathi (PES University, India); Kvv Murthy (IIT Gandhinagar, India)

Swara recognition is the starting point of music pattern classification in Indian classical music. Real-time swara recognition is often required for judging the quality of tone of a vocalist or an instrumentalist. In this age of music reality shows, a stand-alone electronic system for swara recognition, which would precisely judge a contestant in terms of swara perfection, is very relevant. In this context, a swara recognition system is designed and implemented on a Texas Instruments TMS320C6713 digital signal processor. In this system, the fundamental frequency is tracked using an FFT algorithm and the output is wrapped to a single octave. The fundamental frequency is then mapped to twelve distinct values corresponding to the twelve semi-notes and displayed on the built-in LED display of the DSP processor. The efficiency of the system is tested in terms of memory usage and speed.
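
The track-then-wrap idea can be sketched as follows: find the FFT peak, then fold the frequency into one octave and round to the nearest of twelve semi-notes. The reference tonic (Sa) of 240 Hz and the pure-tone test signals are illustrative assumptions, not the system's settings.

```python
import numpy as np

def detect_swara(signal, fs, ref=240.0):
    """FFT-peak fundamental tracking, wrapped to one octave of 12 semi-notes."""
    spectrum = np.abs(np.fft.rfft(signal))
    spectrum[0] = 0.0                                # ignore the DC bin
    f0 = np.fft.rfftfreq(len(signal), 1.0 / fs)[spectrum.argmax()]
    semitone = round(12 * np.log2(f0 / ref)) % 12    # wrap to a single octave
    return f0, semitone

fs = 8000
t = np.arange(fs) / fs                               # one second of audio
for freq in (240.0, 480.0, 360.0):
    f0, s = detect_swara(np.sin(2 * np.pi * freq * t), fs)
    print(f"{freq:6.1f} Hz -> semitone {s}")
```

The modulo-12 wrap is what makes 240 Hz and 480 Hz map to the same note: the octave is discarded and only the semi-note identity is shown on the LED display.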

ICACCI Poster - 02.9 A Novel Method For Error Correction Using Redundant Residue Number System In Digital Communication Systems
Jilu James (Mahatma Gandhi University, India); Ameenudeen Pe (National Institute of Technology Calicut, India)

Communication channels are highly prone to noise, and error-correcting codes are therefore required to protect the transmitted data. A Residue Number System (RNS) represents an integer by its residues with respect to a set of moduli, and a Redundant Residue Number System (RRNS) is obtained by adding some redundant residues to the RNS. In RNS, arithmetic operations are carry-free, so errors do not propagate from one residue to another. This enables RRNS to offer a promising way to provide very fast arithmetic in digital computers. This paper discusses a single-error-correction algorithm based on the Redundant Residue Number System (RRNS). The Bit Error Rate (BER) performance of RRNS codes with the error correction algorithm, for M-ary modulation in an AWGN channel, is analysed and compared with a conventional single-bit error correction code, the Hamming code. The simulation results show that RRNS codes outperform Hamming codes in terms of BER.
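
A minimal sketch of RRNS single-error correction, under assumed parameters (information moduli 3, 5, 7 and redundant moduli 11, 13; the paper's moduli set may differ): encode an integer as residues, then correct by dropping each residue in turn and reconstructing with the Chinese Remainder Theorem. Only the reconstruction that excludes the corrupted residue falls inside the legitimate range.

```python
from math import prod

def crt(residues, moduli):
    """Chinese Remainder Theorem reconstruction for pairwise-coprime moduli."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # modular inverse of Mi mod m
    return x % M

moduli = [3, 5, 7, 11, 13]             # first three are information moduli
legit = 3 * 5 * 7                      # legitimate range is [0, 105)

def encode(x):
    return [x % m for m in moduli]

def decode(residues):
    """Single-residue error correction: drop each residue in turn; the
    redundant moduli make a reconstruction that keeps the wrong residue
    land outside the legitimate range with overwhelming likelihood."""
    for drop in range(len(moduli)):
        ms = [m for i, m in enumerate(moduli) if i != drop]
        rs = [r for i, r in enumerate(residues) if i != drop]
        x = crt(rs, ms)
        if x < legit:
            return x
    raise ValueError("uncorrectable")

code = encode(52)
code[1] = (code[1] + 3) % 5            # inject a single residue error
print(decode(code))                    # recovers 52
```

Because each residue is computed independently, a channel error corrupts only one residue and never propagates, which is exactly the carry-free property the abstract highlights.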

ICACCI Poster - 02.10 An Investigation of Fractal Antenna Arrays for Side Lobe Reduction with A Fractal Distribution of Current
V A Sankar Ponnapalli (Sreyas Institute of Engineering and Technology, India); P V Y Jayasree (GITAM University, India)

Fractal arrays are multiband arrays with a space-filling capability, but side-lobe level and the number of elements are the prominent challenges in the design of fractal antenna arrays. In this paper, linear and hexagonal fractal antenna arrays are analyzed with uniform and fractal current excitations. Due to the fractal distribution of current, a notable improvement is observed in side-lobe level and beamwidth, and thinning of the elements can be achieved at the various iterations of the fractal arrays. These arrays are analyzed and simulated in MATLAB.

ICACCI Poster - 02.11 Realizing Cooperative Beamforming Scenario with MIMO-OFDM DF Relay Channels and Performance Comparison based on Modulation Scheme and Energy Efficiency
Anunchitha P (University of Calicut, Kerala)

Cooperative communication has attracted foremost research concern due to its ability to expand system coverage and enhance spectrum efficiency. In this paper, a transmit cooperative beamforming scenario is considered with a multiple-input multiple-output (MIMO) decode-and-forward (DF) relay channel and a direct source-to-destination link, into which an orthogonal frequency-division multiplexing (OFDM) technique is incorporated by including an OFDM modulator/demodulator in the transceiver. Results show the performance comparison based on achievable information rate versus transmit signal-to-noise ratio (SNR) for different modulation schemes, and it is seen that with OFDM the overall performance of the system is further enhanced. Results also compare the energy efficiency of the transmitter in the MIMO-OFDM DF relay and MIMO DF relay channels.

ICACCI Poster - 02.12 Low Complexity Channel Estimation using Fuzzy - Kalman Filter for Fast Time Varying MIMO-OFDM Systems
Vinod Gutta (Amrita Vishwa Vidyapeetham, India); Kamal Kanth Toguru Anand (Amrita School of Enginnering, Amrita Vishwa Vidyapeetham, India); Teja sri Venkata saidhar Movva (Amrita School of Engineering, Amrita Vishwa Vidyapeetham, India); Bhargava Rama Korivi, Santosh Killamsetty and Sudheesh P (Amrita Vishwa Vidyapeetham, India)

Channel estimation is a significant issue in wireless communication. In this paper, fuzzy-Kalman-filter-based channel impulse response (CIR) estimation is proposed for a time-varying receiver velocity in a multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) system. The channel is modeled using a second-order auto-regressive (AR) random model. Linearization of the channel estimation is done using fuzzy logic, and a Kalman filter is used to estimate the channel. For fast time-varying channels, fuzzy-based channel impulse response estimation is a low-complexity technique compared to conventional filters.
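
The Kalman half of the scheme can be sketched for a single hypothetical channel tap; the fuzzy linearization and the MIMO-OFDM structure are omitted, and the AR(2) coefficients, noise variances and BPSK pilots below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# AR(2) model of one time-varying tap: h_t = a1*h_{t-1} + a2*h_{t-2} + w_t
a1, a2, q, r = 1.6, -0.8, 0.01, 0.1
F = np.array([[a1, a2], [1.0, 0.0]])      # state transition for [h_t, h_{t-1}]
Q = np.array([[q, 0.0], [0.0, 0.0]])

# Simulate the channel and noisy pilot observations y_t = x_t * h_t + v_t
T = 400
h = np.zeros(T)
for t in range(2, T):
    h[t] = a1 * h[t - 1] + a2 * h[t - 2] + rng.normal(0, np.sqrt(q))
x = rng.choice([-1.0, 1.0], T)            # known BPSK pilot symbols
y = x * h + rng.normal(0, np.sqrt(r), T)

# Kalman filter: the observation vector H_t = [x_t, 0] changes per symbol
s = np.zeros(2)
P = np.eye(2)
est = np.zeros(T)
for t in range(T):
    s = F @ s                             # predict
    P = F @ P @ F.T + Q
    H = np.array([x[t], 0.0])
    K = P @ H / (H @ P @ H + r)           # Kalman gain
    s = s + K * (y[t] - H @ s)            # update with the new pilot
    P = P - np.outer(K, H) @ P
    est[t] = s[0]

mse = np.mean((est[50:] - h[50:]) ** 2)
print(f"tracking MSE: {mse:.4f}")
```

The filter exploits the AR(2) memory of the fading process, so its tracking error stays below the per-symbol observation noise floor; the paper's fuzzy stage adapts this model when the receiver velocity (and hence the AR dynamics) changes.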

ICACCI Poster - 02.13 A Status Quo Of WSN Systems for Agriculture
Vijo Varghese (Amrita Vishwa Vidyapeetham, India); Kalyan Sasidhar (DAIICT, India); Rekha P (Amrita Vishwa Vidyapeetham, India)

Wireless Sensor Networks have revolutionized mobile computing in the past decade or so. A myriad of systems have been designed and developed for various applications, most commonly environmental monitoring, water quality monitoring, and structural monitoring. One particular domain where there is a dearth of sensor network implementations is agriculture. The advantages of sensor networks have not been leveraged for agriculture, particularly in the Indian agricultural scenario. Indian farmers still use primitive methods such as bullock carts for plowing, the Persian water wheel to collect water for irrigation, manual sowing, and manual harvesting of crops. With a major share of India's economy coming from agricultural products, it is imperative to use technology to improve productivity and consequently the economic growth of the country. This work presents a survey of wireless sensor network systems deployed in the agricultural domain. We lay out the working methodologies and drawbacks of existing systems, build upon them, and finally propose our idea of a wireless sensor network for estimating crop yield that would help the farmer make the right decision in choosing the right kind of crop.

ICACCI Poster - 02.14 Efficient Multicast Algorithm for Dynamic Intra Cluster Device-to-Device Communication for Small World Model
Manthena Pushpalatha and Narayanan Shruthi (Amrita Viswa Vidyapeetham, India); Telugu Kuppu Shetty Ramesh (Amrita University, India); SandeepKumar Konda (Amrita School of Engineering & AMRITA Vishwa Vidyapetham, India)

Device-to-device (D2D) communication has been proposed as a means of improving the performance of cellular communication through direct links between users. In this paper, we propose a dynamic intra-cluster data sharing method that takes advantage of D2D multicast. One of the users serves as cluster head, taking turns to multicast at a particular time, selected by the base station on the basis of power level. The cluster head at that particular time can access all the users in the cluster. In our algorithm, dynamic data demand is considered in order to send packets in a sequence based on users' needs. The base station identifies the user with the most requests and informs the cluster head, which asks that particular user to serve as the transmitting user. The transmitting user monitors the acknowledgements sent by the receiving users. If an acknowledgement fails to arrive within the minimum time (tmin), the transmitting user informs the cluster head, which checks the reason for the failure. Accordingly, it requests the user with the next-most requests to transmit the data to the users who failed to receive it. This process continues until all the users receive the data, which improves transmission efficiency. By simulation we show that throughput increases while blocking probability and latency are reduced, improving the Quality of Service (QoS) compared to intra-cluster D2D communication for a small-world model with poor link quality between users.

ICACCI Poster - 02.15 Periodic Channel-Hopping Sequence for Rendezvous in Cognitive Radio Networks
Ram Narayan Yadav (Institute of Infrastructure Technology Research And Management, Ahmedabad, India); Rajiv Misra (Indian Institute of Technology Patna & IIT Patna, India)

Cognitive radio networks (CRNs) are an emerging paradigm for exploiting spectrum holes intelligently. To start communication in a CRN, users need to establish a link on a channel that is not occupied by a licensed user. Rendezvous is thus an essential operation in CRNs: establishing a link on a common channel for data communication and control information exchange. To achieve guaranteed rendezvous in finite time, most of the proposed algorithms generate a channel hopping (CH) sequence using all channels regardless of their availability. A CH sequence that ignores channel availability information may attempt rendezvous on unavailable channels, resulting in a longer time to rendezvous (TTR). Implementing the rendezvous process so as to minimize TTR is a major challenge in CRNs. Most reported rendezvous schemes rely on a central controller or utilize a common control channel, which is not practical in CRNs. In this paper, we consider the blind rendezvous problem, in which a common channel must be found without any central controller. We propose a guaranteed, distributed rendezvous algorithm, called the Periodic Channel Hopping (PCH) sequence algorithm, which finds a commonly available channel for two users. Our PCH scheme considers only the channels that are free from licensed users. The maximum time to rendezvous (MTTR) and expected time to rendezvous (ETTR) of our PCH scheme are reduced to (2M − 1) and (M − 1) respectively in the symmetric model, which is better than other competitive schemes. In the asymmetric case, the proposed scheme guarantees rendezvous and the MTTR is bounded by M(2M − 1).
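For intuition about the TTR/ETTR metrics, here is a toy Monte-Carlo estimate for the simplest blind strategy, random hopping, which serves as the usual baseline such schemes are compared against; this is not the paper's PCH construction, and the channel count is made up.

```python
import random

def ttr(channels_a, channels_b, rng, max_slots=10_000):
    """Slots until two randomly hopping users land on the same channel."""
    for t in range(1, max_slots + 1):
        if rng.choice(channels_a) == rng.choice(channels_b):
            return t
    return None  # no rendezvous within the horizon (vanishingly unlikely)

rng = random.Random(42)
M = 5
chans = list(range(M))  # both users see the same M available channels
trials = [ttr(chans, chans, rng) for _ in range(2000)]
ettr = sum(trials) / len(trials)
print(f"empirical ETTR with {M} common channels: {ettr:.2f} (theory: {M})")
```

With both users picking uniformly from the same M channels, each slot succeeds with probability 1/M, so the ETTR is M slots; random hopping gives no finite MTTR guarantee, which is exactly what deterministic sequences like PCH provide.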

ICACCI Poster - 02.16 Third Order Wideband Bandpass Filter for ISM Band Using Stepped and Hairpin Composite
Mangesh Joshi (Babasaheb Ambedkar Technological University, India); Anil Bapusa Nandgaonkar (Dr. B. A. Technological University, Lonere, India)

This work describes the design of a wideband bandpass filter for the 2.4 GHz ISM band. Using hairpin and stepped-impedance filters, a filter bandwidth of approximately 800 MHz is obtained without sacrificing insertion loss. The filter is designed on an FR4 substrate with a dielectric constant of 4.4 and a height of 1.6 mm. The observed return loss at the center frequency is approximately -34 dB, while the insertion loss varies in the range of -0.7 to -1 dB and is found to be reasonably stable over the filter bandwidth. The structure uses the basic principle of third-order maximally flat bandpass filter design with a composite structure.

ICACCI Poster - 02.17 Experimental study on Wide Band FM Receiver using GNURadio and RTL-SDR
Khyati P Vachhani (Institute of Technology, Nirma University, India); Arvind Mallari Rao (Defense Research & Development Organisation, Ministry of Defence, GOI, India)

This paper focuses on the open-source GNU Radio software and studies its use as a research tool coupled with the USRP and RTL-SDR. The USRP, RTL-SDR and GNU Radio software suite are introduced and briefly discussed. The GNU Radio software suite can act as a simulation tool or as a software subsystem to drive SDR transceiver hardware. This is shown by implementing a WBFM receiver using the RTL-SDR with GNU Radio. The paper concludes by comparing the cheap yet effective RTL-SDR with the costly but accurate USRP hardware.
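The heart of any WBFM receiver chain is the FM discriminator. The NumPy sketch below frequency-modulates a test tone at baseband and recovers it with a polar discriminator, the same operation a GNU Radio WBFM demodulation block performs internally; the sample rate and deviation are illustrative values and no SDR hardware is involved.

```python
import numpy as np

fs = 240_000        # sample rate (Hz), e.g. a decimated RTL-SDR stream
fm_dev = 75_000     # WBFM peak deviation (Hz)
tone = 1_000        # audio test tone (Hz)

t = np.arange(fs // 10) / fs          # 100 ms of samples
audio = np.sin(2*np.pi*tone*t)        # message signal

# FM modulate at baseband: the phase is the integral of the message
phase = 2*np.pi*fm_dev*np.cumsum(audio)/fs
iq = np.exp(1j*phase)

# Polar discriminator: the angle of x[n]*conj(x[n-1]) is proportional
# to the instantaneous frequency
demod = np.angle(iq[1:] * np.conj(iq[:-1])) * fs / (2*np.pi*fm_dev)

# The recovered signal should match the tone (up to a one-sample delay)
err = np.max(np.abs(demod - audio[1:]))
print(f"max reconstruction error: {err:.2e}")
```

Since the per-sample phase increment here stays below pi, the discriminator never wraps and the reconstruction is exact to floating-point precision.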

ICACCI Poster - 02.18 Paradigm shift in Mobile Communication Carriers
Manish Kumar and Kapil Kant Kamal (Centre for Development of Advanced Computing, India); Bharat Varyani (Center for Development of Advanced Computing, India); Sumit Barwad (Cdac, India)

Language is an important and integral part of human culture. There are many aspects that make up communication, but humans are unique in that we have an organized spoken language, which allows us to communicate on a deeper, more personal level. Technology has taken us from face-to-face and letter-writing communication, to inventions such as the telephone, the cell phone and online chat rooms, and now to mobile applications, one of the newest and fastest-growing forms of communication with text messaging, audio and video. Citizens across the world are increasingly turning to mobile technology because of its ease of communication and the different modes it offers, such as text, audio and video. Earlier, mobile users relied on conventional data communication channels such as the Short Messaging Service for text and on voice calls, but the rapid proliferation and ubiquity of mobile devices and the growing availability of smart devices in the consumer market have driven the software engineering community to quickly adopt new development approaches conscious of the novel capabilities of mobile applications. There has been a rapid increase in online communication in the last 7-8 years, especially through mobile devices and internet-based mobile applications, which are replacing the conventional communication channels. In this paper, we describe how mobile applications over the web have replaced the conventional modes of communication, and we present some case studies.

ICACCI Poster - 02.19 Design of a Novel Microstrip-fed UWB Fractal Antenna for Wireless Personal Area Communications
Susila M (SRM Institute of Science and Technology, India); T Rama Rao (SRM IST, India)

A novel microstrip-fed fractal antenna is designed and fabricated to achieve the desired Ultra Wideband (UWB) features for Wireless Personal Area Network (WPAN) communication applications. The proposed antenna has a compact size of 34×32×1.6 mm3; design simulations were executed using a 3D electromagnetic (EM) tool and the prototype was validated using radio frequency (RF) equipment. The antenna operates over the entire UWB band with nearly omnidirectional radiation patterns, and a maximum gain of 4.83 dB with an impedance bandwidth of 7.90 GHz was observed. Further, simulations have also been carried out for a 1 × 2 antenna array for better gain and performance.

ICACCI Poster - 02.20 Cloud-based Integrated Medication Management System
Joseph Mathew (College of Engineering Trivandrum); Paul Philip (Model Engineering College, India)

India is a populous, multi-lingual, multi-cultural developing country marred by inherent deficiencies in its pharmaceutical and healthcare sector. On one hand, the problem of counterfeit pharmaceutical drugs is on the rise; on the other, the average common man is being financially and medically exploited by authorized and unauthorized doctors with their overdose of medical prescriptions, with the added risk of the patient skipping part of a complex medication regimen. To mitigate these issues currently prevailing in India, we propose a Cloud-based Integrated Medication Management System. As a precursor to it, a web-based inpatient medication management system for hospitals is being developed, and the preliminary results of the same are outlined in this short paper. The concept takes advantage of the rapidly growing cloud-based networking infrastructure along with the increased penetration of the internet in India. The paper also sheds light on the functional details of the proposed concept and its potential impact on society.

ICACCI Poster - 02.21 Performance Comparison of SOAP and REST based Web Services for Enterprise Application Integration
Smita Kumari (NIT Rourkela, India); Santanu Kumar Rath (National Institute of Technology (NIT), Rourkela, India)

Web services are a common means of exchanging data and information over the network. Web services make themselves available over the internet in a technology- and platform-independent way. Once a web service is built, it is accessed via a uniform resource locator (URL) and its functionality can be utilized in the application domain. Web services are self-contained, modular, distributed and dynamic in nature. They are described and then published in a service registry (e.g., UDDI), and then invoked over the internet. Web services are the basic building blocks of Service Oriented Architecture (SOA). They can be developed using two interaction styles: the Simple Object Access Protocol (SOAP) and Representational State Transfer (REST). It is important to select the appropriate interaction style, i.e., either SOAP or REST, for building web services. Choosing the service interaction style is a major architectural decision for designers and developers, as it influences the underlying requirements for implementing web service solutions. In this study, the performance of SOAP-based and REST-based web services for Enterprise Application Integration (EAI) is compared. Since web services operate over a network, throughput and response time are considered as the evaluation metrics.
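Both metrics can be measured with nothing more than the standard library. The sketch below spins up a throwaway local HTTP stub standing in for a REST endpoint and times repeated GET requests; the endpoint path, payload and request count are invented for illustration, and a real EAI benchmark would of course target actual SOAP and REST services over a network.

```python
import http.server
import json
import threading
import time
import urllib.request

# Minimal local stub standing in for a REST endpoint (illustration only)
class Stub(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"orders": [1, 2, 3]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Stub)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/orders"

# Measure average response time and throughput over repeated calls
n, t0 = 50, time.perf_counter()
for _ in range(n):
    with urllib.request.urlopen(url) as resp:
        payload = json.loads(resp.read())
elapsed = time.perf_counter() - t0
print(f"avg response time: {1000*elapsed/n:.2f} ms, "
      f"throughput: {n/elapsed:.0f} req/s")
server.shutdown()
```

The same timing loop works unchanged against a SOAP endpoint by swapping the GET for a POST with an XML envelope, which is what makes the two styles directly comparable on these metrics.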

ICACCI Poster - 02.22 3D Face Recognition Using Optimised Directional Faces And Fourier Transform
Naveen S (University of Kerala, India); Shilpa S Nair (Kerala University & LBS Institute Of Technology For Woman Poojappura, India); Moni R. s (University of Kerala, India)

This paper proposes an efficient multimodal face recognition method combining textural as well as depth features extracted from directional faces of the input image. Directional faces are obtained using filters designed with Local Polynomial Approximation (LPA). An efficient modified Local Binary Pattern (mLBP) operator is used for feature extraction from the optimized directional faces. The spectral representation of the concatenated block histogram of the mLBP feature image acts as a robust face descriptor, with the Discrete Fourier Transform (DFT) used as the transformation tool. The fusion of both modalities is performed at score level. The experimental results show that the proposed method gives better performance.
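To make the histogram-then-spectrum pipeline concrete, the sketch below computes a standard 8-neighbour LBP code map, its histogram, and a DFT-magnitude descriptor in plain NumPy. Note the hedges: this is the classic LBP operator, not the authors' modified mLBP, and the tiny random "face" is only a stand-in for real texture/depth images.

```python
import numpy as np

def lbp_3x3(img):
    """Standard 8-neighbour Local Binary Pattern (not the paper's mLBP):
    each pixel gets an 8-bit code from thresholding its ring of neighbours."""
    img = img.astype(np.int32)
    c = img[1:-1, 1:-1]                       # interior (center) pixels
    # neighbours in a fixed clockwise order, each contributing one bit
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1+dy:img.shape[0]-1+dy, 1+dx:img.shape[1]-1+dx]
        code |= (nb >= c).astype(np.int32) << bit
    return code

rng = np.random.default_rng(1)
face = rng.integers(0, 256, size=(8, 8))      # toy stand-in image
codes = lbp_3x3(face)
hist, _ = np.histogram(codes, bins=256, range=(0, 256))
descriptor = np.abs(np.fft.fft(hist))         # spectral (DFT) descriptor
print(codes.shape, descriptor.shape)
```

In the paper's setting one histogram would be computed per block per directional face, the histograms concatenated, and the DFT taken of that concatenation before score-level fusion.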

ICACCI Poster - 02.23 Security Requirements Elicitation and Assessment Mechanism (SecREAM)
Rajat Goel and Mahesh Chandra Govil (Malaviya National Institute of Technology Jaipur, India); Girdhari Singh (MNIT Malaviya National Institute of Technology, Jaipur, India)

Today, when most software is web-based or cloud-based and has a variety of stakeholders with intertwined requirements, developing secure software is a complex issue. Usually, security is neglected during the development process. Researchers now emphasize the inclusion of security in the development process, especially during the early phases. This paper suggests the Security Requirements Elicitation and Assessment Mechanism (SecREAM), a novel methodology to embed security right from the inception of the software. It is applicable to both kinds of software: on-premise and on cloud. The crux of the methodology lies in actively involving all kinds of stakeholders and ranking the required assets on the basis of certain parameters, which facilitates a well-understood design and helps in making better technical and non-technical decisions later during the course of development.

Wednesday, August 12 17:00 - 18:30 (Asia/Kolkata)

WCI-Poster-02: WCI Pattern Recognition, Signal and Image Processing (WCI-Poster-02)

Chair: Dhananjay Singh (Hankuk University of Foreign Studies, Korea (South))
WCI-Poster-02.1 Multi Layer Versus Functional Link Single Layer Neural Network for Solving Nonlinear Singular Initial Value Problems
Susmita Mall (National Institute of Technology Rourkela, Odisha, India); Snehashish Chakraverty (National Institute of Technology Rourkela, India)

In this paper we have compared the traditional Multi Layer Perceptron (MLP) with the single-layer Functional Link Neural Network (FLNN) in solving nonlinear singular initial value problems of Lane-Emden type equations. In the single-layer FLNN model, the hidden layer is replaced by a functional expansion block that enhances the input patterns with a set of orthogonal polynomials, viz. Chebyshev and Legendre polynomials. A feed-forward neural network model and the unsupervised error back-propagation principle are used for modifying the network parameters in both procedures. Computations show that the results are the same for both methods, but the computation time is lower for the FLNN due to the smaller number of parameters required in the neural network architecture.
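The functional expansion block is easy to sketch: each scalar input is mapped to a vector of Chebyshev polynomial values via the three-term recurrence, and a single trainable layer acts on those features. In the toy example below the degree and target function are arbitrary choices, and a plain least-squares solve stands in for the paper's back-propagation training.

```python
import numpy as np

def chebyshev_expand(x, degree=5):
    """Functional expansion block: map inputs in [-1, 1] to
    [T0(x), ..., T_degree(x)] via T_{n+1}(x) = 2x*T_n(x) - T_{n-1}(x)."""
    T = [np.ones_like(x), x]
    for _ in range(degree - 1):
        T.append(2*x*T[-1] - T[-2])
    return np.stack(T, axis=-1)

x = np.linspace(-1, 1, 101)
phi = chebyshev_expand(x)                         # (101, 6) feature matrix
target = np.sin(np.pi*x)                          # toy target function
w, *_ = np.linalg.lstsq(phi, target, rcond=None)  # the single trainable layer
max_err = np.max(np.abs(phi @ w - target))
print(f"max fit error with 6 Chebyshev features: {max_err:.4f}")
```

The appeal for FLNNs is visible even here: the whole "network" is one weight vector of length degree+1, far fewer parameters than an MLP hidden layer.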

WCI-Poster-02.2 User Intervention Based Segmentation of Myocardium from Cardiac Cine MRI Images
Shrinivas Desai (K L E Technological University, India); Heena Shigli, Tejas MH and Lohit Narayan (B V B College of Engineering & Technology, India)

For the last 15 years, Magnetic Resonance Imaging (MRI) has been a reference examination for cardiac morphology, function and perfusion in humans. Yet, due to the characteristics of cardiac MR images and the great variability of the images among patients, the problem of heart cavity segmentation in MRI is still open. We present user-intervention-based semi-automatic methods performing segmentation in short-axis images using a cardiac cine MRI sequence. We experiment with various segmentation methods, such as graph-cut, watershed and threshold-based segmentation, to calculate wall thickness and ejection fraction, which are of clinical importance. The challenge is to effectively segment the epicardium and endocardium boundaries for effective assessment. We have collected a dataset from the Sunnybrook Cardiac Data (SCD). The performance of each segmentation method is assessed by recording the confusion matrix and calculating sensitivity, specificity and accuracy. The results are in favor of the graph-cut segmentation method. We conclude with a discussion and the future scope of this field regarding methodological and medical issues.

WCI-Poster-02.3 Condition Monitoring in Roller Bearings using Cyclostationary Features
Sachin Kumar S and Neethu Mohan (Amrita Vishwa Vidyapeetham, India); Prabaharan Poornachandran (Amrita University, India); Soman K P (Amrita Vishwa Vidyapeetham, India)

Proper machine condition monitoring is crucial for any industrial and mechanical system. The efficiency of mechanical systems greatly relies on rotating components like the shaft, bearing and rotor. This paper focuses on detecting different faults in roller bearings by casting the problem as a machine-learning-based pattern classification problem. The different bearing fault conditions considered are: bearing in good condition, bearing with inner race fault, bearing with outer race fault, and bearing with both inner and outer race faults. Earlier, statistical features of the vibration signals were used for the classification task. In this paper, the cyclostationary behavior of the vibration signals is exploited for the purpose. In the feature space, the vibration signals are represented by cyclostationary feature vectors extracted from them. The features thus extracted were trained and tested using pattern classification algorithms, namely decision tree J48, Sequential Minimal Optimization (SMO) and Regularized Least Squares (RLS) based classification, and a comparison of the accuracy of each method in detecting faults is provided.
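As a minimal illustration of why cyclostationary features are informative, the sketch below estimates the cyclic autocorrelation of amplitude-modulated noise, a crude stand-in for a bearing-fault vibration signature: the statistic peaks at the modulation's cycle frequency and vanishes elsewhere. The signal model and frequencies are invented for the demo, and no classifier stage is included.

```python
import numpy as np

def cyclic_autocorr(x, alpha, tau=0):
    """Estimate the cyclic autocorrelation R_x^alpha(tau) at cycle
    frequency alpha (normalised, cycles/sample)."""
    n = np.arange(len(x) - tau)
    return np.mean(x[tau:] * np.conj(x[:len(x) - tau]) *
                   np.exp(-2j*np.pi*alpha*n))

rng = np.random.default_rng(0)
f0 = 0.05                                   # normalised modulation frequency
n = np.arange(4096)
# amplitude-modulated white noise: periodically varying power, the
# hallmark of a (second-order) cyclostationary process
x = (1 + np.cos(2*np.pi*f0*n)) * rng.standard_normal(4096)

on_cycle = abs(cyclic_autocorr(x, f0))      # at the true cycle frequency
off_cycle = abs(cyclic_autocorr(x, 0.173))  # at an arbitrary other frequency
print(f"|R| at alpha=f0: {on_cycle:.3f}, off-cycle: {off_cycle:.3f}")
```

A feature vector for classification could then stack |R| values at a set of candidate cycle frequencies tied to the bearing geometry.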

WCI-Poster-02.4 Integrated Energy Management Framework - Aatral
Anuradha Subramanian (Madurai Kamaraj University & None, India); Thangaraj Muthuraman (Madurai Kamaraj University, India)

Energy optimization and energy-aware operations are key areas of research in the field of Wireless Sensor Networks. Many researchers work on energy-aware algorithms, software and node design, but these works lack a holistic approach: it is not certain that individual energy improvement techniques improve the overall energy budget. Another check is whether the amount invested in energy improvements is paid back in a reasonable time by the energy saved. The holistic and integrated energy management framework Aatral is proposed, which provides a control dashboard to monitor, optimize and harvest the energy related to a Wireless Sensor Network. The cost of energy usage is also tracked, for later reference, by a framework sub-unit, the energy economics calculator. Automated report generation and alert creation are configured in the workflow engine of the framework. The energy measures and indices calculated for a profile are stored in the database with a time-stamp. The energy variance comparison operation plots energy variance graphs of the energy observations for a profile against the benchmark available for the WSN profile. The performance of the network in terms of throughput and latency is validated before and after energy improvement for fairness.

WCI-Poster-02.5 An Efficient Low Power & High Performance in MPSOC
Naresh Kumar Reddy, B (DUK, India); Jayasree K (LBRCE, India); Srinivasu Kumar (Gudlavalleru, India); Ramesh Jvn (K L U, India); Pranose J Edavoor and Mohd Kashif Zia Ansari (N I T, India)

Multiprocessor system-on-chip (MPSoC) architectures have risen as a prevalent answer to ever-increasing performance and power-consumption requirements; architectures customized to a specific application have the potential to achieve very high performance while also maintaining low power consumption. The power consumed and the performance of the system depend largely on the memory and on the communication medium between processors, both of which raise several issues. In this paper we address those issues and present two separate techniques to increase performance and reduce power consumption: first, Scratch Pad Memory (SPM) replacement rather than cache replacement; second, a Network on Chip (NoC) rather than the Advanced Microcontroller Bus Architecture (AMBA) as the communication medium between processors.

WCI-Poster-02.6 Applications of Text Detection and its Challenges: A Review
Nevetha M P and Baskar A (Amrita Vishwa Vidyapeetham, India)

The rising need for automation of systems has driven the development of text detection and recognition from images to a large extent. Text recognition has a wide range of applications, each with scenario-dependent challenges and complications. How can these challenges be mitigated? What image processing techniques can be applied to make the text in an image machine-readable? How can text be localized and separated from non-textual information? How can a text image be converted to a digital text format? This paper attempts to answer these questions in chosen scenarios. The types of document images we have surveyed include general documents such as newspapers, books and magazines, forms, scientific documents, unconstrained documents such as maps and architectural and engineering drawings, and scene images with textual information.

WCI-Poster-02.7 Survey on Tag SNP selection methods using soft computing techniques
Divya P Nair and Keerthana Dayanand (Amrita Vishwa Vidyapeetham, India); M. V. Judy (Amrita Vishwa Vidyapeetham & Amrita School of Arts and Sciences, Kochi, India)

SNPs (Single Nucleotide Polymorphisms, frequently called "snips") are used to determine how the alleles at different locations are associated with each other. This association is given by linkage disequilibrium. A representative subset of SNPs, called tag SNPs, helps in ascertaining genetic variation and its association with phenotypes without genotyping each and every SNP. For this reason, the time and expense of association studies are reduced. In this paper we carry out a survey of various methods used for tag SNP selection, such as the GTagger algorithm (Genetic Tagger), Chaos-embedded Particle Swarm Optimization (CPSO) and the Multiple Ant Colony Algorithm (MACA).

WCI-Poster-02.8 Mathematics Tutoring Apps for Low-Cost Devices: an Ethnographic Study of Requirements
Viraj Kumar (PES University, India); Aishini Sinha, Anupama Dhareshwar and L. Saloni Joshi (PES Institute of Technology, India)

In several countries, private tutoring has developed as a system of education parallel to traditional schooling. Students from wealthy families have greater access to this system, and the advantages they obtain tend to exacerbate socio-economic inequities. Low-cost programmable computing devices have the potential to execute at least some tasks performed by human tutors, and may therefore help economically disadvantaged students lessen this gap. In this paper, we (1) investigate requirements for designing such software applications (apps) by conducting an ethnographic study of school students and mathematics tutors at a private tuition center, and (2) illustrate these requirements with a prototype Android app for a component of the middle-school mathematics curriculum: linear equations in one variable. Our results suggest that such apps are unlikely to outperform skilled human tutors in the foreseeable future, but they may provide some benefits for students entirely lacking access to tutors. We hope that this research can contribute to the development of low-cost educational apps that explicitly target such benefits.

WCI-Poster-02.9 Image Scrambling using Kekre's Walsh Sequency and Non Sinusoidal Transforms in Different Color Spaces
Pallavi Halarnkar (Thadomal Shahani Engineering College & Mumbai University, India); Tanuja K. Sarode (Thadomal Shahani Engineering College, India)

Image security has a wide number of applications. Image scrambling/encryption involves making an image unintelligible to the human eye. In this paper, we propose a novel, highly secure technique that makes use of Kekre's Walsh sequency for scrambling color-space-transformed coefficients in the non-sinusoidal transform domain. Five different color spaces and four different transform combinations have been used, and experimental results are discussed. From the results, the best performers include the Haar transform, the Slant transform and the Kekre transform.

WCI-Poster-02.10 An unsupervised approach for morphological segmentation of highly agglutinative Tamil language
Angela Deepa (Quaid-e-Millath Government College for Women, India); Ananthi Sheshasaayee (Quaid-E-Millath Government College for Women(Autonomous), India)

Morphological learning through unsupervised means enables the automatic identification of affixes and the morphological segmentation of words, followed by the generation of paradigms incorporating the list of affixes with the combined list of stems for a particular language. Various unsupervised approaches have been deployed for segmenting words into stems and affixes, but for highly agglutinative languages like Tamil very little computational work has been done in this direction. This paper presents a morphology acquisition framework based on an unsupervised approach for the morphological segmentation of the highly agglutinative Tamil language.

WCI-Poster-02.11 A computational approach of Data Smoothening and Prediction of Diabetes
Shivani Jakhmola (Manipal Institute of Technology, India); Tribikram Pradhan (Manipal University & MIT, India)

Data mining applied to medical diagnosis can help doctors take major decisions. Diabetes is a disease that must be monitored by the patient so as not to cause severe damage to the body; hence its prediction is an important task. In this study, a new data smoothening technique is proposed for noise removal from the data. It is very important for the user to have control over the smoothening of the data so that the information loss can be monitored. The proposed method allows the user to control the level of data smoothening by accepting a loss percentage on individual data points: the allowable loss is calculated and a decision is made to smoothen the value or retain it. The proposed method lets the user obtain output based on his preprocessing requirements and, unlike primitive algorithms, allows the user to interact with the data preprocessing system. Different levels of smoothened output are obtained with different loss percentages. The preprocessed output produced is of better quality and resembles real-world data more closely. Furthermore, correlation and multiple regression are applied on the preprocessed diabetes data and a prediction is made as to whether or not the patient develops diabetes.
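One plausible reading of the user-controlled smoothing step is sketched below: each point is replaced by a local average only when the implied relative loss stays within the user's budget. The 3-point average, the loss formula and the sample glucose readings are our assumptions, not the paper's exact algorithm, and nonzero readings are assumed to avoid division by zero.

```python
def bounded_smooth(values, loss_pct):
    """Replace each interior point by its 3-point moving average only when
    the relative change stays within the user's allowable loss percentage."""
    out = list(values)
    for i in range(1, len(values) - 1):
        candidate = (values[i-1] + values[i] + values[i+1]) / 3
        loss = abs(candidate - values[i]) / abs(values[i]) * 100
        if loss <= loss_pct:          # user-controlled information loss
            out[i] = candidate
    return out

glucose = [100, 104, 180, 102, 98, 101]   # 180 is a genuine spike
print(bounded_smooth(glucose, 5))         # spike survives a tight 5% budget
print(bounded_smooth(glucose, 50))        # looser budget smooths it away
```

The key property is that a tight budget preserves clinically meaningful outliers, while a loose budget behaves like an ordinary moving-average filter.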

Thursday, August 13

Thursday, August 13 9:00 - 12:00 (Asia/Kolkata)

R4: Registration Starts

Room: Atrium

Thursday, August 13 9:30 - 14:00 (Asia/Kolkata)

S25: S25-Internet and Web Computing

Room: 302
Chairs: Vaishali Maheshkar III (CDAC, India), Ahammed K.K Siraj (Model Engineering College, India)
S25.1 Interactive Email System Based On Blink Detection
Anjusree V k (PG Scholar, Kerala University); Gopu Darsan (Assistant Professor)

This work develops an interactive email application. An email system is considered personal private property nowadays, and it is not easy for people with disabilities to use normal devices for checking their email. Users need more interaction with their emails. This interactive technology is based on eye blink detection, hence persons with disabilities can also use the system efficiently. The system is divided into two modules. First, in order to use such a system in a secure manner, it is vital to have a secure login module. Face recognition is used for login because it can serve as a security module with a low failure rate and high reliability. For face recognition the Fisherface algorithm is used, which performs fast and handles variations in lighting and facial expression well. Second is a tracking phase, based on eye blink detection, which helps the user interact with the email system after the email is loaded. In this phase a threshold-based approach is used that detects whether a blink occurred and interprets blinks as control commands for interacting with the system. This vibrant application helps people check their email faster and in a more interactive way without touching any device.

S25.2 Multi-agent Planning With Joint Actions
Satyendra Chouhan (MNIT, India); Ashutosh Singh (IIT Roorkee, India); Rajdeep Niyogi (Indian Institute of Technology Roorkee, India)

We consider multi-agent planning problems that cannot be solved by single agents. In some situations, an action must be performed by more than one agent to get the desired effect. In this paper, we discuss planning problems that involve joint actions and present possible specifications of joint actions. We also propose a centralized multi-agent planning system to handle joint actions. The experimental results obtained are satisfactory.

S25.3 Directional Captcha: A Novel Approach to Text Based CAPTCHA
Aditya Kaushal Ranjan and Binay Kumar (Central University of Rajasthan, India)

In this paper, we propose a new CAPTCHA based on digits and symbols. It is based on the fact that it is difficult for a machine to interpret symbols and perform tasks accordingly from two different datasets. We have also identified the main anti-recognition and anti-segmentation features from previous works and implemented them in our proposed CAPTCHA. We present its pseudocode, a security analysis and a usability survey to support our claims.

S25.4 A New Approach for Mixed bandwidth Deterministic communication using Ethernet
Asif S (R V College of Engineering, India); Pradeep Kumar B (CSIR-NAL, India); C. m. Ananda (National Aerospace Laboratory, India); Nagamani. K (R V College of Engineering, India)

A new approach for the deterministic communication of real-time traffic and the non-deterministic communication of non-real-time traffic on a common Ethernet node is presented in this paper. The design strategy uses full-duplex communication with switched Ethernet technology. The transmission of packets is performed using the major and minor frame concept of cyclic schedulers, wherein each major frame is divided into two logical sub-major frames. The first sub-major frame consists of periodic minor frames of different configurations carrying deterministic traffic, and the second sub-major frame contains aperiodic minor frames carrying standard Ethernet traffic. Thereby both real-time and non-real-time traffic communication are achieved over a single channel, which can be used in the avionics, automobile and automation industries. The major frames are transmitted periodically to achieve the defined time constraints. The IP core for the proposed approach has been developed using the Xilinx tools and can be implemented on a Xilinx SP605 FPGA evaluation board connected to an end system for controlled transmission at 100 Mbps.
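The major/minor frame idea can be sketched as a plain schedule table: one deterministic sub-major frame of fixed periodic slots followed by a best-effort sub-major frame for standard Ethernet traffic. The flow names and slot counts below are illustrative assumptions, not the paper's IP-core configuration.

```python
def build_major_frame(deterministic_flows, n_best_effort_slots):
    """Lay out one major frame: deterministic minor frames first,
    then aperiodic minor frames for standard Ethernet traffic."""
    frame = []
    # sub-major frame 1: fixed, periodic minor frames (real-time traffic)
    for minor, flow in enumerate(deterministic_flows):
        frame.append(("deterministic", minor, flow))
    # sub-major frame 2: aperiodic minor frames (best-effort traffic)
    for minor in range(n_best_effort_slots):
        frame.append(("best-effort", minor, None))
    return frame

major = build_major_frame(["sensor-bus", "actuator-bus", "nav-data"], 2)
for kind, minor, flow in major:
    print(f"{kind:13s} minor-frame {minor}: {flow or 'queued Ethernet'}")
```

Transmitting this table cyclically is what bounds the latency of the deterministic flows: each one gets its slot exactly once per major-frame period.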

S25.5 Multicasting and Broadcasting of Data through TURN Server
Guduru Kiran Kumar (Samsung R&D Institute & R. V. College of Engineering, India); Usha J (R. V. College of Engineering, Bangalore & R. V. College of Engineering, India)

Internet usage is growing rapidly, and the increased use of IP telephony is driving the need for unique IP addresses. IP addresses can be reused through Network Address Translators (NAT), which connect a series of nested private networks to the public network under a single public IPv4 address. The STUN and TURN protocols are used to identify hosts behind a NAT. The current system allows a TURN client to communicate with multiple peers, but to multicast or broadcast data the client must send a duplicate data packet for each peer. The proposed method describes a mechanism in which a single data packet is sent to the TURN server, which duplicates it and multicasts it to specified peers or broadcasts it to all peers registered to that client, reducing power consumption and bandwidth usage.

S25.6 Enhancing Statistical Semantic Networks with Concept Hierarchies
Sofia Francis Xavier, Lakshmi Priyanka Selvaraj and Vidhya Balasubramanian (Amrita Vishwa Vidyapeetham, India)

With the emergence of the Semantic Web, effective knowledge representation has gained importance. Statistically generated semantic networks are simple representations whose semantic power is yet to be fully explored. Though these semantic networks are created with simple statistical measures and little overhead, they have the potential to express the semantic relationships between concepts. In this paper, we explore the capability of such networks and enhance them with concept hierarchies to serve as better knowledge representations. The concept hierarchies are built based on the level of importance of concepts: the importance, or coverage, of a concept within the given set of documents has to be taken into account to build an effective knowledge representation. We provide a domain-independent, graph-based approach for identifying the level of importance of each concept from the statistically generated semantic network that represents the entire document set. Insights about the depth of every concept are obtained by analysing the graph-theoretic properties of the network. A generic concept hierarchy is created using a greedy strategy, and the original semantic network is reinforced with this hierarchy. Experiments over different data sets demonstrate that our approach works effectively in classifying concepts and generating taxonomies, thereby enhancing the semantic network.

S25.7 Generation & Analysis of Association Rules from Android Application Clones
Umang Saini and Shilpa Verma (PEC University Of Technology, India)

Association rule mining was earlier used mainly for the analysis of market-basket data, but its scope has since widened: it has been applied wherever the extraction of interesting correlations can help, such as healthcare, education systems, manufacturing engineering, network management and intelligence. As Android is a comparatively new technology, in use since 2008, few researchers have applied association rule mining to Android applications. Here, the FP-growth association rule algorithm is applied to similar Android applications (clones), which is a novel area in itself. The paper describes a framework that mines association rules from Android application clones. The analysis of the rules and dependencies proved fruitful and gave useful results: interesting relations are extracted between code elements, which can help developers perform future operations, such as code modification, more easily.
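A small sketch makes the mining step concrete. The paper uses FP-growth; the Apriori-style miner below finds the same frequent itemsets and derives rules from them, here imagined over sets of features shared by application clones (all function and variable names are ours, not the paper's):

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Apriori-style frequent itemset mining. FP-growth, used in the paper,
    finds the same itemsets without explicit candidate generation."""
    n = len(transactions)
    items = {i for t in transactions for i in t}
    freq = {}
    candidates = [frozenset([i]) for i in items]
    k = 1
    while candidates:
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        level = {c: s / n for c, s in counts.items() if s / n >= min_support}
        freq.update(level)
        # join frequent k-itemsets into candidate (k+1)-itemsets
        keys = list(level)
        candidates = list({a | b for a in keys for b in keys if len(a | b) == k + 1})
        k += 1
    return freq

def association_rules(freq, min_conf):
    """Generate rules X -> Y with confidence = support(X U Y) / support(X)."""
    rules = []
    for itemset, sup in freq.items():
        if len(itemset) < 2:
            continue
        for r in range(1, len(itemset)):
            for ante in map(frozenset, combinations(itemset, r)):
                conf = sup / freq[ante]
                if conf >= min_conf:
                    rules.append((set(ante), set(itemset - ante), conf))
    return rules
```

On transactions such as the sets of API calls appearing in each clone, the resulting rules expose co-occurring code elements.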

S25.8 Web Page Ranking using Domain based Knowledge
Sutirtha K Guha (Seacom Engineering College); Anirban Kundu (Netaji Subhash Engineering College); Rana Dattagupta (Jadavpur University, India)

In this paper, a new ranking technique is introduced to rank Web pages categorically. Web pages are categorized as 'Primary' and 'Secondary', and the available Web page URLs are assumed to reside in a predefined database. 'Primary' Web pages are selected based on keywords and ranked by a newly introduced equation. Since not all matched Web pages are captured by this keyword selection procedure, the unmatched Web pages are checked and labelled 'Secondary', and these are ranked by a second new equation. The ranking procedures for 'Primary' and 'Secondary' Web pages differ because their selection processes differ. The final outcome is obtained by merging and ranking all matched Web pages. A wide set of Web pages with matched content is obtained by implementing the proposed procedure.

S25.9 A Compact Multichannel Data Acquisition and Processing System for IoT Applications
Maria George (Indian Space Research Organisation, India); Akash J B, Azad Hussain and Sreedharan Pillai Sreelal (Indian Space Research Organization, India)

With the advent of the Internet of Things (IoT), there is wide application of, and necessity for, many kinds of data monitoring, be it in the medical, industrial or research field. Hence there is a large amount of heterogeneous data to be handled. Combining data acquisition and data processing into a single system is a more efficient way of handling such data, and spectrum estimation is one key processing method. The novel aspect of the design described here is the implementation of a multichannel data acquisition system with embedded spectrum estimation at reduced size, cost and power consumption. The proposed design is implemented on a SmartFusion system-on-chip device, which combines analog and digital circuits and integrates a microcontroller and field programmable gate array (FPGA) fabric on a single chip.

S25.10 EEG Based Detection of Area of Interest in a Web Page
Joy Bose (Ericsson, Bangalore, India); Divya Bansal ( & Snapdeal, India); Ankit Kumar (Samsung R&D Institute India - Bangalore, India)

We focus on the problem of detecting the user's area of interest within a single web page, or the web page of interest among different web pages. Current methods either use some kind of manual ranking or apply parameters such as the time the user spends on a specific area of the page to determine the area of interest. We postulate that the attention level of the user while browsing is a more reliable indication of the user's level of interest. We use EEG input from a NeuroSky MindWave headset to capture the user's attention level in real time. A background script in a web browser on a mobile device captures the part of the webpage currently being browsed by noting the percentage of the page that the user has scrolled to, and the attention level and the percentage scrolled are mapped using the timestamp as the key. Our solution is integrated with the mobile web browser architecture. Using our method, we determine and map the average attention level within the same page, and across different pages, for a range of websites and users. This can be useful in a number of applications, including providing inputs on user behavior to web developers for better web design, ranking different websites or videos according to user interest, and inserting ads in the regions of a web page where the user is more likely to pay attention.
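The timestamp-keyed mapping the abstract describes can be sketched in a few lines. The sketch below (function and variable names are our own, not from the paper) joins an attention stream and a scroll-position stream on their shared timestamps and averages attention per scroll position:

```python
def map_attention_to_page(attention, scroll):
    """attention: {timestamp: attention_level}; scroll: {timestamp: percent
    of page scrolled}. Join the two streams on the timestamp key and
    average the attention level per scroll position."""
    by_pos = {}
    for ts, level in attention.items():
        if ts in scroll:
            by_pos.setdefault(scroll[ts], []).append(level)
    return {pos: sum(levels) / len(levels) for pos, levels in by_pos.items()}
```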

S25.11 A Fuzzy-Timestamp based Adaptive Gateway Discovery Protocol in Integrated Internet-MANET
Anshu Pandey, Arun Bajpai and Deepak Singh (M. M. M. University of Technology, Gorakhpur, India); Rakesh Kumar (MMM University of Technology, India)

In a MANET, nodes are able to communicate with each other over a short range only. To extend the communication range, an Internet gateway is used, which acts as a bridge between the conventional network and the MANET. To enable long-range communication, a MANET node needs to discover and then select an appropriate gateway; various schemes have been proposed for this, but some lack an efficient gateway discovery mechanism and use a single hop-count metric to select the gateway, which can create a bottleneck in the network. This paper provides a simple and efficient adaptive gateway discovery and selection mechanism that uses fuzzy logic to combine two metrics, hop count and latency, leading to the selection of a path that is both less congested and short.
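The abstract does not give the membership functions or rule base, so the sketch below is only a minimal illustration of combining hop count and latency with fuzzy logic, assuming triangular memberships and a Mamdani-style minimum for AND; all ranges and names are hypothetical:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def gateway_score(hops, latency_ms):
    # Membership in the fuzzy set "good path": few hops AND low latency.
    few_hops = tri(hops, -1, 0, 8)          # illustrative: vanishes by 8 hops
    low_lat = tri(latency_ms, -1, 0, 200)   # illustrative: vanishes by 200 ms
    return min(few_hops, low_lat)           # Mamdani-style AND (minimum)

def select_gateway(candidates):
    """candidates: list of (gateway_id, hops, latency_ms)."""
    return max(candidates, key=lambda g: gateway_score(g[1], g[2]))[0]
```

Combining both metrics avoids the bottleneck of hop-count-only selection: a short but high-latency (congested) path scores low.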

S26: S26-Security, Trust and Privacy- III/ Wireless Communication- II

Room: 303
Chair: Sanjay Singh (Manipal Institute of Technology, India)
S26.1 Secure Neighbour Discovery in Wireless Mesh Networks Using Connectivity Information
P Subhash (VNR Vignana Jyothi Institute of Engineering & Technology, India); S Ramachandram (Osmania University, India)

Authenticated mesh peering exchange (AMPE) is one of the core functionalities of a wireless mesh network that enables mesh routers to discover their peers (neighbors) securely. Even though the AMPE protocol prevents unauthorized neighbors from becoming part of the network, it fails to prevent relay attacks, where an attacker simply relays the frames used to establish peer links. The attacker's aim is to convince two far-away nodes that they are neighbors and make them commit to a non-existent link that later acts as a wormhole. In this paper, we address this problem and propose a secure neighbor discovery mechanism that detects non-existent network links. It relies on a ranking mechanism to compute the relative distance between neighbors and employs connectivity information to validate those links.

S26.2 Mobile Applications: Analyzing Private Data Leakage Using Third Party Connections
Pradeep Kumar (Thapar University, Patiala, India); Maninder Singh (Thapar University, India)

The past few years have witnessed incredible growth in the popularity and pervasiveness of smartphones, and new mobile applications are built every day, providing functionality such as social networking, games and much more. A mobile application may have a direct purchase cost or be free but ad-supported for revenue; in return, such applications may provide users' private data to the ad provider with or without the users' consent. Worryingly, some ad libraries ask for permissions beyond their requirements and beyond those listed in their documentation, and some applications, as revealed by a network sniffer, track users across ad providers and their applications. Current disclosure is often ineffective at conveying meaningful, useful information on how a user's privacy might be impacted by using an application. In this paper, we examine the effect on user privacy of some top-grossing Android applications that transmit users' private data without their permission. Using the third-party connections that an app makes, we assess the legitimacy of the application, and we observe further parameters to check whether an app is stealing users' private information.

S26.3 Security on MANETs Using Block Coding
Syed Jalal Ahmad (JNTU HYD & JBIET, India); Radha Krishna P (INFOSYS Limited, Hyderabad, India)

Security is a challenging task in Mobile Ad hoc Networks (MANETs) due to their dynamic network topology. Since MANETs have no centralized coordination, the distribution of keys between two nodes becomes an issue. In this paper, we provide security in MANETs without using key distribution schemes between the nodes. Our approach uses linear block coding to generate a security code vector at the source node, which facilitates energy-efficient matching of code words for identifying malicious nodes in the network. When a source node is ready for data transmission, the security code vector is appended to the packet header in a reserved field called the Security Block (SB), and the complete message block, consisting of both the data bits and the security block, is forwarded to the next node. The data is transmitted onward only if the code vector bits of the current node and the source node match. Because it requires less computational analysis than existing approaches, our linear-block-coding-based approach also saves energy. We validate our approach through simulations.
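The paper does not specify which linear block code is used; as an illustration of the append-and-match idea, the sketch below uses a standard (7,4) Hamming generator matrix over GF(2), with all function names our own:

```python
# Generator matrix of a (7,4) Hamming code: the source multiplies the
# 4-bit data block by G (mod 2) to get the code vector it appends in the
# Security Block; a receiving node recomputes it and forwards on a match.
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(data_bits):
    """Multiply the 4-bit message by G over GF(2), giving a 7-bit codeword."""
    return [sum(d & g for d, g in zip(data_bits, col)) % 2
            for col in zip(*G)]

def verify_and_forward(packet):
    """packet = (data_bits, security_block). Forward only if the locally
    recomputed code vector matches the received Security Block."""
    data, sb = packet
    return encode(data) == sb
```

A node (or an attacker) that tampers with the data bits produces a mismatch against the appended code vector and the packet is dropped.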

S26.4 Characterization of mmWave Link for Outdoor Communications in 5G Networks
Sheeba Kumari M (VTU, Bangalore, India); Sudarshan Rao (BigSolv Labs Pvt Ltd Bangalore, India); Navin Kumar (Amrita University & School of Engineering, India)

A number of issues are being addressed in prospective millimetre wave (mmWave) technology for the design of 5G cellular networks. High atmospheric attenuation in the proposed 60 GHz band limits the communication range and cell size, and the channel characteristics vary across indoor and outdoor scenarios, resulting in differing performance. In this paper, an attempt is made to investigate and characterize the mmWave communication link in a specific scenario: an urban road surrounded by multi-storey buildings. The average received signal level is the primary metric considered. A ray-tracing channel model is used owing to the deterministic nature of the link, which follows from the use of directional antennas in mmWave communication. A significant fluctuation in the received signal is observed for varying carrier frequencies at different antenna separations.

S26.5 A Concurrent Dual Band LNA Using Frequency Transformation Based Matching Network For WCDMA And Navigational C-Band Application
Prashant Bandgar (Babasaheb Ambedkar Technological University, India); Sanjay Khobragade (Babasaheb Ambedkar Technological University, Lonere, Raigad, India)

A low noise amplifier (LNA) is designed at 2.1 GHz and 4.6 GHz. The matching network is designed using lumped elements, which are synthesized using a frequency transformation technique. A stabilization circuit is included to keep the LNA stable throughout the frequency range. Simulation results give gains of 12.03 dB and 10.16 dB and noise figures of 0.5818 dB and 1.2 dB at 2.1 GHz and 4.6 GHz, respectively. The input and output VSWR are 1.71 and 3.4 at 2.1 GHz, and 1.13 and 1.19 at 4.6 GHz, respectively. In addition, the LNA is stabilized throughout, with Rollett's stability factor k > 1.

S26.6 Prototype for Multiple applications using Near Field Communication (NFC) technology on Android device
Subhasini Dwivedi and Jason Dsouza (University of Mumbai, India)

The paper discusses a multiple-application prototype and a test, the 'NfcShop Test', carried out to evaluate its usability. The approach we have developed allows the user to buy railway tickets, make hotel payments and pay for car parking, all by simply tapping a Near Field Communication (NFC) device/mobile on a universal receiver placed at the point-of-sale (POS) terminal. The application, named 'NfcShop', is designed around the needs of the user and serves a wide range of audiences. The architecture consists of hardware and software sections, which are described at length along with the communication flow. To conclude, we carried out a usability study, evaluated the responses, opinions and expectations to understand the strengths and weaknesses of our prototype, and measured the time responses.

S26.7 Scheduling in Dynamic Spectrum Access Networks using Graph Coloring
Navin Kumar Hariharan Subramanian (Anna University, India); Mainak Chatterjee (University of Central Florida, USA)

In a dynamic spectrum access network, multiple secondary users access and share channels that are not used by the primary users. Uncoordinated, random channel access by the secondary users decreases secondary network throughput, as users in close proximity interfere with each other. In order to better utilize the channels, we propose a scheduling technique that assigns unique time slots to all secondary users for transmission. The interference among nodes is captured using a conflict graph. We use a greedy graph coloring technique to find time slots for every user, using an approximation for the maximum number of interfering users in a Poisson-distributed network. Using simulation experiments, we evaluate the performance of the proposed scheduling mechanism for various network parameters.
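The slot-assignment step the abstract describes is classic greedy coloring of the conflict graph: each color is a time slot, and adjacent (interfering) users must get different slots. A minimal sketch (the names and the highest-degree-first ordering are our assumptions, not from the paper):

```python
def greedy_slots(conflict, order=None):
    """Assign each secondary user the smallest time slot not used by any
    interfering neighbour (greedy coloring of the conflict graph).
    conflict: dict mapping node -> set of interfering nodes."""
    slots = {}
    # default order: highest-degree (most-interfered) users first
    for u in (order or sorted(conflict, key=lambda n: -len(conflict[n]))):
        taken = {slots[v] for v in conflict[u] if v in slots}
        slot = 0
        while slot in taken:
            slot += 1
        slots[u] = slot
    return slots
```

The number of distinct slots used bounds the schedule length, which is what the paper's approximation on the number of interfering users estimates.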

S26.8 Spectrum Auctions in the Secondary Market with Multiple Bids
Shobbana Santhoshini Jeevanlal Radha (Anna University, India); Mainak Chatterjee (University of Central Florida, USA)

In this paper, we propose an auction in which primary users (i.e., spectrum licence holders) sell spectrum to secondary users (i.e., non-licence holders) in a dynamic spectrum access network. We consider a market-like scenario in which chunks of spectrum are traded and buyers put forward multiple bids based on how much spectrum they need and the price they are willing to pay. We use the sigmoid function to model the buyers' price function, while the sellers choose buyers so as to maximize their revenue. We find the set of winning bids by solving the 0-1 knapsack problem using dynamic programming. Through simulation experiments, we show how the proposed auction performs with respect to selecting bids and the total revenue generated.
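The winner-determination step maps directly onto the 0-1 knapsack: each bid is an item whose weight is its bandwidth demand and whose value is its offered price, with the seller's spectrum as the capacity. A minimal dynamic programming sketch (the sigmoid price model and the handling of multiple bids per buyer are omitted; names are ours):

```python
def winning_bids(bids, capacity):
    """bids: list of (bandwidth_demand, offered_price); capacity: total
    spectrum chunks for sale. Returns (max_revenue, chosen bid indices)
    via the standard 0-1 knapsack dynamic program."""
    n = len(bids)
    dp = [0] * (capacity + 1)                  # dp[w] = best revenue within w
    keep = [[False] * (capacity + 1) for _ in range(n)]
    for i, (demand, price) in enumerate(bids):
        for w in range(capacity, demand - 1, -1):   # reverse: each bid used once
            if dp[w - demand] + price > dp[w]:
                dp[w] = dp[w - demand] + price
                keep[i][w] = True
    chosen, w = [], capacity                   # trace back the chosen set
    for i in range(n - 1, -1, -1):
        if keep[i][w]:
            chosen.append(i)
            w -= bids[i][0]
    return dp[capacity], sorted(chosen)
```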

S26.9 Radio Co-location Aware Channel Assignments for Interference Mitigation in Wireless Mesh Networks
Srikant Manas Kala (Osaka University, Japan); Pavan Kumar Reddy M (IIT Hyderabad & Qualcomm India Private Limited, India); Ranadheer Musham and Bheemarjuna Reddy Tamma (IIT Hyderabad, India)

Designing high performance channel assignment schemes to harness the potential of multi-radio multi-channel deployments in wireless mesh networks (WMNs) is an active research domain. A pragmatic channel assignment approach strives to maximize network capacity by restraining the endemic interference and mitigating its adverse impact on network performance. Interference prevalent in WMNs is multi-faceted, radio co-location interference (RCI) being a crucial aspect that is seldom addressed in research endeavors. In this effort, we propose a set of intelligent channel assignment algorithms, which focus primarily on alleviating the RCI. These graph theoretic schemes are structurally inspired by the spatio-statistical characteristics of interference. We present the theoretical design foundations for each of the proposed algorithms, and demonstrate their potential to significantly enhance network capacity in comparison to some well-known existing schemes. We also demonstrate the adverse impact of radio co-location interference on the network, and the efficacy of the proposed schemes in successfully mitigating it. The experimental results to validate the proposed theoretical notions were obtained by running an exhaustive set of ns-3 simulations in IEEE 802.11g/n environments.

S26.10 Reliable Prediction Of Channel Assignment Performance In Wireless Mesh Networks
Srikant Manas Kala (Osaka University, Japan); Ranadheer Musham (IIT Hyderabad, India); Pavan Kumar Reddy M (IIT Hyderabad & Qualcomm India Private Limited, India); Bheemarjuna Reddy Tamma (IIT Hyderabad, India)

The advancements in wireless mesh networks (WMNs), and the surge in multi-radio multi-channel (MRMC) WMN deployments, have spawned a multitude of network performance issues. These issues are intricately linked to the adverse impact of endemic interference; thus, interference mitigation is a primary design objective in WMNs. Interference alleviation is often effected through efficient channel allocation (CA) schemes that fully utilize the potential of the MRMC environment while restraining the detrimental impact of interference. However, although numerous CA schemes have been proposed in the research literature, there is a lack of CA performance prediction techniques that could assist in choosing a suitable CA for a given WMN. In this work, we propose a reliable interference estimation and CA performance prediction approach. We demonstrate its efficacy by substantiating the CA performance predictions for a given WMN with experimental data obtained through rigorous simulations in an ns-3 802.11g environment.

S26.11 Adapting the Beacon Interval for Opportunistic Network Communications
Salem Omar Sati (Misurata University, Libya & HHU GERMANY, Germany); Kalman Graffi (Honda Research Institute Europe, Germany)

A dominant trend nowadays is the rapid spread of small devices, such as laptops and smartphones, carried by people. Users need applications with high local bandwidth, yet wireless Internet access is limited; this challenge is critical for the success of applications such as local wireless data synchronization, gaming and communication. Opportunistic networks provide a solution for localized wireless communication between smart devices: opportunistic communication between two encountering devices is commonly accomplished using a radio technology such as Wi-Fi or Bluetooth. One issue in device-to-device communication is improving connection establishment in a mobile opportunistic network with minimum resource consumption. This paper proposes and empirically analyses beaconing for opportunistic communication using Wi-Fi in infrastructure mode. With our analysis of the MAC layer, we aim at improving the connection establishment between two adjacent nodes. Specifically, the broadcast beacon frame is sent by the master device at fixed intervals to discover neighbors (stations). The contribution of this paper is to propose and evaluate a greatly enlarged ('double hundred kilo') beacon interval for opportunistic network communications: the traditional beacon interval is targeted at low-latency Wi-Fi applications and is less suitable for opportunistic networking, where stations have limited resources and applications are assumed to be delay tolerant. Our experiments show that the proposed interval provides a significant reduction in energy consumption and bandwidth overhead compared to the commonly used beacon interval.

S26.12 A Novel Cost Effective Access Control And Auto Filling Form System using QR Code
Dinesh Khandal (RTU, India); Devendra Somwanshi (Poornima College of Engineering, India)

QR codes, or matrix codes, or simply two-dimensional barcodes, store text or data in two-dimensional grids that can be decoded quickly. The work proposed here extends Quick Response (QR) code encoding and decoding to design a new articulated user authentication and access control mechanism, and also proposes a new simultaneous registration system for offices and organizations. The proposed system retrieves a candidate's information from their QR identification code or image and transfers the content to a digital application form, while granting authentication to authorized QR images from the database. The system can improve the quality of service, and thus the productivity, of any organization.

Thursday, August 13 9:30 - 14:10 (Asia/Kolkata)

S28: S28-4th International Symposium on Natural Language Processing (NLP'15) - I

Room: 305
Chair: Rajeev RR (International Centre for Free and Open Source Software (ICFOSS), India)
S28.1 Children Story Classification based on Structure of the Story
Harikrishna DM (IIT-Kharagpur, India); K. Sreenivasa Rao (Indian Institute of Technology Kharagpur, India)

The main objective of this work is to classify Hindi and Telugu stories into three genres (fable, folk-tale and legend) based on their structure. Each story is divided into three parts: (i) introduction, (ii) main and (iii) climax, and we explore how genre information is embedded in the different parts. We propose a framework for story classification using keyword and part-of-speech (POS) based features; the keyword-based features are term frequency (TF) and term frequency-inverse document frequency (TF-IDF). Classification performance is analysed for the different story parts using different combinations of features with three classifiers: (i) Naive Bayes (NB), (ii) k-Nearest Neighbour (KNN) and (iii) Support Vector Machine (SVM). From the experimental studies, we observe that classification performance is not significantly improved by combining linguistic (POS) and keyword-based features. Among the classifiers, SVM outperforms the others, and the main part of the story yields the highest classification accuracy compared to the introduction and climax.
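The TF-IDF keyword features can be sketched with the standard formulation below; the paper's exact weighting variant (log base, smoothing) is not stated, so this is only illustrative:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """docs: list of token lists (e.g., one per story part). Returns one
    {term: tf-idf} dict per document: tf is the relative term frequency,
    idf is log(N / document frequency)."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        vectors.append({t: (c / total) * math.log(n / df[t])
                        for t, c in tf.items()})
    return vectors
```

Terms occurring in every story get zero weight, so the vectors emphasize genre-discriminating keywords; these vectors would then feed the NB/KNN/SVM classifiers.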

S28.2 AGRI-QAS Question-Answering System for Agriculture Domain
Sharvari Gaikwad (College Of Engineering, Pune, India); Rohan Asodekar (College of Engineering Pune, India); Sunny Gadia (College Of Engineering Pune, India); Vahida Attar (College of Engineering Pune, India)

In this paper, we focus on the need for a robust domain-specific question answering system targeting the agriculture domain. It aims to help farmers get information and resolve their queries related to agriculture, thereby improving agricultural literacy. The system is based on the principles of natural language processing and information retrieval. Most currently available information retrieval tools return a ranked list of documents instead of precise answers and do not support run-time answer retrieval. We therefore focus on developing a system that processes unstructured data and returns actual answers for FACTOID questions such as 'which', 'what', 'who' and 'where'; for example, "which diseases affect the wheat crop?" or "what are the prevalent diseases in the North-America region?".

S28.3 Relative Clause based Text Simplification for Improved English to Hindi Translation
Sandeep Saini (The LNM Institute of Information Technology, Jaipur, India); Umang Sehgal (The LNM Institute of Information Technology, India); Vineet Sahula (MNIT Jaipur, India)

Language translation is one of the most actively researched and developed topics today because of its increasing demand and applications. Knowledge of the grammar structures of the source and target languages is a must for translating one language into another. Clauses are an integral part of any language and help construct complex sentences in different contexts; this complexity leads to low translation scores in almost every existing machine translation engine. In this work, we focus on relative clause identification and extraction for text simplification. The generated simple sentences are then fed to existing translation engines, and Maximum Entropy Inspired probabilistic LFG f-structure parsing techniques are used to parse the sentence tree. We focus on achieving better-quality English-Hindi translations. The proposed approach is tested manually on a sufficiently large dataset and shows promising translation scores, better than those of conventional approaches.

S28.4 Kannada Text Summarization Using Latent Semantic Analysis
Geetha J k (RVCE VTU Belgaum, India); Deepamala N (RV College of Engineering, India)

Text summarization is a method of reducing an original text document to a short description that retains the meaning and information content of the original. It is difficult for human beings to generate summaries of very large documents manually. The linguistic and statistical features of sentences can be used to find their importance, and Latent Semantic Analysis (LSA) automatically captures the semantic relationships between sentences, much as a human being does. In this paper, Singular Value Decomposition (SVD) is used to generate the summary: SVD finds the dimensions of the sentence vectors that are principal and mutually orthogonal. These properties guarantee, respectively, relevance to the original text document and non-redundancy in the machine-generated summary.
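One common LSA summarization scheme consistent with this description (Gong and Liu's) can be sketched as follows; whether the paper uses exactly this selection rule is our assumption, and the sketch assumes NumPy is available:

```python
import numpy as np

def lsa_summary(sentences, k=2):
    """Pick k sentences: build a term-sentence matrix, take its SVD, and
    for each of the k largest singular values select the sentence weighted
    most heavily in the corresponding right singular vector. Distinct
    singular vectors (concepts) yield non-redundant picks."""
    vocab = sorted({w for s in sentences for w in s.split()})
    A = np.array([[s.split().count(w) for s in sentences] for w in vocab],
                 dtype=float)                       # terms x sentences
    _, _, vt = np.linalg.svd(A, full_matrices=False)
    picked = []
    for row in vt[:k]:                              # one concept per row
        idx = int(np.argmax(np.abs(row)))
        if idx not in picked:
            picked.append(idx)
    return [sentences[i] for i in sorted(picked)]
```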

S28.5 Author Identification based on Word Distribution in Word Space
Barathi Ganesh HB (Amrita School of Engineering, India); Reshma Unnikrishnan (National Institute of Technology Karnataka, India); Anand Kumar M (National Institute of Technology - Karnataka, India)

Author attribution has grown into an increasingly challenging area over the past decade. It has become an inevitable task in many sectors, such as forensic analysis, law and journalism, as it helps to detect the author of a document. Here, unigram/bigram features, along with latent semantic features from a word space, are used, and the similarity of a particular document is tested using Random Forest, Logistic Regression and Support Vector Machine classifiers in order to create a global model. The dataset from the PAN Author Identification shared task 2014 is used for processing. The proposed model shows a state-of-the-art accuracy of 80%, significantly higher than the PAN Author Identification results of 2014.

S28.6 Raga Identification Based on Normalized Note Histogram Features
Pradeep Rengaswamy (IIT Kharagpur, India); Prasenjit Dhara (IIT-Kharagpur, India); K. Sreenivasa Rao and Pallab Dasgupta (Indian Institute of Technology Kharagpur, India)

In this paper, we propose a discriminative method to identify the raga of a polyphonic music clip using Normalized Note Histogram (NNH) features. The raga performed during a rendition is mainly based on the lead voice, which corresponds to the main melody. In this work, the sequence of pitch values is extracted from the polyphonic music signal using a salience-based approach, and a note sequence is derived from it using the tonic frequency value. A note histogram is computed from the note sequence and normalized by dividing each bin by the total number of voiced frames in the clip; the sequence of bin counts in the normalized histogram is used as the feature vector representing the clip. The proposed classification system is designed to identify four ragas (Ahir Bhairav, Bhairavi, Bhupali and Deshkar) in an open-set approach, and raga identification is carried out using a template-based classification approach. The proposed classifier and normalized histogram features are validated on a music database consisting of 110 clips; the performance of the proposed classifier is observed to be 82.34%.
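The normalized note histogram feature can be sketched directly from its description: map each voiced pitch frame to a note bin relative to the tonic and divide by the number of voiced frames. The 12-bin octave folding and the rounding rule below are our assumptions, not taken from the paper:

```python
import math

def normalized_note_histogram(pitch_hz, tonic_hz, n_bins=12):
    """pitch_hz: per-frame pitch values (0 marks unvoiced frames).
    Each voiced frame is mapped to a semitone relative to the tonic,
    and the histogram is normalized by the voiced-frame count so clips
    of different lengths are comparable."""
    voiced = [f for f in pitch_hz if f > 0]
    if not voiced:
        return [0.0] * n_bins
    hist = [0] * n_bins
    for f in voiced:
        semitone = round(12 * math.log2(f / tonic_hz)) % n_bins
        hist[semitone] += 1
    return [h / len(voiced) for h in hist]
```

Template-based classification then compares a clip's histogram against per-raga reference histograms.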

S28.7 Voice conversion: Wavelet based residual selection
Pramod Haribhau Kachare (IIT Bombay & Ramrao Adik Institute of Technology, India); Alice N Cheeran (Department of Electrical Engineering & Veermata Jijabai Technological Institute, India); Jagganath Nirmal (Sardar Vallabhbhai National Institute of Technology, India); Mukesh Zaveri (Sardar Vallabhai National Institute of Technology, surat, India)

Voice conversion has been studied over the past few decades, and yet no fully satisfactory system has been developed. The primary concern in concatenative synthesis is output speech quality. The work presented here alleviates this problem by mapping higher-order excitation features along with state-of-the-art spectral parameters. Well-known linear predictive analysis is used to extract the shape of the vocal tract and the corresponding residual signal. Each residual signal is segmented into frames to reduce feature dimensionality, and each frame is wavelet-analysed to calculate normalized sub-band energy coefficients, forming a codebook. Conversion is obtained by selecting the target residual that minimizes an energy cost function. The primary advantage of this technique is reduced dimensionality with satisfactory conversion statistics. The proposed method is compared with a baseline residual selection approach using various subjective and objective tests; the wavelet features provide a better selection criterion, with a slight improvement in the individuality of the output speech.
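The sub-band energy features can be illustrated with a simple Haar wavelet decomposition; the paper does not name its wavelet or depth, so the Haar filter and two-level decomposition here are assumptions for illustration only:

```python
def haar_step(x):
    """One level of the Haar wavelet transform: averages (approximation)
    and half-differences (detail) of adjacent samples."""
    a = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]
    d = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]
    return a, d

def subband_energies(frame, levels=2):
    """Normalized sub-band energy coefficients of one residual frame:
    detail-band energies per level, then the final approximation energy,
    all divided by the total so frames are comparable."""
    bands = []
    a = list(frame)
    for _ in range(levels):
        a, d = haar_step(a)
        bands.append(sum(v * v for v in d))
    bands.append(sum(v * v for v in a))
    total = sum(bands) or 1.0
    return [e / total for e in bands]
```

A codebook of such low-dimensional vectors lets the target residual be selected by a nearest-energy match rather than by comparing raw residual frames.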

S28.8 Statistical tagger for Bhojpuri (Employing Support Vector Machines)
Srishti Singh and Girish Nath Jha (Jawaharlal Nehru University, India)

The present paper is a demonstration of the first statistical tool developed for Bhojpuri. Bhojpuri is emerging as an important language of the Asian continent, and the Parts of Speech (POS) tagger presented here is a step towards developing language resources for it. Support Vector Machines have already been trained on other languages such as Malayalam and Bangla with an accuracy ranging between 86-90%. The present research achieved approximately 87.67% and 92.79% accuracy on the test and gold corpus respectively.

S28.9 A Hybrid Parts Of Speech Tagger for Malayalam Language
Anisha Aziz T and Sunitha C (University of Calicut, India)

Parts of Speech (POS) tagging is one of the first and foremost steps in most Natural Language Processing applications such as machine translation. Allocating one of the parts of speech to each token obtained after tokenization is called POS tagging. A hybrid POS tagger is suggested in this paper, which explores the concepts of both a traditional rule-based tagger and n-grams. Since it is a hybrid tagger, ambiguity is expected to be reduced to an extent. A tagged corpus of around 100,000 words is built, and a bigram dictionary is maintained for checking in cases of ambiguity. Given the corpus and bigram dictionary built, the goal is to maximize tagging accuracy.
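A minimal sketch of how a bigram dictionary can break ties left by a rule/lexicon lookup is shown below. This is an assumed illustration of the general hybrid idea, not the paper's Malayalam tagger; the tag set, lexicon and counts are toy values.

```python
def tag(tokens, lexicon, bigram_counts, default="NN"):
    """Hybrid tagging sketch: lexicon (rule) lookup first, then a bigram
    dictionary of (previous_tag, tag) counts to resolve ambiguous tokens."""
    tags, prev = [], "<S>"
    for tok in tokens:
        candidates = lexicon.get(tok, [default])
        if len(candidates) == 1:
            best = candidates[0]
        else:
            # prefer the candidate tag seen most often after the previous tag
            best = max(candidates, key=lambda t: bigram_counts.get((prev, t), 0))
        tags.append(best)
        prev = best
    return tags
```

The bigram dictionary is consulted only for ambiguous tokens, so unambiguous rule-based decisions stay untouched, which is the point of combining the two approaches.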

S28.10 A Synchronised Tree Adjoining Grammar for English to Tamil Machine Translation
Vijay Krishna Menon (GadgEon Smart System Pvt. Ltd., India); S Rajendran and Soman K P (Amrita Vishwa Vidyapeetham, India)

Tree Adjoining Grammar (TAG) is a rich formalism for capturing the syntax and some limited semantics of natural languages. The XTAG project has contributed a very comprehensive TAG for the English language. Although TAGs were proposed nearly 40 years ago, their usage and application to Indian languages have been very rare, predominantly due to their complexity and a lack of resources. In this paper we discuss a new TAG system and development methodology for the Tamil language that can be extended to other Indian languages. The trees are developed synchronously with a minimalist grammar obtained by careful pruning of the XTAG English grammar. We also apply Chomskian minimalism to these TAG trees to make them simple and easily parsable. Furthermore, we have developed a parser that can parse simple sentences using the above-mentioned grammar and generate a TAG derivation that can be used for dependency resolution. Due to the synchronous nature of these TAG pairs, they can be readily adapted for formalism-based Machine Translation (MT) from English to Tamil and vice versa.

S28.11 Text Normalization of Code Mix and Sentiment Analysis
Shashank Sharma, Pykl Srinivas and Rakesh Chandra Balabantaray (IIIT Bhubaneswar, India)

Sentiment analysis is the field of getting insights from various text forms such as feedback, opinions and blogs, and classifying them by polarity as positive or negative. Over the last few years, a huge amount of code-mixed (a mixture of two languages) text has become available on social media. In Indian social media this text is available in Romanized English format, the transliteration of one language into another, which demands normalization to get further insights into the text. In this paper, we present various methods to normalize such text and judge the polarity of a statement as positive or negative using various sentiment resources.

S28.12 Adapting Stanford Parser's Dependencies to Paninian Grammar's Karaka relations using VerbNet
Manish Kumar (Kurukshetra, India); Mohit Dua (National Institute of Technology Kurukshetra, India)

The Paninian Grammar framework provides a better solution for parsing free word order languages, while the Stanford Parser gives dependencies for English (a fixed word order language). In this paper, we map the Stanford parser dependencies to karaka relations. Using VerbNet, we capture the syntax and semantics of verbs. We present the issues encountered during adaptation and propose solutions to overcome these problems. We use the Hindi Dependency Parser for verification of results. With this adaptation of the Stanford Parser, an English-Hindi parallel treebank can be created.

Thursday, August 13 9:30 - 13:30 (Asia/Kolkata)

S27: S27-e-governance, e-Commerce, e-business, e-Learning

Room: 306
Chairs: Vaishali Maheshkar III (CDAC, India), V Renumol (School Of Engineering, CUSAT, Kochi, India)
S27.1 Factors affecting Inflation in India: A cointegration approach
Anusree Mohan (Amrita University, India); Balasubramanian P (Amrita School of Business, Amrita University, India)

This study is an empirical analysis to find the major factors that determine inflation in India. The long-run and short-run relationships between inflation and other macro-economic indicators such as per capita GDP, money supply, international oil price and exchange rate are determined using the cointegration method and a Vector Autoregression (VAR) model respectively. Annual data on these variables from 1980 to 2013 is used for the study. The study finds that there is a long-term as well as a short-term relationship between inflation (measured using CPI) and the exchange rate, whereas there is a short-term relationship between inflation and per capita GDP.

S27.2 Study on Inter sector Association rules in National Stock Exchange, India
Shona Ulagapriya and Balasubramanian P (Amrita School of Business, Amrita University, India)

In this paper, the stocks grouped under different sector indices of the National Stock Exchange (NSE) of India are analyzed to identify any interesting relations among the sectors. The concept of association rules is used for this analysis. Stocks are grouped into sectors based on their operations/industrial classification. Owing to their similarity in every context, stock prices within a sector generally vary in the same direction. This study examines inter-sector relations using association rules. Daily closing prices are used to identify the trend of stock price variation, which is in turn processed using the Apriori algorithm [1] to obtain association rules; those spanning across sectors are separated and their behavior is analyzed.
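The preprocessing and rule-evaluation steps implied above can be sketched as follows: each trading day becomes a "transaction" of up/down items per sector, over which rules can be mined. This is an assumed illustration of the pipeline, not the paper's code; Apriori itself is left to a library, and only rule confidence is computed here.

```python
def daily_items(prices_by_sector):
    """Turn daily closing prices into transactions of UP/DOWN items,
    one transaction (set of items) per trading day."""
    days = len(next(iter(prices_by_sector.values())))
    txns = []
    for t in range(1, days):
        txn = set()
        for sector, prices in prices_by_sector.items():
            txn.add(f"{sector}_UP" if prices[t] > prices[t - 1] else f"{sector}_DOWN")
        txns.append(txn)
    return txns

def rule_confidence(txns, antecedent, consequent):
    """Confidence of the association rule {antecedent} -> {consequent}:
    P(consequent | antecedent) estimated over the transactions."""
    has_a = [t for t in txns if antecedent in t]
    if not has_a:
        return 0.0
    return sum(consequent in t for t in has_a) / len(has_a)
```

Rules whose antecedent and consequent name different sectors are exactly the inter-sector rules the study separates out for analysis.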

S27.3 Impact of grading of IPOs in short run price performance in India: A regression model approach
Neeraja Sasidharan (Amrita University, India); Balasubramanian P (Amrita School of Business, Amrita University, India)

Capital markets all over the world are subject to information asymmetry, where potential investors have inferior knowledge about the company. As a step towards making markets efficient, SEBI introduced a new mechanism of grading of IPOs in 2006. Grades assigned by different credit rating agencies act as a signal of the quality of the company. The objective of this study is to analyze the impact of grading of IPOs on short-run price performance. Price performance is one indicator of market efficiency. Using a sample of 121 IPOs listed on the NSE from 2006 to 2013, IPO returns for 6 months post offer day are calculated. The control variables Beta and 6-month market return are also introduced. Multiple linear dummy-variable regression analysis is used to understand the dependence of IPO returns on the assigned grades, taking market fluctuation and the sensitivity of stock returns to market fluctuations as control variables.

S27.4 Optimizing ATM Placement Using Game Theory
Raja Rathnam Naidu Kanapaka (University of Hyderabad, India); Raghu Kisore Neelisetti (Idrbt, India)

The ATM is one of the key delivery channels used by banks to extend banking services to their customers. Facility location is a problem of paramount importance, where the aim is to optimize business operations without affecting customer service. In the case of delivering banking services this is a challenge, because the demand for service and its associated quality expectations are determined by the socio-economic background of the customer. The problem of ATM location is further complicated as customers of one bank can use their debit cards at any other bank's ATMs. While this might attract charges, some banks often refund these charges to attract customers. Banks need a mechanism to quantitatively measure the benefits of managing their own ATM versus paying fees to use another bank's ATM. Game theory is the study of strategic decision making and is an effective technique to identify the best business strategy when provided with multiple options. In this paper we evaluate the strategy of ATM placement by a bank in the presence of multiple competing banks in a given geographical area.
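The game-theoretic flavour of the problem can be sketched with a pure-strategy equilibrium check over two banks' placement payoffs. This is a generic illustration under assumed toy payoffs, not the model from the paper.

```python
def nash_equilibria(payoffs_a, payoffs_b):
    """Find pure-strategy Nash equilibria of a two-bank placement game:
    bank A picks a row (its site), bank B picks a column (its site).
    A cell is an equilibrium when each choice is a best response
    to the other's choice."""
    rows, cols = len(payoffs_a), len(payoffs_a[0])
    eq = []
    for i in range(rows):
        for j in range(cols):
            best_a = all(payoffs_a[i][j] >= payoffs_a[k][j] for k in range(rows))
            best_b = all(payoffs_b[i][j] >= payoffs_b[i][k] for k in range(cols))
            if best_a and best_b:
                eq.append((i, j))
    return eq
```

In an ATM-placement setting, the payoff matrices would encode fee income, interchange fees paid to rival banks, and operating costs for each combination of site choices.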

S27.5 Influence of Customer Acceptance of Online Sales Channel on Firm Profits under Channel Competition
Rofin T m and Biswajit Mahanty (IIT Kharagpur, India)

This paper focuses on the influence of customer acceptance of the online sales channel on the profit of firms when the online and the traditional 'Brick and Mortar' channels compete. Profit is calculated when the channels are engaged in Bertrand competition and when the channels are integrated. The influence of customer acceptance of the online channel on firm profit is analyzed for different categories of products. Firms enjoy better profits when the channels are integrated for those products for which customer acceptance of the online channel is high.

S27.6 An Expert System for the Mauritian Family Law
Sameerchand Pudaruth (University of Mauritius, Mauritius); Kharuna Chenglerayen (Ministry of Technology, Communication and Innovation, Mauritius)

In this paper, we have developed a new information system that can be used by legal professionals and government officials to ascertain whether two individuals can legally be married according to the family laws prevalent in the Republic of Mauritius. We took the family laws from the Napoleonic Code and converted them into formal rules. We also created an imaginary virtual community of 50 people in order to obtain sample facts for testing the system. Our system caters for all types of blood relationships and relationships created through marriage and simple adoption. The outcome of this research will have a very positive impact on the officers of the Civil Status Office, who will be able to check instantly whether two persons are eligible to marry each other. Previously, this took a lot of time, and many violations of existing rules have also been reported. Because of the inherent complexity of the legal system, some aspects of family law such as nullity, divorce and guardianship will be dealt with in our future work.

S27.7 Information Security Concerns in Digital Services: Literature Review and a Multi-Stakeholder Approach
Himanshu Singhal (Ramboll India Pvt. Ltd. & IIT Delhi, India); Arpan K Kar (Indian Institute of Technology Delhi, India)

Despite the proliferation of technology-enabled services, customer adoption of digital services like e-banking and m-banking is very low. Security concerns pertaining to these services are considered one of the most crucial factors deterring end users from using them. Efforts have been made in this study to list various possible security concerns from a multi-stakeholder perspective, namely the perspectives of the end user and the service provider. While some generic findings have been established through the review of literature regarding digital services, greater focus has been given to a detailed exploration of technology-enabled banking services like e-banking and m-banking.

S27.8 Learners and Educators Attitudes Towards Mobile Learning in Higher Education: State of the Art
Mostafa Al-Emran (Ton Duc Thang University, Vietnam); Khaled F. Shaalan (The British University in Dubai & Cairo University, United Arab Emirates)

In the last few years, the way we learn has changed significantly, from traditional classrooms that depend on printed papers to e-learning relying on electronic teaching material. Contemporary educational technologies attempt to facilitate the delivery of learning from instructors to students in a more flexible and comfortable way. Mobile learning (M-learning) is one such pervasive technology that has evolved rapidly to deliver e-learning using personal electronic devices without posing any restrictions on time and location. Literature that sheds light on the use of M-learning in various institutions of learning is beginning to emerge. This paper presents the state of the art of M-learning. It discusses students' and faculty members' attitudes towards the use and adoption of M-learning. Advantages and disadvantages of M-learning are also presented. The integration and implementation of M-learning with other technological resources is described. Factors affecting students' and faculty members' attitudes towards the use of M-learning are demonstrated. Moreover, the new trends and challenges that emerged while conducting this survey are explained.

S27.9 Simulating Consequences of Smoking with Augmented Reality
B Remya Mohan (Amrita Vishwa Vidyapeetham, India); Kamal Bijlani (Amrita E-Learning Research Lab, India); Jayakrishnan R (Amrita Vishwa Vidyapeetham, India)

Visualization in an educational context provides the learner with visual means of information. Conceptualizing certain circumstances, such as the consequences of smoking, can be done more effectively with the help of Augmented Reality (AR) technology, a new methodology for effective learning. This paper proposes a marker-based AR approach that simulates the harmful effects and consequences of smoking using the Unity 3D game engine. The study also illustrates the impact of AR technology on students for better learning.

S27.10 Impact of Learner Motivation on MOOC Preferences: Transfer vs. Made MOOCs
Doraisamy Gobu Sooryanarayan (Amrita University & Amrita School of Business, India); Deepak Gupta (Amrita School of Business & ASB, India)

From being video repositories of courseware to dynamic adaptive e-learning models, the Massive Open Online Course (MOOC) ecosystem has matured significantly, leading to a wider taxonomy of MOOCs. The most recent taxonomy captures eight broad classifications of MOOCs. Learners have different motivational drivers for choosing and consuming the different categories of MOOCs. There have been in-depth studies on the design of various pedagogical styles for MOOCs, as well as studies on learner motivation for effective learning, retention and completion. However, there have been few contributions to the literature on discerning the varying motivational drivers for choosing to consume the different styles of MOOCs. This study shows how motivation impacts the variations in Indian learners' preferences for two contrasting classes of MOOCs which have fundamentally different philosophies: Transfer MOOCs and Made MOOCs. MOOCs created from recordings of existing classroom video lectures are categorized as Transfer MOOCs. Courses created exclusively for presentation as a MOOC, with extensive use of modern video production and interactive digital content, fall under Made MOOCs. The measure of learner motivation has been developed through extensive exploratory studies and with revelations from relevant literature. The methodology involved a pan-India survey. Using the collected responses, an analytical model of preference for the respective MOOC categories was built using ordinal logit regression, with the scales on the motivation factors as the independent variables and age, gender, education and occupation as control variables. The results show that preferences for Transfer MOOCs and Made MOOCs are impacted by different motivating factors. These results would empower MOOC designers to better understand the motivation of learners to prefer their respective MOOC styles and hence better cater to consumers' needs.

S27.11 An approach for mapping of auto generated questions to related topics in the curriculum
Syaamantak Das (Indian Institute of Technology Kharagpur, India); Rajeev Chatterjee (National Institute of Technical Teachers' Training and Research Kolkata, India)

Questions are essential components for assessing learners' knowledge in teaching-learning environments. Questions should be developed relating to the tasks of the instructional objectives associated with a learning object. It is expected that a given set of questions will cover the maximum possible topics under an instructional objective. By analyzing the responses to these questions, instructors are able to identify the strong and weak areas of a learner, understand deficiencies in learning, and provide solutions for overcoming them. However, if the system generates automated questions, a given set of questions may or may not cover all the topics given in the learning object. To identify the extent of topic coverage for a given set of questions, we first use a phrase mining technique and then perform topic modeling on the text data of the question set. By mining the words present in the phrases of the individual question stems, we can identify the keywords, calculate their significance values and identify the most appropriate topics they relate to.
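The keyword-significance-then-topic-mapping idea can be sketched as below. This is an assumed illustration: a simple TF-IDF-style score stands in for the paper's phrase-mining significance value, and topics are represented as keyword sets rather than a full topic model.

```python
import math
from collections import Counter

def keyword_significance(question_stems):
    """Score keywords in each question stem with a TF-IDF-style
    significance value (a stand-in for the phrase-mining step)."""
    docs = [q.lower().split() for q in question_stems]
    df = Counter(w for d in docs for w in set(d))  # document frequency
    n = len(docs)
    scores = []
    for d in docs:
        tf = Counter(d)
        scores.append({w: tf[w] * math.log((1 + n) / (1 + df[w])) for w in tf})
    return scores

def map_to_topic(keyword_scores, topic_keywords):
    """Assign the topic whose keyword set accumulates the highest
    total significance from the question's keywords."""
    return max(topic_keywords,
               key=lambda t: sum(keyword_scores.get(w, 0.0) for w in topic_keywords[t]))
```

Aggregating these per-question topic assignments over a generated question set gives the topic-coverage estimate the abstract aims for.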

S27.12 Anti-Scraping Application Development
Afzalul Haque and Sanjay Singh (Manipal Institute of Technology, India)

Scraping is the activity of retrieving data from a website, often in an automated manner and without the permission of the owner. This data can then be used by the scraper in whatever way they desire. The activity is deemed illegal, but this has not stopped people from doing it. Anti-scraping solutions are offered as rather expensive services which, although effective, are also slow. This paper aims to list the challenges and proposes mitigation techniques to develop a Software as a Product (SaaP) anti-scraping application for small to medium scale websites.
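One common mitigation in this space is request rate limiting, sketched below. This is a generic example of the category of technique, not necessarily one of the paper's proposed mitigations; thresholds and names are illustrative.

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Sliding-window rate limiter, a basic anti-scraping control:
    deny a client IP that exceeds max_requests within window seconds."""

    def __init__(self, max_requests=100, window=60.0):
        self.max_requests = max_requests
        self.window = window
        self.hits = defaultdict(deque)  # client_ip -> timestamps of recent hits

    def allow(self, client_ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[client_ip]
        # drop timestamps that have aged out of the window
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            return False  # burst exceeds the budget: likely automated scraping
        q.append(now)
        return True
```

Rate limiting alone is easy to evade with distributed IPs, which is why commercial anti-scraping services layer it with fingerprinting and behavioral checks; a lightweight SaaP product would trade some of that robustness for speed and cost.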

S27.13 Concept Mapping and Assessment of Virtual Laboratory Experimental Knowledge
Lakshmi Bose (Amrita Vishwa Vidyapeetham Kollam, India); Krishnashree Achuthan (Amrita Center for Cybersecurity Systems and Networks & Amrita University, India)

Quantitative assessment of learning is an important indicator of teaching practices, apart from the depth and grasp of the content being assessed. The fundamental objective of assessment approaches should be to promote learner engagement and to be appropriate to the diversity of learners that may be present in any class. Learning from online or virtual classrooms can be more challenging due to the lack of direct personal engagement between the teacher and the student. There are various modes of assessment commonly adopted today, and students may exhibit different levels of performance based on the assessment method. This paper focuses on a cross-comparison of these techniques applied to learning of laboratory concepts in a virtual environment. Apart from comparison of the commonly used asses