Program for 2016 International Conference on Advances in Computing, Communications and Informatics (ICACCI'16)

Tuesday, September 20

Tuesday, September 20 9:00 - 18:00 (Asia/Kolkata)

Journal Track

Room: SAC (Sports Complex)
9:00 Predicting Cancer Subtypes From Microarray Data Using Semi-supervised Fuzzy C-Means Algorithm
Deepthi P S (LBS Centre for Science and Technology, Trivandrum & Indian Institute of Information Technology and Management-Kerala, India); Sabu M Thampi (Kerala University of Digital Sciences, Innovation and Technology (KUDSIT), India)

Microarray technologies make it possible to observe the expression levels of thousands of genes. Analysis of the gene expression data arising from these experiments provides insight into different subtypes of diseases and the functions of genes. Gene expression data are characterized by a large number of genes and few samples, and this limited number of samples makes the prediction of tissue samples a difficult task. Employing traditional supervised classifiers for prediction requires adequate labeled data, which is not available in the case of microarray data. The present work investigates the potential of semi-supervised learning to delineate tissue samples: the available labeled samples are exploited to guide the clustering of unlabeled samples. A classification system was built by integrating feature selection techniques with a semi-supervised fuzzy c-means algorithm. The system was evaluated on publicly available gene expression datasets, and the results show that a small number of labeled samples can assist in the accurate prediction of disease subtypes.
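The clustering step described above can be sketched as a minimal semi-supervised fuzzy c-means, in which labeled samples keep a crisp, fixed membership while unlabeled samples follow the usual fuzzy updates (function names and the clamping scheme are illustrative, not the authors' implementation):

```python
import numpy as np

def semi_supervised_fcm(X, labels, n_clusters, m=2.0, n_iter=50, seed=0):
    """Fuzzy c-means where samples with a known label (label >= 0)
    keep a fixed crisp membership; unlabeled samples (label == -1)
    get the usual fuzzy membership update."""
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for i, y in enumerate(labels):          # clamp labeled rows
        if y >= 0:
            U[i] = 0.0
            U[i, y] = 1.0
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2.0 / (m - 1.0)))
        U_new /= U_new.sum(axis=1, keepdims=True)
        for i, y in enumerate(labels):      # re-clamp after each update
            if y >= 0:
                U_new[i] = 0.0
                U_new[i, y] = 1.0
        U = U_new
    return U, centers
```

With even one labeled sample per class, the clamped rows anchor the cluster centers, so the fuzzy memberships of the unlabeled samples align with the known subtype labels.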

9:15 Texture Based Feature Extraction Method for Classification of Brain Tumor MRI
Ankit Vidyarthi (Jaypee Institute of Information Technology Noida, India); Namita Mittal (MNIT Jaipur, India)

In machine-learning-based disease diagnosis, extraction of relevant and informative features from medical image slices is a vital step: the extracted features represent the descriptive nature of the imaging modality for machine learning. Texture description is one such method, used to capture the informative aspects of an object. In this paper a new texture-based feature extraction algorithm is proposed for extracting relevant and informative features from brain MR images containing tumors. The algorithm finds the texture description using nine different variants of texture objects. Subsequently, intermediate texture index matrices are formed by applying high-pass and low-pass spiral filters to the texture objects. The two resultant index matrices are used to generate the Texture Co-occurrence Matrix (TOM). TOM helps to extract the spatial- and spectral-domain features that form a hybrid feature set for brain MRI classification. Using TOM, experiments were performed on a dataset of 660 T1-weighted post-contrast brain MR images containing 5 different types of malignant tumors. Experimental results suggest that the proposed method gives significant improvements in abnormality classification when compared with the state-of-the-art GLCM and run-length algorithms.
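For reference, the GLCM baseline the paper compares against can be illustrated with a minimal gray-level co-occurrence matrix and two classic Haralick-style texture features (a sketch of standard GLCM, not the proposed TOM):

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Gray-Level Co-occurrence Matrix for one (dx, dy) offset,
    normalized to a joint probability table."""
    img = np.asarray(img)
    M = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            M[img[y, x], img[y + dy, x + dx]] += 1
    return M / M.sum()

def glcm_features(P):
    """Two classic texture features computed from a normalized GLCM."""
    i, j = np.indices(P.shape)
    contrast = ((i - j) ** 2 * P).sum()   # local intensity variation
    energy = (P ** 2).sum()               # textural uniformity
    return contrast, energy
```

In practice several offsets (dx, dy) are combined to make the features rotation-tolerant; the proposed TOM instead builds its co-occurrence table from spiral-filtered texture index matrices.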

9:30 Particle Swarm Optimised Computer Aided Diagnosis System for the Classification of Breast Masses
Punitha Stephen, Ravi Subban, Anousouyadevi M and Vaishnavi Jothimani (Pondicherry University, Pondicherry, India)

Breast cancer is one of the most commonly occurring cancers among women globally. The accurate detection and classification of abnormalities such as masses and microcalcifications in mammograms is a challenging task for the radiologist, and improvements here can raise the survival rate of breast cancer patients worldwide. This paper presents a novel Computer Aided Diagnosis (CAD) system that uses a Cellular Neural Network (CNN) optimised with Particle Swarm Optimization (PSO) for detection, and a Particle Swarm Optimised Probabilistic Neural Network (PSOPNN) for the classification of breast masses as benign or malignant. Breast mass texture feature extraction is carried out using the Gray Level Co-occurrence Matrix (GLCM), and the optimal texture features are selected using particle-swarm-optimised feature selection. The performance of the proposed system is evaluated using True Positive, True Negative, False Positive and False Negative values.
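The optimization component can be illustrated with a minimal particle swarm optimizer (a generic sketch; the paper's PSO settings, fitness function and encoding are not specified here):

```python
import numpy as np

def pso(f, dim, n_particles=20, n_iter=100, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0, seed=0):
    """Minimal particle swarm optimizer: each particle tracks its
    personal best, and the swarm shares a global best."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        # velocity: inertia + pull toward personal best + pull toward global best
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()
```

In a CAD pipeline like the one described, `f` would score, for instance, a candidate feature subset or a set of network parameters by the resulting classification error.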

9:45 Efficient Retinal Vessel Detection Using Line Detectors with Morphological Operations
Sarika Patil (Savitribai Phule Pune University & Sinhgad College of Engineering, Pune, India); Abhilasha Sandipan Narote (Smt. Kashibai Navale College of Engineering, University of Pune, India); Sandipann Pralhad Narote (University of Pune, India)

Digital fundus photography plays a major role in the diagnosis of retinal pathologies such as hypertension, diabetic retinopathy and glaucoma. To identify abnormal components on the retina, retinal features should be detected accurately; the retinal vessel structure is one of the important landmarks of the retina, so its precise detection is imperative. This paper presents a simple, robust retinal vessel extraction approach based on line detectors and morphological operations. As vessel detection is essentially a line-detection problem, morphological opening with a line structuring element is applied to the green-channel retinal image. Line detectors are then applied to the resultant image, which is thresholded using Otsu's method. The proposed algorithm overcomes the fundamental issues of scale and orientation, avoiding the need for multiple thresholds, with improved performance measures compared to state-of-the-art techniques. The algorithm was applied to three standard databases: HRF (healthy and diabetic), DIARETDB1 and DRIVE. An area under the ROC curve (AUC) of 97% was achieved with 91% sensitivity and 97% specificity on the DRIVE dataset, and an accuracy of 97%, sensitivity of 85% and specificity of 97% on the HRF database. Very good results were also observed on the DIARETDB1 database.
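The final thresholding step uses Otsu's method, which can be sketched in a few lines (a generic implementation, not the authors' code): it picks the gray level that maximizes the between-class variance of the histogram.

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Otsu's method: choose the threshold that maximizes the
    between-class variance of the gray-level histogram."""
    hist, edges = np.histogram(img.ravel(), bins=nbins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)                 # probability of class 0 (below threshold)
    mu = np.cumsum(p * centers)       # cumulative mean
    mu_t = mu[-1]                     # global mean
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    sigma_b = np.nan_to_num(sigma_b)  # endpoints where w0 is 0 or 1
    return centers[np.argmax(sigma_b)]
```

Applied to the line-filtered response image, a single global Otsu threshold separates vessel pixels from background without hand-tuned, per-image thresholds.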

10:00 Hybrid Classifier and Region-Dependent Integrated Features for Detection of Diabetic Retinopathy
Vijay M. Mane (Research scholar, Department of Electronics Engineering, JSPMs, RSCOE, Tathwade, Pune,India); D. V. Jadhav (TSSM's Bhivarabai Sawant College of Engineering & Research, India)

Diabetic retinopathy (DR) is a major eye disease that causes loss of sight if it is not noticed early. To preserve the patient's vision, early detection and periodic screening of DR play an important role in eye diagnosis, by examining deformities in retinal fundus images. During early detection of DR, ophthalmologists identify lesions called microaneurysms, which emerge as the first symptom of the disease. The various test methods for detecting DR, and the expertise to administer them, are often not available in rural areas; an automatic DR detection system therefore offers the potential to be used in large-scale screening programs. This paper presents a hybrid classifier with region-dependent integrated features for the automatic detection of DR. In the proposed hybrid classifier, a holoentropy-enabled decision tree is combined with a feed-forward neural network using the proposed score-level fusion method. The performance is evaluated and compared with existing classification algorithms using sensitivity, specificity and accuracy. Two databases, DIARETDB0 and DIARETDB1, are utilized for the experimentation. The proposed technique obtained an accuracy of 98.70%, which is better than the existing algorithms.

10:15 Erythrocyte Segmentation for Quantification in Microscopic Images of Thin Blood Smears
Salam Shuleenda Devi (National Institute of Technology Silchar, India); Joyeeta Singha (The LNMIIT, India); Manish Sharma and Rabul Hussain Laskar (National Institute of Technology Silchar, India)

Manually analyzing and interpreting microscopic images of thin blood smears for the diagnosis of malaria is a tedious and challenging task. This paper aims to develop a computer-assisted system for the quantification of erythrocytes in microscopic images of thin blood smears. The proposed method consists of preprocessing, segmentation, morphological filtering, cell separation and clump-cell segmentation. The major issues are cell separation (i.e., classifying isolated versus clumped erythrocytes) and clump-cell segmentation, both of which enhance the performance of erythrocyte counting. Geometric features such as cell area, compactness ratio and aspect ratio have been used to define the feature set. Further, the performance of the system in classifying isolated and clumped erythrocytes is evaluated for different classifiers such as Naive Bayes, k-NN and SVM. Moreover, the clumped erythrocytes are segmented using marker-controlled watershed with h-minima as the internal marker. Based on the experimental results, it may be concluded that the proposed model provides satisfactory results, with an accuracy of 98.02%, in comparison to the state-of-the-art method.

10:30 Hardware Efficient Denoising System for Real EOG Signal Processing
Shivangi Agarwal (Mumbai University & Ramrao Adik Institute of Technology (RAIT), India); Vijander Singh (Netaji Subhas Institute of Technology, Delhi University, India); Asha Rani (NSIT, University of Delhi, New Delhi, India); Alok Prakash Mittal (NSIT, DU)

Traditional signal processing algorithms suffer from large execution delays in real-time applications, so implementations of high-speed algorithms are needed. The present work implements a multiplierless Savitzky-Golay smoothing filter (SGSF) based on distributed arithmetic (DA) for pre-processing of electro-oculographic (EOG) signals, such that speed is increased along with a reduction in chip area. The filter used should be efficient enough to remove artifacts with the least deformation of the actual signal. The Savitzky-Golay (SG) filter is widely employed in biomedical signal analysis, but a fast and efficient implementation has not yet been proposed for EOG analysis. The SGSF is selected so that disease diagnosis using saccade detection in the EOG signal can be done accurately. The efficiency of the proposed filter is tested in terms of signal-to-signal-plus-noise ratio (SSNR) and real-time computations. It is observed from the analysis that the DA-based architecture increases the processing speed and reduces the chip area while preserving the original features of the filtered signal.
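The Savitzky-Golay smoother at the heart of the SGSF can be sketched as a least-squares polynomial fit per window; only the filter itself is shown here, not the distributed-arithmetic hardware mapping (a generic sketch):

```python
import numpy as np

def savgol_coeffs(window, order):
    """Least-squares coefficients for the Savitzky-Golay smoother:
    the filter output is the value at the window center of the
    best-fit polynomial of the given order."""
    half = window // 2
    t = np.arange(-half, half + 1)
    A = np.vander(t, order + 1, increasing=True)   # columns [1, t, t^2, ...]
    # Row 0 of pinv(A) evaluates the fitted polynomial at t = 0.
    return np.linalg.pinv(A)[0]

def savgol_smooth(x, window=5, order=2):
    """Apply the SG smoother as a fixed-coefficient FIR filter."""
    c = savgol_coeffs(window, order)
    return np.convolve(x, c[::-1], mode='same')
```

Because the coefficients are fixed, the filter reduces to a small FIR convolution, which is exactly what makes a multiplierless, lookup-table-based DA realization attractive in hardware.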

10:45 A Genetic TDS & BUG with Pseudo Identifier for Privacy Preservation Over Incremental Data Sets
Sreedhar K c (Srinidhi Institute of Science & Tech, India); Mohammed Nisar Faruk (QIS College of Engineering and Technology, India); Venkateswarlu B (VIT University, India)

Cloud computing plays a predominant role in storage technologies: it enables tenant users to deploy their infrastructure without any up-front investment, and cloud storage offers flexible storage and sharing facilities over the Internet. However, storing sensitive information such as clinical data requires strong privacy preservation and raises serious concerns over data privacy on the cloud platform, especially when huge volumes of data are stored in public clouds. Sub-tree anonymization using Bottom-Up Generalization (BUG) and Top-Down Specialization (TDS) is a widely adopted approach for anonymizing data sets. It ensures individual data privacy; however, it can be violated when new updates are received, and it suffers from the difficulty of choosing the k-anonymity parameter. In this work we propose a pseudo-identity scheme to accomplish privacy preservation with maximum data utility on incremental data sets. Initially, the data sets (DS) are partitioned in the preprocessing stage, and the processed data sets are then clustered into groups. A genetic model is used for indexing and updating incremental data sets, keeping the scheme consistent with repeatedly modified data sets. In the evaluation, we deploy an incremental and distributed DS and demonstrate efficient and near-optimal privacy preservation compared with existing models.

11:00 A Multiclass Cascade of Artificial Neural Network for Network Intrusion Detection
Mirza Baig (Lahore University of Management Sciences (LUMS), Pakistan); Mian Muhammad Awais (LUMS, Pakistan); El-Sayed M. El-Alfy (King Fahd University of Petroleum and Minerals, Saudi Arabia)

This paper presents a cascade of ensemble-based artificial neural networks for multi-class intrusion detection (CANID) in computer network traffic. The proposed system learns a number of neural networks connected as a cascade, with each network trained using a small sample of training examples. The cascade structure uses each trained neural network as a filter to partition the training data; hence a relatively small sample of training examples is used, along with a boosting-based learning algorithm, to learn an optimal set of neural network parameters for each successive partition. The performance of the proposed approach is evaluated and compared on the standard KDD CUP 1999 dataset as well as a very recent dataset, UNSW-NB15, composed of contemporary synthesized attack activities. Experimental results show that the proposed approach can efficiently detect various types of cyber attacks in computer networks.

11:15 A New Cryptographic Method for Image Encryption
Kapil Mishra (Indian Institute of Information Technology Allahabad, India); Ravi Saharan and Bharti Rathor (Central University of Rajasthan, India)

In this paper, we propose a new technique for image encryption based on pixel shuffling combined with changing pixel values, using a 128-bit secret key and a Henon chaotic map. Due to their high sensitivity to initial conditions, chaotic maps are well suited to designing dynamic permutation maps, so a chaotic Henon map is used to generate the permutation matrix. An external secret key is used to derive the initial conditions for the chaotic map and the secret key for changing pixel values. Pixel shuffling is performed via horizontal and vertical permutations; shuffling expands diffusion in the image and dissipates the high correlation among image pixels. The proposed scheme is subjected to a series of tests to measure its performance. The results indicate that the proposed algorithm is highly key-sensitive and shows good resistance against various brute-force and statistical attacks.
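The permutation step can be illustrated with a Henon-map-driven shuffle: the chaotic orbit is ranked to obtain a permutation, which is then applied to rows and columns. This is a sketch of the general idea only; the paper's key derivation and pixel-value-change steps are omitted, and the initial conditions below are arbitrary placeholders.

```python
import numpy as np

def henon_permutation(n, x0=0.1, y0=0.3, a=1.4, b=0.3, burn=100):
    """Generate a permutation of range(n) by ranking a Henon-map
    orbit (argsort of the chaotic x-sequence). In the scheme
    described, x0 and y0 would be derived from the secret key."""
    x, y = x0, y0
    seq = []
    for i in range(burn + n):
        x, y = 1 - a * x * x + y, b * x   # Henon map iteration
        if i >= burn:
            seq.append(x)
    return np.argsort(seq)

def shuffle_image(img, perm_rows, perm_cols):
    """Horizontal and vertical permutation of the pixel grid."""
    return img[perm_rows][:, perm_cols]
```

Decryption applies the inverse permutations (`np.argsort` of each permutation), so the shuffle is exactly reversible for anyone holding the key.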

11:30 Biometric Authentication Using Local Subspace Adaptive Histogram Equalization
Gopal Chaudhary (Netaji Subhas Institute of Technology); Shefali Srivastava and Smriti Srivastava (Netaji Subhas Institute of Technology, India)

In biometric authentication systems such as palmprint, fingerprint, dorsal hand vein and palm vein recognition, image enhancement plays a crucial role for low-resolution image samples. In this work, a novel adaptive histogram equalization (AHE) variant is proposed, referred to as effective-area AHE (EA-AHE) with weights: global adaptive histogram equalization is improved using a local AHE technique by varying the effective area with different effective weights. The method is found to improve the biometric identification rate compared to typical AHE. To validate the proposed algorithm, the IITD left- and right-hand palmprint databases are used in the simulations. The results validate that the proposed technique is superior to existing ones.
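The baseline operation can be sketched as histogram equalization applied globally and per tile; the paper's EA-AHE additionally weights and blends neighbouring tiles by an effective area, which is not reproduced here (the simple tiling below is illustrative only):

```python
import numpy as np

def hist_equalize(img, levels=256):
    """Global histogram equalization: map each gray level through
    the normalized cumulative histogram (CDF)."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum() / img.size
    lut = np.round(cdf * (levels - 1)).astype(img.dtype)
    return lut[img]

def tiled_ahe(img, tile=8, levels=256):
    """Crude adaptive variant: equalize each tile independently.
    EA-AHE would instead weight and blend overlapping effective
    areas to avoid blocking artifacts."""
    out = img.copy()
    h, w = img.shape
    for r in range(0, h, tile):
        for c in range(0, w, tile):
            out[r:r + tile, c:c + tile] = hist_equalize(
                img[r:r + tile, c:c + tile], levels)
    return out
```

Equalization stretches a low-contrast palmprint image over the full gray range, which is what makes the subsequent feature matching more reliable.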

11:45 Why So Abnormal? Detecting Domains Receiving Anomalous Surge Traffic in a Monitored Network
Aravind Ashok Nair (Amrita University & Amrita Center for Cyber Security, Amritapuri Campus, India); Prabaharan Poornachandran and Soumajit Pal (Amrita University, India); Prem Sankar (Amrita Vishwa Vidyapeetham, India); Surendran K (Amrita University, Amrita Vishwa Vidyapeetham, India)

Anomalous traffic consists of the unusual, colossal numbers of hits that a non-popular domain receives within a small window of a day. Regardless of whether these anomalies are malicious, it is important to analyze them, as they might have a dramatic impact on a customer or end user. Identifying these traffic anomalies is a challenge, as it requires mining and identifying patterns in huge volumes of data. In this paper, we provide a statistical and dynamic-reputation-based approach to identify unpopular domains receiving huge volumes of traffic within a short period of time. Our aim is to develop and deploy a lightweight framework in a monitored network that is capable of analyzing DNS traffic and providing early-warning alerts about domains receiving unusual hits, to reduce the collateral damage faced by an end user or customer. The authors have employed statistical analysis, supervised learning and ensemble-based dynamic reputation of domains, IP addresses and name servers to distinguish benign and abnormal domains with very low false positives.

12:00 PSI-NetVisor: Program Semantic Aware Intrusion Detection at Network and Hypervisor Layer in Cloud
Preeti Mishra (Doon University, Dehradun, India); Emmanuel Shubhakar Pilli (Malaviya National Institute of Technology, Jaipur, India); Vijay Varadharajan (Macquarie University, Australia); Uday Tupakula (The University of Newcastle, Australia)

Cloud security is of paramount importance in the new era of virtualization technology, as tenant virtual machine (VM) level security solutions can be easily evaded by modern attack techniques. Out-VM monitoring allows a cloud administrator to monitor and control a VM from a secure location outside the VM. In this paper, we propose an out-VM-monitoring-based approach named 'Program Semantic-Aware Intrusion Detection at Network and Hypervisor Layer' (PSI-NetVisor) to detect attacks in both the network and virtualization layers in the cloud. PSI-NetVisor performs network monitoring at the network layer of the centralized Cloud Network Server (CNS), providing a first level of defense from attacks. It incorporates semantic awareness into a network intrusion detection system, enabling network monitoring and process monitoring at the hypervisor layer of the Cloud Compute Server (CCoS) as a second level of defense. PSI-NetVisor employs Virtual Machine Introspection (VMI) libraries based on software breakpoint injection to extract process execution traces from the hypervisor; it then applies depth-first search (DFS) to construct program semantics from the control flow graph of the execution traces. It applies dynamic analysis and machine learning approaches to learn the behavior of anomalies, which makes it robust to obfuscation- and encryption-based attacks. PSI-NetVisor has been validated with a recent intrusion dataset (UNSW-NB) and malware binaries collected from research centers, and the results are promising.

12:15 Prominent Feature Extraction for Evidence Gathering in Question Answering
Lokesh Sharma and Namita Mittal (MNIT Jaipur, India)

Question Answering (QA) research is a significant and challenging task in Natural Language Processing. QA aims to extract an exact answer from a relevant text snippet or document. The motivation behind QA research is the need of users of state-of-the-art search engines, who expect an exact answer rather than a list of documents that probably contain the answer. In this paper, we consider a particular issue in QA: gathering and scoring answer evidence collected from relevant documents. Evidence is a text snippet in a large corpus that supports the answer. For Evidence Scoring (ES), several efficient features and relations must be extracted for the machine learning algorithm, including various lexical, syntactic and semantic features. In addition, new structural features are extracted from the dependency features of the question and the supporting document. Experimental results show that the structural features perform better, and accuracy increases when they are combined with the other features. To score the evidence for an existing question-answer pair, the Logical Form Answer Candidate Scorer technique is used. Furthermore, an algorithm is designed for learning answer evidence.

12:30 Recognizing Isolated Words with Minimum Distance Similarity Metric Padding
Alex James (IIITMK, India)

Automated processing and recognition of human speech commands under unconstrained and noisy recognition conditions, with a limited number of training samples, is a challenging problem of interest to smart devices and systems. In practice, it is impossible to remove noise without losing class-discriminative information in the speech signals, and any attempt to improve signal quality places an additional burden on the computational capacity of state-of-the-art speech command recognition systems. In this paper, we propose a low-level word processing system using mean-variance normalized frequency-time spectrograms and a new similarity measure that compensates for feature-length mismatches such as those resulting from pronunciation variations in speech segments. We find that padding a local similarity matrix with zero-similarity values, to disregard the effects of mismatched spectrogram lengths, results in improved word recognition accuracies and a reduction in between-class non-discriminative signals. Compared with state-of-the-art approaches to spectrogram comparison such as DTW, the proposed approach, when tested on the TIMIT database, shows improved recognition accuracies, robustness to noise, lower computational requirements, and scalability to large word problems.

12:45 Semantics-Based Topic Inter-Relationship Extraction
Remya R.K. Menon (Amrita Vishwa Vidyapeetham, Amrita University & Amrita School of Engineering, Amritapuri, India); Deepthy Joseph and Ramachandra Kaimal (Amrita University, India)

Maintaining large collections of documents is an important problem in many areas of science and industry; different analyses can be performed on a large document collection with ease only if a short, reduced description can be obtained. Topic modeling offers a promising solution: it learns hidden themes from a large set of unorganized documents. Different approaches are available for finding topics, such as Latent Dirichlet Allocation (LDA), neural networks, Latent Semantic Analysis (LSA), probabilistic LSA (pLSA) and probabilistic LDA (pLDA). In standard topic models the inferred topics are based only on observed term occurrence, yet the terms may not be semantically related in a manner relevant to the topic; understanding the semantics can yield improved topics for representing the documents. The objective of this paper is to develop a semantically oriented probabilistic-model-based approach for generating topic representations from a document collection. From the modified topic model, we generate two matrices: a document-topic matrix and a term-topic matrix. The reduced document-term matrix derived from these two matrices has 85% similarity with the original document-term matrix, i.e., we get 85% similarity between the original document collection and the documents reconstructed from the two matrices. Also, an SVM classifier applied to the document-topic matrix appended with the class label shows an 80% improvement in F-measure score. The paper also uses the perplexity metric to determine the number of topics for a test set.
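The reconstruction evaluation described above can be sketched as follows, measuring how closely the product of the document-topic and topic-term matrices matches the original document-term matrix. Mean per-document cosine similarity is one plausible reading of the paper's 85% figure, not necessarily the authors' exact metric:

```python
import numpy as np

def reconstruction_similarity(D, W, H):
    """Given a document-term matrix D, a document-topic matrix W and
    a topic-term matrix H, score the reconstruction W @ H against D
    by the mean row-wise (per-document) cosine similarity."""
    R = W @ H
    num = (D * R).sum(axis=1)
    den = np.linalg.norm(D, axis=1) * np.linalg.norm(R, axis=1) + 1e-12
    return float(np.mean(num / den))
```

An exact factorization scores 1.0; the gap below 1.0 measures how much of the original collection the topic representation fails to capture.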

13:00 Hierarchical Visualization of Sport Events Using Twitter
Bushra Siddique and Nadeem Akhtar (Aligarh Muslim University, India)

In the recent past, microblogging services like Twitter have gained immense popularity. The vast breadth of the user base generates information on diverse topics, ranging from product launches to sports matches. However, due to the exponentially increasing number of participants on the Twitter platform, the volume of content generated is tremendously high. In this paper we address Twitter's information overload problem and present a framework for event detection with hierarchical visualization, specifically for sports events. We propose a novel Event Tree algorithm which detects and generates a hierarchy of events through recursive hierarchical clustering. The different levels of the hierarchy represent the events at different granularities of time and thus offer dual advantages: the hierarchy caters to users with varying levels of interest in a particular sports event, and users may obtain finer details for specific segments of the event as per their interest. We test and report results of our framework on a cricket match dataset from the Indian Premier League Twenty20 2016 season.

13:15 Gesture Triggered, Dynamic Gaze Alignment Architecture for Intelligent eLearning Systems
Ramkumar N (Amrita Vishwa Vidyapeetham, India); Venkat Rangan (Amrita University, India); Uma G (Amrita Vishwa Vidyapeetham, India); Balaji Hariharan (Amrita University, India)

Current eLearning systems enable the streaming of live lectures to distant students, facilitating live instructor-student interaction with ease. A key factor vital to enhancing the naturalness of these interactions is eye contact, yet most live-eLearning solutions today do not facilitate the perception of eye contact between interacting participants. In this paper, we present the architecture of a system that receives gesture triggers and dynamically adapts the perspective of interacting participants to facilitate eye contact. The system was evaluated with a three-classroom test bed, and the results indicate a marked enhancement in the perceived feeling of immersion during interaction, with a 42% improvement in the feeling of presence when gaze correction was employed.

13:30 Aarya - A Kinesthetic Companion for Children with Autism Spectrum Disorder
Rachita Sreedasyam, Aishwarya Rao and Nidhi Sachidanandan (Amrita University, India); Nalini Sampath (Amrita Vishwa Vidyapeetham, India); Shriram K Vasudevan (Amrita University, India)

Autism Spectrum Disorder (ASD) is defined as a condition or disorder that begins in childhood and causes problems in forming relationships and communicating with other people [10]. Aarya works as a personal well-being companion for children with ASD while they interact with a gesture-based virtual environment. By having a child with ASD face real-world situations, we aim to improve his/her confidence in facing the world and openness to learning various skills. Social interaction and communication are the major challenges faced by children with ASD, so Aarya uses a gesture-based interface, the Microsoft Kinect, to make it easier for the child to interact in a real-world-like environment. Through the interactions with the children and the results obtained, we find that this tool can act as a companion while giving a chance for growth and improving their interactive ability. With further refinement and expert input, the proposed tool can be made even more useful.

13:45 Evaluation of Sequential Adaptive Testing with Real-Data Simulation: A Case Study
El-Sayed M. El-Alfy (King Fahd University of Petroleum and Minerals, Saudi Arabia)

Computer-based testing systems take advantage of the interaction between computers and individuals to sequentially adapt the presented test items to the test-taker's ability estimate. Administering such sequential adaptive tests has many benefits, including personalized tests, accurate measurement, item security and substantial cost reduction. However, the design of such intelligent tests is a complex process, and it is important to explore the impact of various parameters and options on performance before switching from traditional tests in a particular environment. Although Monte Carlo simulation is a typical tool for this purpose, it depends on generating pseudo-random samples, which may fail to effectively represent the environment under study, so incorrect inferences can be drawn. This paper presents a comprehensive case study to evaluate and compare the performance of a number of sequential adaptive testing procedures using post-hoc simulation, where items of a real conventional test are re-administered adaptively. The comparisons are based on the number of administered items, the standard error of measurement, item exposure rates, and the correlation between adaptive and non-adaptive estimates. It is found that the results vary based on the settings; however, Bayesian estimation with adaptive item selection can lead to greater savings in the number of test items without jeopardizing the estimated ability, and it also has the lowest average exposure rate for each item.

14:00 CCFRS - Community-based Collaborative Filtering Recommender System
Chhavi Sharma and Punam Bedi (University of Delhi, India)

With the enormous growth in the volume of online data, users are flooded with a gigantic amount of information, which has made the task of Recommender Systems (RSs) even more engrossing. Research in RSs has been revolving around newer concepts like social factors, the context of the user, and the groups they belong to. This paper presents the design and development of a Community-based Collaborative Filtering Recommender System (CCFRS). The Louvain method of community detection is used to discover the communities in the dataset. The proposed method of generating recommendations is based on an Item Frequency-Inverse Community Frequency (IF-ICF) score for each item in the target user's community. The IF score helps find the set of items that are unique to a particular community, based on the frequently rated items in the user's community; the ICF is inversely proportional to the number of communities in which an item has been rated and captures the uniqueness of the item across communities. The IF-ICF scores are further employed to compute prediction scores for items unseen by the user, in order to present a set of top-n recommendations. A prototype of the system is developed using Java, and experimental analysis has been carried out for the domain of books.
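By analogy with TF-IDF, the IF-ICF score can be sketched as follows (one plausible reading of the description above; the paper's exact weighting may differ, and the data layout is illustrative):

```python
import math
from collections import defaultdict

def if_icf_scores(community_ratings, target_community):
    """IF-ICF by analogy with TF-IDF: IF is how frequently an item
    is rated inside the target community; ICF is
    log(#communities / #communities in which the item is rated),
    so community-unique items score higher.

    community_ratings: dict mapping community id -> list of rated
    item ids (one entry per rating event)."""
    n_comm = len(community_ratings)
    comm_count = defaultdict(int)           # in how many communities is each item rated
    for ratings in community_ratings.values():
        for item in set(ratings):
            comm_count[item] += 1
    ratings = community_ratings[target_community]
    total = len(ratings)
    freq = defaultdict(int)                 # rating frequency inside the target community
    for item in ratings:
        freq[item] += 1
    return {item: (freq[item] / total) * math.log(n_comm / comm_count[item])
            for item in freq}
```

Items rated often inside the target community but in few other communities get the highest scores, and these scores then drive the prediction of unseen items for the community's members.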

14:15 Developing Content-Based Recommender System Using Hadoop Map Reduce
Anjali Gautam and Punam Bedi (University of Delhi, India)

The proliferation of information is a major challenge faced by the e-commerce industry; Recommender Systems (RS) were introduced to shield customers from this overload. To improve the computational time of an RS on large-scale data, the recommendation process can be implemented on a scalable, fault-tolerant, distributed processing framework. This paper proposes a Content-Based RS implemented on the scalable, fault-tolerant and distributed framework of Hadoop MapReduce. To generate recommendations with improved computational time, the proposed Map Reduce Content-Based Recommendation (MRCBR) technique is implemented using Hadoop MapReduce and follows the traditional process of content-based recommendation. MRCBR comprises user profiling and document feature extraction using the vector space model, followed by computing similarity, to generate recommendations for the target user. The recommendations generated for the target user are a set of top-N documents. The proposed technique is executed on a Hadoop cluster and is tested on a news dataset; news items are collected using RSS feeds and stored in MongoDB. The computational time of MRCBR is evaluated with a speedup factor, and performance is evaluated with the standard metrics of precision, recall and F-measure.
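The vector-space-model core of such a content-based recommender (without the MapReduce distribution) can be sketched as TF-IDF vectors plus cosine similarity (a single-machine sketch; function names are illustrative, not MRCBR's API):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """TF-IDF vectors for a list of token lists: the vector space
    model used for both document features and user profiles."""
    n = len(docs)
    df = Counter()
    for d in docs:
        df.update(set(d))                  # document frequency per term
    vecs = []
    for d in docs:
        tf = Counter(d)
        vecs.append({t: (c / len(d)) * math.log((1 + n) / (1 + df[t]))
                     for t, c in tf.items()})
    return vecs

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def top_n(profile_vec, doc_vecs, n=3):
    """Rank documents by similarity to the user profile; return the
    indices of the top-N documents."""
    ranked = sorted(range(len(doc_vecs)),
                    key=lambda i: cosine(profile_vec, doc_vecs[i]),
                    reverse=True)
    return ranked[:n]
```

In the MapReduce version, the map phase would compute per-document term statistics and the reduce phase would aggregate document frequencies and similarity scores, which is where the reported speedup comes from.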

14:30 Web Page Recommendation System Based on Partially Ordered Sequential Rules
Harpreet Singh (DAV University Jalandhar, India); Manpreet Kaur (D. A. V University, India); Parminder Kaur (Guru Nanak Dev University, India)

As the size of websites continues to grow, current research focuses on the development of intelligent websites that facilitate browsing by providing a navigation aid to website users. Web page recommendation systems suggest webpages that may be of interest to users by evaluating the collective navigation behavior of previous website users. The main motive of this study is to explore the use of partially ordered sequential association rules in making future predictions for website users. Sequential rules capture associations between events that occur in a particular sequence. In this paper, two sequential rule mining algorithms, namely TRuleGrowth and CMRules, have been used separately to generate sequential rules. The sequential rules were then used to predict users' future interests in webpages. Experimental results on a real-life dataset reveal that the rules generated by the TRuleGrowth method make more accurate predictions than those generated by the CMRules technique.

14:45 Instability, Chaos and Bifurcation Control in Nonlinear Dynamical System Behavior using Perturb-Boost Fuzzy Logic Controller
Sunil Nangrani (G. H. Raisoni College of Engineering Nagpur, India); Sunil Bhat (VNIT, Nagpur, India)

This paper presents a novel Perturb-Boost Fuzzy Logic Controller for instability control of nonlinear dynamical systems. Several applications can make use of the small-perturbation technique discussed in the paper, including industrial control, nonlinear mechanical systems, electrical engineering and systems governed by nonlinear differential equations. This paper presents the power system as an application for the proposed intelligent technology. The power system is an electro-mechanical nonlinear dynamical system described by coupled differential equations in electrical and mechanical parameters. Power systems face problems of voltage instability and chaos. Voltage instability exists in almost every power system for specific sets of mechanical power, electrical loading and initial conditions. It can be controlled by injecting a small amount of reactive power using a power-electronic device, the Static Volt Ampere Reactive Compensator (SVC). The amount of reactive power to be injected varies with the type and size of the power system. To avoid voltage instability, reactive power in the power system needs to be boosted so that it perturbs the system equations toward a stable operating point. The Perturb-Boost Fuzzy Logic Controller differs from conventional controllers in its single-shot boost action, which perturbs the system dynamics so as to push it into safe zones of voltage stability. This paper analyzes the performance of the proposed controller in controlling voltage instability for a generalized three-node power system benchmark model. Mitigation of voltage collapse is discussed in view of simulation results using the proposed novel controller.

15:00 Learning Automata Based Fuzzy MPPT Controller for Solar Photovoltaic System Under Fast Changing Environmental Conditions
Sheik Mohammed S (TKM College of Engineering, India); Durairaj Devaraj (Kalasalingam University, India); Imthias Ahamed, Parambath (T K M College of Engineering, India)

Optimization of a fuzzy Maximum Power Point Tracking (MPPT) controller using a Learning Automata (LA) algorithm is proposed in this paper. The optimal duty cycle of the DC-DC converter circuit is obtained using LA for various environmental conditions through a learning process. The fuzzy MPPT controller is developed using the information collected by the LA during learning. The proposed model is developed and tested in MATLAB for standard test conditions of PV: constant temperature with varying irradiation level, constant irradiation with varying temperature level, and varying temperature with varying irradiation level. The results obtained using the proposed fuzzy MPPT are compared with the conventional Perturb and Observe (P&O) MPPT and a variable-step-size fuzzy MPPT based PV system. An experimental setup is developed and tests are conducted under different conditions for the solar PV system with P&O MPPT and the proposed LA fuzzy MPPT. The results show that the proposed LA-based fuzzy MPPT method is more accurate and its tracking response is faster.
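
For reference, the Perturb and Observe baseline that the LA-fuzzy controller is compared against can be sketched in a few lines. This is a generic textbook formulation on a toy power curve, not the authors' implementation:

```python
def perturb_and_observe(v, p, v_prev, p_prev, v_ref, step=0.5):
    """One P&O iteration: if the last perturbation of the operating voltage
    raised the PV output power (dP*dV > 0), keep perturbing in the same
    direction; otherwise reverse. The fixed `step` embodies the trade-off
    (tracking speed vs. steady-state oscillation) that adaptive MPPT targets."""
    dP, dV = p - p_prev, v - v_prev
    if dP == 0:
        return v_ref
    return v_ref + step if (dP > 0) == (dV > 0) else v_ref - step

# Toy concave power curve with its maximum power point at 17 V.
def pv_power(v):
    return 100.0 - (v - 17.0) ** 2

v_ref, v_prev, p_prev = 10.0, 9.5, pv_power(9.5)
for _ in range(100):
    v, p = v_ref, pv_power(v_ref)
    v_ref = perturb_and_observe(v, p, v_prev, p_prev, v_ref)
    v_prev, p_prev = v, p
# v_ref now oscillates around the maximum power point at 17 V.
```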

15:15 EMD and ANN Based Intelligent Fault Diagnosis Model for Transmission Line
Hasmat Malik (BEARS, University Town, NUS Campus Singapore, Singapore & NSIT Delhi, India); Rajneesh Sharma (NSIT, India)

In the presented work, an intelligent model for fault classification of a transmission line is proposed. Nine different types of faults, along with the healthy condition, are simulated on a MATLAB-based transmission line system. Post-fault current signatures are used for feature extraction. The Empirical Mode Decomposition (EMD) method is used to decompose the post-fault current signals into Intrinsic Mode Functions (IMFs), which are used as input variables in an artificial neural network (ANN) based intelligent fault classification model. The Relief attribute evaluator with Ranker search method is used to select the most relevant input variables for fault classification of the three-phase transmission line. The proposed approach, using the selected most relevant input variables, gives better results than other combinations. The feature selection technique used in the presented work is applied here for the first time and is very important for fault classification.

15:30 A Hybrid Wavelet-ANN Protection Scheme for Series Compensated EHV Transmission Line
Pranav Raval and Asit Pandya (GTU, India)

The paper presents a novel idea for protection of a multi-terminal Extra High Voltage (EHV) transmission line having multiple series compensation. A statistical learning perspective for improved classification of faults using Artificial Neural Networks (ANN) is proposed. The protective scheme uses single-end current data of the three phases of the line to detect and classify faults. A Multi-Resolution Analysis (MRA) wavelet transform is employed to decompose the acquired signals, which are further processed to extract statistical features. The statistical feature learning algorithm utilizes a set of ANN structures with different combinations of neural network parameters to determine the best ANN topology for the classifier. The algorithm generates different fault patterns by simulating different fault scenarios and altering system parameters in the test system. Features are selected based on ANOVA F-test statistics to determine relevance and improve classification accuracy. The features thus selected from the fault patterns are given to the hybrid Wavelet-ANN structure. The ANN, once trained on part of the data set, is later tested on another part of unseen patterns and further validated on the rest of the patterns. To provide a comparison, a Support Vector Machine (SVM) classifier is used to classify the fault patterns, with 5-fold cross-validation used to check its accuracy. It is shown that the proposed pattern-recognition method using the hybrid structure identifies and classifies fault patterns with higher accuracy and reliability than the SVM.

15:45 Temperature Mapping of a Rotary Kiln Using Fuzzy Logic
Sreedhanya L. R. and Abi Varghese (University of Kerala, India); Madhu S Nair (Cochin University of Science and Technology, India); Wilsy M (University of Kerala, India)

Based on flame image processing technology, a fuzzy based temperature monitoring system for a rotary kiln is reported. In this paper, we propose a fuzzy flame analysis, which considers the Red, Green and Blue intensity planes, to measure the temperature from the flame image. The proposed approach integrates the RGB intensities as fuzzified input variables, temperature as the defuzzified output variable, and fuzzy inference rules based on the Mamdani model. Based on the color characteristics of the burning flame, the temperatures of different flame zones are located using a fuzzy logic controller. The temperature at the hotspot area is the highest, and through the fuzzy analysis we were able to identify the hotspot area from the flame image. To evaluate the performance of the proposed method, a quantitative metric, the F-measure, was used, and it was found to yield high accuracy for the hotspot area. Visual inspection of the results, along with the F-measure values, showed the superiority of our work. Experimental results indicate that the proposed approach can be applied to high-resolution flame video.

16:00 IoE-MPP: A Mobile Portal Platform for Internet of Everything
Xin Chen and Pengfei Yang (Beijing Information Science and Technology University, China); Tie Qiu (Tianjin University, China); Hao Yin (Tsinghua University, China); Jianwei Ji (Beijing Information Science and Technology University, China)

With the development of network technology, Internet portals, such as the Sohu and Sina portal websites, the Google and Baidu search engines, as well as WeChat, Weibo and other social networks, are constantly changing. However, such portals cannot meet the requirements of Internet of Everything (IoE) communication between people and things or between things themselves. Inspired by the ideas of containers, Mobile Cross-platform Application Development Frameworks (MCADF) and Platform as a Service (PaaS) technologies, we designed a "Cloud + Container" portal platform for the IoE. The cloud provides application development, testing, deployment, operation, management and other functions, and is responsible for data storage, management and analysis. The user container terminal carries and manages large numbers of applications and IoE data, and provides the user access entry. In this paper, we first introduce the development background of the IoE and related problems, then give the detailed design of the platform, and evaluate its performance compared with other platforms.

16:15 Dynamic Mobile Cloud Offloading Prediction Based on Statistical Regression
Dhanya NM (Amrita University, India); Kousalya Govardhanan (Anna University & Coimbatore Institute of Technology, India); Balakrishnan P (SASTRA University, Thanjavur, Tamilnadu, India)

Due to the advancement of mobile technology, a large number of heavy applications are created for smartphones. But the battery and processing limitations of smartphones still make them inferior to their desktop counterparts. Mobile Cloud Offloading (MCO) allows smartphones to offload computationally intensive tasks to the cloud, making execution more effective in terms of energy, speed or both. Increased networking capacity due to high-speed Wi-Fi and cellular connections like 3G/4G makes offloading more efficient. Still, offloading is not always advisable, because of the highly dynamic context information of mobile devices and clouds. In this paper we propose a dynamic decision-making system which decides whether to offload or execute tasks locally, depending on the current context information and the optimization choice of the user. We developed metrics for time, energy and a combination of time and energy to assess the system. A test bed was implemented, and the results show considerable improvement over currently existing methods.

16:30 The Use of Internet of Things and Web 2.0 in Inventory Management
Sizakele Mathaba (Researcher, South Africa); Olukayode Oki (Walter Sisulu University, South Africa); John B. Oladosu (Ladoke Akintola University of Technology (LAUTECH), Nigeria); Matthew Adigun (University of Zululand)

Radio Frequency Identification (RFID) uses sensors to enable communication among things or objects, in what is called Internet of Things (IoT) technology. Web 2.0 tools, on the other hand, are used on electronic devices (phones, PDAs, computers, etc.) to transmit data contents over the internet. In this study, we use a synergy of these two technologies to improve inventory management. We propose a software architecture that fully integrates the advantages of RFID and Web 2.0 tools. The proposed architecture was used to develop an inventory management software prototype focused on enterprises in developing countries of Africa, specifically South Africa. The prototype is able to detect misplaced products, detect low stock levels and send notifications on Twitter to update the inventory manager's mobile phone. Scalability measurements were taken in order to validate the performance of the software prototype. Results show that the system scaled reliably with an increasing number of items read. The contribution of this work is compared to existing work, and our findings are presented in this paper. Real-life evaluation for a specific industry will be necessary to further reveal what improvements would be required to make this architecture more relevant. A behavioural study of users will also be required to further determine the economic and social benefits of this approach.

16:45 Dynamic Spectrum Reconfiguration for Distributed Cognitive Radio Networks
Olukayode Oki (Walter Sisulu University, South Africa); Thomas Otieno Olwal (Tshwane University of Technology, South Africa); Pragasen Mudali (University of Zululand, South Africa); Matthew Adigun (YES, South Africa)

Spectrum decision is the capability of Secondary Users to choose the best available spectrum band to satisfy a user's Quality of Service (QoS) requirements. Spectrum decision comprises three primary functions: spectrum characterization, spectrum selection and dynamic reconfiguration of the cognitive radio. The study of dynamic reconfiguration of transceiver parameters in spectrum decision making is motivated by its importance to the realization of efficient spectrum utilization and management in distributed mobile cognitive radio networks. Spectrum decision making in distributed cognitive radio networks is crucial to ensure that an appropriate frequency and channel bandwidth are selected to meet the QoS requirements of different kinds of applications and to maintain spectrum quality. Different approaches to dynamic reconfiguration of transceiver parameters in decision making for cognitive radio networks can be found in the literature. However, due to challenges associated with these classical approaches, such as high computational complexity, ambiguity, repeatability and limited applicability, researchers continue to explore techniques that are less ambiguous, more efficient, more understandable and easier to deploy in a highly dynamic environment like distributed cognitive radio networks. Hence, this paper reviews the existing approaches, identifies the challenges and proposes a biologically inspired optimal foraging approach to address the decision-making problem and other problems related to the existing approaches.

17:00 Hyper-Geometric Energy Factor Based Semi-Markov Prediction Mechanism for Cluster Head Election
Rajarajeswari Palaniappan (Anna University Chennai & Sri Krishna College of Technology, India)

The lifetime of a wireless sensor network depends on the effectiveness of the adopted cluster head selection based clustering technique, which addresses most of the significant issues related to network management. Energy-factor-based cluster head selection mechanisms are generally implemented by considering the participating sensor nodes as trustworthy. Conversely, trust-based cluster head selection schemes assume that the sensor nodes are energy efficient. But these assumptions of energy factor or trust assessment may not hold, and the previous or present energy availability of the sensor nodes may not identify an effective cluster head. Hence, this paper presents a hybrid integrated energy and trust assessment based forecasting model, the Hyper-geometric Energy Factor based Semi-Markov Prediction Mechanism (HEFSPM), for effective election of cluster heads in order to improve the lifetime of the network. Simulation results infer that HEFSPM improves the lifetime of the network by up to 22% over the existing cluster head election mechanisms considered for investigation.

17:15 Vedic Multiplication Based Efficient OFDM FFT Processor
Bhawna Kalra Ji (Rajasthan Technical University, India); Janki Sharma (Rajasthan Technical University, India)

In multi-carrier OFDM systems, parameters like speed, throughput and area can be improved by using an efficient Fast Fourier Transform (FFT) approach. In this paper, an area-efficient and high-speed 32-bit floating-point FFT processor for OFDM, using a Vedic multiplication process, is presented. The proposed FFT processor is based on a memory-based architecture and utilizes the Urdhva Tiryakbhyam sutra for Vedic multiplication. As the number of built-in multipliers available in FPGAs is limited, external multiplication modules are required in multi-carrier OFDM systems in order to reduce the complexity of FPGA implementation. By using the Vedic multiplication process in the FFT of OFDM, high throughput can be achieved in a smaller area. Simulation results show that the proposed scheme achieves high speed and throughput.
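
The Urdhva Tiryakbhyam ("vertically and crosswise") sutra forms each product column from the cross products of digits whose positions sum to that column index, which is why all columns can be computed in parallel in hardware. A minimal software sketch of the idea (illustrative only; the processor operates on binary floating-point operands):

```python
def urdhva_multiply(a, b):
    """Vertically-and-crosswise multiplication on decimal digit lists:
    column k of the product collects the partial products x[i]*y[j]
    with i + j == k, then carries are resolved column by column."""
    x = [int(d) for d in str(a)][::-1]   # least-significant digit first
    y = [int(d) for d in str(b)][::-1]
    cols = [0] * (len(x) + len(y) - 1)
    for i, dx in enumerate(x):
        for j, dy in enumerate(y):
            cols[i + j] += dx * dy       # crosswise partial products
    result, carry = 0, 0
    for k, c in enumerate(cols):         # carry propagation
        carry, digit = divmod(c + carry, 10)
        result += digit * 10 ** k
    result += carry * 10 ** len(cols)
    return result
```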

17:30 Intelligent Stress Calculation and Scheduling in Segmented Processor Systems Using Buddy Approach
Rohit Kumar (Panjab University, Chandigarh, India); Lokesh Pawar (Chandigarh University, Mohali, India); Rohit Bajaj (BRCM College of Engineering & Technology Bahal, India); Amit Kumar Manocha (Maharaja Ranjit Punjab Technical University, India)

Parallel processing has been widely studied, used and implemented in computational systems. Many different topologies of processors have been implemented and their performance analyzed. Processor technology keeps evolving, so its computational capability must be utilized accordingly when employed in parallel systems. In this article, new parallel processor architectures are used and a co-operative protocol is implemented to optimally utilize the parallel components of the parallel processor design. More precisely, a friendship-based (buddy) load balancing strategy is designed and implemented to maximally utilize the parallel processor while handling overloading and starvation problems. Firm and steady process assignment among processors is a demanding effort, as this kind of assignment necessitates advance, a priori information regarding the present processor load; only then can the allotments be made.

17:45 A Fuzzy Fusion Approach to Enlighten the Illuminated Regions of Night Surveillance Videos
Soumya T (College of Engineering Perumon, India)

Night video fusion algorithms integrate the visuals captured by a security surveillance camera, which in turn improves visual perception. Recent developments in night fusion research have focused on fusing both illuminated and non-illuminated areas simultaneously; however, the natural color of the light areas may be lost. Moreover, the contrast of the illuminated regions decreases because of the dark pixels surrounding those regions. Hence, color and contrast should be improved to obtain the actual color of the illuminated regions. We propose a fuzzy inference system based wavelet fusion to enhance the light regions of a non-uniformly illuminated night video surveillance system. In order to include spatial and temporal variations of the illuminated regions, a spatio-temporal illumination approach is used. A contribution index of the illuminated regions is generated using a fuzzy membership function. Subsequently, stationary wavelets are used to decompose the high- and low-frequency coefficients of both the night frame and the day background frame for frame fusion. The contribution index selects the illuminated regions present in these wavelet coefficients for fusion. Finally, the inverse wavelet transform is applied to reconstruct the illumination-enhanced frame. Experimental results demonstrate that the information content of the illuminated regions is improved compared to existing fusion methods. The proposed approach effectively highlights the illuminated regions and provides better visual perception.

18:00 Application of M-band Wavelet in Pan-sharpening
Vishnu Pradeep V (Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Amrita University, India); V Sowmya (Amrita Vishwavidyapeetham, India); Soman K P (Amrita Vishwa Vidyapeetham, India)

Remote sensing satellites are proficient at taking earth images across various regions in the visible part of the electromagnetic spectrum. The images can be a panchromatic image of a single band, a multispectral image of three to seven different bands, or a hyperspectral image taken from about 220 contiguous spectral bands. These images are used together or on their own, depending on the significance and usage of the preferred application. Pan-sharpening is a method used to improve the quality of a low-resolution multispectral image by fusing it with a high-resolution panchromatic image. This paper proposes a method based on M-band wavelets for the pan-sharpening of a low-resolution multispectral image. The method improves the spatial characteristics while preserving the spectral quality of the data. The proposed technique uses weighted fusion and average fusion rules. The data used for the experiment were acquired by high-resolution optical imagers onboard QuickBird, WorldView-3, WorldView-2 and GeoEye-1. A comparison with existing fusion techniques is made based on image quality metrics and visual interpretation. The experimental results and analysis suggest that the proposed pan-sharpening technique outperforms pre-existing pan-sharpening methods.

18:15 Traffic State Detection Using Smartphone Based Acoustic Sensing
Arshvir Kaur, Nitakshi Sood, Naveen Aggarwal, Dinesh Vij and Bhavdeep Sachdeva (Panjab University, India)

Traffic congestion occurs when the demand from vehicles exceeds the existing capacity of the road. This deleterious problem is increasing at an alarming rate across the world. For any effective Intelligent Transportation System, early detection of traffic congestion is very important for taking corrective action. Several techniques have been developed to detect traffic congestion, most of which are infrastructure based. Even though these techniques are widely used, they have many downsides as well: they require large capital input for installation as well as for maintenance. In this paper, we propose an efficient and cost-effective method using smartphones to determine the traffic state of the road. The acoustic data collected from a commuter's smartphone is segmented into fixed-size frames. Various time- and frequency-based features, such as MFCC, Delta & Delta-Delta, ZCR, STE and RMS, are extracted from each frame and used for classifying the traffic state as 'busy street' or 'quiet street'. We have compared the accuracy of two classifiers, Support Vector Machines and Neural Networks, using acoustic data collected from 320 different recording sessions. Experiments show that a feature set comprising MFCC, STE and RMS results in better classification accuracy: 91.8% with the Neural Network and 93% with the SVM. Furthermore, various relevant factors affecting classification accuracy are also tested, such as frame size, window function, overlap size and different combinations of features. A frame size of 8192 and the Hamming window function proved more efficient than the alternatives.
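
Three of the time-domain features named above (ZCR, STE and RMS) are simple per-frame statistics; a sketch on a plain list of samples (MFCCs, which need a mel filterbank and DCT, are omitted):

```python
import math

def frame_features(frame):
    """Zero-crossing rate, short-time energy and RMS for one audio frame."""
    n = len(frame)
    # ZCR: fraction of adjacent sample pairs whose signs differ.
    zcr = sum((a >= 0) != (b >= 0) for a, b in zip(frame, frame[1:])) / (n - 1)
    # STE: mean squared amplitude of the frame.
    ste = sum(x * x for x in frame) / n
    # RMS: root of the mean squared amplitude.
    rms = math.sqrt(ste)
    return zcr, ste, rms
```

Noisy "busy street" frames tend to show higher energy, which is what makes these cheap features discriminative.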

18:30 High Performance Self Tuning Adaptive Filter Algorithm for Noise Cancellation in Speech
Mugdha M Dewasthale (Savitribai Phule Pune University, India)

The Least Mean Square (LMS) and Normalized Least Mean Square (NLMS) algorithms are very popular and frequently used for noise cancellation in speech. But selecting the step size for the weight update of the adaptive filter is a key issue in the LMS and NLMS algorithms. To meet the conflicting requirements of quick convergence and low MSE, the step size needs to be correctly controlled. Along with the step size, the length of the adaptive filter also plays a major role in effective noise cancellation. These two factors greatly affect the performance of Adaptive Noise Cancellation (ANC). To get the best possible solution, a variety of trials over filter length and step size are required. The main motivation behind the proposed High Performance Self Tuning (HPST) adaptive filter algorithm is to determine the step size adaptively. The length of the adaptive filter is selected based on the distance between the two microphones in the ANC system. The proposed algorithm works very well, as shown in experiments carried out on the NOIZEUS speech corpus as well as on actually recorded noisy speech signals. Results indicate that the proposed algorithm is superior to the reference algorithms in terms of Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), convergence time and complexity.
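
The normalization that distinguishes NLMS from LMS divides the step by the instantaneous input power, one way of easing the step-size choice discussed above. A minimal sketch of the generic textbook NLMS update (not the proposed HPST algorithm):

```python
def nlms_step(w, x, d, mu=0.5, eps=1e-8):
    """One NLMS update: w <- w + mu * e * x / (||x||^2 + eps).
    w: filter weights, x: input tap vector, d: desired sample."""
    y = sum(wi * xi for wi, xi in zip(w, x))   # filter output
    e = d - y                                  # error signal
    norm = sum(xi * xi for xi in x) + eps      # input power, regularized
    return [wi + mu * e * xi / norm for wi, xi in zip(w, x)], e
```

Driving it with the output of a known two-tap system shows the weights converging toward that system regardless of input scale, which is exactly what the normalization buys.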

18:45 Digital Image Forgery Detection using Compact Multi-texture Representation
Divya S Vidyadharan (AugSenseLab, India)

Detecting forged digital images has been an active research area in recent times. Tampering introduces artifacts that differentiate tampered images from authentic images. Forgery detection techniques try to identify these artifacts by analyzing differences in the texture properties of the image. In this paper, we propose a multi-texture-description-based method to detect tampering. The texture descriptors considered are Local Binary Pattern, Local Phase Quantization, Binarized Statistical Image Features and Binary Gabor Pattern. The method captures subtle texture variations at different scales and orientations using a Steerable Pyramid Transform (SPT) decomposition of the image. The texture descriptors extracted from each subband image after SPT decomposition are combined to form the multi-texture representation. Then, the ReliefF feature selection method is applied to this high-dimensional multi-texture representation to generate a compact representation, which is classified using a Random Forest classifier. We have evaluated the performance of individual texture descriptors and of multiple textures in detecting image forgery. Experimental results show that the compact multi-texture description improves detection accuracy.

19:00 A Novel Harmony Search Algorithm Embedded with Metaheuristic Opposition Based Learning
Ritesh Sarkhel (Ohio State University, USA); Tithi Mitra Chowdhury, Mayuk Das, Nibaran Das and Mita Nasipuri (Jadavpur University, India)

Evolutionary Algorithms (EAs) are effective and robust optimization approaches which have been successfully applied to a wide range of non-linear and complex problems. However, these well-established metaheuristic strategies suffer from being computationally expensive because of their slow convergence rate. Opposition Based Learning (OBL) theory opened up a new frontier in the field of machine intelligence by alleviating this problem to some extent. Through simultaneous consideration of estimates and their respective counter-estimates of a candidate solution within a definite search space, a far better approximation of the current candidate solution can be achieved. However, although OBL theory addresses the problem to some extent, it is far from eliminating the slow convergence rate of EAs completely. The present work proposes a novel approach for improving the performance of OBL theory by allowing the exploration of a larger search space when computing the candidate solution. Instead of considering all the components of the candidate solution simultaneously, the proposed method considers each component individually and attempts to find the best possible combination among them while computing the candidate solution. In the present work, this improved opposition learning theory has been integrated with the classical Harmony Search (HS) algorithm, a music-inspired EA, in an attempt to accelerate its convergence rate. A comparative analysis of the proposed method against classical Opposition Based Learning has been performed on a comprehensive set of benchmark functions to demonstrate its superior performance.
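
Classical OBL reflects an entire candidate about the centre of the search box, whereas the refinement described above chooses per component. A hedged sketch of both, using a greedy per-dimension reading of the idea rather than the authors' exact procedure:

```python
def opposite(x, lo, hi):
    """Classical OBL opposite point: each component is reflected about
    the centre of its search interval [lo_i, hi_i]."""
    return [l + h - xi for xi, l, h in zip(x, lo, hi)]

def component_wise_opposition(x, lo, hi, f):
    """Per-component variant: try each component's opposite individually
    and keep whichever choice lowers the objective f, so the result can
    mix original and opposite components."""
    best = list(x)
    for i, (l, h) in enumerate(zip(lo, hi)):
        trial = list(best)
        trial[i] = l + h - best[i]
        if f(trial) < f(best):
            best = trial
    return best
```

On an asymmetric box the per-component version can beat both the original point and its full opposite, illustrating the larger effective search space.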

19:15 Application of Firefly Algorithm to Uncapacitated Facility Location Problem
Kohei Tsuya, Mayumi Takaya and Akihiro Yamamura (Akita University, Japan)

We apply the firefly algorithm to the uncapacitated facility location problem. We examine the light absorption coefficient parameter $\gamma$ of the firefly algorithm for better performance and explore suitable values of $\gamma$ for the uncapacitated facility location problem. We also implement the firefly algorithm with local search and discuss its effectiveness. In addition, we compare the FA endowed with local search with the ABC algorithm with respect to average relative percent error and hit-to-optimum rate.
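
The role of $\gamma$ examined above is visible directly in the firefly update rule, where attractiveness decays as $\beta_0 e^{-\gamma r^2}$. A continuous-space sketch of one move (the facility location problem itself needs a discrete encoding, which is omitted here):

```python
import math
import random

def move_firefly(xi, xj, gamma=1.0, beta0=1.0, alpha=0.0, rng=random):
    """Move firefly xi toward a brighter firefly xj. The attractiveness
    beta0*exp(-gamma*r^2) shrinks with the squared distance r^2 and with
    the light absorption coefficient gamma; alpha adds a random walk."""
    r2 = sum((a - b) ** 2 for a, b in zip(xi, xj))
    beta = beta0 * math.exp(-gamma * r2)
    return [a + beta * (b - a) + alpha * (rng.random() - 0.5)
            for a, b in zip(xi, xj)]
```

Small $\gamma$ lets distant fireflies pull strongly on each other (fast but coarse convergence); large $\gamma$ localizes the search, which is why the parameter is worth tuning per problem.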

19:30 Improvised Apriori with Frequent Subgraph Tree for Extracting Frequent Subgraphs
Jyothisha J Nair (Amrita Vishwa Vidyapeetham, India); Susanna Thomas (Amrita School of Engineering, Amritapuri, Amrita Vishwa Vidyapeetham, Amrita University, India)

Graphs are among the best studied data structures in discrete mathematics and computer science. Hence, data mining on graphs has become quite popular in the last couple of years. The problem of finding frequent itemsets in conventional data mining on transactional databases is thus transformed into the discovery of subgraphs that frequently occur in a graph dataset containing either a single graph or multiple graphs. Most of the existing algorithms in the field of frequent subgraph discovery adopt an Apriori-based candidate generation-and-test approach. The problem with this approach is costly candidate set generation, particularly when there are many large subgraphs. The research goals in frequent subgraph discovery are (i) mechanisms that can effectively generate candidate subgraphs excluding duplicates, and (ii) processing techniques that generate only the necessary candidate subgraphs in order to discover the useful and desired frequent subgraphs. In this paper, we propose a two-phase approach that integrates the Apriori algorithm on graphs with a frequent subgraph (FS) tree to discover frequent subgraphs in graph datasets.

19:45 An Innovative and Intelligent Earphones with Auto Pause Facility
Abhishek SN (24 7 Inc, India); Madhan C, Balakirithikaa RB and Shriram K Vasudevan (Amrita University, India)

Life is nothing less than hell without any entertainment in it. Thanks to mobile phones, we can entertain ourselves on the go. The mobile phones being launched nowadays come with super features that revolve around entertainment. Mobile manufacturers know very well that entertainment has become an indispensable part of human life in the current era, which is why mobile phones are now complete portable entertainment packages. The main source of portable entertainment is music. A very common but irritating problem faced by today's youngsters is missing a favorite beat, or pausing a song frequently while listening to music, just to talk or listen to someone. This may seem negligible, but it irritates most when it happens during their favorite part; they may go so far as to rewind the song, or even restart it from the beginning, for a single beat. So think about a system that stops playing when the earbuds are taken off and automatically continues when they are placed back. This sounds simple, but it is not as simple as it seems. It will change the whole experience of enjoying media, marking a new milestone in the entertainment world. This system will bring a new generation of media players that not only allow us to listen to our favorite music whenever we want, but also allow automatic play and stop without having to unlock our phones every now and then.

Wednesday, September 21

Wednesday, September 21 8:30 - 14:30 (Asia/Kolkata)

Registration
Room: Registration Desk (Academic area)

Wednesday, September 21 9:30 - 10:00 (Asia/Kolkata)

Conference Inauguration

Room: SAC(Sports complex)

Wednesday, September 21 10:00 - 10:30 (Asia/Kolkata)

Tea Break

Room: Lawns adjacent to SAC( Sports complex)

Wednesday, September 21 10:30 - 11:30 (Asia/Kolkata)

Keynote 1: Prof. Jun Wang, Chair Professor of Computational Intelligence, City University of Hong Kong, Hong Kong

Title of Talk: Collaborative Neurodynamic Optimization Approaches to Constrained Optimization
Prof. Jun Wang, Chair Professor of Computational Intelligence, City University of Hong Kong, Hong Kong
Room: SAC(Sports complex)

Title of Talk: Collaborative Neurodynamic Optimization Approaches to Constrained Optimization

The past three decades witnessed the birth and growth of neurodynamic optimization, which has emerged and matured as a powerful approach to real-time optimization due to its inherent nature of parallel and distributed information processing and its hardware realizability. Despite this success, almost all existing neurodynamic approaches work well only for convex and generalized-convex optimization problems with unimodal objective functions. Effective neurodynamic approaches to constrained global optimization with multimodal objective functions are rarely available. In this talk, starting with the idea and motivation of neurodynamic optimization, I will give a historical review and present the state of the art of neurodynamic optimization, with many individual models for convex and generalized-convex optimization. In addition, I will present a multiple-time-scale neurodynamic approach to selected constrained optimization problems. Finally, I will introduce population-based collaborative neurodynamic approaches to constrained distributed and global optimization. By deploying a population of individual neurodynamic models with diversified initial states at a lower level, coordinated by global search and information exchange rules (such as PSO and DE) at an upper level, it will be shown that many constrained global optimization problems can be solved effectively and efficiently.

Wednesday, September 21 11:30 - 12:30 (Asia/Kolkata)

Keynote 2: Prof. John N Daigle, University of Mississippi, USA

Title of the Talk: Fountain code-based protocols for improving Internet content delivery
Room: SAC(Sports complex)

Title of the Talk: Fountain code-based protocols for improving Internet content delivery

Abstract: A fountain code is a code that is capable of generating an essentially infinite number of encoding symbols by forming linear combinations of a finite number of source symbols. In order to transfer a block of data, the data is first partitioned into a number of source symbols, then encoding symbols are formed, and each encoding symbol is transferred over the network together with an identifier that reveals the structure of the encoding symbol. The result at the receiving end is the equivalent of a linear system of equations in which the source symbols are unknown. In principle, the linear system can be solved when a sufficient set of linearly independent equations, that is, encoding symbols, has been received. The utility of fountain codes for content transfer is, of course, heavily dependent upon how well the encoding and decoding processes are designed.
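As a toy illustration of this idea (not the RaptorQ construction discussed in the talk, which uses a far more refined degree distribution and decoder), a random linear fountain code over GF(2) can be sketched as follows; here an integer bit mask plays the role of the per-symbol identifier:

```python
import random

def encode(source, rng):
    """Emit one encoding symbol: (mask, value), where value is the XOR
    of the source symbols selected by a random nonzero mask."""
    k = len(source)
    mask = rng.randrange(1, 1 << k)
    value = 0
    for i in range(k):
        if mask >> i & 1:
            value ^= source[i]
    return mask, value

def decode(symbols, k):
    """Solve the received linear system over GF(2) by Gaussian
    elimination. Returns the k source symbols, or None if the
    received symbols do not yet have full rank."""
    basis = {}  # pivot bit index -> (mask, value), mutually reduced
    for mask, value in symbols:
        # Reduce the new equation against every existing pivot
        for bit, (m, v) in list(basis.items()):
            if mask >> bit & 1:
                mask ^= m
                value ^= v
        if not mask:
            continue  # linearly dependent: discard
        bit = (mask & -mask).bit_length() - 1  # fresh pivot column
        # Eliminate the new pivot from all existing equations
        for b, (m, v) in list(basis.items()):
            if m >> bit & 1:
                basis[b] = (m ^ mask, v ^ value)
        basis[bit] = (mask, value)
        if len(basis) == k:  # full rank: every row is now a unit vector
            break
    if len(basis) < k:
        return None
    return [basis[i][1] for i in range(k)]
```

Any k linearly independent received symbols suffice, regardless of which transmissions were lost along the way; this is what makes the scheme rateless.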

This presentation provides a brief introduction to RaptorQ codes, which are a specific class of fountain codes, and then discusses a number of areas where fountain coding has the potential to greatly simplify reliable data transfer and improve efficiency in the expanding Internet. In each of the areas, advantages of protocols based on fountain coding over traditional protocols will be discussed and the claims will be substantiated through the presentation of an example protocol. In some cases the advantages will be illustrated through the presentation of measurement results on real operating networks or test beds. Challenges in maximizing efficiencies will also be discussed.

Wednesday, September 21 12:30 - 13:20 (Asia/Kolkata)

Lunch Break

Room: LNMIIT Mess

Wednesday, September 21 13:25 - 14:25 (Asia/Kolkata)

Keynote 3: Dr. Peter Mueller, IBM Zurich Research Laboratory, Switzerland

Title of the Talk: Developments in Security for a Future with Quantum Computers
Room: SAC(Sports complex)

Title of the Talk: Developments in Security for a Future with Quantum Computers

Talk description: Thirty-five years ago, Richard Feynman thought up the idea of a 'Quantum Computer', which at that time was regarded as a topic of science fiction. But with advances in the science and technologies of computing, communications and informatics, the fiction is becoming reality. About twenty-five years ago, Peter Shor discovered his famous algorithm, which allows efficient factorization using basic operations on a quantum computer. Further developments revealed weaknesses in almost all commonly applied public key schemes, such as RSA, Elliptic Curve Cryptography (ECDSA), Finite Field Cryptography (DSA) and Diffie-Hellman key exchange. We will take a look into basic quantum computing hardware and compare its capabilities with traditional processors. Questions such as the application of quantum technology to protect information at a higher level, problems hard to solve for quantum computers, and its implementation on traditional processors will be addressed and compared with the related topics in our current areas of research.

Wednesday, September 21 15:00 - 17:00 (Asia/Kolkata)

Tutorial 1: Text Mining and Biomedical Text Data Mining: Entity, and Relation Extraction

Dr. Jeyakumar Natarajan, Data Mining and Text Mining Lab., Dept. of Bioinformatics, Bharathiar University, India
Room: Computer Lab 1 (Academic Area)
Chair: Karim Hashim Al-Saedi (Mustansiriyah University & College of Science, Iraq)

Title of the Tutorial: Text Mining and Biomedical Text Data Mining: Entity, and Relation Extraction

Dr. Jeyakumar Natarajan, Data Mining and Text Mining Lab., Dept. of Bioinformatics, Bharathiar University, India

Abstract: This tutorial is about text mining in general, and biomedical text data mining in particular, for extracting named entities and relations from natural language text. The discipline of text mining evolved for the automatic extraction of new knowledge from published literature. Text mining is defined as the utilization of automated methods to exploit the enormous amount of knowledge available in text documents. In the biomedical sciences, besides experimental data, there is a substantial amount of biomedical knowledge recorded only in the form of free text in research abstracts, full-text articles, clinical records, etc. Machine learning algorithms are commonly applied to text mining applications. Text mining of biomedical literature has been applied successfully to various biological problems such as biomedical named entity recognition (e.g. gene and protein names), entity relation extraction (e.g. protein-protein interactions, gene-disease relations) and event extraction (e.g. biomedical pathways and functions). The talk will introduce text mining basics and methodology, followed by various application areas in the biomedical domain.

Outline including a short summary of every section:

Introduction and overview of machine learning, text mining and biomedical text mining [45 minutes] This section first introduces machine learning and its application in general text mining. This is followed by the application of text mining to biomedical literature; biomedical literature resources such as PubMed, full-text research articles and clinical records will also be presented.

Text Mining and Biomedical Text Data Mining and Component Tasks [45 minutes] This section presents the various component tasks of text mining, which include i) named entity recognition, ii) co-reference resolution, iii) template relation extraction, iv) event extraction and v) scenario template extraction. The two major component tasks of biomedical text mining, i.e. biomedical named entity extraction and entity-relation extraction, and their applications will be highlighted.

Biomedical Named Entity Extraction [30 minutes] The presentation in this section includes an overview of biomedical named entities (e.g. gene, protein and disease names), protein/gene name identification methods such as rule-based, lexicon-based and machine learning based approaches, and gold standard datasets and essential papers related to this task.

Biomedical Relation and Event Extraction [30 minutes] This section presents an overview of biomedical relations (protein-protein interactions, gene-disease relations, etc.), relation extraction methods such as rule-based, lexicon-based and machine learning based approaches, and gold standard datasets and essential papers of the relation extraction task.

Demo of in-house developed text mining tools [30 minutes] In this last section, an online demo of the following four in-house developed text mining tools, each with its algorithm/method, will be presented: i) the named entity tagger NAGGNER, ii) the co-reference tagger ProNormz, iii) the entity-relation system PPInterFinder and iv) the event extraction system HPIminer.

Target audience: The target audience is computer science and information technology graduate students and researchers who are interested in understanding the basic principles behind text mining and wish to develop and use text mining systems for biomedical data analysis. All concepts will be introduced on an intuitive level, so both computational biologists and computer scientists will be comfortable with the material.

Specific goals and objectives: The specific goal and objective of this tutorial is to introduce computer science researchers to the plethora of data, in textual and other forms, available in biomedical science, and to the current and future research opportunities existing in this domain.

Tutorial 2: Software Quality Predictive Modeling: An Effective Assessment of Experimental Data

Dr. Ruchika Malhotra, Department of Software Engineering, Delhi Technological University, Delhi, India
Room: DSP Lab (Academic Area)

Title of the Tutorial: Software Quality Predictive Modeling: An Effective Assessment of Experimental Data

Dr. Ruchika Malhotra, Assistant Professor, Department of Software Engineering, Delhi Technological University, Delhi, India

Abstract: Predictive modeling, in the context of software engineering, relates to the construction of models for the estimation of software quality attributes such as defect-proneness, maintainability and effort, amongst others. For developing such models, software metrics act as predictor variables, as they signify various design characteristics of software such as coupling, cohesion, inheritance and polymorphism. A number of techniques, such as statistical and machine learning techniques, are available for developing predictive models. Hence, conducting successful empirical studies, which effectively use these techniques, is important in order to develop models which are practical and useful. These developed models are useful to organizations in the prioritization of constrained resources, effort allocation and the development of an effective, high-quality software product.

However, conducting effective empirical studies which develop successful predictive models is not possible if a proper research methodology and steps are not followed. This tutorial introduces a successful stepwise procedure for the efficient application of various techniques to predictive modeling. A number of research issues which are important to address while conducting empirical studies, such as data collection, the validation method, the use of statistical tests and the use of an effective performance evaluator, are also discussed with the help of an example. The tutorial also provides future directions in the field of software quality predictive modeling.

Outline: A major problem faced by software project managers is to develop good quality software products within tight schedules and budget constraints. Development of predictive models which can estimate various software quality attributes such as effort, change-proneness, defect-proneness and maintainability are important for project-managers so that they can focus their resources effectively and develop a software product of desired quality. But how do we use different available techniques such as statistical and machine learning effectively for model prediction? In order to answer this question, this tutorial discusses with the help of an example the research methodology for successful application of various techniques to software quality predictive modeling. It also explores various research issues in the field and provides future directions to enhance the use of software quality predictive models. The various sections of this tutorial are:

Research Methodology for Software Quality Predictive Modeling
Research Issues in Software Quality Predictive Modeling
Current Trends in Software Quality Predictive Modeling
Future Directions in Software Quality Predictive Modeling

Target Audience: The tutorial is targeted at academic researchers and software practitioners who plan to develop models for predicting various software quality attributes. It effectively states the steps needed to perform an empirical study to investigate and empirically validate the relationship between various software quality attributes and OO metrics. The tutorial proposes efficient steps for performing a replicated study or for analyzing the relationship between various quality attributes and OO metrics.

Specific Goals and Objectives: The reasons for the relevance of this tutorial are manifold. Empirical validation of OO metrics is a critical research area in the present-day scenario, with a large number of academicians and research practitioners working in this direction to predict software quality attributes in the early phases of software development. Thus, this tutorial explores the various steps involved in the development of an effective software quality predictive model using a modeling technique with an example dataset. Performing successful empirical studies in software engineering is important for the following reasons:

To identify defective classes at the initial phases of software development, so that more resources can be allocated to these classes to remove errors, and thus the cost of correcting an error is minimized as it is eliminated at an earlier stage.
To analyze the metrics which are important for predicting software quality attributes and to use them as quality benchmarks, so that the software process can be standardized and delivers effective products.
To efficiently plan testing, walkthroughs, reviews and inspection activities, so that limited resources can be properly planned to provide good quality software.
To use and adapt different techniques (statistical, machine learning & search-based) in predicting software quality attributes.
To analyze existing trends in software quality predictive modeling and suggest future directions for researchers.
To work towards consistently improving the quality of the resulting OO software processes and products.
To document the research methodology so that effective replicated studies can be performed with ease.

It is important to document and state an effective research methodology for the use of different techniques for software quality predictive modeling, so that efficient empirical studies of practical relevance can be performed. Thus, the tutorial presents a complete and repeatable research methodology.

Wednesday, September 21 16:00 - 16:30 (Asia/Kolkata)

Tea Break

Room: Lawns, Academic area and Mechatronics Dept

Thursday, September 22

Thursday, September 22 8:30 - 14:30 (Asia/Kolkata)

Registration
Room: Registration Desk (Academic area)

Thursday, September 22 9:20 - 10:10 (Asia/Kolkata)

Keynote 4: Dr. Pedro Silva Girao, Professor, Department of Electrical Engineering, Instituto Superior Técnico, University of Lisbon, Portugal

Title of the Talk: Automated Measuring Systems for Environmental Monitoring
Dr. Pedro Silva Girao, Professor, Department of Electrical Engineering, Instituto Superior Técnico, University of Lisbon, Portugal
Room: LT-9 (Academic Area)

Title of Talk: Automated Measuring Systems for Environmental Monitoring

Dr. Pedro Silva Girão, Professor, Department of Electrical Engineering, Instituto Superior Técnico, University of Lisbon, Portugal

Abstract: Environmental monitoring can be essentially described as a set of continuous or frequent measurements of environmental parameters which are fundamental to assessing the state of the environment, the achievement of predefined objectives, law enforcement, the detection of new environmental issues, and short- and medium-term environmental forecasting. Monitoring at a micro scale relates to monitoring and tracking one or more parameters in a small and limited geographical context, such as the control of the gaseous emissions of a factory. At the micro scale, environmental monitoring is generally used to control emissions of pollutants, whether gaseous or liquid. In contrast, macro-scale monitoring involves a vast geographical area, such as the control of the water quality of a lake.

For the occasional measurement of environmentally pertinent quantities, the natural solution is either to use dedicated, manually operated instruments if the measurement is to be made on site, or to take a sample of the medium and make the measurements in an adequate laboratory. The latter solution is sometimes required because of the difficulty of measuring on site, namely of chemical and biological quantities. For continuous environmental monitoring, automated measuring systems are needed.

After an introduction to environmental monitoring and how to approach air, water and soil monitoring, the presentation details a wireless sensor network designed to monitor the quality of the water of the Tagus River estuary near Lisbon, Portugal. This distributed automated measuring system is composed of several measuring sites (nodes of the network), each one installed on an anchored buoy. The hardware of each node includes, inside the buoy, sensors for temperature, pH, turbidity, electrical conductivity, dissolved oxygen and heavy metal concentration with their conditioning circuits, a processing unit (microcontroller), a radio transceiver, a GPS, and a power supply. Sensors require periodic calibration. Thus, they are placed inside a tank into which reference solutions (for calibration purposes) and samples of the river water (for measurement) are introduced using electronically controlled pumps and valves. On the outside of the buoy are the transceiver and GPS antennas and a solar panel used to recharge the batteries of the power supply. The hardware and the software of each node turn it into a smart sensor, continuously operating and periodically sending data to a land-based central unit (a PC) that performs advanced data processing, data presentation and Internet publication. The software of the central unit includes a Kohonen self-organizing map based algorithm that allows identification of pollution events, one of the main purposes of the overall system.

Thursday, September 22 10:10 - 11:00 (Asia/Kolkata)

Keynote 5: Prof. Mohammed Atiquzzaman, University of Oklahoma, USA

Title of the Talk: Internet-based Seamless Data Communications with Space
Room: LT-9 (Academic Area)

Title of Talk: Internet-based Seamless Data Communications with Space

Prof. Mohammed Atiquzzaman, Edith J. Kinney Gaylord Presidential Professor, School of Computer Science, University of Oklahoma

Abstract: Data communications between Earth and spacecraft, such as satellites, have traditionally been carried out through dedicated links. Shared links using Internet Protocol-based communication offer a number of advantages over dedicated links. The movement of spacecraft, however, gives rise to issues in seamless data communications with space. This talk will discuss various mobility management solutions for extending the Internet connection to spacecraft for seamless data communications. The talk will provide an overview of the network layer based solution being developed by the Internet Engineering Task Force and compare it with the transport layer based solution that has been developed at the University of Oklahoma in conjunction with the National Aeronautics and Space Administration. Network in motion is an extension of the host mobility protocols for managing the mobility of networks which are in motion, such as those in airplanes and trains. The application of networks in motion will be illustrated for both terrestrial and space environments.

Thursday, September 22 11:00 - 11:20 (Asia/Kolkata)

Tea Break

Room: Lawns(Academic Area)

Thursday, September 22 11:20 - 11:40 (Asia/Kolkata)

Demo: Interactive Intelligent Shopping Cart Using RFID and Zigbee Modules

Presenter: Ms. Shallu Dhauta, CDAC
Room: LT-9 (Academic Area)

Demo: Interactive Intelligent Shopping Cart Using RFID and Zigbee Modules Presenter: Ms. Shallu Dhauta, CDAC

Summary: When customers go to a mall or shopping complex, they have to worry about many things. For example, a customer may be concerned about whether the total amount of money they have with them is enough, which they only find out at the cash counter while paying. So they need to keep a running total in mind while shopping, which is troublesome. Waiting in the queue is another big inconvenience in malls and shopping complexes, especially on holidays or during the seasons of big sales. This is because billing is done using a bar code scanner at the billing counter, which is a very time-consuming process. Another problem is that customers do not know about the offers, sales and discounts available on the day of their shopping. Considering all of the above problems, we designed a system to help customers purchase items in a shopping mall. In this system, various algorithms are developed for displaying all the offers of the day and for comparing the chosen products with similar products in terms of price per unit. We chose RFID technology over bar code readers for product identification.

Thursday, September 22 11:40 - 12:30 (Asia/Kolkata)

Keynote 6: Prof. Jinsong Wu, Universidad de Chile (University of Chile), Chile

Title of the Talk: Global Green Challenges Meet Big Data Era
Room: LT-9 (Academic Area)

Prof. Jinsong Wu, Universidad de Chile (University of Chile), Chile

Title of Talk: Global Green Challenges Meet Big Data Era

Abstract: Although the term 'green' has often been used by many people and in the literature to refer to energy-consumption reduction or energy efficiency, green should actually refer to environmental sustainability in a more general sense. Environmental sustainability issues have been important topics in recent years, and they have impacted and will further impact individuals, enterprises, governments and societies. Environmental sustainability is not simply about reducing the amount of waste or using less energy, but about developing processes that lead to a completely sustainable human society in the future. The long-term consequences of the relevant serious issues have not yet been fully forecast, but it has been generally accepted in many communities that immediate responses are necessary. From 30 November to 12 December 2015, the 21st United Nations Climate Change Conference of the Parties (COP 21) was held in Paris, France. A historic breakthrough and milestone towards securing the future of the Earth, it produced a global agreement on the reduction of climate change, the text of which represented a consensus of the representatives of the more than 193 countries attending: a profound milestone for global environmental sustainability. Nowadays there is another significant trend concerning how to process enormous amounts of data: big data. An interesting question is whether there are inherent correlations between the two tendencies in general. To answer this question, this talk will address how to green big data systems in terms of the whole life cycle of big data processing, as well as big data technologies directed towards various green objectives.

Thursday, September 22 12:30 - 13:20 (Asia/Kolkata)

Lunch Break

Room: LNMIIT Mess

Thursday, September 22 13:25 - 14:20 (Asia/Kolkata)

Keynote 7: Prof. Ponnurangam Kumaraguru, Indraprastha Institute of Information Technology (IIIT), Delhi, India

Title of the Talk: Privacy and Security in Online Social Media (PSOSM)
Room: LT-9 (Academic Area)

Prof. Ponnurangam Kumaraguru, Indraprastha Institute of Information Technology (IIIT), Delhi, India

Title of Talk: Privacy and Security in Online Social Media (PSOSM)

Talk description: With the increase in usage of the Internet, there has been an exponential increase in the use of online social media. Websites like Facebook, Google+, YouTube, Orkut, Twitter and Flickr have changed the way the Internet is being used. There is a dire need to investigate, study and characterize privacy and security on online social media from various perspectives (computational, cultural, psychological). Real-world scalable systems need to be built to detect and defend against security and privacy issues on online social media. I will briefly describe some cool ongoing projects that we have: Twit-Digest, MultiOSN, Finding Nemo, OCEAN, Privacy in India, and Call Me MayBe. Much of our research work is made available for public use through tools or online services. Our work derives techniques from Data Mining, Text Mining, Statistics, Network Science, Public Policy, Complex Networks, Human Computer Interaction, and Psychology. In particular, in this talk, I will focus on the following: (1) Twit-Digest is a tool to extract intelligence from Twitter which can be useful to security analysts. Twit-Digest is backed by award-winning research publications in international and national venues. (2) MultiOSN is a platform to analyze multiple OSM services to gain intelligence on a given topic or event of interest. (3) OCEAN (Open source Collation of eGovernment data and Networks): here, we show how publicly available information on Government services can be used to profile citizens in India. This work obtained the Best Poster Award at the Security and Privacy Symposium at IIT Kanpur, 2013 and has gained a lot of traction in the Indian media. (4) In Finding Nemo, given an identity in one online social media service, we are interested in finding the digital footprint of the user in other social media services; this is also called the digital identity stitching problem. This work is also backed by an award-winning research publication.
I will be more than happy to clarify and discuss any of our work in detail, as required, after the talk.

Thursday, September 22 14:45 - 16:45 (Asia/Kolkata)

Tutorial 3- Intelligent Digital Image Processing Operators Based on Computational Intelligence Techniques

Speaker: Dr. Mehmet Emin YUKSEL, Professor & Chairman, Department of Biomedical Engineering, Erciyes University, TURKEY
Room: Computer Lab 1 (Academic Area)

Title of the Tutorial: Intelligent Digital Image Processing Operators Based on Computational Intelligence Techniques

Dr. Mehmet Emin YUKSEL, Professor & Chairman, Department of Biomedical Engineering, Erciyes University, TURKEY

Summary: Digital imaging is becoming more and more widespread in many different areas of science and technology. Even though the quality of digital imaging technologies increases every day, digital images are inevitably corrupted by noise during image acquisition and/or transmission due to a number of imperfections caused by image sensors and/or communication channels. In most image processing applications, it is of vital importance to remove the noise from the image data because the performance of subsequent image processing tasks (such as segmentation, feature extraction, object recognition, etc.) is severely degraded by the noise. A good noise filter is required to satisfy two conflicting criteria: (1) suppressing the noise while at the same time (2) preserving the useful information (edges, thin lines, texture, small details, etc.) in the image. Unfortunately, the great majority of currently available image filters cannot satisfy both of these criteria simultaneously. They either suppress the noise at the cost of distorting the useful information in the image, or preserve image information at the cost of reduced noise suppression performance.

In the last few years, there has been a growing research interest in the applications of computational intelligence techniques, such as neural networks and fuzzy systems, to the problems in digital image processing. Indeed, neuro-fuzzy (NF) systems offer the ability of neural networks to learn from examples and the capability of fuzzy systems to model the uncertainty, which is inevitably encountered in noisy digital images. Therefore, neuro-fuzzy systems may be utilized to design line, edge, and detail preserving filtering operators for processing noisy digital images.

In this tutorial, we will begin by a quick review of the fundamental concepts of fuzzy and neuro-fuzzy systems as well as their application to digital image data. Then, we will derive a generalized neuro-fuzzy (NF) based operator suitable for a range of different applications in image processing. Specifically, we will consider three different applications of the presented NF operator: (1) noise filter, (2) noise detector and (3) edge extractor.

In the noise filter application, the NF operator will be employed as a detail-preserving noise filtering operator to restore digital images corrupted by impulse noise without degrading fine details and texture in the image. In the noise detector application, the NF operator will be employed as an intelligent decision maker and utilized to detect impulses in images corrupted by impulse noise. Hence, the NF operator will be used to guide a noise filter so that the filter will restore only the pixels that are detected by the NF operator as impulses, and leave the other pixels (i.e. the uncorrupted pixels) unchanged. Consequently, the NF operator will help reduce the undesirable distortion effects of the noise filter. In the edge extractor application, the NF operator will be used to extract edges from digital images corrupted by impulse noise without needing a pre-filtering of the image by an impulse noise filter.
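The detector-guided filtering idea described above can be sketched schematically as follows. This is only an illustrative stand-in: a trivial saturation test replaces the trained NF detector, and a plain 3x3 median replaces the NF filter; the tutorial's point is precisely that a trained NF operator makes the flagging decision far more intelligently.

```python
def median3x3(img, r, c):
    """Median of the 3x3 neighborhood around (r, c), clamping at edges."""
    h, w = len(img), len(img[0])
    vals = [img[min(max(r + dr, 0), h - 1)][min(max(c + dc, 0), w - 1)]
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
    return sorted(vals)[4]

def detector_guided_filter(img, is_impulse):
    """Restore only the pixels flagged by the detector; leave all
    other (presumed uncorrupted) pixels completely unchanged."""
    return [[median3x3(img, r, c) if is_impulse(img, r, c) else img[r][c]
             for c in range(len(img[0]))]
            for r in range(len(img))]

# Hypothetical stand-in detector: flag saturated salt-and-pepper values.
# In the tutorial, a trained NF operator plays this decision-maker role.
def saturated(img, r, c):
    return img[r][c] in (0, 255)
```

Because unflagged pixels pass through untouched, distortion is confined to the pixels the detector actually marks as impulses, which is how the guiding operator reduces the filter's undesirable side effects.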

In all of these applications, the same NF operator will be used for three different purposes. The fundamental building block of the NF operator to be presented is a simple 3-input 1-output NF subsystem. We will then show that highly efficient noise filtering, noise detection or edge extraction operators may easily be constructed by combining a desired number of simple NF subsystems within a suitable network structure. Following this, we will present a simple approach for training the NF operator for its particular target application. Specifically, we will show that the internal parameters of the NF subsystems in the structure of the presented NF operator may adaptively be optimized by training, and the same NF operator may be trained as a noise filter, noise detector or an edge extractor depending only on the choice of the training images. We will further show that the NF subsystems may be trained by using simple artificial training images that can easily be generated in a computer. For each of the three applications of the presented NF operator, we will demonstrate the efficiency of the presented approach by appropriately designed simulation experiments and also compare their performance with a number of selected operators from the literature. We will complete the tutorial with a brief summary of other existing as well as potential applications of the presented general-purpose NF operator in image processing.


Intended audience: Senior researchers and students who have some general background knowledge in signal processing and communications, and who are interested in computational intelligence techniques.


Specific goals and objectives: Allow the audience to:

  1. understand the basic principles of computational intelligence methodologies, along with their advantages and disadvantages;
  2. learn how to design and implement a general-purpose neuro-fuzzy operator suitable for many different kinds of signal/image processing tasks;
  3. learn how to customize this neuro-fuzzy operator for a specific signal/image processing task by training;
  4. understand the other potential uses of computational intelligence based operators.

Thursday, September 22 16:00 - 17:30 (Asia/Kolkata)

Tea Break

Room: Lawns (Academic Area and Mechanical Dept.)

Thursday, September 22 16:00 - 18:00 (Asia/Kolkata)

Birds-of-a-Feather Sessions (BOFs)

Room: LT-12 (Mechatronics Dept)

Thursday, September 22 16:35 - 17:30 (Asia/Kolkata)

L1: Lightning Talks

Lightning Talks - 7-8 minutes for each short talk
Room: LT-5 (Academic Area)

Augmented Reality Based Code Generation (Abhishek S N, Amrita Vishwa Vidyapeetham, Coimbatore)
Intelligent and Innovative Voting Machine with an Advanced Election System (Abhishek S N, Amrita Vishwa Vidyapeetham, Coimbatore)
Internet of Vehicles "A Future Foreseen Today" (Alpa Kavin Shah, Gujarat Technological University)
Need of Indigenous Wearable Technology Startups in India (Andrews, Mahendra Engineering College)
Discovery of Solar Panels: A Boon or a Curse (Anupam Agarwal, Jagannath University, Jaipur)
Plus Side of a Worldwide Crackdown on Torrents (Aswin T.S, Amrita University, Coimbatore)
Morphology Based Shape Representation and Classification (Bharathi Pilar, Mangalore University, Karnataka, India)
Technological Intervention to Improve Life of Type -1 Diabetic Children (Bilal Malik, University of Kashmir)
Pulse Doppler Spectral Moment Estimation by PCA Approach (Zineb Benchebha, Aeronautical Science Laboratory, France)

Thursday, September 22 17:40 - 18:25 (Asia/Kolkata)

Keynote 8: Prof. Dr.-Ing. Axel Sikora, Dipl.-Ing. Dipl. Wirt.-Ing., Offenburg University of Applied Sciences, Germany

Title of the Talk: Trends in Cyber Physical System Development and Research
Room: LT-9 (Academic Area)

Abstract: In a cyber-physical system (CPS), the computational and physical elements closely interact. In most cases, CPSs are designed as a network of interacting elements with physical input and output rather than as standalone devices. This extension of legacy embedded systems enables a next round of distributed intelligence for a very broad range of applications, i.e. for intelligent systems and applications. Improved observability and control through online condition monitoring not only allow functions like predictive maintenance, but also enable completely new servicification models. The presentation gives a short overview of the historical development of CPS, discusses selected applications in detail, shows possible solutions for today's problems, and envisages interesting R&D directions for tomorrow.

Thursday, September 22 18:30 - 20:00 (Asia/Kolkata)

Cultural Events followed by Banquet Dinner

Best Paper Awards Ceremony
Room: Open Air Theatre

Friday, September 23

Friday, September 23 8:30 - 14:30 (Asia/Kolkata)


Room: Registration Desk (Academic area)

Friday, September 23 9:20 - 10:10 (Asia/Kolkata)

Keynote 9: Prof. D. Manjunath, Dept. of Electrical Engineering, IIT, Bombay

Title of the Talk: Economics of the Internet and Network Neutrality
Room: LT-9 (Academic Area)

Abstract: The Internet is designed to efficiently transfer bits of content from a provider to a user. A key feature of this design is an egalitarian network that is agnostic to the contents of the data packets. With content being monetised through advertisements, subscriptions and a myriad of other means, the economics of the transfer of bits is being analysed and debated. An important chapter in this debate is network neutrality, the requirement that the network treat all packets equally.

In this talk I will examine the economics of network neutrality and provide an overview of the various issues that have been raised. I will also provide an overview of the state of the art of the analysis of the effects of differential pricing, a non-neutral network structure that was recently the subject of a wide debate. This debate is expected to be resurrected.

Friday, September 23 10:10 - 11:00 (Asia/Kolkata)

Keynote 10: Dr. Biplav Srivastava, Thomas J. Watson Research Center, Yorktown Heights, NY USA

Room: LT-9 (Academic Area)

Title of Talk: Towards Intelligence in Traffic Management Using IT and AI Techniques

Abstract: Traffic management is a pressing problem for cities around the world. Moreover, it is a highly visible aspect of a city's life, affecting all aspects of its citizens' economic and personal activities. Consequently, there is substantial governmental, academic and commercial interest in addressing this problem.

Information Technology (IT) in general, and Artificial Intelligence (AI) in particular, can contribute significantly to traffic management with analytics for citizens (demand side) and operators (supply side) in what are called Intelligent Transportation Systems (ITS). For citizens and businesses, we discuss techniques to enable efficient movement of people, alone as well as in groups. For operators, we discuss techniques to assess the state of the transportation network using available information and to take decisions in the short and long term. We specifically focus on the unique situation in cities of developing countries, where instrumentation to collect precise traffic data is limited but new avenues to collect aggregate data are feasible. AI/IT techniques covered are: open data, data mining / learning, knowledge representation, planning, scheduling, simulation and stochastic techniques.

Friday, September 23 11:00 - 11:20 (Asia/Kolkata)

Tea Break

Room: Lawns (Academic Area)

Friday, September 23 11:20 - 12:10 (Asia/Kolkata)

Keynote 11: Dr. Prasad Naldurg, IBM Research India, Bangalore

Title of the Talk: Data Privacy with Encrypted Analytics
Room: LT-9 (Academic Area)

Abstract: In this talk we will provide an overview of the Encrypted Analytics project at IBM IRL, i.e., preserving data privacy of users in hybrid/federated clouds, with our primary focus on preventing insider attacks on proprietary information. The goal of this project is to enable common data mining tasks on encrypted data, safeguarding customer data confidentiality at all times. In particular, we focus on data mining kernels like k-NN, k-Means and others and explore cryptographic techniques including different flavours of homomorphic encryption, as well as algorithmic innovations focusing on performance for typical workloads in the federated cloud model. The field of encrypted analytics requires a synergy across various disciplines including RDBMS, Data Mining, Security, Applied Crypto, and Approximation Algorithms.
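IBM's actual protocols are not described here; the additive homomorphism that lets such systems accumulate distances for k-NN or k-Means over ciphertexts can be illustrated with textbook Paillier encryption. The sketch below uses tiny hard-coded primes and is insecure, for illustration only:

```python
import math
import random

# Textbook Paillier with tiny hard-coded primes -- insecure, illustration only.
p, q = 47, 59
n = p * q
n2 = n * n
g = n + 1
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)      # lcm(p-1, q-1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)            # inverse of L(g^lam mod n^2)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Homomorphic property: multiplying ciphertexts adds the plaintexts.
c = encrypt(3) * encrypt(4) % n2
```

Because `decrypt(c)` yields 3 + 4 without the server ever seeing the plaintexts, a cloud can sum (squared) distance components on encrypted data; the multiplicative steps are what require the richer homomorphic schemes mentioned in the talk.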

Friday, September 23 12:10 - 12:30 (Asia/Kolkata)

Demo of the 'ICT PRODUCT OF THE YEAR 2016' - an innovative platform to develop wearable devices for m-health applications

Presenter: Abhinav, MD and CEO of Cardea Labs & Cardea Biomedical Technologies (P) Ltd.
Room: LT-9 (Academic Area)

Short description about the demo: miBEAT is an educational tool which enables engineering students to quickly learn and implement all the necessary concepts required to design their own wireless, portable, high-definition, medical-grade data acquisition system over smartphones and PCs. miBEAT comes with a CE certification and has been clinically validated in research labs and research institutions, which helps students gain the confidence to develop a medical device without any dependencies on expensive data acquisition boards and their respective software. This open-source, open-hardware, innovative biomedical research platform, namely miBEAT, has been developed in association with professors and doctors from Harvard University, Stanford University, DSIR - Govt of India, Dept of Biotechnology - Govt of India, IIT Delhi and AIIMS - New Delhi, and has been perfected over a span of 8 years. This technology was awarded the 'ICT Product of the Year 2016' at an international conference organized by ASSOCHAM.

Friday, September 23 12:30 - 13:20 (Asia/Kolkata)

Lunch Break

Room: LNMIIT Mess

Friday, September 23 13:25 - 14:25 (Asia/Kolkata)

Keynote 12: Prof. Erol Gelenbe, Imperial College, UK

Title of the Talk: Energy Optimisation for Nano, Micro and Large Scale Communications
Room: LT-9 (Academic Area)

Abstract: Various sources point to annual electrical energy consumption by ICT of roughly 1,500 TWh worldwide [1], similar to the total electricity consumption of two major industrial economies, namely Japan and Germany, and equal to about 10% of the total electricity consumption in the world. For instance, in the UK, it is estimated that the new £20 billion nuclear generators at Hinkley Point will not suffice to cover the UK's electricity needs for ICT. We must point to the potential for reducing energy consumption in other areas of the economy through the judicious use of ICT, but we must also recognise that ICT itself is a major and increasing consumer of electricity and that, if this increase persists, it may raise questions of social acceptability and of cost. This presentation will therefore dwell on three approaches, drawn from our recent work, that can reduce or mitigate this growth. The first approach is to balance Quality of Service against energy consumption [2]. The second approach is to use Energy Packet Networks to dynamically manage energy flows among different components of a complex computer system that may include sensors or actuators, servers, and diverse sources of stored and renewable energy, so as to optimise a composite function that combines the types of energy used, the amount of stored energy, and the prioritised backlog of work [3,4]. The third approach is to revisit the way we encode information so that small numbers of elementary particles may represent or convey large amounts of data [5].

Friday, September 23 14:30 - 16:30 (Asia/Kolkata)

Tutorial 4 - Feature Based Image Segmentation and Classification Techniques using Random Forests

Speaker: Dr. Kumar Rajamani, Architect, Robert Bosch, Bangalore
Room: Computer Lab 1 (Academic Area)

Abstract: This talk presents some of the recent approaches for image classification and segmentation. Segmentation tasks are very challenging, especially in the medical imaging context. Recent advances in feature extraction and classification make some of these challenging problems tractable. First, a brief overview of some recent feature extraction techniques is presented. This is followed by insights into the Random Forest classifier. Finally, an interactive training application, ilastik, is explained. ilastik provides real-time feedback on the current classifier predictions and thus allows for targeted training and an overall reduced labeling time. In addition, an uncertainty measure can guide the user to ambiguous regions of the data. Once the classifier has been trained on a representative subset of the data, it can be exported and used to automatically process a very large number of images.
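ilastik's internals are not shown here; the random-forest idea itself - majority voting over learners grown on bootstrap samples - can be sketched with a toy ensemble of one-feature decision stumps (a deliberate simplification of full trees; all names and data are illustrative):

```python
import random

def train_stump(X, y):
    """Pick the single-feature threshold (optionally inverted) with the
    best training accuracy on a binary-labelled dataset."""
    best = None
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            preds = [1 if row[f] > t else 0 for row in X]
            acc = sum(p == yi for p, yi in zip(preds, y)) / len(y)
            for flip in (False, True):
                a = 1 - acc if flip else acc
                if best is None or a > best[0]:
                    best = (a, f, t, flip)
    _, f, t, flip = best
    return f, t, flip

def stump_predict(stump, row):
    f, t, flip = stump
    p = 1 if row[f] > t else 0
    return 1 - p if flip else p

def random_forest(X, y, n_trees=25, seed=0):
    """Train each stump on a bootstrap resample of the data."""
    rng = random.Random(seed)
    forest = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]
        forest.append(train_stump([X[i] for i in idx], [y[i] for i in idx]))
    return forest

def forest_predict(forest, row):
    votes = sum(stump_predict(s, row) for s in forest)
    return 1 if votes * 2 >= len(forest) else 0
```

In pixel-classification tools like ilastik the rows would be per-pixel feature vectors (intensity, edge and texture filter responses) and the trees would be full decision trees, but the bagging-plus-voting principle is the same.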

Tutorial 5- Are you safe on your browsers? Cyber attacks and spying using malicious browser extensions

Speaker: Mr. Gaurav Varshney, Research Scholar, Information Security Lab, Department of CS, IIT Roorkee
Room: DSP Lab (Academic Area)

Abstract: Browser extensions are used extensively nowadays to provide additional functionality over the basic browser features. In recent times it has been identified that malicious browser extensions allow attackers to carry out cyber fraud and cyber spying on targeted users. This tutorial practically demonstrates the vulnerabilities that are exploited by malicious extensions and the possible attacks that can be launched by attackers. It will give browser developers and security researchers an insight into current security vulnerabilities so that they can be patched with improved designs in the near future to prevent malicious-extension-based attacks.

Outline including a short summary of every section:

  1. Introduction of browser extensions
  2. Chrome browser extension execution architecture as case study
  3. Cyber frauds via malicious extensions (Practical) a) Phishing b) Affiliate Fraud c) Webpage Manipulation
  4. Cyber spying via malicious extensions (Practical) a) Sniffing users email data b) Sniffing users form data c) Key loggers over browser
  5. Botnet based attacks via malicious extensions Using malicious extensions as a bot for launching DDoS attacks
  6. Discussion about the security flaws and recent research proposals
  7. Identifying research gaps and outlining future research directions

Intended audience: Security researchers both from industry and academia; B.Tech, M.Tech and PhD students interested in research in the area of cyber security; people working in cyber forensics.

Specific goals and objectives: Showcase the current practices used by fraudsters to commit cyber fraud and cyber spying with the help of malicious browser extensions.
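As an illustration of the attack surface the tutorial examines, a hypothetical Chrome (Manifest V2, the format current in 2016) extension manifest shows how over-broad host permissions let a content script run on every page the user visits; all names here are illustrative, not taken from any real extension:

```json
{
  "manifest_version": 2,
  "name": "innocuous-looking-extension",
  "version": "1.0",
  "permissions": ["<all_urls>", "webRequest", "cookies", "storage"],
  "content_scripts": [
    {
      "matches": ["<all_urls>"],
      "js": ["content.js"],
      "run_at": "document_start"
    }
  ]
}
```

A `content.js` injected under these permissions shares the DOM of every page, so it can read form fields and key events - exactly the sniffing and keylogging class of attacks listed in the outline above.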

Friday, September 23 16:00 - 17:00 (Asia/Kolkata)

Tea Break

Room: Academic Area and Mechatronics Dept.

Friday, September 23 16:30 - 17:30 (Asia/Kolkata)

L2: Lightning Talks

Room: LT-9 (Academic Area)

Statistical Analysis of Skin Texture and Color for Classification and Disease Diagnosis of Human Skin (Gayatri Joshi)
Smart Sensor for NOx and SO2 Emissions in Power Station Boilers (K. Sujatha, MGR Educational and Research Institute)
Dynamic Resource Allocation for the Analysis of Big Data in Real Time (Manjunath Ramachandra, Wipro Technologies)
Raising the Abstraction Level of System on Chip for DUT Verification (Nishit Gupta, Department of Electronics and Information Technology, Government of India)
Supercontinuum Sources are Next Generation Light Sources (Sandeep Vyas, Malaviya National Institute of Technology)
LTE Design: An Energy Efficient Approach (Saurabh Dixit, BBD University)
Dark Side of Science and Technology (Shakti Awaghad, GHRCE, Nagpur)
IoT Enabled Smart Education Systems (Vaibhav Neema, Devi Ahilya University, Indore)
Computational Data Driven Bipedal Model (Vijay Bhaskar Semwal, NIT Jamshedpur)

Saturday, September 24

Saturday, September 24 9:00 - 10:30 (Asia/Kolkata)


Room: Registration Desk (Academic area)

Saturday, September 24 9:20 - 10:00 (Asia/Kolkata)

Keynote 13: Dr. Maheshkumar H Kolekar, Center for Advanced Systems Engineering, IIT Patna

Title of the Talk: Abnormal Human activity Recognition for Video Surveillance
Room: LT-9 (Academic Area)

Talk description: Visual surveillance of humans is one of the most active research areas in computer vision. It has a wide range of promising applications such as human identification, crowd flux statistics, and monitoring and detection of abnormal behaviors. The general processing framework involved in video surveillance comprises background modeling, motion detection, tracking, and recognition of actions. In this talk, a probabilistic approach based on Hidden Markov Models and a template-matching approach based on Motion History Images will be discussed for recognizing activities such as walking, running, bending, clapping, hand raising, hand waving, baggage drop, and baggage exchange.

Unattended objects are major threats in railway stations and airports. Detecting an unattended object and tracking the person responsible are major concerns, and both are difficult in crowded scenarios. Human motion analysis helps in solving many problems in indoor surveillance applications, patient monitoring systems, and a variety of systems that involve interactions between persons and electronic devices such as human-computer interfaces. Most of these applications require automated recognition of high-level activities composed of multiple simple actions of persons. The ability to recognize complex human activities from videos enables the construction of several important applications. Applications of surveillance systems will be discussed for public places like airports and subway stations.
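The HMM-based recognition idea can be sketched as follows: train one HMM per activity, then label a new observation sequence with the model that assigns it the highest likelihood. The toy two-state models and the binary feature below are illustrative assumptions, not the speaker's trained models:

```python
import math

def forward_loglik(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm."""
    states = range(len(start))
    alpha = [start[s] * emit[s][obs[0]] for s in states]
    c = sum(alpha)
    loglik = math.log(c)
    alpha = [a / c for a in alpha]
    for o in obs[1:]:
        alpha = [sum(alpha[r] * trans[r][s] for r in states) * emit[s][o]
                 for s in states]
        c = sum(alpha)
        loglik += math.log(c)
        alpha = [a / c for a in alpha]
    return loglik

# Two toy activity models over a binary feature (e.g. limb displacement):
start = [0.5, 0.5]
emit = [[0.9, 0.1], [0.1, 0.9]]                      # state 0 emits 0, state 1 emits 1
walking = (start, [[0.1, 0.9], [0.9, 0.1]], emit)    # states alternate (periodic motion)
standing = (start, [[0.9, 0.1], [0.1, 0.9]], emit)   # states persist (static posture)
```

Classification then reduces to `argmax` over per-activity log-likelihoods; a real system would use many more states and continuous observation densities over tracked features.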

Saturday, September 24 10:00 - 10:45 (Asia/Kolkata)

Keynote 14 (WCI'16): Dr. Punam Bedi, Department of Computer Science, University of Delhi, India

Title of the talk: Recommender Systems: Challenges and Opportunities
Room: LT-9 (Academic Area)

Talk description: Recommender systems (RSs) are intelligent applications which act as a helping guide for users who are overburdened with information, choices and options. Recommender systems assist users to narrow down the choices from the plethora of options available to them, by recommending them the most suitable options. Recommender systems use the opinions of members of a community to help individuals in that community identify the information most likely to be interesting to them or relevant to their needs. Collaborative filtering, content based and hybrid recommender systems are the three main categories of RSs discussed in the literature. The extent to which users find the recommendations satisfactory is ultimately the key feature of recommender systems. These systems are widely adopted in different application domains such as books, movies, news, restaurants, travel and online shopping, to name a few.

The talk will start with a brief introduction to recommender systems, distinguishing them from information retrieval systems. Then main categories of recommender systems and evaluation of recommender systems will be discussed, followed by discussion on various research challenges faced by researchers working in the area of recommender systems. The talk will also include some interesting research problems in recommender systems.
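The collaborative-filtering category mentioned above can be illustrated with a minimal user-based scheme: rate an unseen item by the similarity-weighted average of other users' ratings. This is a generic textbook sketch, not any particular system from the talk:

```python
import math

def cosine(u, v):
    """Cosine similarity between two users over their co-rated items."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    nu = math.sqrt(sum(u[i] ** 2 for i in common))
    nv = math.sqrt(sum(v[i] ** 2 for i in common))
    return dot / (nu * nv)

def predict(ratings, user, item):
    """Similarity-weighted average of the other users' ratings for item."""
    num = den = 0.0
    for other, r in ratings.items():
        if other == user or item not in r:
            continue
        s = cosine(ratings[user], r)
        num += s * r[item]
        den += abs(s)
    return num / den if den else None
```

The research challenges the talk covers - sparsity, cold start, scalability - show up directly here: `predict` returns `None` when no neighbour has rated the item, and computing all pairwise similarities does not scale naively.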

Saturday, September 24 10:45 - 11:00 (Asia/Kolkata)

Tea Break

Room: Lawns (Academic Area)

Saturday, September 24 13:00 - 14:00 (Asia/Kolkata)

Lunch Break

Room: LNMIIT Mess

Saturday, September 24 14:00 - 14:40 (Asia/Kolkata)

Keynote 15 (WCI'16): Dr. Usha S Mehta, Institute of Technology, Nirma University, India

Title of the Talk: Testing of ASIC to NoC: Paradigm Shift
Room: LT-9 (Academic Area)

Dr. Usha S Mehta, Professor (EC), Institute of Technology, Nirma University, India

Talk description: Following Moore's law, the semiconductor industry has traversed a long way from a few microns down to 7 or 5 nm. During this journey, the manufacturing and design fields have evolved a lot. But each new technology window came not only with a large increase in the transistor-to-pin ratio but also with a variety of new defects in the context of shrinking technology. Hence the testing of chips became more and more crucial in each technology window. New fault models were added, which in turn caused test data volume to explode. A lot of work was done on test time and test power reduction. Later on, to handle design complexity and short time-to-market, it became increasingly common to use a modular design approach in the form of SoCs. Such SoCs, containing IP cores of analog, digital and mixed modules with hidden architectures, have further aggravated the burning issues of testing. To reduce communication delay, the industry is moving forward to NoC. The testing of NoC has not only to deal with the issues of ASIC and SoC testing but also to handle the issues related to network testing. In general, this talk will start with an introduction to ASIC testing and ASIC testing challenges. It will explore the various fault models, methods and tools in brief. It will cover SoC and NoC test challenges and give a brief introduction to the evolution of the IEEE standards for the same.


Saturday, September 24 14:40 - 15:10 (Asia/Kolkata)

WCI Panel Discussion - "Empowering Women in Computing and Informatics"

Moderator: Dr. Punam Bedi, Department of Computer Science, University of Delhi, India
Room: LT-9 (Academic Area)

Saturday, September 24 15:20 - 15:40 (Asia/Kolkata)


Room: LT-9 (Academic Area)

Wednesday, September 21

Wednesday, September 21 14:30 - 18:30 (Asia/Kolkata)

ICACCI-01: Best Paper Session-I

Room: LT-9 (Academic Area)
Chair: G. P. Sajeev (Govt Engineering College Wayanad & Amrita Vishwa Vidyapeetham, India)
ICACCI--01.1 14:30 A Cost-Effective Solution for Pedestrian Localization in Complex Indoor Environment
Prachi Kudeshia (The LNM Institute of Information Technology Jaipur, India); Santosh Shah (The LNM Institute of Information Technology, Jaipur India, India); Anup Bhattacharjee (The LNMIIT Jaipur, India)

In this paper, we have considered the problem of infrastructure-based indoor localization, where Bluetooth Low Energy (BLE) network infrastructures are used for RSSI measurement. Our approach to localizing the mobile node is based on the relative change in the RSSI values obtained from the BLE network, given certain area-specific information. The main advantage of this approach is that even large environmental changes in the working area have very little effect on the localization results. A new RSSI smoothing algorithm is proposed that reduces the effect of sudden environmental changes on RSSI values. We have also formulated an integer optimization problem, where the path loss exponent is optimized subject to the given area-specific information. Since this problem is NP-hard, we present sub-optimal solutions, which are far better than the state-of-the-art approaches presented in the literature. Our proposed approaches achieve an average distance error of 1.162 meters.
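The paper's own smoothing algorithm and optimization are not given in the abstract; a common baseline for both ingredients is exponential smoothing of RSSI plus the log-distance path-loss model (whose exponent is what the paper optimizes). A sketch under those assumptions, with illustrative parameter values:

```python
import math

def smooth_rssi(samples, alpha=0.25):
    """Exponentially smoothed RSSI trace -- damps sudden environmental spikes."""
    s = samples[0]
    out = [s]
    for x in samples[1:]:
        s = alpha * x + (1 - alpha) * s
        out.append(s)
    return out

def rssi_to_distance(rssi, tx_power=-59.0, n=2.0):
    """Log-distance path-loss model: rssi = tx_power - 10*n*log10(d),
    where tx_power is the RSSI at 1 m and n is the path loss exponent."""
    return 10 ** ((tx_power - rssi) / (10 * n))
```

A single -90 dBm glitch in an otherwise -60 dBm trace barely moves the smoothed estimate, which is the kind of robustness to sudden environmental change the paper targets.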

ICACCI--01.2 14:50 Texture Analysis of Breast Thermogram for Differentiation of Malignant and Benign Breast
Sourav Pramanik (New Alipore College, India); Debotosh Bhattacharjee and Mita Nasipuri (Jadavpur University, India)

In this paper, we develop a new local texture feature extraction technique, called block variance (BV), for texture analysis of thermal breast images. We then present a method based on different features extracted from the texture image obtained using BV to differentiate malignant breast thermograms from benign ones. Variance is an established measure of contrast in an image. Block variance uses the local variation of intensities to identify the contrast texture in the gray-scale thermal breast image. An asymmetric temperature distribution between the right and left breasts in a thermal breast image is an indicator of the presence of abnormality. Thus, we investigate the potential of our proposed features as an asymmetry measure. For our experiments, we used a set of forty malignant and sixty benign thermal breast images from the DMR database. A feed-forward artificial neural network (FANN) with the gradient descent training rule has been employed to evaluate the classification performance. The effectiveness of our proposed features is compared against a feature set derived by Acharya et al. [16] in terms of classification accuracy, sensitivity, and specificity. The experimental results show that the proposed features perform better than the Acharya et al. features in differentiating malignant breast thermograms from benign ones.
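The paper's exact BV operator is not specified in the abstract; a minimal stand-in for the idea - local intensity variance as a contrast-texture measure - is a per-block variance map over a 2-D grayscale image (block size and stride here are illustrative assumptions):

```python
def block_variance(img, k=3):
    """Variance of each k x k block (stride k) of a 2-D grayscale image,
    used as a simple local-contrast texture measure."""
    h, w = len(img), len(img[0])
    out = []
    for r in range(0, h - k + 1, k):
        row = []
        for c in range(0, w - k + 1, k):
            vals = [img[r + i][c + j] for i in range(k) for j in range(k)]
            m = sum(vals) / len(vals)
            row.append(sum((v - m) ** 2 for v in vals) / len(vals))
        out.append(row)
    return out
```

Flat regions map to zero and textured regions to large values, so summary statistics of this map computed separately for the left and right breast give one way to quantify the asymmetry the paper exploits.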

ICACCI--01.3 15:10 Detection and Retargeting of Emphasized Text for Content Summarization
Md Ajij (National Institute of Technology Meghalaya, India); Sanjoy Pratihar (National Institute of Technology Meghalaya); Kanishka Ganguly (University of Maryland, USA)

In this paper, we propose a simple and robust algorithm for the detection and retargeting of emphasized words, written in italics, in a scanned document page. The detection of italics is done using Principal Component Analysis (PCA), applied to a selected subset of pixels from the input character image boundary. The proposed method is font and style invariant. The localization of the emphasized words helps us in information retrieval by means of the retargeted words. It is seen that a good number of publishing houses use emphasized (italic) words for specifying the keywords, authors' affiliations, etc., on the front page of articles. Our method extracts and retargets the emphasized words to summarize the content of the papers. Experimental results show the robustness and precision of the method.
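The PCA idea can be illustrated directly: the principal axis of an upright stroke's boundary pixels is near vertical, while an italic stroke's axis is slanted. A toy sketch (the slant threshold and the synthetic point sets are assumptions, not the paper's pixel-selection scheme):

```python
import math

def principal_angle(points):
    """Tilt from vertical (degrees) of the principal axis of a 2-D point
    set, via the eigen-orientation of the 2x2 covariance matrix."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    cxx = sum((x - mx) ** 2 for x, _ in points) / n
    cyy = sum((y - my) ** 2 for _, y in points) / n
    cxy = sum((x - mx) * (y - my) for x, y in points) / n
    # orientation of the dominant eigenvector of [[cxx, cxy], [cxy, cyy]],
    # measured from the x-axis
    theta = 0.5 * math.atan2(2 * cxy, cxx - cyy)
    return abs(90.0 - abs(math.degrees(theta)))

def looks_italic(points, slant_threshold=8.0):
    return principal_angle(points) > slant_threshold
```

An upright vertical stroke yields a tilt near 0°, whereas typical italic slants of 10-20° clear the threshold, which is the discriminative signal the paper's detector builds on.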

ICACCI--01.4 15:30 Computational Characterization of Cerebellum Granule Neuron Responses to Auditory and Visual Inputs
Chaitanya Medini and Arathi Rajendran (Amrita Vishwa Vidyapeetham ( Amrita University), India); Aiswarya Jijibai (Amrita Vishwa Vidyapeetham, Amrita University, India); Bipin Nair (Amrita Vishwa Vidyapeetham ( Amrita University), India); Shyam Diwakar (Amrita Vishwa Vidyapeetham, India)

The multimodal nature of sensory and tactile inputs to the cerebellum is of significance for understanding brain function. In this study, the properties of granule neurons in modifying auditory and visual stimuli were mathematically modeled. The cerebellar granule neuron is a small, electrotonically compact neuron, and granule neurons are among the most numerous neurons in the cerebellum. Granule neurons receive four excitatory inputs from four different mossy fibers. We mathematically reconstructed the firing patterns of both auditory and visual responses and decoded the mossy fiber input patterns from both modalities. A detailed multi-compartment biophysical model of the granule neuron was used, and in vivo behavior was modeled with short and long bursts. The cable compartmental model could reproduce the input-output behavior seen in real neurons for specific inputs. The response patterns reveal how auditory and visual patterns are encoded by the mossy fiber-granule cell relay and how multiple information modalities are processed by the cerebellar granule neuron in its responses to auditory and visual stimuli.

ICACCI--01.5 15:50 Modelling the Undulation Patterns of Flying Snakes
Viswesh Sujjur BalaramRaja (ISAE SUPAERO, France); Eashwra Sankrityayan S (Amrita Vishwa Vidyapeetham, India); Suraj Coimbatore Sivagurunathan (University of South Florida, USA & Amrita Vishwa Vidyapeetham, India); Balajee Ramakrishnananda (Amrita Vishwa Vidyapeetham, India); Rajesh Senthil Kumar T (Amrita Vishwa Vidhyapeetham, India)

Some species of snakes are good gliders and can travel as far as 330 feet from a height of 15 m through the air at speeds of around 9-12 m/s. They possess a unique and complex aerial locomotion compared to other gliding species. During a glide, the snake morphs its transverse body section into an airfoil-like shape. In addition, it undulates its body in a characteristic fashion. Understanding the change in shape of the flying snake due to this undulation is vital for gliding and maneuvering during the glide. Previous studies have explained the effects of the 2-D shape. Earlier computational studies on a fixed 3-D wing inspired by these snakes have revealed favorable aerodynamic characteristics. In the current work, the undulation patterns of a representative snake geometry are modelled mathematically and numerically. The generated shape exhibits a lot of similarity to experimentally observed ones. By adding the cross-section of the snake to this shape, the 3-D snake geometry at different instants of time during undulation can be generated. A three-dimensional CFD study using ANSYS is performed on these shapes assuming quasi-steady flow. The computed average glide angle agrees well with experimental data, which shows promise for the proposed undulatory model. The current work provides a better understanding of the undulatory motion and may lead to advances in the development of unconventional micro air vehicles and snake robots, apart from biomimetics.

ICACCI--01.6 16:10 Edge PSO - A Recombination Operator Based PSO Algorithm for Solving TSP
Vaisakh S and Akhil P m (Indian Institute Of Space Science And Technology, India); Asharaf S (IIITM-K, India)

We propose a novel approach for solving the TSP using PSO, namely edge-PSO, through intelligent use of the edge recombination operator. We observed that the edge recombination operator, originally proposed for the Genetic Algorithm, can be used as a velocity operator for Particle Swarm Optimization so as to direct the search effectively to better corners of the hypercube corresponding to the solution space in each iteration, thus significantly reducing the number of iterations required to find the optimum solution. The edge-PSO algorithm not only improved the convergence rate but also produced near-optimal solutions, with accuracy better than that obtained from GA, even without the use of a local search procedure, for standard instances from TSPLIB.
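The edge recombination operator itself is the classic GA crossover the paper repurposes: build the union of both parents' adjacency lists, then grow a child tour that greedily preserves parental edges, preferring the neighbour with the fewest remaining edges. A standard sketch (the random-restart fallback and tie-breaking are common conventions, not necessarily the paper's exact variant):

```python
import random

def edge_recombination(p1, p2, rng=random):
    """Edge recombination crossover: child tour preserving as many
    parent adjacencies as possible."""
    def edge_map(tour):
        n = len(tour)
        e = {}
        for i, city in enumerate(tour):
            e.setdefault(city, set()).update({tour[i - 1], tour[(i + 1) % n]})
        return e

    # union of both parents' adjacency sets
    emap = edge_map(p1)
    for city, adj in edge_map(p2).items():
        emap[city] |= adj

    current = p1[0]
    child = [current]
    while len(child) < len(p1):
        for adj in emap.values():
            adj.discard(current)          # current is no longer available
        candidates = emap.pop(current)
        if candidates:
            # prefer the neighbour with the shortest remaining edge list
            current = min(candidates, key=lambda c: (len(emap[c]), rng.random()))
        else:
            current = rng.choice(sorted(emap))   # dead end: random unvisited city
        child.append(current)
    return child
```

In the paper's edge-PSO this operator plays the role of the velocity update, recombining a particle's tour with its personal/global best instead of with a second GA parent.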

ICACCI--01.7 16:30 Conditional Adherence Based Classification of Transactions for Database Intrusion Detection and Prevention
Indu Singh, Vaibhav Darbari, Lakshya Kejriwal and Aditya Agarwal (Delhi Technological University, India)

In recent times, database security has become a major concern of organizations, and protection against privilege abuse has gained pivotal importance. We present a novel approach for the classification of database transactions based on association rules and cluster analysis (CDTARCA), which detects and prevents malicious transactions from modifying any sensitive information in a database. Association rules are obtained by mining user data access patterns, and role profiles are generated by clustering the user activity parameters from database logs. The extent of adherence to the mined rules, along with the membership of the current user profile in the aforementioned role profile clusters, is used to classify the transaction as either malicious or non-malicious. The classification procedure uses adherence as the metric, since real-world transactions are never fully compliant with existing user behavior patterns. Our experimental evaluation on a typical banking dataset shows that our CDTARCA algorithm works effectively in detecting and preventing malicious transactions in database systems.

ICACCI--01.8 16:50 Modified Gammatone Frequency Cepstral Coefficients to Improve Spoofing Detection
Arun Das K, Kuruvachan K George, Santhosh C Kumar and Veni S (Amrita Vishwa Vidyapeetham, India); Ashish Panda (Tata Consultancy Services, India)

Voice spoofing is one of the major challenges that need to be addressed in the development of robust speaker verification (SV) systems. Therefore, it is necessary to develop systems (spoofing detectors) that are able to distinguish between genuine and spoofed speech utterances. In this work, we propose the use of modified gammatone frequency cepstral coefficients (MGFCC) as a spoofing countermeasure for enhancing the performance of spoofing detection. We also compare the effectiveness of GMM-based spoofing detectors developed using mel frequency cepstral coefficients (MFCC), gammatone frequency cepstral coefficients (GFCC), modified group delay cepstral coefficients (MGDCC) and cosine normalized phase cepstral coefficients (CNPCC) with that of MGFCC. The experimental results on the ASVspoof 2015 database show that MGFCC outperforms the magnitude-based (MFCC and GFCC) and phase-based (MGDCC and CNPCC) countermeasures on the known attack conditions. Further, we performed a score-level fusion of the systems developed using MFCC, MGFCC, MGDCC and CNPCC. It is observed that the fused system significantly outperforms all the individual systems for the known and unknown attack conditions of the ASVspoof 2015 database.
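What distinguishes GFCC-family features from MFCC is the auditory filterbank: gammatone filters spaced on the ERB-rate scale rather than triangular filters on the mel scale. The centre-frequency spacing can be sketched with the commonly used Glasberg-Moore constants (this is the standard ERB formulation, not necessarily the paper's exact MGFCC variant):

```python
import math

def erb_space(f_low, f_high, n):
    """n gammatone centre frequencies equally spaced on the ERB-rate
    scale (Glasberg & Moore constants)."""
    def hz_to_erb(f):
        return 21.4 * math.log10(1 + 0.00437 * f)
    def erb_to_hz(e):
        return (10 ** (e / 21.4) - 1) / 0.00437
    lo, hi = hz_to_erb(f_low), hz_to_erb(f_high)
    return [erb_to_hz(lo + i * (hi - lo) / (n - 1)) for i in range(n)]
```

The resulting centres are packed densely at low frequencies and sparsely at high ones, mimicking cochlear resolution; the cepstral coefficients are then obtained, as with MFCC, by a DCT of the log filterbank energies.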

ICACCI--01.9 17:10 NvCloudIDS: A Security Architecture to Detect Intrusions At Network and Virtualization Layer in Cloud Environment
Preeti Mishra (Doon University, Dehradun, India); Emmanuel Shubhakar Pilli (Malaviya National Institute of Technology, Jaipur, India); Vijay Varadharajan (Macquarie University, Australia); Uday Tupakula (The University of Newcastle, Australia)

Today we are living in the era of Cloud Computing, where services are provisioned to users on demand and on a pay-per-use basis. On one side, Cloud Computing has made things easier, but it has also opened new doors for cyber attackers. In this paper, we propose an efficient security architecture, named NvCloudIDS, to deal with intrusions at the network and virtualization layers in a Cloud environment. NvCloudIDS performs behavioral analysis of network traffic coming to or going from the Cloud Networking Server (CNS) and provides a first level of defense against intrusions at the network level. It also performs Virtual Machine (VM) memory introspection and VM traffic analysis at the hypervisor layer of the Cloud Compute Server (CCoS) and provides a second level of defense at the virtualization level. The architecture of NvCloudIDS is primarily designed to improve the robustness and attack-detection power of the IDS by leveraging Virtual Machine Introspection (VMI) and machine learning techniques. The framework is validated with a recent intrusion dataset (UNSW-NB) and malware binaries collected from research centers, and the results seem promising.

ICACCI--01.10 17:30 Energy Efficient Algorithms to Maximize Lifetime of Wireless Sensor Networks
Srikanth Jannu (Vaagdevi Engineering College, India); Prasanta Kumar Jana (Indian Institute of Technology(ISM) Dhanbad, India)

In a wireless sensor network (WSN), all the sensor nodes (SNs) are severely energy constrained. Therefore, maximizing the lifetime of the network is the main objective in designing algorithms. In many applications of WSNs, the SNs closer to the base station (sink) are overburdened, as they are responsible for relaying data of the entire region to the sink. Consequently, the energy of these SNs gets depleted very fast, which leads to network partitioning. This phenomenon is usually called the energy hole or hotspot problem. In this paper, an unequal clustering and routing algorithm is presented that makes efficient use of the residual energy of the SNs as a solution to the hotspot problem. The algorithms are tested on various WSN scenarios alongside some existing algorithms, and various performance metrics show the efficacy of the proposed approach.
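
Unequal clustering schemes typically shrink the competition radius of cluster heads as they get closer to the sink, so that near-sink clusters stay small and their heads keep energy for relaying. A common rule of thumb for this radius is sketched below (an EEUC-style formula for illustration, not necessarily this paper's exact rule; `r_max` and the weight `c` are illustrative parameters):

```python
def competition_radius(d_to_sink, d_min, d_max, r_max=50.0, c=0.5):
    """Unequal-clustering rule of thumb: the cluster competition radius
    decreases linearly as a node gets closer to the sink, mitigating
    the energy-hole (hotspot) problem near the base station."""
    frac = (d_max - d_to_sink) / (d_max - d_min)  # 1 at the sink side, 0 far away
    return (1.0 - c * frac) * r_max
```

Nodes at the maximum distance get the full radius, while nodes nearest the sink get only (1 - c) of it.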

Thursday, September 22

Thursday, September 22 14:30 - 17:30 (Asia/Kolkata)

ICACCI--09: Best Paper Session-II

Room: LT-9 (Academic Area)
Chairs: Satyanarayana V Nandury (CSIR-Indian Institute of Chemical Technology & Academy of Scientific & Innovative Research, India), Viral Nagori (GLS Institute of Computer Technology (MCA) & GLS University, India)
ICACCI--09.1 14:30 ELM Based Imputation-Boosted Proactive Recommender Systems
Punam Bedi (University of Delhi, India); Richa Singh (Central University of South Bihar, India); Sumit Agarwal and Veenu Bhasin (University of Delhi, India)

Proactive recommender systems are smart applications which provide (i.e., push) pertinent recommendations to users based on their current tasks or interests. The recommendation algorithms employed in these systems usually compute a similarity score or develop a model offline using training data to generate online recommendations. As training in proactive recommender systems is very time consuming when the availability of items changes often and rapidly, existing recommendation algorithms are less effective in such application domains. To address this problem, we present a proactive recommender system that generates real-time recommendations using the proposed Extreme Learning Machine based Imputation-Boosted Collaborative Filtering (ELMICF) algorithm. The Extreme Learning Machine (ELM) is a machine learning algorithm which considerably reduces the time required for training the system, as its learning process is very fast; it has been used in the literature for numerous classification, generalization and prediction applications. ELMICF first employs an imputation technique to handle data sparseness in the input user-item rating matrix and then uses the ELM as a classifier to predict the novel ratings. A prototype of the system has been implemented for restaurant recommendations to show the feasibility of our proposed approach. The performance of ELMICF is compared with MLP/ANN and naïve Bayes based classification techniques using normalized Discounted Cumulative Gain (nDCG), average precision, training time and mean prediction time metrics.
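
The speed of ELM comes from fixing the hidden layer at random and solving only the output weights in closed form, so there is no iterative training loop. A minimal numpy sketch of this standard ELM recipe (an illustrative baseline, not the authors' ELMICF implementation; the layer size and tanh activation are arbitrary choices):

```python
import numpy as np

def elm_train(X, T, n_hidden=50, rng=None):
    """Train a basic Extreme Learning Machine: input weights and biases
    are random and fixed; only the output weights are solved for via a
    least-squares pseudo-inverse, which is what makes training fast."""
    rng = np.random.default_rng(rng)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer outputs
    beta = np.linalg.pinv(H) @ T                     # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

With more hidden units than training samples, the pseudo-inverse solution interpolates the training targets almost exactly.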

ICACCI--09.2 14:50 CBCARS: Content Boosted Context-Aware Recommendations Using Tensor Factorization
Anjali Gautam, Parila Chaudhary, Kunal Sindhwani and Punam Bedi (University of Delhi, India)

Matrix Factorization (MF) is widely used by researchers in the Collaborative Filtering (CF) technique to generate recommendations. In the literature, it is used to predict the missing ratings by approximating the two-dimensional rating matrix. These predictions do not analyze items (item content) or users (user context). Integrating item content and user context increases the quality of recommendations, but incorporating them calls for a modified version of the MF technique, as the data can no longer be represented using a two-dimensional matrix. This paper proposes a hybrid RS known as the Content Boosted Context-Aware Recommender System (CBCARS), which incorporates item content and user context using the principle of Tensor Factorization. A tensor, which is a generalization of a matrix to multiple dimensions, is used to accommodate item content and user contextual information. Recommendations are generated using Tensor Factorization (TF), which factorizes the sparse user-item-context tensor to fill in the missing values. The performance of CBCARS is evaluated using Mean Average Error (MAE) with experiments conducted on a restaurant dataset. Results show that CBCARS outperforms the three approaches of traditional MF, content-boosted MF and context-aware MF by decreasing the MAE by 65%, 47% and 15% respectively.

ICACCI--09.3 15:10 Penalty Based Weighted Cooperative Spectrum Sensing Using Normal Factor Graph
Guman Shekhawat (MNIT Jaipur, India); Purnendu Karmakar (The LNM Institute of Information Technology, India)

In this paper, we consider the problem of spectrum sensing in Cognitive Radio Networks (CRN). In these networks, the Cooperative Spectrum Sensing (CSS) technique is used to overcome the problems of hidden terminals, fading and shadowing. We propose a Weighted Cooperative Spectrum Sensing (WCSS) framework using a Normal Factor Graph (NFG). The WCSS framework improves the sensing ability and accuracy of the network. The proposed probabilistic inference modeling of the WCSS algorithm updates the weight factor values based on Secondary User (SU) performance. Simulation results show that the performance of the proposed scheme is better than that of CSS using an NFG in a time-varying fading environment.

ICACCI--09.4 15:30 SIMO Diversity Reception in Rayleigh and Rician Fading with Imperfect Channel Estimation
Yash Vasavada (Dhirubhai Ambani Institute of Information and Communication Technology, India); Jeffrey Reed (Virginia Tech, USA); A. A. (Louis) Beex (DSPRL - Wireless@VT & Virginia Tech, USA)

Several new results are presented for optimum (diversity) combining with imperfect channel estimates (OC-ICE). A thresholding effect is demonstrated for the Rician fading channel; it is shown that the OC-ICE performance is near optimal (i.e., similar to that for the maximum ratio combiner (MRC)) below a (Rician $K$ factor dependent) threshold irrespective of the pilot channel received SNR. Above the threshold, the performance degrades and in certain cases approaches the performance of the Rayleigh fading case. A new analytical evaluation of the bit error probability of the OC-ICE in Rician fading is presented. This analysis is based on an observation made in this paper that an effect of imperfect channel estimates is to effectively lower the Rician $K$ factor.

ICACCI--09.5 15:50 Power Line Interference Removal From ECG Signals Using Wavelet Transform Based Component-Retrieval
Tanushree Sharma (Malaviya National Institute of Technology, India); Kamalesh Kumar Sharma (Malaviya National Institute of Technology, Jaipur, India)

Power line interference (PLI) is a major source of noise in the ECG signal which can severely affect its interpretation. Moderate PLI can mask the finer features of the underlying signal whereas severe interference can completely overwhelm it. The techniques for PLI removal proposed in the literature are mostly based on fixed notch and adaptive filters. The fixed notch filters perform poorly in case of PLI frequency variations, whereas adaptive filters suffer from issues such as slow convergence and requirement of an external reference signal. In this paper, we propose an alternative approach for PLI removal from the ECG signal which overcomes the limitations of both fixed and adaptive filters. We use wavelet transform (WT) based ridge retrieval and reconstruction technique that is inspired by the synchrosqueezing framework to obtain the PLI component, which is subtracted from the contaminated signal to remove the PLI. The proposed method is evaluated in two worst-case scenarios of PLI frequency and amplitude variation, for healthy and pathological cases and the simulation results show significantly better performance of the proposed method over fixed and adaptive filtering techniques, as quantified by the performance indices.
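
As a simple point of comparison for the component-retrieval idea, a PLI tone can also be estimated by least-squares sine fitting and subtracted from the contaminated signal. The sketch below is a stand-in illustration, not the paper's wavelet-transform ridge retrieval, and it assumes a known, fixed interference frequency (the very limitation the proposed method avoids):

```python
import numpy as np

def remove_pli(signal, fs, f0=50.0):
    """Estimate the f0-Hz interference component by least-squares
    fitting of a sine/cosine pair, then subtract it from the signal."""
    t = np.arange(len(signal)) / fs
    # design matrix: in-phase and quadrature components at f0
    A = np.column_stack([np.sin(2 * np.pi * f0 * t),
                         np.cos(2 * np.pi * f0 * t)])
    coef, *_ = np.linalg.lstsq(A, signal, rcond=None)
    return signal - A @ coef
```

For a stationary tone at a known frequency this recovers the underlying signal almost exactly; when the PLI frequency or amplitude drifts, tracking methods such as the proposed one are needed.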

ICACCI--09.6 16:10 Automatic Program Generation for Heterogeneous Architectures
Deepika H V and Mangala N (C-DAC, India); Nelaturu Sarat Chandra Babu (Research & Development, C-DAC, India)

New-generation accelerator-based HPC clusters use the OpenCL language to exploit heterogeneous compute power. OpenCL is a portable language which allows dynamic binding of heterogeneous compute devices at runtime. It uses a kernel programming model by which different parts of the program can execute simultaneously on heterogeneous devices. An OpenCL program is made up of host code, an execution environment and device code, making the program lengthy and complex. This paper describes OpenCLGen, a software tool which automatically generates OpenCL programs for single or multiple CPU or GPU devices. A complete OpenCL program is generated with minimum programming effort from the kernel code supplied by the user, thereby helping to improve programmer productivity. OpenCLGen produces a downloadable package aggregating the host code, header files, runtime for the kernel code, Makefile and user instructions. This paper describes the design of the OpenCLGen software and offers a comparison to other similar software.

ICACCI--09.7 16:30 An Empirical Study to Assess the Effects of Refactoring on Software Maintainability
Ruchika Malhotra (Delhi Technological University, India); Anuradha Chug (Guru Gobind Singh Indraprastha University, India)

Maintenance is the most expensive phase of the software life cycle, and during this process refactoring is performed to improve the code without affecting its external behaviour. This study examines the effects of refactoring on maintainability using five proprietary software systems. Internal quality attributes were measured using a design metrics suite, whereas external quality attributes such as the level of abstraction, understandability, modifiability, extensibility and reusability were measured through expert opinion. The original versions of the software were compared with the refactored versions, and the changes in quality attributes were mapped to maintainability. The results reveal that refactoring significantly improves software quality and extends software life. It was also found that even though refactoring is tedious and might introduce errors if not implemented with utmost care, it is still advisable to refactor the code frequently to increase maintainability. The results of this study are useful to project managers in identifying refactoring opportunities while maintaining a balance between re-engineering and over-engineering.

ICACCI--09.8 16:50 Entropy Based Informative Content Density Approach for Efficient Web Content Extraction
G. p. Sajeev (Govt Engineering College Wayanad & Amrita Vishwa Vidyapeetham, India); Manjusha Annam (Amrita Vishwa Vidyapeetham, India)

Web content extraction is a popular technique for extracting the main content from web pages while discarding the irrelevant content. Extracting only the relevant content is a challenging task, since it is difficult to determine which part of a web page is relevant and which part is not. Among the existing web content extraction methods, density-based content extraction is a popular one. However, density-based methods suffer from poor efficiency, especially when pages contain little information and long noisy segments. We propose a web content extraction technique built on an Entropy-based Informative Content Density algorithm (EICD). The proposed EICD algorithm initially analyses high text-density content; further, an entropy-based analysis is performed on the selected features. The key idea of EICD is to use information entropy to represent the knowledge that correlates with the amount of informative content in a page. The proposed method is validated through simulation and the results are promising.
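
The entropy component of such a score can be illustrated with a toy function. This is our own illustration of the general idea, not the paper's EICD formula; the chars-per-tag density measure is a common heuristic from the content-extraction literature:

```python
import math
from collections import Counter

def shannon_entropy(text):
    """Shannon entropy (bits per character) of a text block; repetitive
    boilerplate has low entropy, informative prose has higher entropy."""
    if not text:
        return 0.0
    counts = Counter(text)
    n = len(text)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def informative_score(block_text, block_tag_count):
    """Toy informative-content score: text density (characters per
    HTML tag) weighted by the entropy of the block's text."""
    density = len(block_text) / max(block_tag_count, 1)
    return density * shannon_entropy(block_text)
```

A link-heavy navigation block (many tags, repetitive text) scores low on both factors, while a long article paragraph scores high.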

Wednesday, September 21

Wednesday, September 21 14:30 - 18:15 (Asia/Kolkata)

ICACCI--02: Signal/Image/Video/Speech Processing/Computer Vision/Pattern Recognition (Regular Papers)

Room: LT-1 (Academic Area)
Chairs: Kumar Rajamani (Robert Bosch Engineering and Business Solutions Limited, India), Shakti Awaghad (J. D. College of Engineering, India)
ICACCI--02.1 14:30 An Unsupervised Modified Spatial Fuzzy C-mean Method for Segmentation of Brain MR Image
Kamarujjaman Sk (Techno India University, West Bengal, India); Susanta Chakraborty (Indian Institute of Engineering Science and Technology, India); Mausumi Maitra (Govt. College of Engineering and Ceramic Technology, India)

Unsupervised segmentation of Magnetic Resonance Images (MRI), especially brain MRI, plays an eminent role in medical and scientific research. The recent trend in medical image analysis and segmentation builds on supervoxels and the spatial information within the supervoxels of magnetic resonance images. We propose a new local spatial membership function, embedded in an unsupervised modified spatial fuzzy c-means method, to compute the optimized cluster centers with a minimum number of iterations. The proposed local spatial membership function deals with the noise sensitivity and uncertainties caused by heterogeneity and the bias field. A comparative study shows that the proposed method significantly improves the cluster validation functions (i.e., partition coefficient, partition entropy, etc.) as compared to recently published FCM-based works.
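
For reference, the standard (non-spatial) fuzzy c-means iteration that the proposed spatial membership function modifies can be sketched as follows. This is the textbook baseline only, without the paper's local spatial term; cluster count, fuzzifier and iteration count are illustrative:

```python
import numpy as np

def fcm(X, c=3, m=2.0, n_iter=100, seed=0):
    """Standard fuzzy c-means: alternate the centroid update (weighted
    by memberships^m) and the membership update (inverse-distance rule)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # rows are fuzzy memberships
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # u_ik = d_ik^(-2/(m-1)) / sum_j d_ij^(-2/(m-1))
        p = 2.0 / (m - 1.0)
        U = 1.0 / (d ** p * np.sum(d ** (-p), axis=1, keepdims=True))
    return centers, U
```

The paper's modification would inject neighbourhood information into the membership update step.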

ICACCI--02.2 14:45 On the Adaptive Motion Estimation in Video Coding Based on Video Content Analysis
Kattula Shyamala (Osmania University College of Engineering, Hyderabad & Department of CSE, India); Mallesham Dasari (Stony Brook University, USA)

In this paper, we propose a novel, efficient and fast motion estimation (ME) algorithm for High Efficiency Video Coding (HEVC). The proposed work is based on video content, which gives us the provision to select the best macroblock mode and a suitable search pattern for a given slice of the video frame. Each frame is divided into multiple slices based on a tiered scene labeling mechanism, and a suitable search pattern is estimated for each slice by investigating the texture of the slice. The inter-space distance between a macroblock in the current frame and the candidate macroblock in the previous frame is calculated for all the macroblocks in a slice to find the block movement factor (BMF), which identifies an appropriate search pattern for that slice. Also, the disparity between neighboring blocks is measured for each slice to predict the mode to be considered for block matching in the next frame. The experimental analysis is performed with the x265 video codec and shows a considerable improvement in compression performance.

ICACCI--02.3 15:00 Real-time Panorama Composition for Video Surveillance using GPU
Pritam Prakash Shete, Dinesh Sarode and Surojit Bose (Bhabha Atomic Research Centre, India)

Image stitching algorithms combine multiple low-resolution images into a single high-resolution composite image with a larger field of view for video surveillance. In this work, we put forward and realize real-time panorama composition for a video surveillance application using the power of a GPU. We utilize the cross-platform OpenGL graphics library for real-time online image processing, parallelizing panorama composition with OpenGL objects such as texture objects, vertex buffer objects and framebuffer objects for image warping as well as edge blending to create a seamless panoramic image. Our panorama composition algorithm is divided into two stages for image sources with fixed relative positions. In an offline stage, we compute inverse lookup maps and feather weight masks for each input image using the OpenCV image processing library. Subsequently, in an online stage, we use these inverse lookup maps to generate warped images, which are then edge-blended with each other using the feather weight masks with the help of OpenGL objects. Our panorama composition is more than 8.5 times faster than the CUDA-optimized OpenCV realization. It produces a high-resolution seamless panorama from nine input image streams, each at 800x600 resolution, at more than 75 frames per second using less than 90 MB of GPU memory.

ICACCI--02.4 15:15 Human Action Recognition Using RGB-D Sensor and Deep Convolutional Neural Networks
Javed Imran (IIT Roorkee); Praveen Kumar (Visvesvaraya National Institute of Technology)

In this paper, we propose an approach to recognize human actions by the fusion of RGB and depth data. Firstly, Motion History Images (MHI) are generated from the RGB videos, representing the temporal information of the action. Then the original depth data is rotated in 3D point clouds, and three Depth Motion Maps (DMM) are generated over the entire depth sequence, corresponding to the front, side and top projection views. A 4-channel Deep Convolutional Neural Network is trained, where the first channel classifies the MHIs and the remaining three handle the front, side and top views generated from the depth data, respectively. The proposed method is evaluated on the publicly available UTD-MHAD dataset, which contains both RGB and depth videos. Experimental results show that combining the two modalities gives better recognition accuracy than using each modality individually.
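
The MHI step can be illustrated with a minimal routine using the standard formulation: recently moved pixels are bright and older motion decays toward zero. The decay horizon `tau` and the frame-difference threshold below are illustrative, not the paper's settings:

```python
import numpy as np

def motion_history_image(frames, tau=10, thresh=30):
    """Build a Motion History Image from a grayscale frame sequence:
    pixels that moved in the latest frame are set to tau, and previous
    motion decays by one step per frame (clipped at zero)."""
    mhi = np.zeros_like(frames[0], dtype=np.float32)
    prev = frames[0].astype(np.int16)
    for frame in frames[1:]:
        cur = frame.astype(np.int16)
        moving = np.abs(cur - prev) > thresh          # simple frame differencing
        mhi = np.where(moving, tau, np.maximum(mhi - 1, 0))
        prev = cur
    return mhi / tau                                  # normalise to [0, 1]
```

The resulting single-channel image encodes when each pixel last moved, which is what the first CNN channel consumes.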

ICACCI--02.5 15:30 Novel Feature Ranking Criteria for Interval Valued Feature Selection
Guru D S (University of Mysore); Vinay Kumar N (University of Mysore, India)

In this paper, novel feature ranking criteria suitable for supervised interval-valued data are introduced. The ranking criterion is used to rank the features based on their relevancy prior to feature selection for pattern classification. In our work, initially, a vertex transformation approach is applied to the interval-valued data to obtain crisp data. Then, the proposed feature ranking criterion is applied to the vertex interval data to rank the features based on their relevancy. This is followed by the selection of the top k ranked features from the given set of d interval features. The obtained feature subset is then evaluated using a suitable learning algorithm. The efficacy of the proposed ranking criteria is validated using three benchmark interval-valued datasets and two symbolic classifiers. Finally, a comparative analysis is given to uphold the superiority of the proposed model in terms of classification accuracy.

ICACCI--02.6 15:45 Supervised and Unsupervised Learning in Animal Classification
N Manohar (University of Mysore, India); Yh Sharath Kumar (MIT Mysore, India); G Hemanthkumar (University of Mysore, India)

In this work, we have developed supervised and unsupervised classification systems to classify animals. Initially, the animal images are segmented using a maximal region merging segmentation algorithm, and Gabor features are extracted from the segmented images. Further, the extracted features are reduced using supervised and unsupervised methods. In the supervised method, we use the Linear Discriminant Analysis (LDA) dimension reduction technique; the reduced features are fed into a symbolic classifier for classification. In the unsupervised method, we use the Principal Component Analysis (PCA) dimension reduction technique; the reduced features are fed into the k-means algorithm for grouping. Experimentation has been conducted on a dataset of 2000 animal images consisting of 20 different classes of animals with varying percentages of training samples. It is observed that the proposed supervised classification system achieves relatively good classification accuracy compared to the unsupervised method.
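
The unsupervised pipeline (PCA reduction followed by k-means grouping) can be sketched in plain numpy. This is an illustrative baseline only; the paper's Gabor features, component count and number of clusters are not reproduced here:

```python
import numpy as np

def pca_reduce(X, k=2):
    """Project feature vectors onto the top-k principal components
    (computed via SVD of the mean-centred data)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def kmeans(X, k=2, n_iter=50, seed=0):
    """Plain k-means on the reduced features: alternate nearest-centre
    assignment and centre recomputation."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels
```

In the supervised branch, `pca_reduce` would be replaced by LDA and `kmeans` by the symbolic classifier.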

ICACCI--02.7 16:00 Representing Image Captions as Concept Graphs Using Semantic Information
Swarnendu Ghosh and Nibaran Das (Jadavpur University, India); Teresa F. Gonçalves (Universidade de Évora, Portugal); Paulo Quaresma (University of Evora, Portugal)

Conceptual interpretation of languages has gathered peak interest in the world of artificial intelligence. The challenge of modeling the various complications involved in a language is the main motivation behind this area of research. Our main focus in this work is a conceptual graphical representation for image captions. We have used discourse representation structures to obtain semantic information, which is further modeled into a graphical structure. The effectiveness of the model is evaluated by a caption-based image retrieval system, where retrieval is performed by computing subgraph-based similarity measures. The best retrievals were given an average rating of 2.95±0.707 out of 4 by a group of 25 human judges. The experiments were performed on a subset of the SBU Captioned Photo Dataset. The purpose of this work is to establish the cognitive sensibility of this approach to caption representation.

ICACCI--02.8 16:15 Stockwell Transform Based Face Recognition: A Robust and an Accurate Approach
B H Shekar and Rajesh D S (Mangalore University, India)

In this work, we have designed a local and a global descriptor based on the Stockwell transform. The Stockwell transform time-frequency distribution possesses a better time-frequency resolution than that of the Gabor transform (the short-term Fourier transform (STFT) with a Gaussian window). The imaginary part of the Stockwell transform time-frequency distribution is found to be robust against illumination variation; hence it should be possible to design better descriptors using the Stockwell transform. We have investigated this idea with the face recognition problem in focus. Our experiments, conducted on well-known face databases with pose, light and expression variations, demonstrate the discriminative capability of our descriptors.

ICACCI--02.9 16:30 Recognition of a Person in a Crowd and Occluded Face Images Based on Skin Color Segmentation
Naveena M (University of Mysore); G Hemanthkumar and P Navya (University of Mysore, India)

Skin color segmentation, like many other methodological innovations, has been applied to many face recognition techniques. Recently, partial face recognition has been attracting much attention in the information science and technology community. This research explores new ideas for identifying a particular person in a crowd from occluded facial information. The paper derives from an in-depth study of face recognition under partial visibility, covering factors such as chrominance, illumination, pose variations and saturation. We present a generic framework for a partial face recognition system that recognizes a particular person in a crowd, and discuss the variants that are frequently encountered by a partial face recognizer. Several well-known crowd or photo-gallery detection and partial face recognition algorithms, such as SVMs and neural networks, are also explored. The proposed methodology achieves good accuracy and recognition rate.
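
Skin color segmentation is commonly done by thresholding in YCbCr space, where the chrominance of skin falls in a fairly compact range. The sketch below uses widely cited Cb/Cr ranges as an illustrative baseline; the thresholds are not necessarily those used in the paper:

```python
import numpy as np

def skin_mask(rgb):
    """Rule-of-thumb skin segmentation in YCbCr space: convert RGB to
    Cb/Cr (BT.601 coefficients) and keep pixels in the commonly cited
    ranges 77 <= Cb <= 127 and 133 <= Cr <= 173."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)
```

Connected regions of the resulting mask are then candidate face areas to pass to the recognizer.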

ICACCI--02.10 16:45 Online Discriminative Tracking with Optimal Feature Selection
Kaushik Gowda (MSRUAS, India); Lasitha Mekkayil (M. S. Ramaiah University of Applied Sciences, India); Hariharan Ramasangu (Research, India)

An efficient online tracking algorithm to track an object in a video is a challenging task. Existing algorithms update the object model based on the selected features, and hence cannot address many of the challenges. Most discriminative trackers use a sampling and labelling strategy for feature extraction, but not all features selected through this strategy are informative or contribute to effective tracking. The proposed method introduces a feature selection scheme, called optimal feature selection, which extracts the features that carry relevant information; the contribution of relevant features can improve the performance of the classifier drastically. The method utilizes an average likelihood value and a cosine similarity measure for the selection of optimal features. Through optimal feature selection, the algorithm can alleviate the drift problem caused by irrelevant features. The proposed algorithm is an efficient online tracker which handles most of the challenges and performs well compared with existing state-of-the-art trackers.
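
The cosine similarity part of the selection step can be sketched as follows. This is a simplified stand-in: the paper combines cosine similarity with an average likelihood value, which is omitted here, and the "template" vector is our illustrative notion of the current appearance model:

```python
import numpy as np

def select_features(candidates, template, k=5):
    """Rank candidate feature vectors by cosine similarity to a
    template vector and return the indices of the top-k, so that only
    features aligned with the appearance model feed the classifier."""
    sims = np.array([
        f @ template / (np.linalg.norm(f) * np.linalg.norm(template))
        for f in candidates
    ])
    return np.argsort(sims)[::-1][:k]
```

Discarding low-similarity features is what limits drift from irrelevant or background-contaminated samples.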

ICACCI--02.11 17:00 Brain Tumor Segmentation From MR Brain Images Using Improved Fuzzy c-Means Clustering and Watershed Algorithm
Benson C C, Deepa V and Lajish Lajish V L (University of Calicut, India); Kumar Rajamani (Robert Bosch Engineering and Business Solutions Limited, India)

The brain is the master and commanding organ of the human body, and it is affected by many dangerous diseases. A brain tumor, or neoplasm, is an abnormal growth of tissues in the brain and surrounding regions, and MRI is one of the methods used for brain tumor diagnosis. Many algorithms have been proposed for the automatic extraction of brain tumor tissues from MR brain images; Fuzzy c-Means (FCM) clustering and the watershed algorithm are two commonly used methods. In this paper we implement improved versions of both. For FCM clustering, we propose an effective method for initial centroid selection based on a histogram calculation, and for the watershed algorithm we propose an atlas-based marker detection method to avoid the over-segmentation problem. Before applying the segmentation algorithms, we perform three pre-processing operations: noise removal, skull stripping and contrast enhancement. We achieved Dice and Tanimoto coefficients of 88.91 and 81.56 for the improved FCM clustering, and Dice and Tanimoto coefficients of 93.13 and 88.64 for the improved watershed algorithm.
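
One simple way to seed FCM centroids from an intensity histogram, in the spirit of the proposed initialisation, is to take the densest intensity bins as starting centers. This is our own illustration of the general idea; the paper's exact histogram calculation may differ:

```python
import numpy as np

def histogram_centroids(image, c=3):
    """Pick initial FCM cluster centroids as the midpoints of the c most
    populated intensity bins of the image histogram, so iteration starts
    near the dominant tissue intensities instead of random values."""
    hist, edges = np.histogram(image.ravel(), bins=64, range=(0, 256))
    top = np.argsort(hist)[-c:]                  # indices of the c densest bins
    centers = (edges[top] + edges[top + 1]) / 2  # bin midpoints
    return np.sort(centers)
```

Starting near the dominant intensity modes typically reduces the number of FCM iterations needed to converge.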

ICACCI--02.12 17:15 Content-Aware Image Retargeting with Controlled Distortion for Small Displays
Prasun Tripathi (Indian Institute of Technology, Patna, India); Rajarshi Pal (Institute for Development and Research in Banking Technology (IDRBT), Hyderabad, India)

The problem of image retargeting is to fit a large image on a small display. Simple down-scaling resizes all contents in the image by a uniform factor; it may shrink the important contents so much that viewers have difficulty recognizing them. In this paper, a novel content-aware image retargeting technique is proposed. According to this technique, the important objects are identified based on an importance map, and each important object is encompassed with a minimum bounding rectangle. These important rectangles are mapped into the target space. The remaining areas in the source as well as in the target space are divided into a minimum number of rectangles, referred to as unimportant rectangles. The sizes of the important rectangles in the target space are enlarged and the sizes of the unimportant rectangles are shrunk iteratively. To control the distortions in unimportant areas, similarity scores for corresponding unimportant rectangles in the source and retargeted images are computed; if the similarity score for any unimportant rectangle drops below an acceptable threshold, the growth/shrinkage of each rectangle is stopped. Subjective quality assessment by a group of volunteers shows that the proposed technique performs better than several existing content-aware image retargeting techniques.

ICACCI--02.13 17:30 Video Shot Boundary Detection Using Midrange Local Binary Pattern
Rashmi B S (Karnataka State Open University (KSOU), India); Nagendraswamy H S (University of Mysore, India)

The fundamental step in content-based video retrieval is shot boundary detection. This step is essential for characterizing videos in any video processing system, yet reliable detection of shots in videos is still a challenging issue. In this work, we address automatic detection of abrupt shots in video sequences. We propose a method termed Midrange LBP (MRLBP), which enhances the discriminative capability of basic LBP. Each frame of a video is processed to extract LBP histogram values based on midrange statistics for its description. A dissimilarity measure is applied to the feature vectors of adjacent frames, and the distance values obtained are used for the shot detection process with an adaptive threshold approach. To check the efficacy of the proposed method, we carried out experiments on a subset of the standard TRECVID 2001 video data set. The results obtained by the proposed approach outperform existing shot boundary detection algorithms in terms of different performance measures.
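
A plausible reading of the midrange idea is to threshold each 3x3 neighbourhood at its midrange (max + min)/2 instead of at the centre pixel, as basic LBP does. The sketch below implements that interpretation; the paper's exact MRLBP definition may differ:

```python
import numpy as np

def midrange_lbp(img):
    """LBP variant where each 3x3 neighbourhood is thresholded at the
    midrange (max+min)/2 of the neighbourhood rather than the centre
    pixel; each neighbour >= threshold contributes one bit to the code."""
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # the 8 neighbours, clockwise from the top-left corner of the patch
    offsets = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            t = (patch.max() + patch.min()) / 2.0
            code = 0
            for bit, (di, dj) in enumerate(offsets):
                if patch[di, dj] >= t:
                    code |= 1 << bit
            codes[i, j] = code
    return codes
```

A histogram of these codes per frame would then serve as the frame descriptor compared across adjacent frames.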

ICACCI--02.14 17:45 Detection of Aggressive Driving Behavior and Fault Behavior Using Pattern Matching
Jessy George Smith (Cognizant Technology Solutions Limited, India); Mandar Patil and Kirthi Ponnuru (Cognizant Technology Solutions, India)

In time series processing, pattern matching is often used to cater to visual perception of behaviors of interest. The selection of data representation methods and distance measures is driven by domain considerations and is critical for implementation from lab to production scale. This paper discusses use cases from different domains where select pattern matching techniques are used.

In the first use case, a multi-variable pattern matching method has been used to detect and classify driving behavior. The risk associated with aggressive behavior is computed and applied to derive the driving scores that feed into the fleet performance management service.

In the second use case, pattern matching algorithms are used for the real time detection and diagnosis of faults in transformer systems. The evolving risk profile is used as lead indicator of impending failure. This feature is integrated into the condition monitoring system and configured to issue maintenance work orders.

ICACCI--02.15 18:00 A Scalable and Robust Framework for Intelligent Real-time Video Surveillance
Shreenath Dutt Sharma and Ankita Kalra (Indian Institute of Technology (BHU), Varanasi, India)

In this paper, we present an intelligent, reliable and storage-efficient video surveillance system using Apache Storm and OpenCV. As a Storm topology, we have added multiple information extraction modules that only write important content to the disk. Our topology is extensible and capable of adding novel algorithms as per the use case without affecting the existing ones, since all the processing is independent. This framework is also highly scalable and fault tolerant, which makes it a strong option for organisations that need to monitor a large network of surveillance cameras.

ICACCI--02.16 18:15 Partial Differential Equation Diffusion in Complex Domain
Lanlan Li (Shanghai Bell Software Co., Ltd, Shanghai, China); Jinsong Wu (Universidad de Chile, Chile)

In the course of noise removal by means of anisotropic diffusion, it is important to keep the image smooth while preserving image features. In this paper, a new anisotropic diffusion scheme in the complex field is proposed that offers better smoothing while preserving image edges well. Experimental results have shown the effectiveness of the proposed method.
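For readers unfamiliar with the baseline the paper extends, a minimal real-valued Perona-Malik anisotropic diffusion sketch follows; the paper's complex-domain scheme is not reproduced, and kappa, lambda and the iteration count are illustrative parameters.

```python
import numpy as np

def perona_malik(img, n_iter=10, kappa=20.0, lam=0.2):
    """Classical (real-valued) Perona-Malik diffusion: smooth the image
    while preserving edges by scaling each directional difference with
    an edge-stopping function g(d) = exp(-(d/kappa)^2)."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # nearest-neighbour differences (replicated border)
        dn = np.vstack([u[:1], u[:-1]]) - u        # north
        ds = np.vstack([u[1:], u[-1:]]) - u        # south
        de = np.hstack([u[:, 1:], u[:, -1:]]) - u  # east
        dw = np.hstack([u[:, :1], u[:, :-1]]) - u  # west
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

Small (noise-scale) differences are diffused away while large (edge-scale) differences are left nearly untouched, which is the behaviour the abstract's complex-domain variant aims to improve on.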

ICACCI--03: ICACCI-03: Embedded Systems/Computer Architecture and VLSI/Adaptive Systems/ICAIS'16 (Regular Papers)

Room: LT-2 (Academic Area)
Chairs: Abhishek Sharma (The LNM Institute of Information Technology, India), Vineet Sahula (MNIT Jaipur, India)
ICACCI--03.1 14:30 Incorporating Adaptivity Using Learning in Avionics Self Adaptive Software: A Case Study
Rajanikanth N Kashi (International Institute of Information Technology, Bangalore, India); Meenakshi DSouza (IIIT-B, India); S Kumar Baghel and Nitin Kulkarni (International Institute of Information Technology, Bangalore, India)

Self-adaptive software is forging its way into avionics. Such software, while being adaptive, needs to meet safety, determinism, and real-time responsiveness requirements, like all avionics systems. We model avionics self-adaptive software as a multi-agent system. Each agent uses a BDI (Belief Desire Intention) model for adaptiveness and also incorporates learning to address several constraints. We illustrate our approach using a case study of an adaptive Flight Management System (FMS). Our BDI model of the adaptive FMS in NetLogo is adaptive while remaining deterministic and responding in real time. We propose a learning algorithm that agents use to adapt themselves better, and also a way of measuring their adaptivity that provides quantitative gains illustrating the system's adaptability.

ICACCI--03.2 14:45 Area Optimization of Two-Stage Amplifier Using Modified Particle Swarm Optimization Algorithm
Smrity Ratan (IIT (BHU), Varanasi, India); Debalina Mondal and Anima R (NIT Durgapur, India); Chandan Kumar (IIT Bombay, India); Amit Kumar (IIT (BHU), Varanasi, India); Rajib Kar (National Institute of Technology, Durgapur, India)

Modern electronics cannot be imagined without amplifier circuits. When the signal strength at a receiving antenna is very weak, a device is required to increase the strength (power) of the received signal, known as an amplifier. The output of an amplifier is always stronger than its input, so amplifiers are among the simplest yet most important parts of electronic circuits, which is why amplifier design is crucial in analog circuit design. This paper investigates minimizing the area of a two-stage amplifier using the modified particle swarm optimization (MPSO) method while satisfying other constraints. The total transistor area obtained using MPSO in this paper is 2.61×10⁻¹⁰ m², which is, to the best of our knowledge, the best reported; the power dissipation is also much lower, at 1492 µW. The obtained values are verified and validated with the CADENCE simulator. The results show that the use of modified particle swarm optimization makes it possible to achieve better results.
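The paper's MPSO modifications are not specified in the abstract; as a baseline, here is a textbook global-best PSO minimising a stand-in objective (a sphere function rather than the amplifier area model, whose constraints are not given here).

```python
import numpy as np

def pso(f, bounds, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Standard global-best PSO minimising f over the box `bounds`
    (list of (lo, hi) per dimension). The paper's MPSO modifies this
    update rule; what follows is the textbook baseline."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    x = rng.uniform(lo, hi, (n_particles, len(bounds)))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)           # keep particles in bounds
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# sphere function as a stand-in objective
best, val = pso(lambda p: float(np.sum(p ** 2)), [(-5, 5)] * 3)
```

For circuit sizing, f would instead evaluate transistor area from the design variables, with constraint violations penalised in the objective.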

ICACCI--03.3 15:00 Modified 4-2 Compressor Using Improved Multiplexer for Low Power Applications
Dinesh Kumar (GGSIPU (USICT) Delhi, India); Manoj Kumar (GGSIPU, India)

This paper reports a modified 4-2 compressor obtained by improving the multiplexer design, replacing traditional transmission gates with optimized pass transistors. The proposed design shows a power consumption of 16.93 µW compared with 24.99 µW for the traditional design. Further, the power delay product (PDP) of the proposed design is 7.855 fJ compared with 9.071 fJ for the traditional design. Moreover, the effect of substrate bias has been studied for the proposed and traditional designs. The proposed design shows power consumption of 15.98 µW and 10.75 µW compared with 23.92 µW and 16.32 µW for the traditional design, with reverse substrate bias voltages (VSB) of 0.1 V and 0.9 V respectively. The proposed design shows improved PDP of 7.366 fJ and 4.837 fJ compared with 8.515 fJ and 5.206 fJ for the traditional design with VSB of 0.1 V and 0.9 V respectively. The effect of temperature variation has also been studied on the proposed and traditional designs. All designs have been simulated in 0.18 µm technology using SPICE. Results show a significant improvement in power delay product (PDP), transistor count and signal swing in the proposed design.

ICACCI--03.4 15:15 Modeling and Control of One-Link Robotic Manipulator Using Neural Network Based PID Controller
Rajesh Kumar (Netaji Subhas Institute of Technology, New Delhi, University Of Delhi, India); Smriti Srivastava and Jai Ram Prasad Gupta (Netaji Subhas Institute of Technology, India)

The dynamics of a one-link robotic manipulator are complex and nonlinear and hence cannot be easily controlled by a conventional PID controller. The problem becomes more severe when the plant's mathematical model is unknown or only partially known, which makes PID control more difficult because tuning its parameters requires the dynamics of the system. Even if the dynamics are known, the PID parameters must be retuned when external disturbance signals and/or parameter variations occur in the system. In this paper, the PID controller is implemented using a multilayer feed-forward neural network (MLFFNN) for desired trajectory tracking control of a one-link robotic manipulator (the plant). To make the controller adaptive, the plant dynamics are assumed to be unknown, and a separate multilayer feed-forward neural network identification model is used, which approximates the plant's dynamics and operates simultaneously with the controller. A further benefit of the identification model is that it can adjust its own parameters to reflect the effects of disturbance signals and parameter variations on the system, and it provides this information to the controller, which then adjusts its output to compensate for these effects. Simulation results show that the MLFFNN-based PID controller is able to control the plant and track the desired trajectory in the presence of parameter variations and disturbance signals.
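As a point of comparison for the adaptive scheme, a fixed-gain discrete PID tracking a setpoint on a simulated one-link manipulator can be sketched as follows. The plant parameters and gains are illustrative assumptions, and the paper's neural-network tuning and identifier are not reproduced.

```python
import numpy as np

# one-link manipulator: J*th'' + b*th' + m*g*l*sin(th) = tau
# (illustrative parameter values)
J, b, m, g, l, dt = 1.0, 0.5, 1.0, 9.81, 0.5, 0.001

def step_plant(th, om, tau):
    """One Euler step of the manipulator dynamics."""
    alpha = (tau - b * om - m * g * l * np.sin(th)) / J
    return th + dt * om, om + dt * alpha

def pid_track(setpoint, kp=50.0, ki=20.0, kd=10.0, t_end=5.0):
    """Fixed-gain discrete PID (the paper instead adapts the control
    online with a neural network). Returns the final joint angle."""
    th = om = integ = 0.0
    prev_e = setpoint - th  # avoids a derivative kick on the first step
    for _ in range(int(t_end / dt)):
        e = setpoint - th
        integ += e * dt
        deriv = (e - prev_e) / dt
        tau = kp * e + ki * integ + kd * deriv
        th, om = step_plant(th, om, tau)
        prev_e = e
    return th
```

The integral term supplies the gravity-holding torque at the setpoint; when plant parameters drift, these fixed gains degrade, which is the motivation for the paper's neural-network-based retuning.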

ICACCI--03.5 15:30 An Input Test Pattern for Characterization of a Full-Adder and N-Bit Ripple Carry Adder
Manan Mewada (Ahmedabad University & School of Engineering and Applied Science, India); Mazad Zaveri (School of Technology, Pandit Deendayal Petroleum University, India)

A 1-bit full adder (FA) is an important circuit block of many digital CMOS VLSI sub-systems, and its performance is input dependent. Input test patterns play a crucial role in the characterization and analysis of any circuit, including measurement of propagation delay, estimation of power dissipation and functional verification. An improved input test pattern consisting of two sets (primary and supporting) is proposed in this paper. Each set consists of all 57 possible input transitions, including the transitions that lead to maximum propagation delay. Our input test pattern is designed such that any individual FA within an n-bit ripple carry adder (RCA) can be forced through all 56 possible input transitions for measurement of maximum propagation delay, estimation of fair power dissipation and functional verification. The propagation delay of an n-bit RCA is a function of carry propagation (through all n FAs). Our input test pattern also contains 18 input transitions for which the carry is propagated through all n FAs of the n-bit RCA. First, our proposed (primary) input test pattern is applied to seven FAs (based on different circuits/logic styles). The simulation results show that our proposed input test pattern provides comparable/correct estimates of maximum propagation delay and power dissipation, using fewer input transitions than patterns reported earlier. Next, as an example of an n-bit adder, we build a 4-bit RCA, and all the individual FAs within this RCA are also characterized using our proposed (primary and supporting) input test pattern. The simulation results indicate that the propagation delay and power dissipation characteristics of individual FAs within the RCA vary, and are also a function of the particular FA circuit/logic style used to build the RCA. Simulations are also done to measure the carry propagation delay (through all n FAs) of the RCA. Simulation results show that our 18 input transitions provide a correct estimation of carry propagation delay compared with the conventional method.

ICACCI--03.6 15:45 GFDM/OQAM Implementation Under Rician Fading Channel
Shravan Kumar Bandari (National Institute of Technology Meghalaya, India); Venkata Mani Vakamulla (National Institute of Technology Warangal, India); Anastasios Drosopoulos (TEI of Western Greece, Greece)

Future wireless applications demand a high data rate per user, low latency, spectral efficiency and high connection density. Generalized frequency division multiplexing (GFDM), one of the contenders for 5G, appears to serve next-generation wireless needs due to its attractive properties compared with the existing cyclic prefix orthogonal FDM (CP-OFDM). Nevertheless, due to the inherent non-orthogonal nature of the scheme, GFDM may introduce intersymbol and intercarrier interference (ISI/ICI). For more spectrally efficient transmission with improved orthogonality, in this article we explore the well-known Offset Quadrature Amplitude Modulation (OQAM) multicarrier signaling technique as applied to the GFDM system. In particular, we investigate the GFDM/OQAM system model under Rician-K fading channel conditions and provide the corresponding analytical expressions. Performance is measured in terms of symbol error rate (SER), and the results improve on conventional GFDM.

ICACCI--03.7 16:00 Efficient Baseband Processing System for DVB-RCS to DVB-S2 Onboard Processing Satellite
Anurag Das (Indus Institute of Technology and Engineering, Indus University); Rajat Arora and Tvs Ram (Space Applications Centre (SAC), ISRO, India)

An open-standard-compatible onboard processing system is under development that provides mesh connectivity between different user terminals. The system is designed to be compatible with the Digital Video Broadcasting - Return Channel via Satellite (DVB-RCS) protocol in the uplink and Digital Video Broadcasting - Satellite - Second Generation (DVB-S2) in the downlink channels. Due to the widespread use of this protocol in DTH, low-cost compatible terminals are widely available in the market. The main elements of this system are the DVB-RCS burst demodulator and decoder, protocol converter, baseband processor with channel estimator and DVB-S2 modulator. This work presents the architecture of the baseband processing and channel estimation module, which requires an advanced microcontroller and a fast digital signal processor; the processor can also be time-shared for the timing recovery module of the burst demodulator. Since the system is designed to be implemented as an onboard processor (OBP) for a communication satellite, the LEON-3 processor and the Digital Signal Processor (DSP) SMJ320C6701 are chosen owing to the availability of radiation-tolerant versions. This paper presents an interface between the LEON-3 and the DSP for faster execution of both baseband processing and floating-point operations. Simulation of the interface is done in ModelSim®, and performance evaluation and comparison have been carried out on simulators: the TSIM2 simulator is used to simulate the LEON-3, whereas Code Composer Studio v5 is used to simulate the DSP SMJ320C6701.

ICACCI--03.8 16:15 Design of Memristor-based Up-Down Counter Using Material Implication Logic
Anindita Chakraborty, Aparna Dhara and Hafizur Rahaman (Indian Institute of Engineering Science and Technology, Shibpur, India)

The memristor is an emerging nanodevice that is gaining considerable attention from researchers. It possesses the dual properties of a resistor and a memory, and finds immense application in the fields of nanoelectronic circuit and memory design. Material implication logic is applied in memristor-based circuit designs as it can be performed easily using two memristors and one resistor. In this paper, a memristor-based T (toggle) flip-flop is implemented using material implication logic. This T flip-flop is then employed in designing an up-down counter based on implication operations using memristors. The designs presented for the T flip-flop and counter need 6 and 16 memristors, respectively. Memristor technology being highly dense, our counter design will occupy less area than its conventional CMOS-based counterpart. The proposed T flip-flop takes 11 computation steps to generate its outputs, and 52 steps are needed by the up-down counter to perform its operation. Moreover, in our memristor-based counter circuit, counting can be started, stopped and resumed at any desired logic state simply by controlling the externally applied voltages.

ICACCI--03.9 16:30 A Novel and Efficient Design of Golay Encoder for Ultra Deep Submicron Technologies
Chiranjeevi Sheelam (Vardhaman College of Engineering & Center of Advanced Research Computing Laboratory, India); Jayanthi V R Ravindra (Center for Advanced Computing Research Lab, India)

This paper lays out two different approaches for generating the binary Golay code (23, 12, 7): a linear feedback shift register (LFSR) based CRC and a hardware architecture based on CRC. There are certain disadvantages associated with these two architectures; to overcome them, a new architecture is proposed for binary Golay code (23, 12, 7) generation. This paper also presents an efficient hardware architecture to generate the extended Golay code (24, 12, 8). A high-speed, low-latency, low-area and low-power architecture has been designed and verified.
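The LFSR/CRC view of Golay encoding that the abstract starts from can be sketched as systematic polynomial division over GF(2). The generator polynomial 0xAE3 is one of the two standard choices for the (23, 12, 7) code; this is a software illustration, not the proposed hardware architecture.

```python
# generator polynomial of the binary (23, 12, 7) Golay code:
# x^11 + x^9 + x^7 + x^6 + x^5 + x + 1  (0xAE3)
GOLAY_POLY = 0xAE3

def golay23_encode(msg12):
    """Systematic CRC-style encoding: shift the 12 message bits up by
    11 positions and append the remainder of division by the generator
    polynomial. Returns a 23-bit codeword as an int, message bits in
    the high positions."""
    assert 0 <= msg12 < 1 << 12
    r = msg12 << 11                # append 11 zero parity positions
    for i in range(22, 10, -1):    # long division over GF(2)
        if r & (1 << i):
            r ^= GOLAY_POLY << (i - 11)
    return (msg12 << 11) | r       # r is now the 11-bit remainder
```

Appending an overall parity bit to each 23-bit codeword yields the extended (24, 12, 8) code the abstract also targets; because the code is linear, its minimum distance equals the minimum nonzero codeword weight, which is easy to check by enumerating all 4096 codewords.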

ICACCI--03.10 16:45 An Affine Combination of Two Adaptive Filters for System Identification with Variable Sparsity
Pogula Rakesh (National Institute of Technology Warangal, India); Kishore T (NIT Warangal, India)

The low-complexity Normalized Least Mean Square (NLMS) adaptive algorithm is widely used in adaptive system identification applications. To exploit a sparse impulse response of the system, different sparsity penalties can be introduced into the error function of the NLMS algorithm. The Reweighted Zero-Attracting NLMS (RZA-NLMS) algorithm, based on ℓ1-norm relaxation, offers improved performance in identifying a system with a sparse echo path, but when the system is non-sparse, the NLMS algorithm outperforms the sparse adaptive algorithm. To identify a system with varying sparseness, a new strategy is therefore required. In this paper, we propose an affine combination of RZA-NLMS and NLMS filters for system identification with variable sparsity. The robust performance of the proposed approach has been verified through MATLAB simulations.
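A minimal NLMS system-identification sketch (the baseline the paper combines with RZA-NLMS) is shown below; the sparse test system and step size are illustrative, and the reweighted zero-attracting term is omitted for brevity.

```python
import numpy as np

def nlms_identify(x, d, n_taps, mu=0.5, eps=1e-6):
    """Plain NLMS system identification: adapt taps w so that w . x_n
    tracks the desired signal d (the unknown system's output). The
    paper's RZA-NLMS adds a reweighted l1 zero-attracting term to this
    update."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        xn = x[n - n_taps + 1:n + 1][::-1]       # most recent sample first
        e = d[n] - w @ xn                        # a priori error
        w += mu * e * xn / (eps + xn @ xn)       # normalised LMS update
    return w

rng = np.random.default_rng(0)
h = np.array([0.8, 0.0, -0.4, 0.0, 0.2])         # sparse unknown system
x = rng.normal(size=5000)                        # white excitation
d = np.convolve(x, h)[:len(x)]                   # noiseless system output
w = nlms_identify(x, d, n_taps=5)
```

On this sparse example, RZA-NLMS would converge faster by attracting the zero taps toward zero; the affine combination in the paper blends the two filters so performance is retained when sparsity varies.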

ICACCI--03.11 17:00 Design and Simulation of Tri-band Spidron Fractal Equilateral Triangle Microstrip Antenna
Munish Kumar (USIC&T, India); Vandana Nath (Guru Gobind Singh Indraprastha University, India)

This paper proposes an equilateral triangle microstrip patch antenna (ETMA) having a Spidron fractal shape at one of its corners. The multiple-resonance phenomenon of fractal geometry, obtained by repeating similar segments in the radiating patch, is used in the proposed structure. Several iterations of the Spidron fractal structure are simulated and analyzed. It is evident from the simulation results that the resonant frequency of the patch decreases after every iteration. The fourth iteration of the proposed antenna structure makes it suitable for operating in three different frequency bands and hence for wireless communication in the C (6.15-6.28 GHz), X (9.26-9.62 GHz), and Ku (12.08-15.71 GHz) bands. Miniaturization of up to 62.78% is also achieved for the proposed antenna structure, making it an appropriate candidate for several wireless applications. The proposed antenna structure has an optimized dimension of 18 mm×26 mm. The antenna characteristics such as reflection coefficient, radiation patterns, and radiation efficiency obtained by the proposed fractal antenna confirm its effectiveness for these applications.

ICACCI--03.12 17:15 Design and Analysis of Different Types of Charge Pump Using CMOS Technology
Harshita Dadhich (GITS, India); Vijendra Maurya (Amity University, India); Kumkum Verma and Sanjay Jaiswal (Sangam University, India)

Charge pumps are used to produce a voltage higher than the power supply voltage. A charge pump is a DC-DC converter that uses capacitors as energy-storage elements to produce a higher or lower voltage. This paper presents a comparison between two of the most popular charge pump structures: the Dickson charge pump and the charge pump circuit with cross-connected NMOS cells. The comparison has been carried out considering the output voltage, power consumption, delay, output current and conversion ratio, and is supported by practical analysis of both charge pumps. The two charge pumps are taken with the same input clock signal, power supply voltage, storage capacitance per stage, and number of stages. The object of this paper is to compare all the parameters of the charge pumps to find the better one. The comparison showed that the charge pump circuit with cross-connected NMOS cells performs better than the Dickson charge pump. All simulations and results for the Dickson charge pump and the charge pump circuit with cross-connected NMOS cells were obtained with the Tanner tool 14.1.

ICACCI--03.13 17:30 Biomarker Identification Using Next Generation Sequencing Data of RNA
Shib Sankar Bhowmick (Heritage Institute of Technology, India); Indrajit Saha (National Institute of Technical Teachers' Training & Research, Kolkata, India); Ujjwal Maulik and Debotosh Bhattacharjee (Jadavpur University, India)

Over the years, numerous studies have been performed to identify messenger RNAs (mRNAs) that are differentially expressed under different biological conditions for various diseases, including cancer. Obtaining complete and noiseless data was always very challenging with previous technologies, but the inception of Next-Generation Sequencing (NGS) technology revolutionized genome research, especially in the field of mRNA expression profile analysis. Here, such breast cancer data from The Cancer Genome Atlas (TCGA) are used to identify cancer biomarkers. For this purpose, the data have been preprocessed using statistical tests and fold-change concepts so that a significant number of differentially expressed up- and down-regulated mRNAs can be recognized. Thereafter, a wrapper-based feature selection approach using Particle Swarm Optimization (PSO) and a Support Vector Machine (SVM) has been applied to the preprocessed dataset to identify potential mRNAs as biomarkers. The identified top 10 biomarkers are COMP, LRRC15, CTHRC1, CILP2, FOXF1, FIGF, PRDM16, LMX1B, IRX5 and LEPREL1. The quantitative results of the proposed method are demonstrated in comparison with other state-of-the-art methods. Finally, enrichment analysis and KEGG pathway analysis have also been conducted for the selected mRNAs.

ICACCI--03.14 17:45 Controlled Motion in X and Y Direction by Piezoelectric Actuation
Kiran Junagal (Rajasthan Technical University Kota Rajasthan India, India); R. s. Meena (UCE, KOTA, India)

Piezoelectric actuators play a huge role in the positioning field and take micro-technology to a new level. Micro-positioning stages using piezoelectric actuators are key devices in micro-manipulation. This paper presents the design and simulation of a PZT XY-microstage with multiple degrees of freedom and high-precision positioning. The whole structure is arranged in a symmetrical manner and is made of silicon with dimensions of 20 mm×20 mm×0.4 mm. All PZT actuators can be driven individually, resulting in deflection of the microstage in different directions. Two actuation units with a Moonie amplification mechanism are integrated into the structure; each actuation unit controls the movement of the whole structure in the desired direction. The performance of the XY-microstage is simulated using the finite element method in COMSOL Multiphysics.

ICACCI--03.15 18:00 Analysis of Fuzzy Rules for Robot
Sandeep BS (Amrita Vishwa Vidyapeetham, Amrita University, India); Supriya P (Amrita Vishwa Vidyapeetham, Amrita University)

Path planning is an important aspect of autonomous mobile robots. Various fuzzy logic methods are conveniently employed for path planning of mobile robots as they are most helpful in handling uncertain data. In this work the robot travels from a fixed initial location towards a goal destination without colliding with obstacles in between, along the shortest path possible, in both static and dynamic environments. An analysis of fuzzy rules for path planning of a mobile robot with various membership functions is performed. Both simulation results and real-time implementations on the iRobot Create platform are shown in this paper. The Fuzzy Logic Toolbox in MATLAB is used for simulation. Three combinations of membership functions with 24 rules are considered: trapezoidal and triangular, bell and trapezoidal, and triangular and bell. The error between the calculated and actual values indicates that the combination of trapezoidal and triangular yields the least error.
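The three membership-function families compared in the abstract can be sketched directly; the parameters below (e.g. for fuzzifying an obstacle distance) are illustrative assumptions, not the paper's rule base.

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangle rising from a to a peak at b, falling to c."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

def trapezoidal(x, a, b, c, d):
    """Trapezoid with a flat top between b and c."""
    return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)

def bell(x, a, b, c):
    """Generalised bell: 1 / (1 + |(x - c)/a|^(2b)), centred at c."""
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))

# fuzzify an obstacle distance as "near" (illustrative parameters)
near = trapezoidal(np.array([0.2]), 0.0, 0.1, 0.3, 0.6)
```

A rule base then combines such memberships over inputs like obstacle distance and heading error; the paper's comparison swaps these families in and out and measures the resulting tracking error.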

Wednesday, September 21 14:30 - 18:30 (Asia/Kolkata)

ICACCI--04: ICACCI-04: Sensor Networks, MANETs and VANETs/Distributed Systems (Regular Papers)

Room: LT-3 (Academic Area)
Chairs: Satyanarayana V Nandury (CSIR-Indian Institute of Chemical Technology & Academy of Scientific & Innovative Research, India), Santosh Kumar Majhi (Veer Surendra Sai University of Technology, India)
ICACCI--04.1 14:30 Compute-Intensive Workflow Scheduling in Multi-Cloud Environment
Indrajeet Gupta (Indian Institute of Technology (ISM), Dhanbad, India); Madhu Sudan Kumar (IIT (ISM), Dhanbad, India); Prasanta Kumar Jana (Indian Institute of Technology(ISM) Dhanbad, India)

Workflow scheduling is recognized as a key problem in the cloud computing environment. Workflow applications require high compute-intensive operations because of the presence of precedence constraints. Optimal workflow scheduling in cloud computing is a well-known NP-complete problem. The scheduling objective is to map the workflow application to the VM pool at the available cloud datacenters such that the overall processing time (makespan) is minimized and average cloud utilization is maximized. In this paper, we propose a two-phase workflow scheduling heuristic with a new priority scheme. In the first phase, it considers the ratio of the average communication cost to the average computation cost of a task node as part of the prioritization process; prioritized tasks are then mapped to suitable virtual machines in the second phase. The proposed scheme is capable of scheduling large graphs of dependent tasks in a heterogeneous multi-cloud environment. We simulate the proposed algorithm rigorously on standard scientific workflows and compare the simulation results with existing dependent-task scheduling algorithms under the assumed cloud model. The results show that the proposed algorithm surpasses the existing algorithms in terms of makespan, speed-up, schedule length ratio and average cloud utilization.
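A simplified stand-in for such two-phase list scheduling of a DAG can be sketched as follows: an upward-rank priority phase, then earliest-finish mapping on homogeneous VMs. This is a hypothetical illustration only; the paper's priority uses the communication-to-computation cost ratio and targets heterogeneous multi-cloud VMs.

```python
from collections import defaultdict

def schedule(tasks, deps, comp, comm, n_vms):
    """Toy list scheduler for a DAG. Phase 1: priority of a task is its
    upward rank, comp[t] + max over children of (comm + rank(child)).
    Phase 2: tasks are mapped, in decreasing rank order, to the VM that
    allows the earliest finish, honouring parent finish times
    (communication cost at mapping time is ignored for brevity)."""
    children, parents = defaultdict(list), defaultdict(list)
    for p, c in deps:
        children[p].append(c)
        parents[c].append(p)

    rank = {}
    def upward(t):
        if t not in rank:
            rank[t] = comp[t] + max((comm + upward(c) for c in children[t]),
                                    default=0.0)
        return rank[t]
    for t in tasks:
        upward(t)

    vm_free = [0.0] * n_vms
    finish = {}
    # decreasing upward rank is a valid topological order for comp > 0
    for t in sorted(tasks, key=lambda t: -rank[t]):
        ready = max((finish[p] for p in parents[t]), default=0.0)
        vm = min(range(n_vms), key=lambda v: max(vm_free[v], ready))
        start = max(vm_free[vm], ready)
        finish[t] = start + comp[t]
        vm_free[vm] = finish[t]
    return finish  # task -> finish time; makespan = max(finish.values())
```

On a diamond DAG (A feeding B and C, both feeding D) with two VMs, B and C run in parallel and the makespan is driven by the longer branch.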

ICACCI--04.2 14:45 Performance Analysis of Two-Tier Cellular Network Using Power Control and Cooperation
Pragya Swami (Indian Institute of Technology Indore, India); Mukesh Kumar Mishra (Samrat Ashok Technological Institute, Vidisha, India); Aditya Trivedi (ABV-Indian Institute of Information Technology and Management Gwalior, India)

Femtocell deployment improves the data rate and indoor coverage of heterogeneous cellular networks (HCN). However, it also increases the cross-tier interference of the system, which degrades the performance of the macro base station (MBS) tier. In this work, a mathematical framework is developed to mitigate the cross-tier interference of the MBS tier using a power control scheme (PCS). This PCS works on path-loss inversion and a location-based power level rule for femto base stations (FBS) to improve the coverage of the MBS tier. Furthermore, a cooperation scheme and an association policy are proposed to improve the performance of the FBS tier. Stochastic geometry is used to derive the signal to interference and noise ratio (SINR) outage probability, total outage probability, and area spectral efficiency (ASE) for the MBS and FBS tiers. The impact of the PCS and of the proposed cooperation scheme with association policy on outage probability and ASE is numerically evaluated. Numerical results show that considerable improvement in the outage performance of the MBS tier is obtained using the proposed PCS. Moreover, applying the PCS degrades the outage performance of the FBS tier; this loss can be compensated by using the proposed cooperation scheme and association policy.

ICACCI--04.3 15:00 Energy Harvesting-Based Two-hop D2D Communication in Cellular Networks
Pooja Lakhlan (ABV-Indian Institute of Information Technology and Management, India); Aditya Trivedi (ABV-Indian Institute of Information Technology and Management Gwalior, India)

Powering device-to-device (D2D) communications by the radio frequency (RF) energy, harvested from the ambient interference of the underlaid cellular network, improves the energy efficiency of the network. In the recent works, the D2D transmitters can successfully communicate with their intended receivers only when there is sufficient harvested energy to transmit over the desired distance. In this paper, we propose a method to improve the network performance by allowing the D2D users to use multiple hops when harvested energy is insufficient for a direct communication. We evaluate the performance of the system in terms of the signal to interference plus noise ratio (SINR) outage probability at a D2D receiver and the transmission probability of a D2D user. The numerical analysis shows that the two-hop D2D communication outperforms the direct D2D communication.

ICACCI--04.4 15:15 Interference-Fault Free Transmission Schedule in Tree-structured WSN
Beneyaz Ara Begum (Academy of Scientific and Innovative Research (AcSIR) & CSIR-Indian Institute of Chemical Technology, India); Satyanarayana V Nandury (CSIR-Indian Institute of Chemical Technology & Academy of Scientific & Innovative Research, India)

Interference due to concurrent link transmissions has long been recognized as a major cause of issues such as packet retransmissions, distorted signal strength and communication link failures in WSNs. The two best-known approaches to model interference, namely the Protocol Interference (PrI) and Physical Interference models, fail to identify all Potential Interferers (PIs) to a given link, and therefore fail to determine an Interference-Fault Free Transmission (IFFT) schedule for active links. This has serious repercussions, especially in tree-structured WSNs, where data aggregation is hierarchical. In earlier pioneering work, we analytically proved that the Composite Interference Mapping (CIM) model introduced by us is complete, i.e., the CIM model succeeds in identifying all PIs to all active links in the WSN. In this paper, we implement the CIM model to map the potential sources of interference from among the neighboring nodes to all transmission links. We develop an IFFT-Tree algorithm to obtain an IFFT schedule for all links in an aggregation tree, and analytically prove that the algorithm is both optimal and complete. To support our analytical studies, we implement the IFFT-Tree algorithm and carry out extensive simulations to show that it minimizes the number of time slots required to schedule all active links, while maximizing the number of IFF transmissions during the initial slots. We introduce three new performance metrics to study the performance of the IFFT-Tree algorithm and to compare the efficacy of the CIM model with the PrI model. The simulation results show that the PrI model identifies only a small subset of the total PIs identified by the CIM model. This finding highlights the threat to the credibility of data aggregation if interference faults arising from PIs not identified by the PrI model proliferate in a tree-structured WSN.

ICACCI--04.5 15:30 Measurement Results for File Transfer Using RaptorQ in a Mobile Environment
John N. Daigle, Demba B Komma and Feng Wang (University of Mississippi, USA)

We report the results of a limited number of measurements performed to compare the performance of a RaptorQ-based file transfer protocol with SFTP in a mobile environment. In particular, we measured the times required to transfer files from a laptop in a vehicle, tethered to the Internet via an iPhone hotspot, to a stationary host located on the Internet with a manually assigned IP address. Even though our RaptorQ-based protocol is in a very early stage of development, we found that the file transfer times of the two protocols are comparable in the stationary environment, even without any specific optimization work on the RaptorQ-based protocol. We also found that SFTP was very prone to stalling whenever we passed through a region where propagation is poor, whereas, in the same propagation environment, the RaptorQ-based protocol tended to make use of the available bandwidth and complete the file transfer, but with a reduced file transfer time.

ICACCI--04.6 15:45 LAWI: A Load balanced Architecture for Wireless Network on chip
Priyanka Mitra (Malaviya National Institute of Technology, India)

The advancement in the design of multicore systems-on-chip poses the requirement of a communication infrastructure that provides the target performance to meet the computation requirements of gigascale processors. Thus a promising solution called LAWI, a Load-balanced Architecture for Wireless network-on-chip, is proposed to bridge the widening gap between communication efficiency and the computation requirements of gigascale system-on-chip devices. It comprises an intelligent router that balances the traffic load across long-distance transmissions and reduces congestion delay. An efficient, low-cost, deadlock-free routing scheme, LAWIXY, is also proposed that reduces network congestion and hence improves the performance of the wireless network-on-chip. It is demonstrated that LAWI outperforms counterpart network architectures and improves performance for larger system sizes.

ICACCI--04.7 16:00 Strategies to Handle Big Data for Traffic Management in Smart Cities
Satyanarayana V Nandury (CSIR-Indian Institute of Chemical Technology & Academy of Scientific & Innovative Research, India); Beneyaz Ara Begum (Academy of Scientific and Innovative Research (AcSIR) & CSIR-Indian Institute of Chemical Technology, India)

The myriad sensors deployed across a smart city serve as a major source of Big Data, which can potentially be used for various applications such as smart governance, smart energy, smart traffic, and smart environment management. However, issues related to handling such huge volumes of data, originating from thousands of heterogeneous sensor and IoT devices placed across the length and breadth of the city, emerge as a major challenge. While it may be relatively easy to identify the IT devices necessary for processing Big Data, in the absence of clear strategies and a robust platform architecture for handling it, the deployment of these resources attains only limited success. The SWIFT architecture introduced by us in an earlier work provides a ubiquitous platform for seamless interaction of various smart objects, devices and systems, and hence may prove to be an ideal architecture to capture, process and assimilate information from Big Data. In this paper we discuss issues related to the implementation of the SWIFT architecture for handling Big Data for traffic management in smart cities. Various strategies to provide Big Data solutions for smart traffic, in terms of profiling traffic density, traffic signaling, managing parking lots, smart navigation and monitoring vehicular pollution, are discussed.

ICACCI--04.8 16:15 A High Throughput Internet Access Protocol (HTIAP) for Moving Things in Smart Cities
Arvind Kumar Shah (IIEST, Shibpur & Microsoft, India); Sohini Roy (Arizona State University, USA); Abhijit Sharma (National Institute of Technology Durgapur, India); Uma Bhattacharya (Bengal Engineering & Science University, India)

In spite of the regular use of the Internet in our daily lives, we are yet to enjoy truly uninterrupted connectivity while travelling from one location to another. This is a result of frequent handoffs between the base stations of cellular networks. A high throughput Internet access protocol (HTIAP) is proposed in this paper to provide uninterrupted Internet connectivity to mobile devices in transit. In the proposed protocol, a network of moving vehicles and stationary hosts (GPS-enabled Wi-Fi hotspots) placed along the roadsides of a city is considered. The performance of this network infrastructure is evaluated using the ONE (Opportunistic Network Environment) simulator. The simulation results are compared with the Default Gateway Selection (DGwS) algorithm and the QoS-balancing Scheme for Gateway Selection (QGwS) on the basis of Vehicle to Infrastructure (V2I) communication efficiency. Together with high throughput and low delivery delay, HTIAP also offers mobile users travelling across the city continuous access to the Internet, which the other existing approaches fail to provide.

ICACCI--04.9 16:30 Fault-Tolerant Sensor Using Model Based Simulated Value for Space Environment Simulation Applications
Kaushik S, Ajay G, Dhanush S and Mahendra S Gowda (Bangalore Institute of Technology, India); Faizuddin Faruqui (ISRO Satellite Center, India)

Integrated spacecraft and associated appendages have to be tested for performance in simulated space environments before launch. These tests involve simulation of the thermal and vacuum environments of space, performed in thermo-vacuum chambers, as well as thermal cycling tests for appendages. Temperature variations in these tests are produced by radiatively coupling the test object with an active surface heated and cooled by the media flowing within it. Usually thermocouples placed on the test object provide the process value to which the control system responds by manipulating the temperature of the media in the active surface. Unlike other process applications, where a fault in the control channel value can be tolerated or rectified, sensors in thermo-vacuum chambers are inaccessible, and the failure of such a thermocouple leads to loss of control or wrong control action. This paper introduces the application of model identification techniques to simulate the test object thermocouple data, and the simulated value is used as the process value for the control system. The developed application identifies linear models, viz. ARX and ARMAX, and non-linear models, viz. NLARX and neural. The simulated process value is automatically selected for the control system in case of constraint violation by the actual sensor value. Process dynamics can be observed and either PID or fuzzy control can be applied. Both the model-based process value simulation and the control gave excellent results.

ICACCI--04.10 16:45 Flow Statistics Based Load Balancing in OpenFlow
Karamjeet Kaur (Panjab University, India); Sukhveer Kaur (UIET, Panjab University, Chandigarh, India); Vipin Gupta (U-Net Solutions, Moga, India)

Traditional network devices are not very efficient at handling large amounts of web-related traffic. These devices contain a tightly coupled control plane and data plane. Software Defined Networking (SDN) is an emerging architecture based on the principle of decoupling these planes. The control plane, called the brain of the network device, is also known as the OpenFlow or SDN controller. By creating SDN applications such as an IDS, IPS, load balancer, firewall or router on top of the control plane, a simple data plane can be converted into a robust network device. We have created a load balancer SDN application which converts the data plane into a powerful load balancer. Load balancers are network devices which distribute load among different servers based on a particular strategy; the strategy could be random, round robin, or weighted round robin, each with its own pros and cons. In this paper, we implement a flow-statistics based strategy for our load balancer, which runs on top of the POX controller and is written in Python. For testing the application, we used a real lab setup of four computer systems and compared it with an already available round-robin based SDN load balancer application.
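The two strategies compared above can be sketched in a few lines of Python. This is a hypothetical illustration with made-up server addresses and byte counters, not the authors' POX application; in a real deployment the byte counts would come from flow-statistics replies polled from the switch.

```python
import itertools

SERVERS = ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4"]

# Round robin: hand out servers in a fixed cyclic order.
_cycle = itertools.cycle(SERVERS)

def round_robin():
    return next(_cycle)

# Flow-statistics based: given per-server byte counts (as would be polled
# from the switch's flow tables), send the new flow to the least-loaded server.
def flow_stats_pick(byte_counts):
    """byte_counts: dict mapping server IP -> bytes seen in its flows."""
    return min(SERVERS, key=lambda s: byte_counts.get(s, 0))

stats = {"10.0.0.1": 900, "10.0.0.2": 100, "10.0.0.3": 400, "10.0.0.4": 700}
print(flow_stats_pick(stats))  # -> 10.0.0.2
```

Round robin ignores how much traffic each server is already carrying; the flow-statistics strategy adapts to it, which is the motivation the abstract gives for the comparison.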

ICACCI--04.11 17:00 Checkpoint Based Multi-Version Concurrency Control Mechanism for Remote Healthcare System
Ammlan Ghosh, Rituparna Chaki and Nabendu Chaki (University of Calcutta, India)

This paper addresses the synchronization aspect for multiple concurrent threads in a Remote Healthcare System (RHS) under development. Resources like data files are shared by multiple stakeholders and users, including doctors, trained paramedic staff, the patient's family, and even Government agencies collecting statistical and demographic data. In the system under trial, medical kiosks are set up at distant places where patients visit and their complaints are recorded by trained caregivers. Later, a doctor accesses these data from the system, makes a diagnosis and suggests a prescription accordingly. In such a set-up, it is quite common for more than one user to access the same shared resource simultaneously. In conventional lock-based process synchronization, such concurrent processes are blocked by the process holding a lock over the files. The major contribution of this paper is the use of checkpoint-based multi-versioning Software Transactional Memory in our application domain to achieve non-blocking process synchronization. With the help of the multi-versioning technique, the proposed method is also able to reduce the number of expensive remote validations as well as transactional aborts. Experimental verification finds that the proposed method yields higher throughput in comparison to a clairvoyant-type multi-version concurrency control mechanism.
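The core benefit of multi-versioning, readers that never block on writers, can be illustrated with a tiny sketch. The names and structure here are ours, not the paper's API, and it assumes writes arrive with increasing commit timestamps:

```python
import bisect

class MultiVersionRecord:
    """A record that keeps every committed version instead of overwriting."""

    def __init__(self):
        self._timestamps = []   # sorted commit timestamps (writes arrive in order)
        self._values = []       # value committed at each timestamp

    def write(self, ts, value):
        # A writer appends a new version; no reader is ever blocked.
        self._timestamps.append(ts)
        self._values.append(value)

    def read(self, snapshot_ts):
        # A reader sees the newest version committed at or before its snapshot.
        i = bisect.bisect_right(self._timestamps, snapshot_ts)
        return self._values[i - 1] if i else None

rec = MultiVersionRecord()
rec.write(1, "complaint recorded")
rec.write(5, "diagnosis added")
print(rec.read(3))  # -> complaint recorded
```

A reader with snapshot timestamp 3 sees a consistent older version even though a later write (timestamp 5) has already committed, which is the non-blocking behavior the paper contrasts with lock-based synchronization.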

ICACCI--04.12 17:15 Tree Based Tracking Target in Wireless Sensor Network
Suman Bhowmik (College of Engineering & Management, Kolaghat, India); Sushovan Das (College of Engineering and Management, Kolaghat, India); Chandan Giri (Bengal Engineering & Science University, Shibpur, India)

A wireless sensor network (WSN) consists of spatially distributed sensors for monitoring the physical environment. Target tracking is widely regarded as one of the most interesting applications of WSNs, with many real-life uses such as wildlife monitoring, building security and international border monitoring. The issues to be resolved in wireless sensor networks are high energy consumption and low packet delivery rate. This paper studies these issues by proposing techniques to detect and track a mobile target. We use dynamic convoy tree-based collaboration with 100% tree coverage and low energy consumption, and propose schemes for tree expansion, pruning and reconfiguration using the fuzzy sensing model. Extensive simulations are conducted to compare our algorithm with existing ones. The results show that our algorithm performs better than the existing ones in terms of coverage and energy.

ICACCI--04.13 17:30 Energy Aware Routing in WSN for Pest Detection in Coffee Plantation
Roshan Zameer Ahmed (M S Ramaiah Institute of Technology, India); Rajashekhar Biradar (Reva University, India)

Coffee production is a crucial asset in growing the economies of various countries. A serious threat to coffee plantations is the pest named the Coffee White Stem Borer (CWSB). Once the CWSB enters a coffee stem, it bores inside and does not reveal its existence until the plant completely collapses. Current manual methods to identify and reduce the damage to coffee plantations fail to arrest CWSB attacks. We propose a novel idea for arresting the growth of the CWSB, and thereby enhancing coffee productivity, by means of Wireless Sensor Networks (WSNs). A WSN is deployed to cover all the stems of the coffee plants; if the CWSB is detected, the network transmits this information to a nodal center so that the necessary measures can be taken to curb further growth of the pest. In this paper, we propose Energy aware Routing in WSN for Pest Detection (ERWPD) for transmitting information about CWSB presence in Coffee Arabica plants through Cluster-Heads (CHs) to the sink node (a nodal center). Each CH aggregates the data received from the primary nodes, and the aggregated data is forwarded by establishing a route to the sink. Route establishment takes place by flooding Route REQuest (RREQ) packets and creating a Route REPly (RREP) packet at the sink node. Energy conservation is achieved by reducing the number of control packets used for route establishment. Simulation analysis of ERWPD shows a better packet delivery ratio (PDR) and lower delays and overheads in comparison to Data Routing In-Network Aggregation (DRINA).

ICACCI--04.14 17:45 Security Threats in Vehicular Ad Hoc Networks
Ahmed Shoeb Al Hasan (Bangladesh University of Engineering & Technology, Dhaka, Bangladesh); Md Shohrab Hossain (Bangladesh University of Engineering and Technology, Bangladesh); Mohammed Atiquzzaman (University of Oklahoma, USA)

Vehicular Ad Hoc Networks (VANETs) are a new form of Mobile Ad Hoc Network (MANET) which enable intelligent transportation systems by supporting vehicle-to-vehicle and vehicle-to-roadside communication to improve road safety and reduce traffic jams. However, security issues in VANETs have become a major concern for researchers. A VANET differs from other ad hoc networks because of its dynamic topology and mixed structural design. Hence, designing security schemes to authenticate broadcast messages and discard malicious ones is crucial in VANETs. In this paper, we first identify various security threats to VANETs and discuss possible defense mechanisms to prevent or mitigate those threats. We then classify the defense mechanisms into major categories and critically analyze them based on different performance criteria. Finally, we list several open research issues related to VANET security to inspire researchers to work on these open problems and propose solutions towards efficient trust organization in VANETs.

ICACCI--04.15 18:00 Online Reviews: Determining the Perceived Quality of Information
Gobinath J (Amrita School of Business, Coimbatore, India); Deepak Gupta (Amrita School of Business & ASB, India)

The technology landscape has undergone tremendous changes in the last couple of decades, and with them, major changes in the way consumers behave. One major change is in the way consumers gather information about products to make purchase decisions. Online reviews have become the major source of information and have taken over from many traditional sources. The quality of information obtained from any source plays a major role in consumer decision-making. In this study, the factors that influence the consumer perception of the information quality of online reviews are identified. For this purpose, a conceptual model was developed by reviewing the literature on online reviews, electronic word of mouth, and information quality. The model was tested using a pan-India survey of 155 consumers who read online reviews during their product purchases, to identify the impact of various factors on the perceived quality of information. The data was analyzed using ordered logistic regression. The study finds that factors such as Perceived Informativeness, Perceived Persuasiveness, Source Credibility, and Attitude towards Online Reviews have a significant positive impact on the consumer's perception of the quality of information obtained from online reviews.

ICACCI--05: Embedded Systems/Computer Architecture and VLSI/Adaptive Systems/ICAIS'16 (Regular Papers)

Room: LT-4 (Academic Area)
Chair: Kapil Jainwal (LNMIIT, Jaipur, India)
ICACCI--05.1 14:30 Load Forecasting in India At Distribution Transformer Considering Economic Dynamics
Kumar Padmanabh (EBTIC, United Arab Emirates)

The end consumers of the Smart Grid have no say in the ecosystem of the electricity grid: the price of electricity and the grid infrastructure have been governed solely by utility companies and government entities. One of the objectives of the smart grid is to bring the consumer on board using IoT technologies. Peak demand is a major concern for government and utilities. Since different neighborhoods have different consumption patterns, a load forecasting model at the utility level cannot predict the load in every neighborhood; the socio-economic activities of the consumers in a particular neighborhood, however, are coherent. Existing research on this topic assumes a uniform distribution of assets and uniformity in appliances and consumption patterns, and is therefore not well suited to Indian conditions. The behavioral pattern of the user, other economic activities and different aspects of the weather all affect consumption. In this paper we analyze the effect of socio-economic dynamics on demand, develop a forecasting mechanism and present the results. The study reveals that peak demand is actually growing exponentially. Moreover, a unique forecasting mechanism is presented as a three-step process: (i) a template pattern of consumption is first created using parametric estimation, (ii) the total consumption of the day is then estimated using a machine learning technique, and (iii) finally the total consumption is redistributed to deduce the time-of-day consumption.

ICACCI--05.2 14:45 Comparative Studies on Design of Fractional Order Proportional Integral Differential Controller
Rosy Pradhan and Pratyush Patra (Veer Surendra Sai University of Technology, Burla, India); Bibhuti Pati (Veer Surendra Sai University of Technology, India)

In recent years the fractional order proportional integral differential (FOPID) controller has been taking the place of the conventional PID controller because of its proven advantages. FOPID controllers can be used in various science and engineering applications due to their favorable stability properties compared with PID. Research is still ongoing to develop tuning methods for applying FOPID controllers in existing control systems. This paper presents a comparative study on the design of FOPID controllers. Two well known tuning methods for the FOPID controller's parameters are described: the first is the Ziegler-Nichols tuning method for FOPID, and the second is the Astrom-Hagglund method. The design techniques are illustrated on a second order integer order plant, and their robustness is checked by simulation. In addition, the two methods are compared.

ICACCI--05.3 15:00 Finite Element Analysis of Composite Overwrapped Pressure Vessel for Hydrogen Storage
Atul S Takalkar (VIT University, India); Shantanu S Bhat (KIT's College of Engineering); Shubham S Chavan, Swapnil B Kamble, Arpit P Kulkarni and Sandesh B Sangale (KIT's College of Engineering, India)

This paper investigates the effect of winding angle on a composite overwrapped pressure vessel (COPV) manufactured by the filament winding process, in which continuous fibers impregnated with resin are wound over a liner. A three-dimensional shell model is considered for the structural analysis of the pressure vessel. The study on the COPV is carried out considering carbon T300/epoxy material, and the thickness of the composite vessel is calculated using netting analysis. The study focuses on the optimum winding angle, total deformation, stress generation and failure analysis of the composite pressure vessel. Failure of the COPV is predicted using the Tsai-Wu failure criterion. Classical laminate theory (CLT) and the failure criterion are used as the analytical method, and the results obtained are compared for validation with numerical results from the ANSYS Workbench (ACP). This comparison further helps in predicting the behavior of the COPV for changes in winding angle and operating internal pressure.

ICACCI--05.4 15:15 A Novel Design Microstage Based on Piezoelectric Actuation
Kiran Junagal (Rajasthan Technical University Kota Rajasthan India, India); R. s. Meena (UCE, KOTA, India)

In this paper, a newly modeled 3D MEMS piezoelectric-actuated microstage is designed and simulated using COMSOL Multiphysics, based on the finite element method (FEM). Piezoelectric actuation is widely used because it provides fast response, low driving voltage, precise positioning ability and high operational frequency. The microstage is designed with a moonie structure which amplifies the actuation of the PZT actuator. Two designs of the microstage are presented, for micropositioning and nanopositioning. Poly-silicon is used in the design, as this material increases the displacement.

ICACCI--05.5 15:30 Fixed Point Implementation of Trigonometric Function Using Taylor's Series and Error Characterization
Sunil Prasad (National Aerospace Laboratories, India); C. m. Ananda (National Aerospace Laboratory, India); Rekha S (PES University, Bangalore, India); Somesh Shivappa Nandi (RV College of Engineering, India)

The aim of this paper is to analyze the fixed point implementation of functions designed for signal processing algorithms. A basic building block function is taken to analyze the errors that arise when implementing an algorithm in fixed point arithmetic. To streamline the process of converting from floating point to fixed point, a model based design approach is adopted, in which models of the basic functions are designed and tested in the MATLAB (Matrix Laboratory) environment before conversion to fixed point. The Hardware Description Language (HDL) Coder is used to convert the model into VHSIC Hardware Description Language (VHDL) code for implementation on a Field Programmable Gate Array (FPGA). Since inverse trigonometric functions such as inverse sine (ARCSINE) and inverse tangent (ARCTANGENT) are not supported for HDL code generation, the proposed work implements them using Taylor series approximations. These functions are evaluated with different numbers of iterations in MATLAB and using HDL Coder. The script is successfully converted to fixed point VHDL code, and the error between the floating and fixed point implementations is calculated and presented in this paper.
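The kind of error characterization described above can be illustrated with a small sketch: evaluating arctangent by its Taylor series in Q16 fixed point and comparing against the floating-point reference. The Q16 format and term count here are our own illustrative choices, not the paper's MATLAB/VHDL parameters.

```python
import math

Q = 16                      # fractional bits (Q16 format, our assumption)
ONE = 1 << Q

def to_fix(x):
    return int(round(x * ONE))

def to_float(f):
    return f / ONE

def fix_mul(a, b):          # Q16 * Q16 -> Q16
    return (a * b) >> Q

def arctan_fix(x_fix, terms=8):
    """arctan via the Taylor series x - x^3/3 + x^5/5 - ..., valid for |x| <= 1."""
    result = 0
    power = x_fix                      # x^(2n+1) in Q16
    x2 = fix_mul(x_fix, x_fix)
    sign = 1
    for n in range(terms):
        result += sign * power // (2 * n + 1)
        power = fix_mul(power, x2)
        sign = -sign
    return result

x = 0.5
err = abs(to_float(arctan_fix(to_fix(x))) - math.atan(x))
print(err < 1e-3)  # -> True
```

For |x| well inside the unit interval the truncation error of the series is negligible, so the residual error is dominated by Q16 quantization, which is exactly the floating-versus-fixed gap the paper measures.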

ICACCI--05.6 15:45 Performance Comparison of Proximal Methods for Regression with Nonsmooth Regularizers on Real Datasets
Mridula Verma (Indian Institute of Technology, (BHU), Varanasi, India); K K Shukla (IIT-BHU, India)

First order methods are known to be effective for high-dimensional machine learning problems due to their faster convergence and low per-iteration complexity. In machine learning, many problems are formulated as convex minimization problems with a smooth loss function and non-smooth regularizers. Learning with sparsity-inducing regularizers belongs to this class of problems, for which a number of first order methods are already available in the optimization and machine learning literature. Proximal methods also belong to the class of first-order methods and lead to better sparse models. In this paper, we discuss three state-of-the-art proximal methods for regression when the loss minimization is associated with a sparsity-inducing regularizer, and present for the first time their comparison based on practical convergence rates, prediction accuracy and consumed CPU time on six real datasets.
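A representative member of this class is ISTA, the basic proximal gradient method for the lasso, where the proximal step of the l1 regularizer is soft thresholding. The following self-contained sketch uses our own toy problem, not one of the paper's six datasets:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, steps=500):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by proximal gradient steps."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ x - b)           # gradient of the smooth loss
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[[2, 7]] = [1.5, -2.0]               # sparse ground truth
b = A @ x_true
x_hat = ista(A, b, lam=0.1)
print(np.argsort(-np.abs(x_hat))[:2])      # the two largest entries: indices 7 and 2
```

The soft-thresholding step is what produces exact zeros in the estimate, which is why proximal methods "lead to better sparse models" as the abstract notes.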

ICACCI--05.7 16:00 A Fast Universal Search by Equivalent Program Pruning
Swarna Kamal Paul (Tata Consultancy Services & Jadavpur University, India); Parama Bhaumik (Jadavpur University, India)

Universal search is the asymptotically fastest method for solving a wide class of inversion problems. Recently this method has been used to develop several efficient optimal general problem solvers. However, the huge constant slowdown factor associated with the algorithm prevents its widespread practical application. Our endeavor is to reduce this constant slowdown factor in the non-incremental version of the algorithm, in a way that can easily be extended to the incremental version. While searching the exponential program space, the universal search method generates and tests many equivalent programs. Pruning equivalent programs evidently reduces the search space and consequently the search time, and experimental analysis reveals a large speed-up of the search method. It is also shown theoretically that, even with pruning applied, whenever universal search can find a solution the proposed search method can always find one as well.

ICACCI--05.8 16:15 Circular Polarization in Transparent Circular Patch with Defected Structure
Nurul Gondane (SPPU PUNE, India); Jayashree Shinde and Pratap Shinde (SAE KONDHWA PUNE, India)

This paper presents an optically transparent circular microstrip patch antenna, made of transparent ITO (Indium Tin Oxide) film, with modifications to the circular geometry. The antenna comprises a thin sheet of soda-lime glass substrate with a conductive ITO film coating for the radiating patch and ground plane, and is fed through a CPW feeding technique. The circular patch is modified step by step by inserting an inset slit and a rectangular slit in the patch and a nearly rectangular slot in the ground plane; this makes the antenna more optically transparent, and a shift in the resonant frequency toward the lower side is observed. The target frequency of the antenna is 2.4 GHz. The impedance bandwidth of the proposed antenna is 3.6 GHz (2 GHz - 5.6 GHz), while the radiation efficiency is 89%. The peak gain over the operating bandwidth is 3.6 dB. The use of defected structures in both the patch and the ground plane enhances the optical transparency from 90% to 94.44%. The antenna is circularly polarized, with an axial ratio bandwidth of 1.91 GHz, and is a candidate for S-band applications.

ICACCI--05.9 16:30 Multi User Interference Characterization of Multiband OFDM UWB System in the Presence of Log-normal Fading Channels
Sai Krishna Kondoju and Venkata Mani Vakamulla (National Institute of Technology Warangal, India)

This article mainly describes the modelling of multi-user interference (MUI) in multiband orthogonal frequency division multiplexing (OFDM) for ultra wideband (UWB) devices. Based on this model, outage probability and average bit error rate (BER) expressions are derived for different diversity techniques, namely maximal ratio combining (MRC) and equal gain combining (EGC), in the presence of MUI caused by other multiband OFDM UWB devices operating simultaneously in the same network. Approximating the sum of squares, and the square of the sum, of lognormal random variables leads to a simple mathematical analysis for evaluating performance metrics such as outage probability and average BER. Moreover, the derived expressions for the combiner output show the overall effect of MUI on the target multiband OFDM UWB transceiver device.

ICACCI--05.10 16:45 A Novel Decoding Technique for Least-Squares Notching Precoder in OFDM Cognitive Radio
Ravinder Kumar (Indian Institute of Technology Roorkee, India)

OFDM has been recognized as a suitable candidate for cognitive radio systems. However, the high out-of-band leakage in OFDM systems tends to interfere with primary users operating in adjacent bands. In order to protect neighboring systems, sidelobe leakage needs to be restricted below the prescribed mask. There are various well-known techniques in the literature for sidelobe suppression. In particular, the least-squares notching precoder (LSNP), based on a projection matrix, is computationally very effective and achieves good sidelobe suppression. However, it invariably introduces distortion on each subcarrier, thereby degrading the system's error-rate performance. Existing solutions for enhancing the error performance are either computationally expensive or degrade the throughput of the system. In this paper we propose a novel decoding technique which significantly improves the error-rate performance without affecting the throughput. In addition, the proposed scheme, when combined with the existing iterative recovery algorithm, further improves the error performance and requires as few as one iteration to achieve the minimum bit error rate.
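The projection idea behind an LSNP-style precoder can be sketched as follows. This is an illustrative toy model: we assume a sinc-shaped sidelobe response and arbitrary dimensions, and the paper's exact system matrix and decoder may differ. Projecting the data vector onto the null space of the sidelobe-response matrix forces (almost) zero radiation at the notch frequencies, at the cost of the per-subcarrier distortion the abstract mentions.

```python
import numpy as np

N = 16                                    # number of subcarriers (illustrative)
notch_freqs = np.array([-1.3, -1.2])      # normalized frequencies to notch

k = np.arange(N)
# Sidelobe response of each subcarrier at each notch frequency
# (sinc model of an OFDM spectrum; an assumption for this sketch).
A = np.sinc(notch_freqs[:, None] - k[None, :])

# Least-squares notching: orthogonal projection onto the null space of A.
P = np.eye(N) - A.T @ np.linalg.pinv(A @ A.T) @ A

rng = np.random.default_rng(1)
d = np.sign(rng.standard_normal(N))       # BPSK data symbols
x = P @ d                                 # precoded symbols (d plus distortion)

print(np.max(np.abs(A @ x)) < 1e-10)      # -> True: notch frequencies suppressed
```

The distortion is x - d, and recovering d from x despite it is precisely the decoding problem the paper's technique addresses.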

ICACCI--05.11 17:00 Hiper-Ping: Data Plane Based High Performance Packet Generation Bypassing Kernel on X86 Based Commodity Systems
Aravind Ajayan (Amrita University, India); Prabaharan Poornachandran (Amrita Vishwa Vidyapeetham, Amrita University, India); Manu Krishnan (Amrita Vishwa Vidyapeetham, India); Soumajit Pal (Amrita University, India)

Much of the commodity hardware available today is capable of saturating Gigabit Ethernet networks; the key insight is that the hardware is not the bottleneck when it comes to generating traffic workloads at high rates. Yet there are not many reliable tools on the market that help in evaluating the security and performance of high speed networking devices using commodity hardware. To address the problem of improving the packet processing efficiency of commodity servers, we have developed a tool called "Hiper-Ping". We run Hiper-Ping on top of x86-based commodity hardware and highlight its performance for security auditing and for testing firewalls and networks. Hiper-Ping shows superior performance in terms of both achieved throughput and bitrate, for all possible frame lengths. Using Hiper-Ping, we were able to achieve a rate of 1.488 million minimum-sized packets per second on a commodity server equipped with a single Gigabit Ethernet interface.

ICACCI--05.12 17:15 Improved Set-Membership Partial-Update Pseudo Affine Projection Algorithm
Felix Albu (Valahia University of Targoviste, Romania); Paulo Diniz (Universidade Federal do Rio de Janeiro, Brazil)

In this paper, an improved set-membership partial-update pseudo affine projection (I-SM-PUPAP) algorithm is presented. An approximation that allows the linear system to be solved with a direct method is used. It is shown that the I-SM-PUPAP algorithm has much lower numerical complexity and memory requirements than the recently proposed I-SM-PUAP algorithm. Simulation results identify an inherent compromise between the convergence rate, the complexity reduction and the number of updates.

ICACCI--05.13 17:30 Analytical Study on the Effect of Dimension and Position of Slot for the Designing of Ultra Wide Band (UWB) Microstrip Antenna
Ranjan Mishra (University of Petroleum and Energy Studies Dehradun, India); Raj Gaurav Mishra (University of Petroleum and Energy Studies, Dehradun); Piyush Kuchhal (University of Petroleum and Energy Studies, Dehradun, India)

This research paper presents a simple design for an Ultra-Wide Band (UWB) microstrip antenna using a centrally loaded rectangular slot, together with an analytical study of the effects of slots of different sizes and shapes on the performance characteristics of the antenna. Inserting a slot and changing the dimensions of the ground plane have a strong impact on the behaviour and parameters of the patch antenna; properly inserted slots in the planar patch structure are used to improve its bandwidth. A 12 mm by 15.6 mm rectangular patch antenna etched on an FR4 substrate is presented, and both simulated and measured results show that the antenna operates over the entire UWB range. This parametric study should be of great interest for the design of compact antennas for wireless communications operating in the UWB.

ICACCI--06: NLP'16/Natural Language Processing and Machine Translation (Regular Papers)

Room: LT-6 (Academic Area)
Chair: Shashirekha Hosahalli Lakshmaiah (Mangalore University, India)
ICACCI--06.1 14:30 Detection of a New Class in a Huge Corpus of Text Documents Through Semi-Supervised Learning
Guru D S (Mysore University, India); Mahamad Suhil, Harsha S Gowda and Lavanya Narayana Raju (University of Mysore, India)

This paper poses the new problem of detecting an unknown class present in a text corpus which has a huge number of unlabeled samples but only a very small quantity of labeled samples. A simple yet efficient solution is proposed, obtained by modifying a conventional clustering technique, to demonstrate the scope of the problem for further research. A novel way to estimate cluster diameter is proposed, which in turn is used as a measure of the degree of dissimilarity between two clusters. The main idea of the model is to arrive at a cluster of unlabeled text samples which is far away from all of the labeled clusters, guided by a few rules based on cluster diameter and the dissimilarity between pairs of clusters. This work is the first of its kind in the literature and has many applications in text mining tasks. In fact, the proposed model is a general framework which can be applied to any application that involves the identification of unseen classes in a semi-supervised learning environment. The model has been studied with extensive empirical analysis on different text datasets created from the benchmark 20Newsgroups dataset. The results of the experimentation reveal the capabilities of the proposed approach and the possibilities for future research.
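The diameter-and-dissimilarity idea outlined in the abstract can be illustrated with a minimal sketch. The formulation below (Euclidean distance, centroids, and the specific rule for flagging a cluster) is our own simplification; the paper's actual rules and measures may differ.

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def diameter(cluster):
    """Largest pairwise distance inside a cluster."""
    return max((dist(a, b) for a in cluster for b in cluster), default=0.0)

def centroid(cluster):
    n = len(cluster)
    return [sum(p[i] for p in cluster) / n for i in range(len(cluster[0]))]

def is_new_class(unlabeled_cluster, labeled_clusters):
    """Flag the cluster as a possible unseen class if its centroid lies
    farther from every labeled centroid than that cluster's own diameter."""
    c = centroid(unlabeled_cluster)
    return all(dist(c, centroid(lc)) > diameter(lc) for lc in labeled_clusters)

known = [[(0, 0), (1, 0), (0, 1)], [(10, 10), (11, 10)]]
candidate = [(5, -6), (6, -6)]
print(is_new_class(candidate, known))  # -> True
```

In the paper's setting the points would be vector representations of text documents, with the labeled clusters seeded by the few labeled samples.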

ICACCI--06.2 14:45 Analysis of Polysemy Words in Kannada Sentences Based on Parts of Speech
Rahul Rao (M S Ramaiah Institute of Technology, India); Jagadish S Kallimani (M S Ramaiah Institute of Technology)

A polysemous word is a word with multiple meanings: the same word can be used in different contexts to mean different things. Given a paragraph of Kannada sentences, a person with no prior knowledge of the Kannada language cannot distinguish the different meanings of a polysemous word, and might assume the same meaning for every occurrence of the word in the paragraph, which is incorrect. The application presented here aims to create a Kannada polysemy analyzer to resolve this ambiguity. It keeps polysemous words, with their different meanings, in a database which can be extended, and allows the user to select a Kannada sentence from a provided list. It uses the Shallow Parser, which gives the part-of-speech information for each word in a sentence. When a match is found between the shallow parser results and the database, the exact meaning of the polysemous word as used in the sentence is highlighted.

ICACCI--06.3 15:00 ResearchAssist: Analyzing Author Interests
Debarati Das, Lisa Thomas and Kavi Mahesh (PES Institute of Technology, India)

The typical predicament that haunts a researcher is whether to explore new research domains or to specialize in the current one. The prevalent methodology in scientific research is emerging-trend detection by analyzing topics from scientific texts - a topic modeling approach. In this paper, we propose an alternative methodology. Initially, the researcher makes a fundamental choice between specializing in a specific area and experimenting in related areas. A rule-based labeling algorithm is also proposed to categorize authors based on their inclination toward experimentation or specialization. Once the individual choice is made, the research domains of authors with similar dispositions can be analysed, as the researchers themselves are the best proxy for emerging trends. Finally, this paper explains possible applications of this methodology for a new researcher, for researchers looking to explore or specialize, and for the formation of collaborations and interdisciplinary teams.

ICACCI--06.4 15:15 Multiclass Classification and Class Based Sentiment Analysis for Hindi Language
Sumitra Purushottam Pundlik (Viraj Computech Systems, India); Prachi Kasbekar, Gajanan Gaikwad, Prasad Dasare and Akshay Gawade (MIT College of Engineering, India); Purushottam Pundlik (Design Engineer, India)

With the recent development of Web 2.0 and Natural Language Processing, the use of regional languages for communication has also grown. In India, people express their views in their mother tongues, such as Hindi, Bengali, Kannada and Marathi. As Hindi is the fourth most spoken language in the world, many researchers are working on Hindi sentiment analysis. Sentiment analysis is a natural language processing task that mines information from various text forms, such as blogs and reviews, and classifies it by polarity as positive, negative or neutral. A speech combines a variety of topics, so a given Hindi speech document must first be classified into different classes and sentiments then extracted as positive, negative or neutral for each identified class. In this paper, we propose a model for classifying Hindi speech documents into multiple classes with the help of an ontology. Sentiment analysis is then carried out using HindiSentiWordNet (HSWN) to determine the polarity of each class. To improve the accuracy of polarity extraction, we combine HSWN with an LMClassifier.

ICACCI--06.5 15:30 Automatic Ranking of Essays Using Structural and Semantic Features
Sunil Kumar Kopparapu (Tata Consultancy Services, India); Arijit De (TCS, India)

Evaluating an essay automatically has been an area of active research, even though many competitive exams have shifted to multiple-choice answers. In this paper, we propose an unsupervised technique to rank essays based on their structural and semantic content. The approach is unsupervised because it makes use of the complete set of essays to determine the rank of an individual essay. We purposely avoid deep parsing; the approach is based on both the structural features of an essay and its semantic content. We evaluate the proposed approach on a set of essays submitted to a competition and generated from a single prompt, and compare the ranks of the essays with the ranks given by two different human evaluators. The results show a good correlation between the proposed unsupervised algorithm and the human evaluators. The proposed approach, as designed, is independent of any external knowledge base.

ICACCI--06.6 15:45 Sentiment Analysis for Mixed Script Indic Sentences
Rupal Bhargava (UpGrad Education Pvt. Ltd., India); Yashvardhan Sharma (Faculty, CS & IS, BITS-Pilani, India); Shubham Sharma (Birla Institute of Technology and Science, India)

India is a multi-lingual and multi-script country, and developing natural language processing techniques for Indic languages is an active area of research. With the advent of social media, there has been an increasing trend of mixing different languages to convey thoughts in social media text: users are more comfortable in their regional language and tend to express their thoughts by mixing words from multiple languages. In this paper, we develop a system for mining sentiments from code-mixed sentences in English combined with four other Indian languages (Tamil, Telugu, Hindi and Bengali). Due to the complex nature of the problem, the technique is divided into two stages, viz. language identification and sentiment mining. Evaluated results are compared to a baseline obtained from machine-translated sentences in English and found to be around 8% better in terms of precision. The proposed approach is flexible and robust enough to handle additional languages for identification, as well as anomalous foreign or extraneous words.

ICACCI--06.7 16:00 Multi-level Inflection Handling Stemmer Using Iterative Suffix-Stripping for Malayalam Language
Balasankar C (Adi Shankara Institute of Engineering and Technology, Kalady, India); Sobha T (MG University, India); Manusankar C (SSV College Valayanchirangara & Mahathma Gandhi University, India)

Stemming is the process of extracting a stem or root word from an inflected word. Corpus-based methods for stemming, which maintain a corpus with all inflected forms of all words, are not practical for Dravidian languages like Malayalam that have a high degree of inflection. This paper proposes a rule-based stemmer that uses an iterative suffix-stripping method. Using a root-word corpus that is relatively small, the proposed method offers increased efficiency and reduced complexity. By following an iterative suffix-stripping methodology, it is able to handle the multiple levels of inflection that can occur in a word.

ICACCI--06.8 16:15 Coreference Between Subjective Expression and Holder:A Classification Perspective
Sunit Bhattacharya (Central University of Rajasthan, India); Dipankar Das (Jadavpur University, India)

Instead of applying the traditional keyword-spotting approach, the presence of coreference between holders and their subjective expressions often becomes important for identifying and classifying opinion and sentiment accurately. In the present task, we have developed a classification system that resolves the coreference of subjective expressions with their holders and thus automates the process of text annotation. Dependency-parsed features, in addition to sentiment words, helped in classifying the coreferred instances using a machine learning framework.

ICACCI--06.9 16:30 A Rule Based Question Generation Framework to Deal with Simple and Complex Sentences
Rubel Das, Antariksha Ray, Souvik Mondal and Dipankar Das (Jadavpur University, India)

Influenced by the rapid growth of interest in the area of language generation, in the present attempt we employ various generation and disambiguation rules for generating questions from simple as well as complex sentences. In addition to building a question generation system with pre- and post-processing modules, we have developed and integrated a clause identification module to deal with complex sentences. The proposed three-graded rating scheme, along with error analysis, shows that the system achieves good performance for various Wh-questions, including the types "how much" and "how many".

ICACCI--06.10 16:45 Rule Based Kannada Agama Sandhi Splitter
Shashirekha Hosahalli Lakshmaiah (Mangalore University, India); Vanishree Kasaravalli Suryanarayana (Government First Grade College Shivamogga, India)

The development of tools and techniques for automatic processing of Kannada language texts at different levels, for various applications including machine translation, needs to be addressed with utmost priority. A Sandhi splitter is one such automated tool: it acts as a preprocessing step for a morphological analyzer and identifies the morpheme boundaries in a compound word based on the Sandhi rules applied in the reverse direction. Due to the highly agglutinative and morphologically rich nature of Kannada - one of the Dravidian languages of south India - complex compound words are formed by combining more than one morpheme/word according to the Sandhi rules. In this paper, as preliminary work, we present a rule-based Kannada Agama Sandhi splitter, which identifies two flavors of Agama Sandhi, namely Yakaragama and Varkaragama, and splits a compound word according to the rules of these two Agama Sandhis. Syllables are extracted from the word to identify its split point. Our model uses a dictionary of root words and suffixes to check the validity of split words, and rules to identify the split point. The problems of romanisation and transliteration are overcome by using Unicode with UTF-8 representation for Kannada. Words from Prajavani - a popular Kannada daily newspaper - are used to test our model, and the results are illustrated.

ICACCI--07: ICACCI-07: Fourth International Symposium on Intelligent Informatics (ISI'16) - Regular Papers

Room: LT-7(Academic Area)
Chairs: Sudhanshu S. Gonge (Symbiosis Institute of Technology, Lavale & Symbiosis International University, Pune, India), Viral Nagori (GLS Institute of Computer Technology (MCA) & GLS University, India)
ICACCI--07.1 14:30 Common Threats to Software Quality Predictive Modeling Studies Using Search-based Techniques
Ruchika Malhotra and Megha Khanna (Delhi Technological University, India)

Development of Software Quality Predictive Models (SQPM) is an important research area, as it helps in the effective use of project resources and assures a good-quality software product. A number of studies in the literature have developed successful SQPM using search-based techniques, which are meta-heuristic in nature. However, in order to perform a successful empirical study that develops SQPM using search-based techniques, it is essential to consider the various probable sources of threats to the study so that the developed models are realistic and efficient. This study reviews and analyzes 33 empirical studies in the literature that have successfully used search-based techniques for prediction of two common software quality attributes, i.e., fault-proneness and change-proneness, in order to comprehensively present the various probable threats to such studies. The study also proposes remedial actions to mitigate these threats and presents an analysis of the most common threats that are missed by researchers.

ICACCI--07.2 14:45 Facial Expression Based Music Player
Sushmita Kamble (KLS Gogte Institute of Technology); Anandtirth Kulkarni (KLSGIT, India)

The conventional method of playing music matched to a person's mood requires human interaction; migrating to computer vision technology enables automation of such a system. To achieve this goal, an algorithm is used to classify human expressions and play a music track according to the detected emotion. This reduces the effort and time required to manually search a playlist for a song matching one's present state of mind. The expressions of a person are detected by extracting facial features using the PCA algorithm and a Euclidean-distance classifier. An inbuilt camera is used to capture facial expressions, which reduces the design cost of the system compared to other methods. The results show that the proposed system achieves up to 84.82% accuracy in recognising expressions.

ICACCI--07.3 15:00 Clustering using Cuckoo Search Levy Flight
Aishwarya Palaiah, Akshata Prabhu and Reetika Agrawal (PESIT, India); Natarajan S (VTU, India)

Clustering of web documents has become a vital task due to the tremendous amount of information available on the web today. Finding suitable information quickly has become a big challenge in information retrieval, so it is necessary to adopt a method that organizes the information well. This is possible only when good document groups are formed, which in turn can be achieved when effective and optimized cluster heads are identified. Our main concern is to apply an optimized algorithm for web document clustering. The algorithm proposed in this paper is Cuckoo Search based on Levy flight. Efficient cluster heads can be located using the proposed Cuckoo Search algorithm, and Levy flight helps to speed up the local search while ensuring that the output domain is covered efficiently. This algorithm is simple, efficient and easy to implement. A comparative study of the proposed Cuckoo Search based on Levy flight and the K-means algorithm is carried out. The obtained results show that good performance can be achieved when Cuckoo Search based on Levy flight is used for clustering web documents.
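The Levy-flight component can be sketched with Mantegna's algorithm, a standard way to draw heavy-tailed step lengths; the paper's exact parameterization is not given here, so the defaults below are assumptions:

```python
import math
import random

def levy_step(beta=1.5, rng=random):
    """Draw one Levy-flight step length via Mantegna's algorithm.
    Occasional long jumps help a cuckoo escape poor cluster heads,
    while most steps stay local."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = rng.gauss(0, sigma)  # numerator sample
    v = rng.gauss(0, 1)      # denominator sample
    return u / abs(v) ** (1 / beta)
```

A candidate cluster head would then be displaced by a step of this length toward the current best head and kept only if it improves the clustering objective.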

ICACCI--07.4 15:15 The Winner Decision Model of Tic Tac Toe Game by using Multi-Tape Turing Machine
Sneha Garg (Goverment Mahila Engineering College, Ajmer, India); Dalpat Songara (Rajasthan Technical University, India)

Tic-tac-toe is a very popular game played by 2 players on a 3 x 3 grid board. A special symbol (X or O) is assigned to each player to indicate which slots are covered by that player. The winner is the player who first covers a horizontal, vertical or diagonal row of the board with only their own symbols. This paper presents a design model of the tic-tac-toe game using a multi-tape Turing machine, in which both players choose inputs randomly and the result of the game is declared. The computational model of tic-tac-toe is used to describe the game in a formal manner.
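The winner condition the model checks can be expressed compactly; this is just the standard row/column/diagonal test, not the Turing-machine encoding itself:

```python
def winner(board):
    """Return 'X' or 'O' if a player covers a full row, column, or diagonal;
    else None. `board` is a 3x3 list of 'X', 'O', or '' for empty slots."""
    lines = []
    lines.extend(board)                                                # rows
    lines.extend([[board[r][c] for r in range(3)] for c in range(3)])  # columns
    lines.append([board[i][i] for i in range(3)])                      # main diagonal
    lines.append([board[i][2 - i] for i in range(3)])                  # anti-diagonal
    for line in lines:
        if line[0] and line.count(line[0]) == 3:
            return line[0]
    return None
```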

ICACCI--07.5 15:30 Application of Transfer Learning in RGB-D object recognition
Abhishek Kumar (Vellore Institute of Technology, India); Nithin Shrivatsav Srikanth (National Institute of Technology, Tiruchirappalli, India); Gorthi R K Sai Subrahmanyam (Indian Institute of Space Science and Technology, India); Deepak Mishra (IIST, India)

In this work, we apply transfer learning to a multimodal deep learning network for fast and robust object recognition on an RGB-D dataset. The ability of a network to train quickly and recognize objects robustly is very important in the field of robotics. The multimodal deep learning network avoids time-consuming hand-crafted features and makes use of an RGB-D architecture for robust object recognition. Our architecture has two important features. First, it makes use of both the RGB and depth information of an image to recognize it. To achieve this, the architecture has two CNN processing streams, one for the RGB modality and the other for the depth modality, which enables the network to achieve higher accuracy than a normal single-stream RGB network. The depth image is encoded into a colour image before being passed into its CNN stream. The other important feature is the speed of training and further improvement of accuracy. To achieve this, we use transfer learning: first we train a CNN on 10 classes of different objects, and then we transfer the parameters to the RGB and depth CNNs. This enables the network to train faster and to achieve higher accuracy for a given number of epochs.

ICACCI--07.6 15:45 Comparative Study of the Impact of Processor Architecture on Compiler Tuning
Sankar N A B Chebolu (ANURAG DRDO, India); Rajeev Wankar (University of Hyderabad, India)

Deciding on nearly optimal optimization options and selecting the right values for the compiler parameter set is a combinatorial problem. To obtain maximal performance, many researchers have employed sophisticated tuning strategies. The impact of tuning, and thereby the quality of the compiler-generated code, often depends on many factors, such as the optimization infrastructure of the compiler and its maturity level, the optimization objective, the application source, and the target processor architecture. To understand the impact of processor architecture, we have conducted an empirical study on X86 and ARM platforms and compared the results. We employed Genetic Algorithm based tuning techniques on SPEC benchmark programs for the compiler optimization option selection and parameter tuning problems, both independently and together. The results demonstrate that processor architecture has a significant impact on compiler tuning.
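A GA over optimization-option bitstrings, of the kind such studies employ, might look like the sketch below. The flag names and the caller-supplied `fitness` function (which in the study would compile and time a SPEC benchmark) are illustrative assumptions:

```python
import random

# Hypothetical flag set; in the study these would be real compiler options.
FLAGS = ["-funroll-loops", "-finline-functions", "-ftree-vectorize", "-fomit-frame-pointer"]

def evolve(fitness, pop_size=8, generations=20, seed=1):
    """Minimal GA over flag bitstrings: elitism, tournament selection,
    one-point crossover, and bit-flip mutation. `fitness(bits)` scores an
    individual (higher is better) and is supplied by the caller."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in FLAGS] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        nxt = scored[:2]                      # elitism: keep the two best
        while len(nxt) < pop_size:
            a, b = rng.sample(scored[:4], 2)  # tournament among the top four
            cut = rng.randrange(1, len(FLAGS))
            child = a[:cut] + b[cut:]         # one-point crossover
            if rng.random() < 0.2:            # occasional bit-flip mutation
                i = rng.randrange(len(FLAGS))
                child[i] ^= 1
            nxt.append(child)
        pop = nxt
    best = max(pop, key=fitness)
    return [f for f, bit in zip(FLAGS, best) if bit]
```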

ICACCI--07.7 16:00 Extension of the Multi-Agent Resource Conversion Processes Model: Implementation of Agent Coalitions
Konstantin Aksyonov, Eugene Bykov, Olga Aksyonova, Alena Nevolina and Natalia Goncharova (Ural Federal University, Russia)

Most simulation models begin with an idea: an idea to improve the effectiveness of a certain solution or to estimate the consequences of some activity. For the most part, simulation software helps us, especially when we have a complex but predefined scenario for the process development. But when multiple decision-making persons interact with each other, come into conflict over common resources, or pursue a common goal by different methods, we need to simulate this behavior as well. In this paper we present the architecture of the multi-agent resource conversion processes model, extended with support for agent coalitions, which allows the simulation of such scenarios with multiple interacting agents.

ICACCI--07.8 16:15 Stroke Based Online Handwritten Gurmukhi Character Recognition
Ramandeep Kaur (Thapar University, India); Mandeep Singh (Thapar Institute of Engineering & Technology, India)

In this paper, we present a preliminary system to effectively recognize the strokes of handwritten Gurmukhi characters. 32 stroke classes have been considered and implemented for recognition. The proposed system extracts spatiotemporal and spectral features from the collected stroke database. These features were then used to train K-Nearest Neighbor (KNN), Multilayer Perceptron (MLP) and Support Vector Machine (SVM) classifiers. The proposed recognition methods were applied to the database using tenfold cross-validation and the percentage-split technique. Recognition rates of 89.35% using K-Nearest Neighbor, 89.89% using the Multilayer Perceptron and 89.64% using Support Vector Machines were obtained.
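As an illustration of the classification step, a minimal K-Nearest Neighbor vote over stroke feature vectors could look like the following; the toy 2-D features are assumptions, not the paper's spatiotemporal/spectral set:

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest training strokes
    (Euclidean distance in feature space)."""
    d = np.linalg.norm(np.asarray(train_X, dtype=float) - np.asarray(query, dtype=float), axis=1)
    nearest = np.argsort(d)[:k]
    votes = [train_y[i] for i in nearest]
    return max(set(votes), key=votes.count)
```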

ICACCI--07.9 16:30 Multi-agent planning with Quantitative Capability
Satyendra Chouhan (MNIT, India); Rajdeep Niyogi (Indian Institute of Technology Roorkee, India)

In recent years, there has been a lot of research in multi-agent planning (MAP). Multi-agent planning applications include search and exploration, robotics, and logistics. Existing works assume that agents have different capabilities only if they have different action sets; however, agents may have different capabilities even if their action sets are the same. In this paper, we present an approach to the multi-agent planning problem in which the capabilities of the agents are represented in a quantitative manner. The proposed approach translates such a MAP problem into a classical planning problem, which is then solved by an existing state-of-the-art classical planner. Experiments were performed on some benchmark planning domains, and the results are quite promising.

ICACCI--07.10 16:45 Evaluating Applicability of Perturbation Techniques for Privacy Preserving Data Mining by Descriptive Statistics
Alpa Shah (Sarvajanik College of Engineering and Technology & Gujarat Technological University, India); Ravi Gulati (Veer Narmad South Gujarat University, India)

Extensive research has been carried out on preserving the privacy of identifiers in datasets during data mining. Various approaches based on cryptographic principles, perturbation and secure sum computation have been studied to achieve privacy, and effective techniques that maximize privacy while minimizing information loss have always been intriguing. The work in this paper presents a comparison, based on an experimental study, of three fundamental perturbation techniques for Privacy Preserving Data Mining (PPDM): Additive, Multiplicative and Geometric Data Perturbation (GDP). These techniques form the basis of many advanced perturbation techniques. The literature does not offer a clear-cut comparison among the three techniques based on suitable metrics. We have identified various statistical metrics that must be considered for evaluating perturbation techniques, and this paper confers the applicability of the techniques by descriptive statistics through experiments under one roof. A comparison among the perturbation-based techniques is presented at the end to exemplify the importance of this research.
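The three perturbation families compared in the paper can be illustrated as below. The noise parameters are arbitrary assumptions, and the geometric variant shows only the rotation component (full GDP typically also adds translation and noise):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=50.0, scale=5.0, size=10_000)  # a numeric attribute

# Additive perturbation: add zero-mean noise; the mean is roughly preserved.
additive = data + rng.normal(0.0, 2.0, data.shape)

# Multiplicative perturbation: scale by noise centred at 1; variance inflates.
multiplicative = data * rng.normal(1.0, 0.05, data.shape)

# Geometric perturbation (simplified 2-D sketch): rotate paired attributes,
# which preserves Euclidean distances between records.
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
pairs = data[:2 * (len(data) // 2)].reshape(-1, 2)
geometric = pairs @ R.T
```

Descriptive statistics (mean, variance, pairwise distances) computed before and after each transform are exactly the kind of metrics the comparison rests on.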

ICACCI--07.11 17:00 Cross-Reference EWOD Driving Scheme and Cross- Contamination Aware Net Placement Technique for MEDA Based DMFBs
Pampa Howladar (Indian Institute of Engineering Science and Technology, Shibpur, India); Debashri Roy (Northeastern University, USA); Pranab Roy (Indian Institute of Engineering Science and Technology, Shibpur, India); Hafizur Rahaman (Bengal Engineering and Science University, Shibpur, India)

Droplet-based digital microfluidics is a popular emerging technology for laboratory experiments; however, certain limitations exist in specific cases of implementation that require further enhancement. Pin-count minimization and avoidance of cross-contamination between droplets of different bio-molecules during droplet routing are primary design challenges for biochips. A competent architecture, the Microelectrode Dot Array (MEDA), has recently been introduced as a highly scalable, field-programmable and reconfigurable dot-array architecture that allows dynamic configuration. This work considers the cross-contamination problems in pin-constrained biochips based on the MEDA architecture. To reduce cross-contamination, we present a MEDA-based cross-reference driving scheme that allows simultaneous driving of multiple droplets, and we propose a suitable net placement technique applicable to the MEDA architecture. The objectives of the proposed technique include reducing crossovers with intelligent collision avoidance, minimizing the overall routing time, and increasing the grouping number to reduce the total pin count. The simulation results presented in this paper indicate the efficiency of our algorithm for practical bioassays.

ICACCI--07.12 17:15 Rule-Based System for Automated Classification of Non-Functional Requirements from Requirement Specifications
Prateek Singh, Deepali Singh and Ashish Sharma (GLA University, India)

Unmasking the non-functional requirements (NFRs) of software, such as quality attributes, interface requirements and design constraints, is crucial in finding architectural alternatives starting from early design decisions. To develop a quality software product, NFRs need to be extracted from requirement documents, and it is beneficial if this process is automated, reducing the human effort, time and mental fatigue involved in identifying specific requirements among the large number of requirements in a document. The proposal presented in this paper combines automated identification and classification of requirement sentences into NFR sub-classes, using a rule-based classification technique with thematic roles, with prioritization of the extracted NFR sentences within the document according to their occurrence in multiple NFR classes. An F1-measure of 97% is obtained on the PROMISE corpus and 94% on the Concordia RE corpus. The results validate the claim that the proposal provides more specific and higher results than previous state-of-the-art approaches.

ICACCI--07.13 17:30 Frequency-based similarity measure for Context-Aware Recommender Systems
Mohammed Wasid (Aligarh Muslim University, India); Vibhor Kant (Jawaharlal Nehru University, India); Rashid Ali (AMU Aligarh, India)

Collaborative Filtering (CF), the most widely used and successful technique in the area of recommender systems, provides useful recommendations to users based on similar users. Computing the similarity among users efficiently is the major step in CF. Further, it has been observed in the literature that incorporating context into CF provides more accurate and relevant recommendations, but contextual factors are hard to represent and model directly in the system. In this paper, we incorporate contextual information into the user profile as additional features through a proposed novel frequency-count method. After extending the user profiles, items are recommended based on similar profiles computed through a novel similarity measure. To evaluate the performance of our proposed recommendation strategy, several experiments are conducted on the popular LDOS-CoMoDa dataset.
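The frequency-count extension of a user profile might be sketched as follows. The paper's similarity measure is its own novel one, so plain cosine similarity stands in here, and the item/context labels are hypothetical:

```python
from collections import Counter
from math import sqrt

def extend_profile(ratings, contexts):
    """Extend a user profile (item -> rating) with frequency counts of the
    contextual conditions (e.g. 'home', 'weekend') under which the user rated."""
    profile = dict(ratings)
    profile.update(Counter(contexts))  # context label -> occurrence count
    return profile

def cosine_sim(p, q):
    """Cosine similarity over the union of profile keys."""
    keys = set(p) | set(q)
    dot = sum(p.get(k, 0) * q.get(k, 0) for k in keys)
    na = sqrt(sum(v * v for v in p.values()))
    nb = sqrt(sum(v * v for v in q.values()))
    return dot / (na * nb) if na and nb else 0.0
```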

ICACCI--07.14 17:45 Chaotic Hash Function Based Plain-Image Dependent Block Ciphering Technique
Sahil Wadhwa (Jamia Millia Islamia, India); Musheer Ahmad (Jamia Millia Islamia, New Delhi, India); Harsh Vijay (Jamia Millia Islamia, India)

Secure hashes play an indispensable role in modern multimedia image encryption. Traditional block ciphering techniques are quite complex, demand colossal processing time for key generation, and are sometimes a source of redundancy. This paper presents an approach for designing a one-way cryptographic hash function and a block ciphering scheme based on the proposed hash codes. In the proposed work, the message is divided into blocks, with each block processed individually using a chaotic map. Two intermediate hash values are generated using evolved control and input parameters, and are then employed to produce a final variable-length hash. The simulation and statistical outcomes justify the striking performance of the proposed chaotic hash method. Moreover, the generated hash code is applied to realize an image block ciphering technique. The encryption process is plain-image dependent and thereby exhibits a satisfactory encryption effect suitable for practical applications.
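To give a flavour of chaotic hashing, here is a toy digest built on the logistic map. This is purely illustrative: it is not the paper's two-stage construction and offers no real security:

```python
def chaotic_hash(message, r=3.99, bits=64):
    """Toy one-way digest: each message byte perturbs the state of a
    logistic map x -> r*x*(1-x); after iterating past the transient,
    a nibble of the trajectory is folded into the digest."""
    x = 0.5
    digest = 0
    for byte in message.encode():
        x = (x + byte / 255.0) / 2.0   # mix the byte into the chaotic state
        for _ in range(16):            # iterate to amplify the perturbation
            x = r * x * (1.0 - x)
        digest = ((digest << 4) | (int(x * 16) & 0xF)) & ((1 << bits) - 1)
    return digest
```

The sensitivity of the map to its initial condition is what makes small input changes cascade through the digest.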

ICACCI--07.15 18:00 A Biometrics-based Robust and Secure User Authentication Protocol for e-Healthcare Service
Sandip Roy (Asansol Engineering College, India); Santanu Chatterjee (Research Center Imarat & Defence Research and Development Organization, Hyderabad, India); Samiran Chattopadhay (Jadavpur University, India); Amit Gupta (DRDO, India)

In e-healthcare services like Telecare Medicine Information Systems (TMIS), proper user authentication is necessary for secure access to medical server data by authorized doctors or patients. As the key size of Chebyshev chaotic maps is smaller, chaotic-map-based user authentication schemes are more efficient than ECC- or RSA-based schemes. Unfortunately, a literature study reveals that most of the available chaotic-map-based user authentication schemes in TMIS suffer from common serious security weaknesses, such as parallel session and reflection attacks and ephemeral secret key leakage attacks. In this paper, we propose a provably secure and efficient new user authentication scheme for TMIS based on an extended chaotic map. The scheme ingeniously uses a smart card, a password and user biometrics to achieve three-factor authentication. To prove the security of the proposed scheme, we provide both an informal security analysis and formal security verification using BAN (Burrows, Abadi and Needham) logic. Further, performance analysis shows that the scheme is quite efficient and lightweight.

ICACCI--07.16 18:15 Bridging the Skill Gap Using Vocational Training Simulators: Validating Skill Assessment Metrics
Aswathi P, Amritha Natarajan, Namitha K, Nagarajan Akshay, Balu M Menon and Rao R. Bhavani (AMMACHI Labs, Amrita School of Engineering, Amritapuri, Amrita Vishwa Vidyapeetham, Amrita University, India)

Skill development remains a daunting task, especially in developing countries, owing to numerous challenges such as the unavailability of expert trainers, the need for constant monitoring and support during the training phase, and the lack of tools to measure and quantify human motor-skill learning. Researchers are trying to address this problem by introducing technology-enhanced learning systems; APTAH is one such solution for aiding skill development in vocational training. For each training session APTAH provides to the user, it generates a set of data which needs to be shown reliable in revealing relevant information that can support the assessment of the trainee's actual learning. This paper presents experiments performed on simulator-generated data in order to examine the existence and strength of fundamental dependencies among the skill parameters involved. The regression methods used in the analysis demonstrate that the device is capable of capturing most of the vital relationships among the skill parameters, so that it can later be used to enhance a trainee's skill learning.

Thursday, September 22

Thursday, September 22 14:30 - 17:30 (Asia/Kolkata)

ICACCI--10: ICACCI-10: Signal/Image/video/speech Processing/Computer Vision/Pattern Recognition (Regular Papers)

Room: LT-1 (Academic Area)
Chairs: Ranjan Gangopadhyay (The L.N.Mittal Institute of Information Technology, India), Nimmagadda Padmaja (Sree Vidyanikethan Engineering College & Tirupati, INDIA, India)
ICACCI--10.1 14:30 Identifying High Quality Jpeg Compressed Images Through Exhaustive Recompression Technique
Zeeshan Akhtar (IIT Kanpur India, India); Ekram Khan (Aligarh Muslim University, India)

An important step in digital image forensics is to identify whether a given image is JPEG compressed or not. Most state-of-the-art image forensic methods fail to differentiate between an uncompressed image and a high-quality (near-lossless) compressed one. In this paper we propose a method to efficiently differentiate an original uncompressed image from high-quality JPEG compressed images (saved as TIFF or BMP) by analyzing the changes in the DCT coefficients after recompression. We define a new parameter and experimentally show that it takes a very small value for uncompressed images compared to their JPEG compressed counterparts. Simulation results verify that the proposed method significantly outperforms other methods for images taken from various sources.

ICACCI--10.2 14:45 Statistical Textural Feature and Deformable Model Based MR Brain Tumor Segmentation
Shoaib Amin Banday (Islamic University of Science and Technology, India); Ajaz H. Mir (National Institute of Technology Srinagar, India)

Segmentation of abnormalities is one of the main focuses of the medical image processing field, for the purposes of diagnosis and treatment planning. The work put forth in this paper proposes and implements a semi-automatic technique that yields appropriately segmented regions from MR brain images. The segmentation technique utilizes a fusion of information beyond human perception from MR images to develop a fused feature map; this information includes second-order derivatives computed from an image, which are discussed in detail in the relevant section of this paper. The obtained feature map acts as a stopping function for the initialized curve in the framework of an active contour model, yielding a well-segmented region of interest. The segmentation results are compared with ground-truth segmentations obtained manually from experts, using Jaccard's coefficient of similarity and the overlap index. The results obtained on various case studies, such as craniopharyngioma, high-grade glioma and microadenoma, show good efficacy of the overall method.

ICACCI--10.3 15:00 Hardware and Software Implementation of Weather Satellite Imaging Earth Station
Chinmay Patil (University of Mumbai); Tanmay Chavan and Monali Nitin Chaudhari (University of Mumbai, India)

Monitoring weather patterns and interpreting satellite images is one of the most widely utilized applications of remote sensing. Satellites have been used over the past several decades to obtain a wide variety of information about the earth's surface. In spite of that, high cost, poor image resolution, and low availability of useful information have always been among the top issues faced by satellite enthusiasts. Fine reception of these images and extraction of relevant information is easier said than done. This paper aims to decrease the cost of imaging substantially and greatly improve the availability of such images. Using locally available raw materials, an antenna that could receive clean APT signals from NOAA 15, 18 and 19 was constructed and tested with good results. Using sync pulses as reference, the audio signals were decoded into an image in MATLAB; filtering, cross-correlation and noise reduction were some of the steps implemented to form the image. The system thus provides a comprehensive solution for receiving satellite images with a software-defined radio, an appropriate antenna and various application environments for decoding the audio signals into an intelligible image. It requires very little processing power, making weather forecasting quite convenient for the common man. It is therefore a low-cost, home-brew realization of a technique that is otherwise regarded as quite sophisticated by space enthusiasts.

ICACCI--10.4 15:15 Head Pose Estimation for Recognizing Face Images Using Collaborative Representation Based Classification
Srija Chowdhury and Jaya Sil (Indian Institute of Engineering Science and Technology, Shibpur, India)

Real-time face recognition is challenging due to the time taken to search for the test image within a wide variation of training images. We propose an efficient face recognition method applying collaborative representation based classification (CRC) in two steps. Using CRC, we first select the training images closer to the pose of the test image compared to others in the training set. The test image is then searched among the selected training images only, instead of all, thus reducing the search space and time. For person recognition, we calculate Gabor wavelets of the eye regions of the test and the selected training images as basis functions containing detailed edge information. The images are reconstructed as a linear span of the basis vectors using CRC and the sparse coefficient vectors are used as feature vectors. To obtain the best-matching image we apply the l2-norm to the feature vectors corresponding to the test and the selected images. The proposed head pose estimation based face recognition method has been validated on the PIE and Head Pose databases and obtains accuracy comparable with other methods at much lower storage and time complexity.
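
The core of CRC is a closed-form ridge-regression coding step followed by class-wise residual comparison. A minimal sketch on toy data (the `crc_classify` helper and its dictionary are hypothetical illustrations, not the authors' implementation):

```python
import numpy as np

def crc_classify(D, labels, y, lam=1e-3):
    """Collaborative representation: code y over dictionary D (columns are
    training samples), then assign the class with the smallest residual."""
    n = D.shape[1]
    # ridge-regularised coding vector: (D^T D + lam*I)^-1 D^T y
    alpha = np.linalg.solve(D.T @ D + lam * np.eye(n), D.T @ y)
    best, best_res = None, np.inf
    for c in set(labels):
        mask = np.array([l == c for l in labels])
        # residual using only this class's columns and coefficients
        res = np.linalg.norm(y - D[:, mask] @ alpha[mask])
        if res < best_res:
            best, best_res = c, res
    return best

# toy dictionary: two classes spanning distinct directions
D = np.column_stack([[1, 0, 0], [0.9, 0.1, 0], [0, 0, 1], [0, 0.1, 0.9]]).astype(float)
labels = ["A", "A", "B", "B"]
pred = crc_classify(D, labels, np.array([0.95, 0.05, 0.0]))
```

The same coding step, applied first to pose prototypes and then to the pose-filtered training set, gives the two-stage scheme described above.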

ICACCI--10.5 15:30 Breast Cancer Detection Using Non-invasive Method for Real Time Dataset
Basavaraj Hiremath (MSRIT and Jain University, India); Prasannakumar S C (RVCE, Bangalore, India); Praneethi K (MSRTH, India)

Breast cancer is one of the most dangerous diseases affecting women's health. This paper aims to detect breast cancer non-invasively with the help of mammograms and enables advanced characterization of the lesion using the following steps: mammogram enhancement using an adaptive median filter, cancer area detection using seed-value based segmentation, extraction of CSLBP and GLDM features and, finally, classification of cancer using an RBF-SVM. The algorithm is evaluated on a real-time mammogram breast dataset consisting of 249 images, for which the accuracy is found to be 95.18%.

ICACCI--10.6 15:45 A Novel ECG Data Compression Algorithm Using Best Mother Wavelet Selection
Amol Motinath Veer, Chandan Kumar Jha and Mahesh Kolekar (Indian Institute of Technology Patna, India)

This paper presents a novel electrocardiogram (ECG) data compression algorithm based on the discrete wavelet transform with best mother wavelet selection for minimum percentage root-mean-square difference. Performance of the proposed algorithm has been evaluated using 48 records of ECG signal taken from the MIT-BIH arrhythmia database. The proposed algorithm provides a fast Daubechies mother wavelet selection approach based on the minimum value of percent root-mean-square difference (PRD). For effective encoding of the transform coefficients, a combination of backward difference and run-length encoding is used. The proposed compression algorithm offers an average compression ratio, PRD, and QS of 15.02, 0.23 and 67.68, respectively, over the 48 ECG records.
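
The PRD criterion and the backward-difference/run-length encoding stage can be sketched with synthetic data (illustrative only, not the authors' code):

```python
import numpy as np

def prd(x, x_rec):
    """Percent root-mean-square difference between signal and reconstruction."""
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

def backward_diff_rle(coeffs):
    """Backward difference, then run-length encode as [value, count] pairs."""
    diff = np.concatenate(([coeffs[0]], np.diff(coeffs)))
    runs = []
    for v in diff:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

# thresholded wavelet coefficients are mostly constant/zero, so runs compress well
c = np.array([5, 5, 5, 0, 0, 0, 0, 2])
runs = backward_diff_rle(c)
err = prd(np.array([1.0, 2.0, 2.0]), np.array([1.0, 2.0, 1.0]))
```

Selecting the mother wavelet then amounts to transforming, thresholding and reconstructing the record with each Daubechies candidate and keeping the one with the smallest PRD.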

ICACCI--10.7 16:00 Reconstruction of Surveillance Videos by Non-Iterative Pseudo Inverse Based Recovery Algorithm (NIPIRA): A Subjective Experience
Florence Gnana Poovathy John (Hindustan Institute of Technology and Science, India); S Radha (SSN College of Engineering & Anna University, India)

Compressed sensing is a conceptual process in which the input data is compressed to a lower-dimensional measurement vector, i.e. of dimension M ≤ N. Recovery of the original data from these diminished measurements is carried out using various recovery procedures. Many of these procedures are iterative in nature, which introduces complications such as computational complexity and increased elapsed time. Hence the Non-Iterative Pseudo Inverse based Recovery Algorithm (NIPIRA) is formulated to address these issues effectively. The recovery of surveillance videos has been carried out using NIPIRA and the recovered videos are subjectively evaluated with the help of naive viewers. Mean opinion scores (MOS) for 8 recovered surveillance videos, under various low-level and abstract attributes, were collected from 20 naive viewers. The average score given by the scorers is around 4.5, the 'Excellent' range in the MOS table, showing NIPIRA to be well suited for recovering videos from compressively sensed measurements.
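
The non-iterative core of pseudo-inverse recovery can be sketched under the usual compressed-sensing model y = Φx (a toy setup; NIPIRA's full formulation and parameter choices are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 16, 12                        # signal length and measurement count, M <= N
x = rng.standard_normal(N)
Phi = rng.standard_normal((M, N))    # random measurement matrix
y = Phi @ x                          # compressed measurements

# single-shot recovery: minimum-norm least-squares estimate via the
# Moore-Penrose pseudo-inverse (no iterations, hence low elapsed time)
x_hat = np.linalg.pinv(Phi) @ y
# x_hat reproduces the measurements exactly; recovering x itself in full
# additionally requires the usual CS conditions (sparsity, incoherence)
```

The recovered frame is obtained in one matrix multiplication, which is the source of the speed advantage over iterative solvers.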

ICACCI--10.8 16:15 Reinforced Fast Marching Method and Two Level Thresholding Algorithm for Cloud Detection
Harinder Kaur and Neelofar Sohi (Punjabi University Patiala, India)

Cloud detection is a preliminary task that paves the way for meteorological and cloud-field research. This paper puts forward an effective cloud detection scheme for cumulus and cirrus clouds. First, a Two-Level Thresholding Algorithm is proposed for cirrus cloud detection. Second, in order to detect cumulus clouds, we enhance the fast marching method (FMM) by providing for appropriate selection of seed locations in the digital image. Also, a graph is constructed from the image, and edge weights are computed using the gradient magnitude (smoothness constraint) as well as the grey-level intensity difference between a pixel and the target cloud location. The edge weights are utilized to divide the image into segments. Experimental results illustrate that the proposed algorithm shows significant results for cloud detection and gives a better F-score than existing state-of-the-art cloud detection algorithms.

ICACCI--10.9 16:30 Combining Temporal Interpolation and DCNN for Faster Recognition of Micro-expressions in Video Sequences
Veena Mayya (Manipal Institute Of Technology & Manipal University, India); Radhika M. Pai (Manipal Institute of Technology, India); Manohara Pai (Manipal Institute of Technology & Manipal University, India)

Micro-expressions are hidden human emotions that are short-lived and very hard to detect in real-time conversations. Micro-expression recognition has proven to be an important behavioral source for lie detection during crime interrogation. SMIC and CASME II are the two widely used spontaneous micro-expression datasets that are publicly available with baseline results using LBP-TOP for feature extraction. Parameter estimation is the key factor in feature extraction with LBP-TOP, and requires effort in both computation and time. In this paper, facial features are extracted using a deep convolutional neural network (DCNN) on a CUDA-enabled General Purpose Graphics Processing Unit (GPGPU) system. Results show that the proposed combination of DCNN and temporal interpolation (TIM) can achieve better performance than the results published in the baseline publications. Feature extraction time is reduced due to the use of GPU-enabled systems.

ICACCI--10.10 16:45 Quantitative Analysis of Break-lock in Monopulse Receiver Phase-Locked Loop Using Noise Jamming Signal
Harikrishna Paik and Neti Narasimha Sastry (V R Siddhartha Engineering College, India); SantiPrabha I (J N T K University, India)

Noise jamming is one of the several active jamming techniques employed against tracking radars and missile seekers; it aims at completely masking the desired radar signal with an externally injected noise signal. Of the several parameters to be considered in the analysis of the noise jamming problem, jammer power is one of the most critical. In this paper, emphasis is given to quantitative estimation and analysis of the effectiveness of break-lock in a missile-borne phase-locked loop (PLL) based monopulse radar receiver using an external noise signal. The analysis involves estimating the jamming signal power required to break lock as a function of radar echo signal power through computer simulation and experimental measurements. Simulation plots of the receiver PLL output are presented for selected echo signal powers from -14 dBm to -2 dBm. The simulation results are compared and verified with experimental results and are in close agreement, within 2 dB. The measured values of jamming signal power at break-lock using the HMC702LP6CE, HMC703LP4E, and HMC830LP6GE PLL synthesizers are -19.5 dBm, -18.1 dBm, and -17.6 dBm, respectively, while the simulated value is -18.8 dBm for a typical radar echo signal power of -10 dBm. The fairly good and consistent agreement between these results validates the simulation data.

ICACCI--10.11 17:00 Dynamic Sectored Random Projection for Cancelable Iris Template
P Punithavathi and Geetha S (VIT University Chennai Campus, India)

Biometrics is an indispensable tool widely used in sensitive authentication applications. The increase in its usage has also raised several issues related to the security of biometric data, and several template protection schemes have been introduced to protect biometrics from being compromised. Cancelable biometrics is a template protection scheme that enables a biometric to be revoked like a token or password; enrolment and matching are performed in a transformed domain. A dynamic sectored random projection for cancelable iris templates is proposed. The technique projects the sectored iris features onto a dynamic random projection matrix to generate a transformed template. The dynamic random projection matrix is derived from the iris feature itself, so no external key is required. Samples from the IIT-Delhi Iris and CASIA Iris Image version 1.0 databases were used in the experiments. The matching performance, non-invertibility and distinctiveness of the transformed templates generated with the proposed technique have been analyzed. The transformed templates prove promising and satisfy the characteristics of cancelable biometrics.
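
The key-free, sectored projection idea can be illustrated with a small sketch; the dimensions and the data-derived seed function below are hypothetical stand-ins for the paper's actual derivation:

```python
import numpy as np

def sectored_random_projection(feature, n_sectors, out_dim, seed_fn):
    """Split the feature vector into sectors and project each with a random
    matrix whose seed is derived from the sector data itself (no external key)."""
    sectors = np.array_split(feature, n_sectors)
    out = []
    for s in sectors:
        rng = np.random.default_rng(seed_fn(s))            # data-derived seed
        R = rng.standard_normal((out_dim, s.size)) / np.sqrt(out_dim)
        out.append(R @ s)                                  # cancelable sector
    return np.concatenate(out)

feat = np.arange(12, dtype=float)
seed = lambda s: int(np.abs(s).sum())   # toy stand-in for key-free seed derivation
t1 = sectored_random_projection(feat, 3, 4, seed)
t2 = sectored_random_projection(feat, 3, 4, seed)
```

Because the seed is a function of the data, the same iris yields the same template without storing any key, while a change in the derivation revokes (cancels) old templates.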

ICACCI--10.12 17:15 A Fast Algorithm for Optic Disc Segmentation in Fundus Images
Santhakumar Ramamoorthy and Elagiri Ramalingam Rajkumar (VIT University, India); Megha Tandur (Visvesvaraiah Technological University, India); Geetha K s (VTU, RVCE, India); Kumar Rajamani (Robert Bosch Engineering and Business Solutions Limited, India); Girish Haritz (Robert Bosch Engineering and Business Solutions, India)

Advances in computing power have made computer-aided diagnosis a reality. Within this frame, this paper presents a fast and efficient method for optic disc detection in fundus images captured from a portable fundus camera. The algorithm uses a combination of adaptive mean thresholding and statistical evaluation to detect the optic disc. Experiments show that optic disc detection accuracies of 98%, 95%, and 90% are obtained for the OPTOMED, MESSIDOR, and DIARETDB1 databases, respectively. The average runtime of our algorithm is 0.8 s, substantially faster than many existing methods.
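
Adaptive mean thresholding of the kind used in the first stage can be sketched in plain NumPy (a generic version; window size and offset here are illustrative, not the paper's settings):

```python
import numpy as np

def adaptive_mean_threshold(img, win=3, offset=0.0):
    """Mark pixels brighter than the mean of their (win x win) neighbourhood.
    A locally bright region such as the optic disc survives this test."""
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=bool)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            local_mean = padded[i:i + win, j:j + win].mean()
            out[i, j] = img[i, j] > local_mean + offset
    return out

img = np.zeros((5, 5)); img[2, 2] = 10.0   # one bright "disc" pixel
mask = adaptive_mean_threshold(img)
```

Statistical evaluation of the resulting candidate regions (size, shape, brightness) would then select the disc among them.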

ICACCI--10.13 17:30 Undecimated Dual Tree Complex Wavelet Transform Based Face Recognition
Rajesh D S and B H Shekar (Mangalore University, India)

In this paper, we develop a local descriptor and two global descriptors based on the Undecimated Dual Tree Complex Wavelet Transform (UDTCWT). The UDTCWT possesses certain advantages over traditional wavelet transforms and can hence represent digital image signals more accurately; we explore this in the context of the face recognition problem. Given a face image, we compute its complex UDTCWT coefficient images at 4 scales and 6 orientations. Using these coefficient images we compute 48 Local UDTCWT Phase Patterns (LUPPs) and 8 Global UDTCWT Phase Patterns (GUPPs). Dividing these patterns into blocks and concatenating the 2D magnitude-weighted phase histograms of the complex coefficients in these blocks, we form our global descriptor. To handle pose and expression variation in face images, we develop a key-point based local descriptor: using the box-filter response scale space, we obtain scale-dependent square regions around interest points, and these regions are represented using the UDTCWT. Extensive experiments conducted on the benchmark face recognition datasets FERET, ORL, YALE and UMIST demonstrate the appropriateness of our descriptors for face recognition applications.

ICACCI--10.14 17:45 Multiple Instance Learning for the Determination of Appropriate Images for Fundus Image Algorithms
Amruthavarshini Talikoti (R. V. College of Engineering, India); Kavya Venkatesan (R V College of Engineering, India); Geetha K s (VTU, RVCE, India); Digvijay Singh (Medanta-The Medicity, India); Kumar Rajamani (Robert Bosch Engineering and Business Solutions Limited, India)

Glaucoma is a disease that affects the eye and can lead to blindness. The need for an effective detection system is pressing, as the symptoms of Glaucoma may not be apparent in the early stages; it can affect all age groups and starts with a decreasing field of vision. Diabetic Retinopathy (DR) is a disease that damages the retina due to diabetes and can lead to eventual blindness; it affects almost all patients who have had diabetes for 20 years or more. This paper proposes a method to distinguish fundus images in which the optic disc (OD) is present from those in which it is absent. When a fundus image is captured, the OD appears as a bright region in the image. The likely reason for its absence is that the field of view of a portable fundus camera is very small, which may exclude the OD from the image; another, fairly uncommon, reason is improper capture. A system to differentiate between these images is of paramount importance, as it aids segmentation of the optic disc and optic cup and improves the overall performance of algorithms for detecting Glaucoma and DR. The proposed method uses multiple instance learning to effectively detect the OD. Twenty iterations of the algorithm were performed; the average accuracy, sensitivity and specificity values are 96.85, 95.19 and 98.52, respectively. These values show that the method effectively differentiates images with and without the OD.

ICACCI--11: ICACCI-11: Artificial Intelligence and Machine Learning/Data Engineering/Biocomputing (Regular Papers)

Room: LT-2 (Academic Area)
Chairs: Anupam Singh (LNMIIT, India), Anil K Dubey (ABES Engineering College Ghaziabad, Uttar Pradesh, India)
ICACCI--11.1 14:30 Enhanced Shuffled Bat Algorithm (EShBAT)
Hema Banati (University of Delhi & DYAL SINGH COLLEGE, India); Reshu Chaudhary (University of Delhi, India)

The Bat Algorithm (BA) is a simple and effective global optimization algorithm that has been applied to a wide range of real-world optimization problems. Various extensions to BA have been proposed, prominent among them ShBAT, a hybrid of BA and the Shuffled Frog Leaping Algorithm (SFLA), a memetic algorithm based on the food-search behavior of frogs. ShBAT integrates the shuffling and reorganization technique of SFLA to enhance the exploitation capabilities of BA. This paper proposes the Enhanced Shuffled Bat algorithm (EShBAT), an extension of ShBAT. In ShBAT, different memeplexes evolve independently, with different cultures. EShBAT improves the exploitation capabilities of ShBAT by grouping together the best of each memeplex to form a super-memeplex, which evolves independently to further exploit the best solutions. The performance of EShBAT is verified over 30 well-known benchmark functions. Experimental results indicate a significant improvement of EShBAT over BA and ShBAT.

ICACCI--11.2 14:45 Exploiting Apache Flink's Iteration Capabilities for Distributed Apriori: Community Detection Problem as an Example
Sanjay Rathee (Indian Institute of Technology Mandi, Himachal Pardesh, India); Arti Kashyap (IIT-Mandi, India)

Extraction of useful information from large datasets is one of the most important research problems, and association rule mining is one of the best methods for this purpose. Finding possible associations between items in large transaction-based datasets (finding frequent patterns) is the most important part of association rule mining. Many algorithms exist to find frequent patterns, but Apriori remains a preferred choice due to its ease of implementation and natural tendency to be parallelized. Many single-machine Apriori variants exist, but the massive amount of data available these days is beyond the capacity of a single machine; therefore, to meet the demands of this ever-growing data, a multi-machine Apriori algorithm is needed. For this type of distributed application, MapReduce is a popular fault-tolerant framework. Hadoop is one of the best open-source MapReduce frameworks for distributed storage and distributed processing of huge datasets on clusters built from commodity hardware, but the heavy disk I/O at each iteration of a highly iterative algorithm like Apriori makes Hadoop inefficient. A number of MapReduce-based platforms have been developed for parallel computing in recent years; among them, Spark and Flink have attracted a lot of attention because of their inbuilt support for distributed computation. Earlier we proposed a reduced-Apriori algorithm on the Spark platform that outperforms parallel Apriori, first because of Spark and second because of the improvement we proposed to standard Apriori. This work is a natural sequel and targets implementing, testing and benchmarking Apriori on Apache Flink, comparing it with the Spark implementation. Flink, a streaming dataflow engine, overcomes the disk I/O bottlenecks of MapReduce, providing an ideal platform for distributed Apriori.
Flink's pipelined structure allows the next iteration to start as soon as partial results of the earlier iteration are available, so there is no need to wait for all reducers' results before starting the next iteration. We conduct in-depth experiments to gain insight into the effectiveness, efficiency and scalability of the Apriori algorithm on Flink, and use the community detection graph mining problem as a test case to demonstrate our implementations.
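
The Apriori kernel being parallelised can be sketched in a few lines of single-machine Python (illustrative only; the paper's contribution is the Flink/Spark distribution of exactly these generate-prune-count iterations):

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return frequent itemsets (as frozensets) with their support counts."""
    tx = [set(t) for t in transactions]
    # L1: frequent single items
    items = {i for t in tx for i in t}
    freq = {frozenset([i]): sum(i in t for t in tx) for i in items}
    freq = {s: c for s, c in freq.items() if c >= min_support}
    result, current, k = dict(freq), set(freq), 2
    while current:
        # candidate generation: join (k-1)-itemsets, then prune by subsets
        cands = {a | b for a in current for b in current if len(a | b) == k}
        cands = {c for c in cands
                 if all(frozenset(s) in current for s in combinations(c, k - 1))}
        counts = {c: sum(c <= t for t in tx) for c in cands}
        current = {c for c, n in counts.items() if n >= min_support}
        result.update({c: counts[c] for c in current})
        k += 1
    return result

tx = [["a", "b", "c"], ["a", "b"], ["a", "c"], ["b", "c"], ["a", "b", "c"]]
freq = apriori(tx, min_support=3)
```

Each `while` pass depends only on the previous level's frequent sets, which is what Flink's pipelining exploits: counting for level k can begin as soon as partial level-(k-1) results arrive.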

ICACCI--11.3 15:00 Predicting Software Change-Proneness with Code Smells and Class Imbalance Learning
Arvinder Kaur and Kamaldeep Kaur (Guru Gobind Singh Indraprastha University, India); Shilpi Jain (Guru Gobind Singh Indraprastha University Delhi, India)

The objective of this paper is to study the relationships between different types of object-oriented software metrics, code smells and actual changes in software code that occur during the maintenance period. It is hypothesized that code smells are indicators of maintenance problems. To understand the relationship between code smells and maintenance problems, we extract code smells in a Java-based mobile application called MOBAC; four versions of MOBAC are studied. Machine learning techniques are applied to predict software change-proneness with code smells as predictor variables. The results indicate that code smells are more accurate predictors of change-proneness than static code metrics for all machine learning methods. However, class imbalance techniques did not outperform non-class-imbalance machine learning techniques in change-proneness prediction. The results are based on accuracy measures such as the F-measure and the area under the ROC curve.

ICACCI--11.4 15:15 Review of Back-propagation Algorithms for Defect Elimination with Proposed DMASIC Methodology
Shweta Loonkar (Mukesh Patel School of Technology Management and Engineering, Mumbai, India); Dhirendra Mishra (Mukesh Patel School of Technology Management and Engineering, Mumbai & NMIMS University, India)

In India, the textile industry holds an irreplaceable, self-sustaining position. The quality of the garment industry rests on factors such as performance, reliability, durability, and the demand for high-quality products with few defects. This paper reviews and presents a detailed study of fabric defects occurring at each level of the textile manufacturing process, from raw material to end product, and their elimination using the proposed DMASIC methodology. To reduce these defects, the Six Sigma DMAIC (Define, Measure, Analyze, Improve, and Control) methodology is applied at each level of the textile manufacturing process. To DMAIC we add one more phase, a SORT phase after the Analyze step, resulting in the DMASIC methodology. The sorting phase categorizes defects into three levels: minor, major and critical. After the DMASIC Sort process, automatic defect detection and classification using artificial neural networks yields the various defects in the knitted, woven, dyeing and finished categories. To obtain the best results for the neural network, four back-propagation training algorithms are compared: simple back propagation, Levenberg-Marquardt, conjugate gradient, and resilient back propagation (RPROP). These algorithms are compared on speed, accuracy, convergence, implementation complexity, and memory requirements. It is observed that RPROP is the most efficient algorithm, along with the proposed DMASIC, for defect detection and classification due to its high rate of convergence and robustness.

ICACCI--11.5 15:30 Generalized Similarity Measure for Categorical Data Clustering
Shruti Sharma (Gurukul Institute of Engineering & Technology, Kota, India)

Categorical data needs special treatment before it can be clustered using popular methods of pattern analysis, or separate methods must be devised to deal with it. All such methods use some kind of similarity metric to judge how similar two data objects are. There are several popular similarity measures, and switching between them requires much effort. This paper presents a Generalized Similarity Metric (GSM) that subsumes five popular measures in a single parameterized formulation. Its implementation in the well-known ROCK algorithm is also presented to show the efficiency of the proposed metric.
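
Two of the classical measures such a GSM would subsume, the overlap and Eskin similarities, can be sketched as follows (standard textbook definitions, not the paper's parameterised formula):

```python
def overlap(x, y):
    """Fraction of attributes on which two categorical objects agree."""
    return sum(a == b for a, b in zip(x, y)) / len(x)

def eskin(x, y, n_cats):
    """Eskin similarity: a mismatch on an attribute with many categories
    counts as less dissimilar (n_k^2 / (n_k^2 + 2) per mismatch)."""
    s = 0.0
    for a, b, nk in zip(x, y, n_cats):
        s += 1.0 if a == b else nk * nk / (nk * nk + 2.0)
    return s / len(x)

x, y = ("red", "small", "round"), ("red", "large", "round")
s_over = overlap(x, y)
s_esk = eskin(x, y, n_cats=(3, 2, 4))
```

A parameterised GSM in the spirit of the paper would expose the per-attribute match/mismatch scores as tunable functions so that choices like these become settings rather than separate implementations.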

ICACCI--11.6 15:45 An Empirical Semi-Supervised Machine Learning Approach on Extracting and Ranking Document Level Multi-Word Product Names Using Improved C-value Approach
Sivashankari R (VIT University, Vellore, India); Valarmathi B (VIT University, Vellore, India)

In recent years, the volume of online data submissions (e-commerce data) on products, services, and organizations has been increasing exponentially. This online data is largely unstructured, and extracting knowledge from such a huge volume of data is a non-trivial task. Extracting product names has become a very popular approach and one of the important methods in sentiment analysis. Product name extraction is very useful in e-commerce because it helps in identifying people's interest in products, generating review metadata, and identifying product attributes. Existing approaches to product name extraction are capable of extracting single-word product names. However, a product name can be a sequence of words, a multi-word product name, which cannot be obtained automatically by existing methods. In this paper, a combined approach of semi-supervised machine learning and an improved C-value approach is proposed to discover multi-word product names, rank them, and identify the dominant product in review documents.
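
The standard C-value measure on which the improved variant builds can be sketched as follows (the base formula only; the paper's improvements are not reproduced here):

```python
import math

def c_value(term, freq, nested_in):
    """Standard C-value of a multi-word term.

    term      : tuple of words
    freq      : dict mapping candidate term -> corpus frequency
    nested_in : dict mapping term -> list of longer candidates containing it
    """
    f = freq[term]
    containers = nested_in.get(term, [])
    if containers:
        # discount occurrences explained by longer candidate terms
        f -= sum(freq[b] for b in containers) / len(containers)
    return math.log2(len(term)) * f

# toy corpus statistics (hypothetical counts)
freq = {("digital", "camera"): 10, ("compact", "digital", "camera"): 4}
nested = {("digital", "camera"): [("compact", "digital", "camera")]}
score = c_value(("digital", "camera"), freq, nested)
```

Ranking candidates by this score favours longer, frequent phrases that are not merely fragments of still longer terms, which is why it suits multi-word product names.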

ICACCI--11.7 16:00 An Adaptive Distributed Approach of a Self Organizing Map Model for Document Clustering Using Ring Topology
Ajeissh Mukundan (Amrita Vishwa Vidyapeetham, India); Sandhya Harikumar (Amrita Vishwa Vidyapeetham & Amritapuri, India)

Document clustering aims at grouping documents that are coherent internally with substantial difference between different groups. Due to the huge availability of documents, clustering faces scalability and accuracy issues, and there is a dearth of tools that cluster such voluminous data efficiently. Conventional models focus on either a fully centralized or a fully distributed approach to document clustering. Hence, this paper proposes a novel approach to document clustering by modifying the conventional Self Organizing Map (SOM). The contribution of this work is three-fold: a distributed approach to pre-process the documents; an adaptive bottom-up approach to document clustering; and a neighbourhood model suitable for a ring topology. Experimentation on real datasets and comparison with the traditional SOM show the efficacy of the proposed approach.
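
The ring-topology neighbourhood can be illustrated as a standard SOM update in which inter-neuron distance is measured around a ring (a minimal single-process sketch, not the distributed implementation):

```python
import numpy as np

def ring_distance(i, j, n):
    """Shortest distance between neurons i and j on a ring of n neurons."""
    d = abs(i - j)
    return min(d, n - d)

def som_ring_step(weights, x, lr=0.5, sigma=1.0):
    """One SOM update: find the best-matching unit (BMU), then pull every
    neuron toward x with a Gaussian falloff in ring distance from the BMU."""
    n = weights.shape[0]
    bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
    for i in range(n):
        h = np.exp(-ring_distance(i, bmu, n) ** 2 / (2 * sigma ** 2))
        weights[i] += lr * h * (x - weights[i])
    return bmu

w = np.zeros((4, 2))
w[1] = [1.0, 1.0]
bmu = som_ring_step(w, np.array([0.9, 0.9]))
```

In a document setting, x would be a document vector (e.g. tf-idf), and the ring neighbourhood keeps each neuron's update dependent only on two neighbours, which eases distribution.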

ICACCI--11.8 16:15 Fast Convergent Biogeography Based Optimization Algorithm
Priya Sharma (Rajasthan Technical University, Kota, India); Harish Sharma (Rajasthan Technical University, Kota, India)

The Biogeography Based Optimization (BBO) algorithm is a stochastic, population-based evolutionary search technique modeled on the theory of biogeography. Like other population-based evolutionary algorithms, BBO suffers from slow convergence. Therefore, this article introduces a new variant of BBO, the Fast Convergent Biogeography Based Optimization (FCBBO) algorithm. In the proposed algorithm, a fitness-based position update process is introduced, in which the step size of a solution is controlled through a probability that is a function of fitness. Further, a mutualism relationship is introduced in the migration process, which modifies individuals by evaluating the difference between the best solution and the mean of two random individuals. This modification helps improve the exploration capability of the proposed algorithm. The developed algorithm is compared with BBO and two other algorithms, the Differential Evolution (DE) algorithm and Particle Swarm Optimization (PSO), in experiments over 12 test problems. The obtained results confirm the competitive performance of the proposed algorithm.

ICACCI--11.9 16:30 Elitism Based Shuffled Frog Leaping Algorithm
Pragya Sharma (RTU, India)

The Shuffled Frog-Leaping Algorithm (SFLA) is a memetic meta-heuristic for solving complex optimization problems. Like other evolutionary algorithms, it may suffer from slow convergence. To accelerate convergence and improve the intensification and diversification capabilities of SFLA, elitism is embedded by using the mean of the local best and second local best solutions when updating the position of the worst solution in the local-best updating phase; similarly, the mean of the global best and second global best solutions is used in the global-best updating phase. The proposed algorithm is named the Elitism based Shuffled Frog-Leaping Algorithm (ESFLA). ESFLA is analysed over 15 distinct benchmark test problems and compared with conventional SFLA, its recent variant the Binomial Crossover Embedded Shuffled Frog-Leaping Algorithm (BC-SFLA), and two other nature-inspired algorithms, the Gravitational Search Algorithm (GSA) and the Biogeography-Based Optimization (BBO) algorithm. The results show that ESFLA is a competitive variant of SFLA.
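
The elitist worst-frog update described can be sketched schematically (the function name and exact step form below are illustrative assumptions modelled on SFLA's usual random-step position update, not the authors' equations):

```python
import numpy as np

def elitist_worst_update(worst, best, second_best, rng):
    """Move the worst frog toward the mean of the best and second-best
    solutions, instead of toward the best alone (the ESFLA modification)."""
    elite_mean = (best + second_best) / 2.0
    # random step of length in [0, 1) toward the elite mean
    return worst + rng.random() * (elite_mean - worst)

rng = np.random.default_rng(0)
worst = np.array([4.0, 4.0])
new = elitist_worst_update(worst, np.array([0.0, 0.0]), np.array([2.0, 2.0]), rng)
```

The same update with the global best and second global best in place of the local pair gives the global-best phase.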

ICACCI--11.10 16:45 Predicting Rice Crop Yield Using Bayesian Networks
Niketa Gandhi (Senior Member IEEE, India); Leisa J Armstrong (Edith Cowan University, Australia); Owaiz Petkar (University of Mumbai, India)

Rice crop production plays a vital role in the food security of India, contributing more than 40% to overall crop production. High crop production is dependent on suitable climatic conditions; detrimental seasonal conditions such as low rainfall or temperature extremes can dramatically reduce crop yield. Developing better techniques to predict crop productivity in different climatic conditions can assist farmers and other stakeholders in important decision making in terms of agronomy and crop choice. This paper reports on the use of Bayesian networks to predict rice crop yield for Maharashtra state, India. For this study, 27 districts of Maharashtra were selected on the basis of data available from publicly accessible Indian Government records, with various climate and crop parameters. The parameters selected were precipitation, minimum temperature, average temperature, maximum temperature, reference crop evapotranspiration, area, production and yield for the Kharif season (June to November) for the years 1998 to 2002. The dataset was processed using the WEKA tool, with the BayesNet and NaiveBayes classifiers. The experimental results showed that the performance of BayesNet was much better than that of NaiveBayes for this dataset.

ICACCI--11.11 17:00 Rainfall-Runoff Modeling Using Computational Intelligence Techniques
Dhananjay Kumar and Pradhan Sarthi (Central University of South Bihar, Patna, India); Prabhat Ranjan (Central University of South Bihar, India)

Rainfall and the corresponding runoff estimation depend substantially on various geographic, climatic, and biotic features of the catchment or basin under study, and these factors often induce a linear, non-linear or highly complex relation between rainfall and runoff. Key factors include precipitation, percolation, infiltration, evaporation, stream flow, air temperature, etc. Plenty of rainfall-runoff (RR) regression models are available, each distinguished by a varying level of complexity and data requirements. Due to the complex relationship between rainfall and runoff, traditional models (SCS-CN, MISDc, GA, CN4GA) with regression equations often do not capture the true rainfall-runoff relationship.

Computational Intelligence (CI) approaches play a key role in modeling these complex ties between rainfall and runoff. The rainfall-runoff process was modeled using a Mamdani Fuzzy Inference System (FIS) implemented within a layered Artificial Neural Network (ANN) design and applied to a small area of the Koshi basin in Bihar, using 12 years' (1980-1992) observed records of daily rainfall, soil moisture and runoff.

A comparison was also made between the proposed model and existing soft computing models. The proposed computational intelligence model performed significantly better than the existing soft computing models.

ICACCI--11.12 17:15 Distributed Feature Selection Using Vertical Partitioning for High Dimensional Data
Bakshi Rohit Prasad and Unmesh Bendale (IIIT-Allahabad, India); Sonali Agarwal (Indian Institute of Information Technology, Allahabad, India)

Feature selection is one of the most significant steps in machine learning: it reduces the feature space in order to achieve faster learning and to yield simpler models with high accuracy and interpretability. With rapid developments in technology, large-scale high-dimensional datasets are common today, and they degrade the performance of traditional feature selection techniques, which suffer from scalability issues. Parallel feature selection is an obvious solution to this problem. With the advent of many distributed computing frameworks, scalable computing has become a viable strategy for feature selection. The present work proposes a distributed parallel feature selection technique that employs a vertical distribution strategy for the dataset to exploit parallel computation. It uses an information gain filter-based ranking method that evaluates multiple disjoint feature subsets of the dataset in parallel. The key idea is to distribute the evaluation and rank generation of features over several computing nodes. Experiments performed on multiple large-scale, high-dimensional datasets show a significant reduction in overall computation time.
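The vertical-partitioning idea can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the toy dataset and the two-way partition are assumptions, and in a real deployment each `rank_partition` call would run on a separate computing node rather than in a local loop.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(feature_col, labels):
    """Information gain of one discrete feature with respect to the labels."""
    total, n, cond = entropy(labels), len(labels), 0.0
    for v in set(feature_col):
        idx = [i for i, x in enumerate(feature_col) if x == v]
        cond += (len(idx) / n) * entropy([labels[i] for i in idx])
    return total - cond

def rank_partition(data, labels, feature_ids):
    """Rank one disjoint (vertical) slice of the feature space.
    In a distributed setting each call would run on a separate node."""
    return sorted(((f, info_gain([row[f] for row in data], labels))
                   for f in feature_ids), key=lambda t: -t[1])

# Toy dataset: 2 features; feature 0 perfectly predicts the label.
data = [(0, 1), (0, 0), (1, 1), (1, 0)]
labels = [0, 0, 1, 1]
parts = [[0], [1]]                 # vertical partitioning: disjoint feature subsets
local = [rank_partition(data, labels, p) for p in parts]
merged = sorted((x for part in local for x in part), key=lambda t: -t[1])
print(merged[0][0])                # feature 0 has the highest gain
```

Merging the locally computed rankings is cheap because information gain is evaluated per feature, so the partitions never need to exchange data rows.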

ICACCI--11.13 17:30 Discovering Preservation Pattern From Co-Expression Modules in Progression of HIV-1 Disease: An Eigengene Based Approach
Sumanta Ray and SK MD Mosaddek Hossain (Aliah University, India); Lutfunnesa Khatun (University of Kalyani, India)

In recent years, microarray based gene expression analysis has emerged as a well established way to discover stage-specific changes in the expression pattern of a disease progression. In this paper we have developed a framework to analyze microarray data from three different HIV-1 infection stages and identified modules among coexpressed genes. The modules initially provided a description of coexpression patterns and an inter-relation between the infection stages. We have observed that coexpressed modules in each HIV-1 infection stage do not exist in isolation; instead they form a network whose higher order structure reflects the relationships among them. To illustrate these relationships we have compiled a module eigengene (ME) network among the modules. We further explored the relationship between gene coexpression modules by comparing the ME networks between each pair of stages. For this, an existing preservation measure is utilized to elucidate the expression similarity between modules across different stages of infection. Additionally, a novel preservation measure is proposed to detect the preservation pattern in the modular organization of coexpressed networks. We have found that the modular organization of the coexpression network remains more preserved during the transition of infection from the acute to the long term nonprogressor stage than to the latent chronic stage. However, the average preservation score is slightly higher for the acute and chronic stages (mean preservation score = 0.7737) than for the acute and nonprogressor stages (mean preservation score = 0.7307). We have also identified higher order meta-networks by grouping coexpressed modules which exhibit similar ME expression patterns. Our findings provide a new direction for understanding the modular organization and preservation patterns of microarray expression data across different stages of HIV infection.

ICACCI--12: ICACCI-12: Security, Trust and Privacy/Steganography (Regular Papers)

Room: LT-3 (Academic Area)
Chair: Alpa Shah (Sarvajanik College of Engineering and Technology & Gujarat Technological University, India)
ICACCI--12.1 14:30 Robust distributed Key issuing Protocol for Identity based cryptography
Dasari Kalyani (VNRVJIET, India)

In this paper, we propose a robust distributed threshold key issuing protocol that solves the key escrow problem in the identity-based approach. We use threshold cryptographic techniques in each phase of the algorithm: system public key setup, key issuing, key securing and private key reconstruction. Our protocol is robust in the sense that even if t KPAs (out of n = 2t+1) are corrupted or dishonest, the user can still recover the private key. The protocol remains efficient even when communication between the authorities (KGC and KPAs) is insecure. This approach retains the benefits of identity-based techniques while eliminating key escrow from the setup. In our protocol, neither the KGC nor a KPA can cheat users to obtain their private keys. A security analysis of the proposed protocol under active adversary assumptions is also presented.

ICACCI--12.2 14:45 Local Binary Pattern Operator Based Steganography in Wavelet Domain
Anuradha Singhal and Punam Bedi (University of Delhi, India)

In today's digital era, secret data hiding has become an important part of information security. The Local Binary Pattern (LBP) operator, which exploits the local intensity relationship of a pixel with its neighborhood, has been successfully applied in texture classification and image retrieval. This paper proposes a novel steganographic technique based on the LBP operator in the wavelet domain for the embedding and extraction of secret information. The proposed technique has been implemented in Matlab. The image quality metrics PSNR and SSIM are used to compare the proposed technique with the LSB substitution method.
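The paper's embedding operates in the wavelet domain, which is not reproduced here; the sketch below only illustrates the basic 3x3 LBP operator it builds on. The clockwise neighbour ordering is an assumption for illustration (implementations vary in where the bit sequence starts).

```python
def lbp_code(patch):
    """Classic 3x3 Local Binary Pattern: threshold the 8 neighbours of the
    centre pixel against the centre and pack the results into an 8-bit code
    (here: clockwise, starting at the top-left neighbour)."""
    c = patch[1][1]
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, col) in enumerate(order):
        if patch[r][col] >= c:
            code |= 1 << bit
    return code

patch = [[10, 20, 30],
         [40, 50, 60],
         [70, 80, 90]]
print(lbp_code(patch))  # -> 120 (bits 3..6 set: neighbours 60, 90, 80, 70 >= 50)
```

Because the code depends only on the sign of local differences, it is robust to monotonic intensity changes, which is what makes it attractive as a texture descriptor.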

ICACCI--12.3 15:00 A Practical Identity Based Signcryption Scheme From Bilinear Pairing
Arijit Karati (National Sun Yat-sen University, Taiwan); G P Biswas (ISM Dhanbad, India)

Signcryption is one of the most recent public key paradigms that fulfills both the requirements of confidentiality and authenticity of messages between parties. It works more efficiently, with a cost significantly smaller than that of the signature-then-encryption technique. In this work, a practically implementable ID-based signcryption scheme using bilinear pairing is presented. The proposed scheme is secure under the hardness of the CDH (Computational Diffie-Hellman) assumption in the standard model, without resorting to the random oracle model. Performance evaluation of the scheme shows satisfactory results when compared with other relevant ID-based signcryption schemes. Thus, our scheme can be applied in real-life scenarios where both confidentiality and authenticity are required at low computational cost.

ICACCI--12.4 15:15 Improving False Alarm Rate in Intrusion Detection Systems Using Hadoop
Mukund Yelahanka Raghuprasad, Sunil Nayak and Chandrasekaran K (National Institute of Technology Karnataka, India)

Intrusion Detection Systems are a vital part of an organization's security. This paper gives an account of existing algorithms for intrusion detection using machine learning, along with certain new ideas for improving them. The paper mainly discusses employing the Decision Tree mechanism for intrusion detection and improving it with the distributed file system Hadoop. Initially, a method is employed that uses dirty flags to check the consistency of the Decision Tree, which changes with every wrong classification by the system. The wrong classification is identified by a user who informs the system about it and helps it learn. In the later sections, a new method that does not use a dirty flag, but instead modifies the key-value pair in the results of the reduce() function, is tested as an improvement on the previous method. The two methods are compared with the help of the Hadoop tool YARN. The main aim of the paper is to propose the use of a distributed file system for machine learning, along with some improvements to the current Hadoop file system, so as to reduce the total time taken when machine learning algorithms are employed with it.

ICACCI--12.5 15:30 Cryptographic Turbo Code for Image Transmission Over Mobile Networks
Vidya Sawant (NMIMS University & MPSTME, India); Archana Bhise (Mukesh Patel School of Technology Management & Engineering, India)

Mobile communication has become an essential part of our daily life for accessing and sharing data over the internet in addition to voice communication. The mobile communication channel is an open network, and hence maintaining the confidentiality and reliability of the data has always been an area of concern. Reliability of data against channel noise can be ensured by various error correcting codes. The Turbo Code (TC) is an excellent channel encoder with near-Shannon-limit error correction performance. However, TC does not guarantee the security of the transmitted image against intruders on the wireless channel. The proposed Cryptographic Turbo Code (CTC) is a modification of the existing TC that provides encryption and error correction as a single entity. Encryption of data is achieved by the proposed Elliptic Curve Cryptographic Interleaver (ECCI) of CTC. The ECCI is an asymmetric secret key interleaver of CTC that shuffles the input bit sequence based on Elliptic Curve (EC) arithmetic and a secret key. Shuffling the bits also reduces their correlation and improves the error correction performance of the code. The CTC ensures the secrecy of the shared secret keys over an insecure wireless channel through Elliptic Curve Diffie-Hellman Key Exchange (ECDHKE), which makes the CTC robust against cryptographic attacks. The qualitative and quantitative performance of the proposed code is evaluated to validate its effectiveness for image transmission over a mobile network in contrast to other state-of-the-art methods. Simulation results illustrate a similar coding gain, Peak Signal to Noise Ratio (PSNR) and Mean Square Error (MSE) for images retrieved by the proposed CTC, encrypted TC and TC for authorized users. Additionally, it ensures security of the data from unauthorized users as compared to TC. Investigation results also show the strength of CTC against brute force, known plaintext, chosen plaintext and ciphertext attacks.

ICACCI--12.6 15:45 Performance Measurements for Hypervisors on Embedded ARM Processors
Axel Sikora, Sebouh Toumassian and Rico Werner (University of Applied Sciences Offenburg, Germany)

Due to its numerous application fields and benefits, virtualization has become an interesting and attractive topic in computer and mobile systems, as it promises advantages for security and cost efficiency. However, it may bring additional performance overhead. Recently, CPU virtualization has become more popular for embedded platforms, where the performance overhead is especially critical. In this article, we present the measurements of the performance overhead of the two hypervisors Xen and Jailhouse on ARM processors in the context of the heavy load "Cpuburn-a8" application and compare it to a native Linux system running on ARM processors.

ICACCI--12.7 16:00 A Hybrid Intelligent Security Technique used for Digital Still Image
Sudhanshu S. Gonge (Symbiosis Institute of Technology, Lavale & Symbiosis International University, Pune, India); Ashok Anandrao Ghatol (Director Genba Sopanrao Moze College of Engineering Pune, India)

Digital communication and internet technology have brought a major revolution in data transfer in the 21st century, and the Android operating system has extended the reach of digital applications. Digital data can be transferred in various formats such as image, text, animation, graphics, message, video and audio. Transferring digital data such as images requires multimedia models that support the properties of a digital image: image type, color model, human vision, interaction of light with matter, image resolution and bit depth. To preserve all these properties of a single digital still image of a scanned cheque document, security, privacy and authentication are required. This can be achieved using techniques such as digital image watermarking combined with encryption and decryption. In this paper, a hybrid intelligent DWT-SVD digital image watermarking technique is used for authenticity and ownership, while the Advanced Encryption Standard with a 256-bit key is used for the security of the digital cheque image.

ICACCI--12.8 16:15 Split First Encryption Next Model for Cotenant Covert Channel Protection
Sankarasetty Rama krishna (VYCET, CHIRALA, India); Bokka Padmaja Rani (Jawaharlal Technological University, India)

Unexpected infrastructural needs in peak load management are difficult for small and medium business organizations to handle. These needs increase the setup cost of infrastructure for small and medium enterprises unbearably. Resource utilization is also not optimized, because peak loads are rare in small and medium enterprises. Maintaining infrastructure that is fault tolerant and available with high uptime is a further overhead. Cloud computing can be considered a solution to such problems for these small and medium enterprises, as it provides software, platform and infrastructure services as utility computing. Therefore, companies can easily rely on the cloud for their infrastructural needs without any hassle. Resource optimization in the cloud depends on virtualization technology to enable multi-tenancy, as multi-tenancy enables several VMs to share the same physical space. However, though it optimizes resource utilization, it raises several security questions for cloud service providers. If these questions are not answered properly, they may become a hurdle for cloud technology adoption. In this paper we propose a solution model for the disc contention problem caused by cotenant covert channels. These bypassing routes leak user data without user intervention. Our model, split first encryption next (SFEN), addresses how to mitigate this problem with lazy and immature cloud service providers. It isolates data storage from conflicting cotenants by splitting the data and storing it in multiple places with encryption. The observations discussed in this paper provide satisfactory results.

ICACCI--12.9 16:30 Image Encryption using Wavelet based Chaotic Neural Network
Sushil Kumar (Guru Gobind Singh Indraprastha University, India)

The present work proposes the implementation of a Wavelet based Chaotic Neural Network (WCNN) for image encryption. The neural network used is chaotic in nature, i.e. its weights and biases are determined by a chaotic sequence generated by the 1-D logistic map. A private key system is achieved through the initial conditions. WCNN provides two-level security along with compression of the data to be transmitted. The image is first encrypted and then decrypted using WCNN. Encryption is carried out using only the approximation coefficients, which reduces the transmitted data to a large extent. A CNN is also applied for comparison purposes. The proposed methodology is applied to standard and real images. Further, the behavior of the system is verified with different key values at the receiver end, and the proposed encryption scheme is shown to be key sensitive.
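The chaotic sequence generation via the 1-D logistic map, and the key sensitivity it implies, can be illustrated as follows. This is a minimal sketch, not the authors' code: the parameter r = 4 and the perturbation used in the key-sensitivity check are illustrative assumptions.

```python
def logistic_sequence(x0, n, r=4.0):
    """Generate n values of the 1-D logistic map x_{k+1} = r * x_k * (1 - x_k).
    With r = 4 and x0 in (0, 1) the orbit is chaotic, so the initial
    condition x0 acts as a private key: a tiny change in x0 yields a
    completely different sequence (which would drive different network
    weights and biases)."""
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        seq.append(x)
    return seq

# Key sensitivity: two nearly identical keys diverge within a few dozen steps.
a = logistic_sequence(0.3, 50)
b = logistic_sequence(0.3 + 1e-10, 50)
print(max(abs(x - y) for x, y in zip(a, b)))  # large despite the 1e-10 key change
```

The exponential divergence of nearby orbits is exactly what makes a wrong key at the receiver produce unusable decryption output.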

ICACCI--12.10 16:45 Lightweight Security Framework for IoTs using Identity based Cryptography
Sriram Sankaran (Amrita University, India)

The Internet of Things (IoT) is gaining increasing significance due to the real-time communication and decision making capabilities of sensors integrated into everyday objects. Securing IoTs is one of the foremost concerns due to the ubiquitous nature of the sensors coupled with the increasing sensitivity of user data. Further, the power-constrained nature of IoTs emphasizes the need for lightweight security that can adapt to the stringent resource requirements of the sensors. In this work, we propose a lightweight security framework for IoTs using identity based cryptography. In particular, we develop a hierarchical security architecture for IoTs and further develop protocols for secure communication using identity based cryptography. Our proposed mechanism has been evaluated using simulations conducted with Contiki and RELIC. The evaluation shows that our proposed mechanism incurs less overhead than traditional public key based mechanisms and can be applied within IoTs.

ICACCI--12.11 17:00 Secure Strategic Mail Application with Hardware Device
Thirumal kumar Kanakurthi (BEL, India); Hemanth Kumar Nooka (CRL-BEL, India); Anka rao Ittadi (BEL, India); Akila M and Bhanusree K (BEL-CRL, India)

With the growth of technology, most sensitive data transactions happen over the internet and must be protected against unauthorized access. This paper explains the design and development of a Secure Hardware Token (SHT) that protects sensitive data from most threats. It generates highly secure time-based one-time passwords (TOTP), ensuring that only legitimate users are authorized to access sensitive data and applications, and it performs cryptographic functions such as key generation, digital signature creation, signature verification, encryption, decryption and hashing. Since all these functions run inside the hardware token, there is no risk from operating-system-dependent vulnerabilities. The paper presents a messaging application that uses the SHT to perform authentication as well as all cryptographic operations for a military mail application system. Such applications are needed in military scenarios where highly confidential data flows across public networks.

ICACCI--13A: ICACCI-13A: Artificial Intelligence and Machine Learning/Data Engineering/Biocomputing (Regular Papers)

Room: LT-4 (Academic Area)
Chairs: Vineet Sahula (MNIT Jaipur, India), Shashirekha Hosahalli Lakshmaiah (Mangalore University, India)
ICACCI--13A.1 14:30 Lbest Gbest Artificial Bee Colony Algorithm
Harish Sharma (Rajasthan Technical University, Kota, India); Sonal Sharma (Career Point University, Kota, India); Sandeep Kumar (CHRIST University, India & Imam Muhammad ibn Saud Islamic University, Saudi Arabia)

The Artificial Bee Colony (ABC) algorithm is a popular swarm intelligence based meta-heuristic developed to solve complex real-world optimization problems. Most swarm intelligence based algorithms face the problems of stagnation and premature convergence, and ABC is no exception. To reduce the chance of these problems, as well as to balance the intensification and diversification capabilities of ABC, a new variant of ABC is proposed. In this variant, the employed bee stage and the onlooker bee stage of the ABC algorithm are modified by taking inspiration from a local best candidate as well as the global best candidate. The proposed variant is named the Lbest Gbest ABC (LGABC) algorithm. The accuracy and efficiency of LGABC have been examined on 12 benchmark functions and compared with the basic ABC, best-so-far ABC, Gbest ABC and Modified ABC; the results show that it may be an efficient contender in the field of swarm intelligence based algorithms.

ICACCI--13A.2 14:45 Designing Automatic Note Transcription System for Hindustani Classical Music
Prasenjit Dhara (IIT-Kharagpur, India); Pradeep Rengaswamy (IIT Kharagpur, India); K. Sreenivasa Rao (Indian Institute of Technology Kharagpur, India)

Hindustani music is heterophonic, with a lead voice accompanied by instruments. A trained Hindustani musician is capable of perceiving the notes from the lead voice, but a novice is unable to decode them. This necessitates the development of an automated note transcription system, which recognizes the notes present in a music file and generates a note transcription file. In this work, the melody contour is extracted from the audio file using a salience based method. The extracted melody values are normalized to the cent scale. Each note has a fixed note melody value in the cent scale, and each melody value from the contour is compared with these fixed values. If a melody value matches within a given tolerance of a note melody value, the corresponding note is assigned. Consecutive occurrences of the same note are merged into a single note, preserving the start and end times. Notes whose duration is below an empirically determined threshold are eliminated; these are termed transition notes. The tolerance for the note melody value and the threshold note duration play an important role in the accuracy of the transcription system, and these parameters are optimized to maximize accuracy. The performance of the system is evaluated by two metrics, and the results show that the note transcription system performs satisfactorily.

ICACCI--13A.3 15:00 Simple yet Effective Classification Model for Skewed Text Categorization
Mahamad Suhil (University of Mysore, India); Guru D S (Mysore University, India); Lavanya Narayana Raju and Harsha S Gowda (University of Mysore, India)

In this paper, the problem of skewness in text corpora during classification is addressed. A method of converting an imbalanced text corpus into a more or less balanced one is presented through the application of a classwise clustering algorithm. Further, to avoid the curse of dimensionality, chi-squared feature selection is employed. Each cluster of documents is given a single vector representation through a vector of interval-valued data, which accomplishes a compact representation of the text data and thereby requires less memory for storage. A suitable symbolic classifier is used to match a query document against the stored interval-valued vectors. The superiority of the model has been demonstrated by conducting a series of experiments on two benchmark imbalanced corpora, viz., Reuters-21578 and TDT2. In addition, a comparative analysis against state of the art models indicates that the proposed model outperforms several contemporary models.

ICACCI--13A.4 15:15 An Improved Fuzzy Based Approach to Impute Missing Values in DNA Microarray Gene Expression Data with Collaborative Filtering
Sujay Saha (Heritage Institute of Technology, India); Anupam Ghosh (Netaji Subhas Engineering College, India); Saikat Bandopadhyay (Heritage Institute of Technology, India); Kashi Nath Dey (University of Calcutta, India)

DNA microarray experiments normally generate gene expression profiles in the form of high dimensional matrices. DNA microarray gene expression data may contain many missing values due to several causes such as image disruption, hybridization error, dust and moderate resolution. It would be unfortunate if these missing values significantly affected the performance of subsequent statistical and machine learning experiments. Various missing value estimation algorithms exist. In this work we propose a modification to the existing imputation approach named Collaborative Filtering Based on Rough-Set Theory (CFBRST) [10]. The proposed approach (CFBRSTFDV) uses a Fuzzy Difference Vector (FDV) along with rough set based collaborative filtering, which analyzes historical interactions to help estimate the missing values. It is a suggestion based system that works on the same principle by which item or product suggestions reach an individual using Facebook or Twitter, or looking for books on Amazon. We have applied the proposed algorithm to two benchmark datasets, SPELLMAN and Tumor Cell (GDS2932), and the experiments show that the modified approach, CFBRSTFDV, outperforms other existing state-of-the-art methods as far as the RMSE measure is concerned, particularly as the number of missing values increases.

ICACCI--13A.5 15:30 Computational Reconstruction of fMRI-BOLD From Neural Activity
Chaitanya Nutakki (Amrita School of Biotechnology, Amrita Vishwa Vidyapeetham, Amrita University, India); Ahalya Nair (Amrita Vishwa Vidyapeetham, Amrita University, India); Chaitanya Medini (Amrita Vishwa Vidyapeetham ( Amrita University), India); Manjusha Nair (Amrita Vishwa Vidyapeetham, Amritapuri, India); Bipin Nair (Amrita Vishwa Vidyapeetham ( Amrita University), India); Shyam Diwakar (Amrita Vishwa Vidyapeetham, India)

In this paper, we model functional magnetic resonance imaging (fMRI) signals generated by neural activity. fMRI measures changes in metabolic oxygen in the blood of brain circuits based on changes in biophysical factors such as total cerebral blood flow and oxy-hemoglobin and deoxy-hemoglobin content. A modified version of the Windkessel model, incorporating compliance, has been used with a balloon model to generate cerebellar granular layer and visual cortex blood oxygen-level dependent (BOLD) responses. Spike raster patterns adapted from a biophysical granular layer model served as input. The model fits volume changes in blood flow to predict the BOLD responses in the cerebellar granular layer and in the visual cortex. As a comparison, we tested the balloon model and the modified Windkessel model against the mathematically reconstructed BOLD response under the same input conditions. Delayed compliance contributed to the BOLD signal, and the reconstructed signals were compared to experimental measurements, indicating the usability of the approach. The current study makes it possible to correlate dynamic changes of flow and oxygenation during brain activation, connecting single-neuron and network activity to clinical measurements.

ICACCI--13A.6 15:45 Computing LFP From Biophysical Models of Neurons and Neural Microcircuits
Sandeep Bodda (Amrita Vishwa Vidyapeetham, Amrita University, India); Harilal Parasuram (Amrita Institute of Medical Sciences, Kochi, India); Bipin Nair (Amrita Vishwa Vidyapeetham ( Amrita University), India); Shyam Diwakar (Amrita Vishwa Vidyapeetham, India)

Local Field Potentials (LFPs) allow interpretation of the patterns of information generated by neuronal populations. LFPs are low-frequency (<300 Hz) population signals recorded with glass or metal electrodes and are known to be generated by complex spatiotemporal interactions of synaptic stimuli in combination with sink-source behavior in the circuit. Computational reconstruction of local field potentials makes it possible to constrain detailed neuronal models and network microcircuits and to study function and dysfunction via simulations. In this paper, we present a comparison of various methods and tools available for LFP computation in single neurons and populations of cells. We compare our LFPsim and ReConv methods with LFPy and VERTEX while mathematically computing local field potentials in single neurons and network models made with detailed multi-compartmental models and available through databases such as ModelDB.

ICACCI--13A.7 16:00 A Study of Gene Prioritization Algorithms on PPI Networks
Sinsha KP (Amrita University, India); Bhadrachalam Chitturi (Amrita University)

The undiscovered genes that contribute to a particular disease are of great interest in medical informatics. The candidate pool of such genes from GWAS or similar studies is typically vast, and direct experimental verification of all such genes is prohibitively expensive in terms of time and money. To limit the number of candidates for experimental verification, numerous computational methods have been developed. One of the most widely used techniques applies gene prioritization (GP) algorithms on a weighted protein-protein interaction network (PPIN). We analyzed the popular GP algorithms in a semiautomatic manner. The idea is to study their behavior, rank them, and attempt to improve their performance by modifying them. Additionally, we derived an expression for the steady state scores of the power method.
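A common instantiation of GP on a weighted PPIN is a random walk with restart, whose steady-state scores are found by the power method. The sketch below is a minimal illustration under that assumption only; the toy network, the restart parameter alpha and the column normalization are not taken from the paper.

```python
def power_method_scores(W, seed, alpha=0.75, iters=200):
    """Random-walk-with-restart scores on a weighted network via power
    iteration: p <- (1 - alpha) * seed + alpha * W_norm @ p.
    W is a symmetric adjacency matrix (list of lists); seed encodes the
    known disease genes; the fixed point ranks candidate genes."""
    n = len(W)
    # Column-normalise W so each column sums to 1 (a stochastic transition matrix).
    col = [sum(W[i][j] for i in range(n)) for j in range(n)]
    Wn = [[(W[i][j] / col[j]) if col[j] else 0.0 for j in range(n)] for i in range(n)]
    p = seed[:]
    for _ in range(iters):
        p = [(1 - alpha) * seed[i]
             + alpha * sum(Wn[i][j] * p[j] for j in range(n))
             for i in range(n)]
    return p

# Toy PPIN: gene 1 interacts with the seed gene 0; gene 2 does not.
W = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
seed = [1.0, 0.0, 0.0]        # gene 0 is the known disease gene
scores = power_method_scores(W, seed)
print(scores[1] > scores[2])  # the direct neighbour ranks higher -> True
```

Because the column-normalised matrix is stochastic, the iteration preserves the total score mass, and the steady state can also be written in closed form as (I - alpha * W_norm)^(-1) (1 - alpha) seed.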

ICACCI--13A.8 16:15 Identification of Olfactory Receptors Using a Parametric Model
Pranay Sakhare, Rajneesh Rani and Ranjeet Kumar Rout (Dr B R Ambedkar NIT Jalandhar, India)

The human nose can smell a large number of chemicals with distinct odors. These odor molecules are detected by olfactory receptors (ORs). ORs form large gene families in mouse, rat, human, chimpanzee, earthworm, dog and other species. Among these, the mouse OR genes were chosen for study because comparatively little work has been done on this species. In this paper, the OR families of mouse and human are studied using quantification of the barcode matrix, DNA walks and Haar wavelet coefficients. These parameters aid a proper understanding of the DNA sequences, help distinguish intricate DNA sequences, and reveal hidden symmetries between them. Subsequently, a clustering method is applied to the quantitative results, and the resulting clusters are studied to examine the mouse ORs. Using the proposed model, a probable justification or deterministic nullification can be given as to whether a given DNA sequence of nucleotides is a probable mouse OR or not.

ICACCI--13A.9 16:30 Alignment Free Promoter Sequences Analysis Using Feature Reduced Cumulative Distribution of Motifs
Kouser K (GFGC Gundlupete(University of Mysore), India); Lalitha Rangarajan (University of Mysore, India)

Advancements in DNA sequencing machinery have led to a tremendous accumulation of sequence data. This has encouraged researchers to develop more robust analysis methods. Promoter sequences are an important part of these DNA sequences, taking part in gene expression and regulation. Here, we perform alignment free variance based feature selection on PSMMs of promoter sequences. We then analyze the similarities and differences among these sequences using the cumulative distribution of the selected features/motifs. To demonstrate the efficacy of the proposed technique we use promoter data from the NCBI database. The similarity/dissimilarity values are enhanced when we use only the selected features/motifs instead of all the features. Hence, the proposed combination of feature selection and analysis using the cumulative distribution of motifs has the potential to enhance the similarities and differences that exist between promoter sequences.

ICACCI--13A.10 16:45 Faster Mahalanobis K-Means Clustering for Gaussian Distribution
Ankita Chokniwal (Gurukul Institute of Engineering and Technology, Kota, Rajasthan, India); Manoj Singh (Gurukul Institute of Engineering and Technology, India)

The famous probabilistic theory of the Gaussian distribution suggests that real-world data, when collected in large quantities, tend to follow a Gaussian distribution. The increasing amount of data to be managed and analyzed requires clustering approaches that succeed in recognizing the dense areas of a Gaussian model. These areas are not always spherical, as often discovered by k-means; the shapes may instead be oblong. The shapes of the clusters formed depend highly on the distance metric used. The use of the Mahalanobis distance to identify clusters in a mixed Gaussian distribution has long been appreciated, since Gaussian Mixture Models (GMMs) support the Mahalanobis distance and it can identify the elliptical clusters found in real data. The main challenge is choosing proper initial estimates for the computation of the Mahalanobis distance. The elliptical clusters recognized by this measure in k-means can be viewed as a generalization of the spherical clusters produced by the Euclidean distance in k-means. This paper explores how well the initialization method of k-means++ works with Mahalanobis k-means.
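A minimal sketch of Mahalanobis k-means with k-means++ seeding is given below. It is an illustration under stated assumptions, not the authors' implementation: the per-cluster covariance update, the 1e-6 regularization term and the toy data are all choices made here for the example.

```python
import random
import numpy as np

def kmeanspp_init(X, k, rng):
    """k-means++ seeding: the first centre is uniform; each later centre is
    drawn with probability proportional to the squared distance to the
    nearest centre chosen so far."""
    centres = [X[rng.randrange(len(X))]]
    while len(centres) < k:
        d2 = np.array([min(float(np.sum((x - c) ** 2)) for c in centres) for x in X])
        idx = rng.choices(range(len(X)), weights=(d2 / d2.sum()))[0]
        centres.append(X[idx])
    return np.array(centres)

def mahalanobis_kmeans(X, k, iters=20, seed=0):
    """k-means in which each cluster keeps its own covariance estimate, so the
    assignment step uses Mahalanobis distance and can recover oblong
    (elliptical) clusters rather than only spherical ones."""
    rng = random.Random(seed)
    centres = kmeanspp_init(X, k, rng)
    d = X.shape[1]
    covs = [np.eye(d) for _ in range(k)]          # initial estimates: identity
    for _ in range(iters):
        inv = [np.linalg.inv(c) for c in covs]
        labels = np.array([
            min(range(k),
                key=lambda j: float((x - centres[j]) @ inv[j] @ (x - centres[j])))
            for x in X])
        for j in range(k):
            pts = X[labels == j]
            if len(pts) > d:                      # enough points for a covariance
                centres[j] = pts.mean(axis=0)
                covs[j] = np.cov(pts.T) + 1e-6 * np.eye(d)
    return labels, centres

# Two elongated (oblong) Gaussian clusters: wide in x, thin in y.
gen = np.random.default_rng(0)
a = gen.normal([0.0, 0.0], [3.0, 0.3], size=(100, 2))
b = gen.normal([0.0, 4.0], [3.0, 0.3], size=(100, 2))
X = np.vstack([a, b])
labels, centres = mahalanobis_kmeans(X, 2)
```

With identity covariances the first assignment step reduces to ordinary Euclidean k-means, which is why the quality of the initial estimates (the subject of the paper) matters so much for this variant.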

ICACCI--13A.11 17:00 Effect of Event Sequence on Intragroup Dynamics: Simulation Using Modified Hopfield Neural Network
Amita Kapoor (University of Delhi & Shaheed Rajguru College of Applied Sciences for Women, India); Narotam Singh (IMD, Ministry of Earth Sciences, India)

This paper proposes the use of a modified Hopfield neural network to simulate the effect of event sequence on intragroup dynamics. Each node in the network represents an individual with a unique personality; the network as a whole models a group of N individuals. The network is subjected to a sequence of events in different chronological orders, and the effect of changing the chronological order on intragroup dynamics is observed with the help of network analysis tools. The results demonstrate that both an individual's personality and his or her connections in the group are important in shaping the entire group dynamics. Our results show that a Hopfield neural network can model social groups and group dynamics, and that group dynamics is affected by the chronological order of events.
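For readers unfamiliar with the model, the core Hopfield update the abstract builds on can be sketched as follows; the stored pattern and the single-flip "event" are purely illustrative, not the paper's personality encoding:

```python
import numpy as np

def hopfield_step(state, W, theta=0.0):
    """Synchronous update of a bipolar (+1/-1) Hopfield network; each
    node would stand for one individual in the simulated group."""
    return np.where(W @ state >= theta, 1, -1)

# Hebbian weights storing one illustrative "group opinion" pattern.
pattern = np.array([1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)          # no self-connections

# An "event" flips one individual; one update relaxes the group back.
noisy = pattern.copy()
noisy[0] = -1
recovered = hopfield_step(noisy, W)
```

In the paper's setting the weights additionally encode personalities and the event sequence perturbs the state repeatedly; this sketch shows only the attractor dynamics that make the order of perturbations matter.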

ICACCI--13A.12 17:15 Design and Development of Balance Training Platform and Games for People with Balance Impairments
Amritha N, Mahima M. Menon, Namitha K, Radhakrishnan Unnikrishnan and Harish Mohan (AMMACHI Labs, Amrita School of Engineering, Amritapuri, Amrita Vishwa Vidyapeetham, Amrita University, India); Ravi Sankaran (Department of Physical Medicine and Rehabilitation, Amrita Institute of Medical Sciences and Research Center, Kochi); Rao R. Bhavani (AMMACHI Labs, Amrita School of Engineering, Amritapuri, Amrita Vishwa Vidyapeetham, Amrita University, India)

Medicine in India has an emerging need for balance rehabilitation due to its growing population of elderly, diabetic and stroke patients. In this paper, we describe the design of a cost-effective system that provides static and dynamic balance training through interactive virtual reality games. The intention is to positively influence the activities of daily living (ADL) of patients suffering from balance disorders. The test-retest reliability of the balance platform is evaluated using the intra-class correlation coefficient (ICC) and the standard error of measurement (SEM). Thirty healthy individuals performed quiet standing with eyes open/eyes closed and activities of daily living for twenty seconds. The center of pressure (COP) path length, mean velocity and range of displacement are computed to demonstrate device consistency across different trials. We also discuss the results of a pilot study on the utility of the device among clinical practitioners of physical medicine and rehabilitation.

ICACCI--13A.13 17:30 Design & Realization of Multi Mission Data Handling System for Remote Sensing Satellite
Lalitkrushna J Thakar (Indian Space Research Organization, India); Chayan Dutta (ISRO Satellite Centre, India); P s Sura (Indian Space Research Organization, India); S Udupa (ISRO Satellite Centre, India)

Currently, various baseband data handling (BDH) systems are being realized by different space agencies. These systems are unique and project-specific, which calls for the realization of a new system or the re-engineering of existing systems for every project, in turn taking a lot of lead time for design, realization and testing. This paper presents a novel approach to the realization of a multi-mission system in which, by changing software modules and mounting the desired hardware chips, the system can be reconfigured for a new project, saving realization and testing time. The uniqueness of this work is that the system is realized on a single board.

ICACCI--13A.14 17:45 Effort Estimation of Web-based Applications Using Machine Learning Techniques
Shashank Mouli Satapathy (Vellore Institute of Technology, Vellore, India); Santanu Kumar Rath (National Institute of Technology (NIT), Rourkela, India)

Effort estimation techniques play a crucial role in planning the development of web-based applications. Web-based software projects in the present-day scenario differ from conventional object-oriented projects, and hence the task of effort estimation is a complex one. It is observed that the literature does not guide analysts toward a particular model as the most suitable one for effort estimation of web-based applications. A number of models, such as the IFPUG Function Point model, NESMA and MARK-II, are considered for web effort estimation. The efficiency of these models can be improved by employing certain intelligent techniques on them. With the goal of enhancing the efficiency of estimating the effort required to develop a web-based application, machine learning techniques such as Stochastic Gradient Boosting and Support Vector Regression kernels are considered in this study for effort estimation of web-based applications using the IFPUG Function Point approach. The ISBSG dataset, Release 12, has been used to obtain the IFPUG Function Point data. The performance of the effort estimation models based on the various machine learning techniques is assessed with the help of certain metrics in order to examine them critically.
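The kernel-regression family the abstract mentions can be illustrated with a tiny numpy stand-in; this sketch uses RBF-kernel ridge regression rather than SVR proper, and the function-point counts and effort values are invented for illustration, not ISBSG data:

```python
import numpy as np

def rbf(A, B, gamma):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def kernel_ridge(X_train, y_train, X_test, gamma=0.005, lam=1e-2):
    """Fit dual coefficients on the training kernel, then predict."""
    K = rbf(X_train, X_train, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(K)), y_train)
    return rbf(X_test, X_train, gamma) @ alpha

# Hypothetical adjusted function-point counts vs. effort (person-hours).
fp = np.array([[10.], [20.], [30.], [40.]])
effort = np.array([100., 210., 290., 400.])
pred = kernel_ridge(fp, effort, np.array([[25.]]))
```

SVR replaces the squared loss here with an epsilon-insensitive one, and gradient boosting instead fits an ensemble of shallow regressors; both share this "features in, effort out" shape.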

ICACCI-13B: Artificial Intelligence and Machine Learning/Data Engineering/Biocomputing (Regular Papers)

Room: LT-13 (Mechatronics Dept)
Chair: Sakthi Balan Muthiah (LNMIIT, India)
ICACCI-13B.1 14:30 Impact of Dilution of Precision for Position Computation in Indian Regional Navigation Satellite System
Mehul Desai (Government Polytechnic for Girls, India); Darshna Jagiwala and Shweta Shah (Sardar Vallabhbhai National Institute of Technology, Surat, India)

The Indian Regional Navigation Satellite System (IRNSS), a system of seven satellites, will provide a Special Positioning Service (SPS) and a Precision Service (PS) over the Indian subcontinent. In Positioning, Navigation and Timing (PNT) applications, the measurement is affected by intentional and unintentional sources of error. The measurement also depends on the volume of the tetrahedron created by the geometry of the measuring satellites. Satellite geometry is measured by a single dimensionless number called the Geometric Dilution of Precision (GDOP): the lower the GDOP value, the better the satellite geometry, and hence the more precise the position measured by the system. Currently, the IRNSS constellation has six active satellites in orbit. In this paper, the performance of IRNSS, GPS and IRNSS+GPS is investigated by computing the GDOP due to all satellites in view. The performance analysis is done using an ACCORD IRNSS receiver provided by SAC, ISRO, Ahmedabad. The dual-frequency IRNSS receiver at SVNIT, Surat (21.16 deg Lat, 72.78 deg Long) is explored for the best GDOP configuration and position determination over the Indian subcontinent.
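The GDOP figure the abstract relies on is computed from receiver-satellite geometry alone; a minimal sketch of the standard textbook formulation (not the ACCORD receiver's internals) is:

```python
import numpy as np

def gdop(sat_positions, receiver):
    """Geometric Dilution of Precision: GDOP = sqrt(trace((H^T H)^-1))
    for the unit line-of-sight matrix H augmented with a clock column."""
    los = sat_positions - receiver
    unit = los / np.linalg.norm(los, axis=1, keepdims=True)
    H = np.hstack([unit, np.ones((len(unit), 1))])   # clock-bias column
    return np.sqrt(np.trace(np.linalg.inv(H.T @ H)))

# Four satellites at the vertices of a regular tetrahedron (idealized
# geometry, not real IRNSS/GPS ephemerides) seen from the origin.
sats = 1e4 * np.array([[1., 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]])
g = gdop(sats, np.zeros(3))
```

For this maximally symmetric geometry H^T H is diagonal and g = sqrt(2.5) ≈ 1.58; real constellations give larger GDOP as the tetrahedron volume shrinks, which is exactly the effect the paper measures across IRNSS, GPS and the combined set.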

ICACCI-13B.2 14:45 Neuro-Endo-Activity-Tracker: An Automatic Activity Detection Application for Neuro-Endo-Trainer
Britty Baby and Vinkle Srivastav (Indian Institute of Technology Delhi, India); Ramandeep Singh (Indian Institute of Technology, Delhi, India); Ashish Suri (All India Institute of Medical Sciences, India); Subhashis Banerjee (Indian Institute of Technology Delhi, India)

Neuro-endoscopy is a highly demanding surgical specialty and requires dedicated training systems for imparting its skills. The assessment of surgical skills, to identify the level of technical and cognitive expertise, has primarily been performed subjectively by an expert. The development of objective motion analysis and an automated skills evaluation platform for minimally invasive surgical procedures can be a significant and suitable alternative. Video-based automatic segmentation can divide the primary activity into sub-tasks and then evaluate them by statistical analysis of motion. In this work, we developed an automated video-based surgical evaluation application for identifying the basic eye-hand coordination and dexterity of a trainee while performing a grasping and pick-and-place task on the Neuro-Endo-Trainer. The activity was divided into sub-tasks using Mixture of Gaussians based background subtraction and Tracking-Learning-Detection algorithms. Kinematic analysis of the tool-tip trajectory was used to provide a synopsis of the activity as feedback to the trainee for self-improvement.

ICACCI-13B.3 15:00 An FSM Based Methodology for Interleaved and Concurrent Activity Recognition
Kavya J (Amrita School of Engineering, Amritapuri, Amrita Vishwa Vidyapeetham, Amrita University, India); M Geetha (Amrita University, Amritapuri, Clappana P. O., Kollam, India)

Research on human activity recognition is one of the most promising research topics and has attracted attention from a number of disciplines and application domains. Successful research has so far focused on recognizing sequential human activities, but in real life people perform actions not only sequentially but also in complex (concurrent or interleaved) ways. Recognizing complex activities remains a challenging and active area of research. Due to the high degree of freedom of human activities, it is difficult to build a model which can deal with interleaved and concurrent activities. We propose a method that uses automatically constructed finite state automata, together with stack and queue data structures, for recognizing concurrent and interleaved activities.

ICACCI-13B.4 15:15 Improving the Intelligibility of Dysarthric Speech Towards Enhancing the Effectiveness of Speech Therapy
Arun Kumar Shanmugam (AMRITA University, India); Santhosh C Kumar (Amrita Vishwa Vidyapeetham, India)

Dysarthria is a neuro-motor disorder in which the muscles used for speech production and articulation are severely affected. Dysarthric patients are characterized by slow or slurred speech that is difficult to understand. This work aims at enhancing the intelligibility of dysarthric speech toward developing an effective speech therapy tool. In this therapy tool, enhanced speech is used to provide delayed auditory feedback, instilling confidence in the patients so that they can gradually improve their speech intelligibility through relearning. Feature-level transformation techniques based on linear predictive coding (LPC) coefficient mapping and frequency warping of LPC poles are investigated in this work. Speech utterances from the Nemours dataset with mild and moderate dysarthria are used to study the effectiveness of the proposed algorithms. The quality of the transformed speech is evaluated using subjective and objective measures, and a significant improvement in the intelligibility of speech was observed. Our method could henceforth be used to enhance the effectiveness of speech therapy by encouraging dysarthric patients to talk more, thus helping in their fast rehabilitation.

ICACCI-13B.5 15:30 Indian Sign Language Recognition: An Approach Based on Fuzzy-Symbolic Data
Nagendraswamy H S, BM Chethana Kumara and R Lekha Chinmayi (University of Mysore, India)

In this paper, the task of recognizing signs made by hearing-impaired people at the sentence level has been addressed. A novel method of detecting sign boundaries in a video of continuous signs is proposed, along with the extraction of spatial features that capture the hand movements of a signer through fuzzy membership functions. Frames of a given sign video are preprocessed to extract the face and hand components of the signer. The centroids of the extracted components are exploited to extract spatial features. The concept of interval-valued symbolic data has been explored to capture variations in the same sign made by different signers at different instances of time. A suitable symbolic similarity measure is studied to establish matching between known and unknown signs, and a simple nearest neighbor classifier is used to recognize an unknown sign as one among the known signs by specifying a desired threshold. Extensive experimentation is conducted on a considerably large database of signs created by us during the course of our research work in order to evaluate the performance of the proposed system.

ICACCI-13B.6 15:45 Language Identification Using PLDA Based on I-Vector in Noisy Environment
Manish Rai (Central University of South Bihar, India); Jainath Yadav (IIT Kharagpur, India); K. Sreenivasa Rao (Indian Institute of Technology Kharagpur, India); Neetish Kumar (Central University of Bihar, India); Md. Shah Fahad (Central University of South Bihar, India)

This paper investigates a new language identification technique based on i-vectors and PLDA in noisy environments. Previously, various techniques were employed to identify the language of an utterance; in high-dimensional and noisy settings, existing techniques become too complex and their performance degrades. We have developed a new i-vector-based technique and applied PLDA over internally centered i-vectors for noisy data sets to make the system robust and efficient. For clean speech, the average equal error rates (EER) of the traditional GMM system and the proposed system are 19.002% and 5.4278%, respectively. The performance of a language identification system degrades significantly in noisy environments: the average EER in a noisy environment is 37.493% for the baseline system and 30.7764% for the proposed system. In this paper, the speech enhancement method of spectral subtraction is used for noise suppression; after enhancement, the baseline and proposed systems achieve EERs of 26.525% and 16.695%, respectively. The experimental results show the robustness of the proposed language identification technique over existing techniques.

ICACCI-13B.7 16:00 Assessing impact of seasonal rainfall on rice crop yield of Rajasthan, India using Association Rule Mining
Niketa Gandhi (Senior Member IEEE, India); Leisa J Armstrong (Edith Cowan University, Australia)

For developing countries which are highly dependent on agriculture, there are ever-growing concerns that changes in climate variability will further impact the serious challenge of food security they already experience. It is important to have a deeper understanding of the impact of this climate change on crop production and to adopt coping mechanisms. This paper assesses the impact of distributed seasonal rainfall on the rice crop yield of Rajasthan state, India, using data visualisation followed by association rule mining techniques. The dataset considered covers twenty-nine districts of Rajasthan over forty-three years, from 1960 to 2002, depending on data availability. The rainfall in the Kharif season, June to November, was divided into three periods: the beginning of the season (June and July), the middle of the season (August and September) and the end of the season (October and November). The effect of variation in rainfall at the beginning, middle and end of the season on rice crop yield was investigated, and some interesting results are reported.

ICACCI-13B.8 16:15 Object Based Schema Oriented Data Storage System for Supporting Heterogeneous Data
Anindita Sarkar Mondal, Samiran Chattopadhyay, Sarmistha Neogy and Nandini Mukherjee (Jadavpur University, India)

Object-based data storage systems provide application-aware, hybrid cloud data storage. Health data consists of multiple types of data sets, such as images, text, structured records and files, so it is important to store multiple types of data in a seamless manner. A storage manager uses a storage object, or data container, to store data. In this paper, we propose a schema-oriented storage object which describes the structure of the proposed object-based data storage system. In the proposed system, data is stored as attribute values rather than data blocks or files. To describe the working procedure of the proposed system, which consults the storage object to store heterogeneous datasets, we propose an algorithm based on a multilayer view of the data storage unit. Besides configuring data storage space, the proposed algorithm helps to support data operations.

ICACCI-13B.9 16:30 HealthAnalytic: A Concept Application for Customizable Visualization and Analysis of Health Informatics Datasets
Sumit Soman and Sushil Kumar (Center for Development of Advanced Computing, India); Sujeet Kumar (CDAC, India)

A large amount of health statistics of the Indian population has recently become available in the public domain owing to the National Data Sharing and Accessibility Policy (NDSAP). This statistical data is beneficial if it can be used to obtain meaningful inferences from data-driven models. Analysis and visualization of such data has been a challenge and has become an imperative requirement for executive decision making. The challenge can be attributed to the fact that the set of features that an attribute depends on is subjective and data-dependent. Hence, there is a need for a pervasive and flexible data analytics framework that would allow this data to drive decision making. We propose a concept for the design and development of a fully customizable data analysis framework on the Android platform which allows users to visualize the effect of any combination of attributes drawn from these datasets in predicting a particular attribute (label). We use feature scoring techniques to achieve this, and also provide an additional option to view data correlation. The intended application, called HealthAnalytic, is beneficial for executive-level decision making, as it provides visualization via data-driven models along with the pervasiveness of a mobile application. The initial version of the application uses existing methods; however, newer scalable methods can be incorporated in the future.

ICACCI-13B.10 16:45 Performance Prediction and Behavioral Analysis of Student Programming Ability
Medha Sagar (Indira Gandhi Institute of Technology & Guru Gobind Singh Indraprastha University, India); Arushi Gupta (Microsoft India R&D, India); Rishabh Kaushal (Indira Gandhi Delhi Technical University for Women, India)

Computer programming as a process embodies the creation of an executable computer program for a given computational problem by analyzing the task and developing an algorithm that evaluates to the desired result. Due to its complex and diverse nature, programming requires a certain level of expertise in the analysis of algorithms, data structures, mathematics and formal logic, as well as related tasks such as testing and debugging. Nowadays, computer programming is an integral part of various computer science and related courses, as it helps students understand the concepts of theoretical computer science and, consequently, use those concepts to solve real-world problems. As a result of the increasing popularity of programming, there now exists a plethora of competitive programming websites where students can practice and solve problems. But despite the vast increase in programming websites, there is no tool available to predict the programming performance of students. The aim of this research is to assess the performance of students based on their programming capabilities. It will not only help learners to assess their solutions, but also aid educators in evaluating the progress of students. For this research, data was collected from two different competitive programming environments: HackerEarth, a globally accessible competitive programming website, and IGDTUW's in-house programming portal, a university-based programming environment. We used supervised learning to predict the performance of students for both datasets. The accuracy obtained for the HackerEarth dataset is 80%, while the accuracy for the IGDTUW dataset was computed to be 91%. Apart from predicting performance, rigorous analyses were done to unearth hidden trends responsible for a learner's programming acumen.

ICACCI-13B.11 17:00 A Comparative Study of Segment Representation for Biomedical Named Entity Recognition
Shashirekha Hosahalli Lakshmaiah (Mangalore University, India); Hamada Ali Nayel (Benha University, Egypt)

Biomedical Named Entity Recognition (Bio-NER) is an important subtask of Biomedical Text Mining (BioTM), where the performance of downstream tasks such as relation extraction, protein-protein interaction and hypothesis generation depends on the performance of Bio-NER. Bio-NER involves determining the biomedical named entities, such as DNA, RNA, cell types, genes and proteins, present in biomedical research articles. Annotating the dataset for training a classifier to recognize and classify named entities is a crucial task in Bio-NER. Segment representation (SR) is an efficient way of annotating Biomedical Named Entities (BioNEs) within a sentence to differentiate them from non-BioNEs. In this paper, we have used Support Vector Machines (SVMs) to train different Bio-NER models on the benchmark JNLPBA 2004 dataset using different SRs. The performance of the SR models shows that the more complex the segment representation, the worse the F-score.
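One of the simplest segment representations compared in such studies is IOB2, which marks each token as Beginning, Inside, or Outside an entity. A sketch (the token spans and entity types below are invented examples in the JNLPBA spirit, not corpus data):

```python
def to_iob2(tokens, entities):
    """Annotate tokens with the IOB2 segment representation; `entities`
    maps (start, end) token spans to an entity type."""
    tags = ['O'] * len(tokens)
    for (start, end), etype in entities.items():
        tags[start] = 'B-' + etype           # Begin of the entity
        for i in range(start + 1, end):
            tags[i] = 'I-' + etype           # Inside the entity
    return tags

tokens = ['IL-2', 'gene', 'expression', 'in', 'T', 'cells']
tags = to_iob2(tokens, {(0, 2): 'DNA', (4, 6): 'cell_type'})
# tags: ['B-DNA', 'I-DNA', 'O', 'O', 'B-cell_type', 'I-cell_type']
```

Richer schemes such as IOBES add End and Single tags, enlarging the label set the SVM must learn, which is the complexity trade-off the paper measures.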

ICACCI-13B.12 17:15 Term-Class-Max-Support (TCMS): A Simple Text Document Categorization Approach Using Term-Class Relevance Measure
Guru D S (Mysore University, India); Mahamad Suhil (University of Mysore, India)

In this paper, a simple text categorization method using term-class relevance is proposed. Initially, text documents are processed to extract the significant terms present in them. For every term extracted from a document, we compute its importance in preserving the content of a class through a novel term-weighting scheme known as the Term-Class Relevance (TCR) measure, proposed by Guru and Suhil (2015). In this way, for every term, its relevance for all the classes present in the corpus is computed and stored in the knowledgebase. During testing, the terms present in the test document are extracted and the term-class relevance of each term is obtained from the stored knowledgebase. To achieve a quick search of term weights, a B-tree indexing data structure has been adopted. Finally, the class which receives the maximum support in terms of term-class relevance is decided to be the class of the given test document. The proposed method has logarithmic testing-time complexity and is simple to implement compared to other text categorization techniques available in the literature. Experiments conducted on various benchmark datasets have revealed that the performance of the proposed method is satisfactory and encouraging.
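The max-support decision rule itself is straightforward to sketch. The TCR weights below are invented placeholders standing in for the trained knowledgebase, and a plain dictionary stands in for the paper's B-tree index:

```python
from collections import defaultdict

def tcms_classify(doc_terms, tcr):
    """Term-Class-Max-Support: sum each class's term-class relevance
    over the document's terms and return the class with maximum support.
    Terms missing from the knowledgebase contribute nothing."""
    support = defaultdict(float)
    for term in doc_terms:
        for cls, weight in tcr.get(term, {}).items():
            support[cls] += weight
    return max(support, key=support.get)

# Invented TCR weights standing in for the trained knowledgebase.
tcr = {'goal':     {'sports': 0.90, 'politics': 0.10},
       'election': {'sports': 0.05, 'politics': 0.95},
       'match':    {'sports': 0.80, 'politics': 0.20}}
label = tcms_classify(['goal', 'match', 'referee'], tcr)
# label: 'sports' (support 1.7 vs. 0.3)
```

Since classification is just weight lookups plus an argmax, testing cost is dominated by the per-term index search, which is where the logarithmic complexity claim comes from.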

ICACCI-13B.13 17:30 A Multi-Cue Information Based Approach to Contour Detection by Utilizing Superpixel Segmentation
Sandipan Choudhuri, Nibaran Das, Swarnendu Ghosh and Mita Nasipuri (Jadavpur University, India)

Contour detection forms one of the primitive, yet inherent, operations of computer vision systems. Owing to the significance of this fundamental task, a number of approaches have been proposed to date. This paper characterizes the functionality of a multi-scale, feature-based edge detection strategy that exploits joint information from different feature channels, modelled over a measure of spatial dispersion associated with structured discontinuities in an image. The elimination of false edges is achieved by incorporating an iterative clustering procedure that divides the image into disjoint groups of perceptually semantic regions by constructing naturally adaptive region borders, thereby recovering precise object boundaries. From the experiments conducted on the BSDS300 dataset, the proposed detector achieves noteworthy performance, attaining promising detection results when compared to state-of-the-art edge detection approaches.

ICACCI--14: Symposium on Advances in Applied Informatics (SAI'16) - Regular Papers

Room: LT-6(Academic Area)
Chairs: Debashis Saha (Indian Institute of Management (IIM)- Calcutta, India), Ch. D. V. Subba Rao (Sri Venkateswara University College of Engineering, India)
ICACCI--14.1 14:30 DoS Attack Detection Technique Using Back Propagation Neural Network
Monika Khandelwal (NIT Jalandhar, India); Deepak Kumar Gupta (National Institute of Technology Jalandhar, India); Pradeepkumar Bhale (ABV-Indian Institute of Information Technology and Management, India)

A Denial of Service (DoS) attack is an attempt to make a device or system resource unavailable to its intended users. A DoS attack consumes the victim's system resources, such as network bandwidth, memory and CPU, by sending a huge number of fake requests, so that intended users cannot obtain services and denial of service occurs. This paper presents an intelligent technique for detecting denial of service attacks using a back-propagation neural network (BPNN). The parameters used in this technique are CPU usage, frame length and flow rate. Server resources and network traffic are analyzed for training and testing the detection method, and the results show that the proposed method can detect DoS attacks with 96.2% accuracy.
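A minimal back-propagation network of the kind described, taking the three features named above as inputs, can be sketched in numpy; the traffic samples and labels are synthetic illustrations, not the authors' data, and the architecture (one hidden layer, squared-error loss) is only one plausible choice:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def add_bias(A):
    return np.hstack([A, np.ones((len(A), 1))])

def train_bpnn(X, y, hidden=4, lr=0.5, epochs=5000):
    """One-hidden-layer back-propagation network, squared-error loss."""
    Xb = add_bias(X)
    W1 = rng.normal(0, 0.5, (Xb.shape[1], hidden))
    W2 = rng.normal(0, 0.5, (hidden + 1, 1))
    for _ in range(epochs):
        h = sigmoid(Xb @ W1)
        out = sigmoid(add_bias(h) @ W2)
        delta_out = (out - y) * out * (1 - out)          # output-layer delta
        delta_h = (delta_out @ W2[:-1].T) * h * (1 - h)  # hidden-layer delta
        W2 -= lr * add_bias(h).T @ delta_out
        W1 -= lr * Xb.T @ delta_h
    return W1, W2

def bpnn_predict(X, W1, W2):
    return sigmoid(add_bias(sigmoid(add_bias(X) @ W1)) @ W2)

# Synthetic traffic: columns are CPU usage, frame length, flow rate
# (normalized); label 1 marks a flood of fake requests.
X = np.array([[0.9, 0.8, 0.9], [0.8, 0.9, 0.8],
              [0.1, 0.2, 0.1], [0.2, 0.1, 0.2]])
y = np.array([[1.], [1.], [0.], [0.]])
W1, W2 = train_bpnn(X, y)
pred = bpnn_predict(X, W1, W2)
```

Thresholding `pred` at 0.5 gives the attack/normal decision; the paper's 96.2% accuracy would come from training on real measured traffic rather than this toy set.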

ICACCI--14.2 14:45 Role of Physical Environment (Dinescape Factors) Influencing Customers' Revisiting Intention to Restaurants
Saravanan Mahalingam and Bhawana Jain (Amrita School of Business, Coimbatore Amrita Vishwa Vidyapeetham Amrita University India, India); Mridula Sahay (Amrita Vishwa Vidyapeetham University, India)

With the growth of the Indian economy, incomes of individuals have also risen. With higher disposable income in their hands, people have adopted new lifestyle trends. One such trend is visiting restaurants. Given the increase in the number of people visiting restaurants and their demanding nature, it is imperative for restaurateurs to understand the likes and dislikes of their customers. This paper focuses on the role of dinescape factors, other than food and service, in influencing the revisit intention and the expectations of restaurant customers in the Indian context. The data was collected through a questionnaire survey of 205 respondents from across India. The ordered logistic regression results show that customers' expectations from a restaurant match the reasons they give for revisiting their favorite restaurant. This study aims to identify the major dinescape factors that influence restaurant customers.

ICACCI--14.3 15:00 Learning Curve Analysis for Virtual Laboratory Experimentation
Saneesh F and Vysakh Kani Kolil (Amrita Vishwa Vidyapeetham, India); Krishnashree Achuthan (Amrita Center for Cybersecurity Systems and Networks & Amrita University, India)

Learning is a lifelong process, and the understanding gained depends on the extent of practice and effort from the learner. The learning process involves the cognitive ability to grasp and process information in order to utilize knowledge effectively. In science education, the use of laboratories to supplement learning is integral to the overall development of the learner. However, due to numerous challenges, such as the limited effectiveness of practical laboratories in imparting conceptual knowledge, the learning goals are compromised. In this work, learning curves are modeled to quantitatively assess the extent of learning when a technology-rich Virtual Laboratory platform is used to teach laboratory experimentation. Two specific skills, thinking skills and design skills, were characterized in the assessments. Significant improvements were observed not only in the scores but also in the response speed demonstrated by students completing the assessment.

ICACCI--14.4 15:15 Interactive Learning System for the Hearing Impaired and the Vocally Challenged
Hrishikesh N (Amrita School of Engineering, India); Jyothisha J Nair (Amrita Vishwa Vidyapeetham, India)

In our existing education system, teachers primarily engage students verbally in what we call the 'chalk and talk' approach. Occasionally, certain learning models are also used for teaching specific concepts, and smart classroom systems employ PowerPoint presentations, videos and the like. However, a lack of sufficient self-interactive models, or inadequate interaction with them, causes students to lose focus. Young children, particularly those with disabilities such as hearing impairment and vocal dysfunction, are especially prone to this. Our studies showed that students experienced enhanced attentiveness in an environment conducive to self-interactive learning. The word interaction here does not refer just to teacher-student communication; rather, it places greater emphasis on interactive self-learning. A student is most comfortable when he or she feels like the center of attention or when the teaching is exclusive to him or her. We propose a novel learning system to kindle the innate curiosity of students. This article presents an application of ongoing research on interactive learning. Our system employs both Virtual Reality (VR) and Augmented Reality (AR) to bring a deeper, more immersive and effective interactive learning experience to students. This Interactive VR-AR Learning System (IVRARLS) provides a learning environment in which each student can independently interact with his or her own virtual learning models in real time. In our scheme, a Microsoft Kinect is used to extract the interactive gestures of the participants. This approach is particularly well suited to hearing-impaired and vocally challenged children, though it does not exclusively target them.

ICACCI--14.5 15:30 Auto-annotation of Tomato Images Based on Ripeness and Firmness Classification for Multimodal Retrieval
Priti Sehgal (Keshav College, University of Delhi, India); Nidhi Goel (University of Delhi, India)

An interesting application of machine vision is the quality estimation of tomatoes, where an extensive collection of images needs to be retrieved in an effective and efficient manner. In this paper, a classification-based auto-annotation (CBA) of tomato images for providing semantic tags is proposed. These tags are derived from content-based learning and are deduced from the classification of tomatoes on the two most important quality parameters, ripeness and firmness. For firmness estimation, an approach is proposed that exploits three texture feature extraction algorithms: two based on statistical techniques, namely first-order statistics (FOS) and the gray-level co-occurrence matrix (GLCM), and one based on a transform technique, the wavelet transform. Multiple linear regression (MLR) analysis has been used to establish the relationship between instrumental firmness and the texture values obtained from texture analysis. Prediction models for the three texture feature sets are built, compared, and tested for overfitting using a double cross-validation method. Based on the firmness of tomatoes, estimated through a digital color imaging technique, they are classified into three classes: soft, medium, and hard. To select a classifier for CBA, experiments with five learning methods, Naïve Bayes, multilayer perceptron (MLP), support vector machine (SVM), decision table, and random tree, have been carried out, and their class-prediction accuracy has been compared using a supplied test set. Ripeness classification of tomato images is done based on color using a previously proposed fuzzy rule-based classification (FRBC) approach, yielding three classes: unripe, ab2ripe, and ripe. Grounded on this classification, a multi-labeling methodology is adopted for automatic annotation of tomato images. The experimental results establish that the RandomTree classifier performs well for the classification of tomato firmness. In addition, the presented work takes advantage of progress in machine vision to address the issue of the semantic gap in multimodal retrieval.
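The MLR step relating texture features to instrumental firmness can be sketched as an ordinary least-squares fit; the feature values and firmness readings below are invented for illustration, not measurements from the paper:

```python
import numpy as np

def fit_mlr(features, target):
    """Ordinary least-squares multiple linear regression with intercept."""
    A = np.hstack([np.ones((len(features), 1)), features])  # intercept column
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return coef

def predict_mlr(coef, features):
    A = np.hstack([np.ones((len(features), 1)), features])
    return A @ coef

# Invented texture-feature rows (e.g. one FOS and one GLCM statistic)
# vs. instrumental firmness readings.
Xf = np.array([[0.2, 1.1], [0.4, 1.5], [0.6, 2.0], [0.8, 2.4]])
yf = np.array([10., 20., 30., 40.])
coef = fit_mlr(Xf, yf)
pred_f = predict_mlr(coef, Xf)
```

Thresholding the predicted firmness then yields the soft/medium/hard classes, and double cross-validation guards the fitted coefficients against overfitting.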

ICACCI--14.6 15:45 Fatigue Detection and Estimation Using Auto-Regression Analysis in EEG
Abhinandan Jain, Baqar Abbas and Omar Farooq (Aligarh Muslim University, India); Shashank Garg (Aligarh Muslim University, India)

Estimation of fatigue is an important problem in physiology. Estimating muscle fatigue and its development in brain signals can indicate the level of endurance among athletes and the limits of a person in performing physical tasks. In this paper, a technique for detecting and estimating fatigue development using regression parameters of EEG signals is discussed. Recordings from 14 subjects were analysed for fatigue development using an Auto-Regression (AR) model. The behaviour of the resulting error function is analysed to predict the stages and limits of muscle fatigue development.
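The AR modelling described above can be sketched as a least-squares fit whose prediction-error behaviour is then tracked; the model order, window handling, and synthetic data below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def ar_fit(x, p):
    """Least-squares fit of an AR(p) model x[n] = sum_k a[k]*x[n-k] + e[n].
    Returns the coefficients and the mean-squared prediction error; the
    error's growth over successive EEG windows could indicate fatigue."""
    N = len(x)
    # Regression matrix of lagged samples: column k holds x[n-k-1]
    X = np.column_stack([x[p - k - 1:N - k - 1] for k in range(p)])
    y = x[p:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    err = y - X @ a
    return a, float(np.mean(err ** 2))

# Synthetic check: data generated by a known AR(2) process
rng = np.random.default_rng(0)
x = np.zeros(2000)
for n in range(2, 2000):
    x[n] = 1.5 * x[n - 1] - 0.8 * x[n - 2] + 0.1 * rng.standard_normal()
a, mse = ar_fit(x, 2)
```

In a fatigue study, `ar_fit` would be applied per EEG window and the sequence of `mse` values inspected for the stages the abstract mentions.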

ICACCI--14.7 16:00 An Enhancement in Automatic Seed Selection in Breast Cancer Ultrasound Images Using Texture Features
Lipismita Panigrahi (NIT RAIPUR, Chhattisgarh, India)

Automatic seed selection is an important and crucial step toward boundary detection in ultrasound B-scan images. This paper focuses on a methodological framework that can automatically detect a seed point in an ultrasound image by using texture features. Based on the selected cluster seeds, the ultrasound images are segmented using active contour, K-means and Otsu methods, and a comparative analysis of these segmentation techniques is reported. The proposed method is applied to 116 ultrasound images, of which 45 are benign cases and 71 are malignant. The quantitative experimental results show that the proposed method can successfully find an accurate seed point based on texture features and can segment the images with a high accuracy of 89.65%. The proposed method is faster and performs more accurate segmentation than existing algorithms.

ICACCI--14.8 16:15 Intelligent Information Retrieval for Tsunami Detection Using Wireless Sensor Nodes
Deepali Virmani (VIPS-TC COE & GGSIPU, Delhi, India); Nikita Jain (Bhagwan Parshuram Institute of Technology, GGSIPU, India)

Various natural hazards can be detected by well-trained sensors deployed at target sites. This paper proposes a novel framework for a Tsunami detection system using wireless sensor nodes deployed in coastal areas around the earth's magnetic poles. The proposed framework implements an intelligent information retrieval technique to generate a Tsunami alert. The method makes use of proposed parameters that define the underwater conditions in terms of magnetic field, electric field, wave gradient and heat energy. The framework is further used to extract intelligent information from the above-mentioned parameters and to study the behavioural as well as physiological pattern changes in marine animals. The proposed method filters the messages transformed by condition tagging, which are further evaluated to generate a Tsunami alert. The framework is validated with the help of a case study analysing the behavioural patterns observed in sea turtles when exposed to an evident change in electromagnetic field intensity.

ICACCI--14.9 16:30 Fine Grained Key Computation Scheme for Secure Data Sharing in Cloud
Deepa Maria Polson (College of Engineering, tvm, India); Sabitha S (College of Engineering, Trivandrum, India); M S Rajasree (IIITMK, India)

Data sharing is an important aspect of cloud storage, and it is imperative to ensure security during the sharing process. The basic idea of data security lies in the encryption of data as well as the delegation of the decryption key. A data owner has to share the encrypted data as well as the decryption key of the corresponding data with another user, so that the latter can view the original data while the security of the data is maintained. This paper proposes a method of secure data sharing which is space and time efficient. In this method, data is encrypted based on a file identifier (id, an integer that ranges from 1 to the maximum number of branches in the hierarchical data model) and access policies. Access policies are represented using an access tree structure. The key to be shared with the recipient provides fine-grained access to the files based on the attributes of the recipient, and is computed from the id and the attributes of the user.

ICACCI--14.10 16:45 QoE Assessment of MPEG-DASH in the Polimedia e-Learning System
Laura Garcia, Jaime Lloret, Carlos Turro and Miran Taha (Universidad Politecnica de Valencia, Spain)

The development of the Internet has brought the development of many e-learning platforms such as Polimedia (developed by the Polytechnic University of Valencia), which is used by students to complement their training. In order to provide Polimedia users the best possible quality of experience (QoE), adaptive streaming techniques over HTTP such as MPEG-DASH are being included. In this article, we first show the procedure to set up a server supporting the MPEG-DASH protocol. Then, a subjective QoE study in a controlled environment to evaluate the performance of MPEG-DASH is presented. We determine the aspects that are most annoying to Polimedia platform users and provide some recommendations to improve their QoE.

ICACCI--14.11 17:00 Ubiquitous Energy Efficient Aquaculture Management System
Sangeetha Rajesh (K. J. Somaiya Institute of Management Studies and Research, India)

Pervasive computing enabled by wireless network technologies spans a wide range of applications in modern living. The heterogeneous devices used in the Internet of Things are highly constrained in terms of energy efficiency. Aquaculture is a technique of human intervention in the rearing process of aquatic animals, and several natural water parameters affect the productivity of aquaculture farming. This paper proposes an energy-efficient ubiquitous architecture based on the Internet of Things for an aquaculture environment. The proposed system collects aquaculture data in real time using sensors and applies changes to the water environment at suitable times without human intervention. The algorithm used by the expert system for decision making is described. The paper also discusses the Thread wireless network protocol, which provides longevity to the battery-operated heterogeneous devices.

ICACCI--14.12 17:15 Empirical Mode Decomposition vs. Variational Mode Decomposition on ECG Signal Processing: A Comparative Study
Uday Maji (Haldia Institute of Technology, India)

Most non-stationary signals need adaptive processing techniques for denoising, feature extraction and analysis. In this regard, signal decomposition methods play a vital role, as selective reconstruction extracts an enhanced version of the signal buried in the noise. Decomposition-mode-based analysis has also become popular, especially for biosignals, due to their highly non-stationary nature. Biosignals are better decomposed by a technique in which the basis function is derived from the signal itself. Such data-adaptive decomposition of biosignals into different frequency modes is very effective irrespective of the multiple periodicities present in the signal or an unknown sampling rate. This paper studies the performance of the Empirical Mode Decomposition (EMD) and Variational Mode Decomposition (VMD) techniques on the popular ECG signal in terms of the different periodicities occurring during various cardiac abnormalities. The results highlight the main differences between the methods in terms of signal decomposition levels as well as the ability to extract both low and high frequencies from the signal.

ICACCI--14.13 17:30 Analysis of Echo Cancellation Techniques in Multi-Perspective Smart Classroom
Ramesh Guntha (Amrita Center for Wireless Networks and Applications, Amrita Vishwa Vidyapeetham University, India); Balaji Hariharan and Venkat Rangan (Amrita University, India)

High quality audio communication is the most important success factor of a live interactive e-Learning system, and echo cancellation technology plays a critical role in ensuring it. Current echo cancellation technology works only when the audio goes in and out of a single computer in a given room. However, our Smart classroom e-Learning system requires 3 computers per classroom to capture and stream HD video from 5 video cameras placed at different angles or perspectives. This is done to achieve gaze alignment across all the remote participants by showing the appropriate perspective at each remote classroom display based on the current teaching mode of either lecturing or interaction [1, 2]. Since the audio is processed through three computers in a given room, we had to augment traditional echo cancellation technology with new techniques to achieve echo cancellation. In this paper we present 3 echo cancellation techniques, evaluate them against smart-classroom criteria, and analyze their pros and cons along with the users' feedback.

ICACCI--14.14 17:45 Exploring Concept of QR Code and Its Benefits in Digital Education System
Saroj Goyal (MACERC, India); Surendra Yadav (MACERC JAIPUR, India); Manish Mathuria (Rajasthan Technical University, India)

This research paper concentrates on the concept of digital authentication using QR Codes in a Digital Education System, aiming to provide a better solution for digital security. The work has two challenges: first, to explore the usability of QR Codes in everyday life, and second, to incorporate QR Code technology into educational documents for security, to avoid duplicity. A literature review is done to synthesize digital encoding and decoding techniques as well as the basics of Bar Codes and QR Codes. The implementation of QRC (Quick Response Code) verification is presented, where the web environment, programming logic, and URL embedding are discussed. The result analysis and testing of the experiment aim to obtain the best-quality QR Code such that the embedded information is not affected and can easily be decoded with common tools. The goal of this research paper is to explore and analyze the best image under testing of the Error Correction Level and Matrix Point Size parameters by calculating the PSNR and MSE values for QR Code images in different image file formats (PNG and JPG). The calculated values are compared, and the conclusion of the work is that a PNG image with Error Correction Level L and Matrix Point Size 1 is the best for generating a quality QR Code. The testing and results state that the QR Code is a good way to compose the identifying information of any entity so that its originality can quickly be verified.

Friday, September 23

Friday, September 23 14:30 - 18:30 (Asia/Kolkata)

ICACCI--19: ICACCI-19: Symposium on Intelligent Informatics (ISI'16)/Symposium on Advances in Applied Informatics (SAI'16)

Room: LT-3 (Academic Area)
Chair: Shakti Awaghad (J. D. College of Engineering, India)
ICACCI--19.1 14:30 Effective Discriminant Function for Intrusion Detection Using SVM
Ramasani Ravinder Reddy (Osmania University, India)

Pinpointing intrusions within the available huge volumes of intrusion data is demanding, so intrusion detection is treated as a data analysis problem in which finding intrusions accurately is difficult. To improve the accurate identification of intrusions, data mining approaches have been adopted and have proved to provide automatic analysis with improved performance; these techniques enhance the detection rate of intrusions very effectively. The discriminant function is critical in accurately separating intrusion and anomaly behaviour. A support vector machine based classification algorithm is used to classify intrusions accurately by means of this discriminant function: an effective discriminant function accurately separates the data into intrusion and anomaly classes. Evaluating the discriminant is therefore important in the evaluation of an intrusion detection system, since the system's performance depends on the choice of the discriminant function.
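The role of the discriminant function f(x) = w·x + b can be sketched with a toy linear SVM trained by sub-gradient descent on the hinge loss; the features, labels, and hyperparameters below are invented for illustration, and the paper's actual kernel and dataset are not reproduced here.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Minimise the regularised hinge loss by sub-gradient descent.
    Labels y must be in {-1, +1}; the learned discriminant is w.x + b."""
    rng = np.random.default_rng(0)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in rng.permutation(n):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:                       # point violates the margin
                w += lr * (y[i] * X[i] - lam * w)
                b += lr * y[i]
            else:
                w -= lr * lam * w                # only shrink the weights
    return w, b

# Toy "normal vs. intrusion" records (e.g. two scaled connection features)
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
y = np.array([-1, -1, 1, 1])
w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)       # sign of the discriminant gives the class
```

The sign of the learned discriminant separates the two behaviours, which is the property the abstract says the detection system's performance depends on.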

ICACCI--19.2 14:45 Environmental Awareness Around Vehicle Using Ultrasonic Sensors
Priya Hosur (B. V. Bhoomraddi College of Engineering and Technology, India); Rajashekar Basavaraj Shettar (Visvesvaraya Technological University, Belgaum, Karnataka State, India); Milind Potdar (KPIT, India)

The objective of this work is to locate objects in the vicinity of moving and/or stationary vehicles using ultrasonic sensors. Parking and driving a vehicle in an inconvenient or crowded location, such as a small parking slot, heavy traffic, or a narrow street, requires the driver to carefully monitor the surroundings to avoid damage to the vehicle and its passengers, and to the people and objects around it. In such scenarios, a driver cannot fully monitor the environment around the vehicle using only the rear-view and front mirrors. Blind spots around the vehicle, where the mirrors do not show obstructions, motivate the development of an around view monitor system as mitigation. The around view system proposed here is based on calculating the distance between objects and the vehicle. It is a supporting technology that assists drivers in parking and driving the vehicle more easily by giving a better understanding of the environment around the vehicle. The proposed system makes use of twelve ultrasonic sensors to cover 360 degrees around the vehicle; that is, three ultrasonic sensors cover each side of the vehicle over 180 degrees. System development consists of integrating software and hardware modules. A prototype model is developed using the Atmel SAMD Xplained Pro kit and tested on a passenger car under real-time test conditions. The proposed system uses fewer sensors than existing systems.

ICACCI--19.3 15:00 Speech Signal Analysis for the Estimation of Heart Rates Under Different Emotional States
Alex James (IIITMK, India); Aibek Ryskaliyev and Sanzhar Askaruly (Nazarbayev University, Kazakhstan)

To reduce deaths caused by heart disorders such as stroke, arrhythmia and heart attack, a non-invasive method for monitoring heart activity needs to be developed. The human voice can be considered a means for estimating heart rate. In this paper, voice signal analysis methods and a heart rate estimation rule are proposed, followed by a classifier algorithm to find the correlation between the voice signal and heart rate. Additionally, an empirical model of heart rate is represented as a linear model. The prediction accuracy was tested using data collected from 15 subjects, comprising about 4050 samples of speech signals and corresponding electrocardiograms.
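The empirical linear model mentioned above can be sketched as a one-feature least-squares fit; the choice of mean pitch as the voice feature and all numeric values below are illustrative assumptions, not the paper's data.

```python
import numpy as np

# Hypothetical (voice feature, heart rate) pairs: mean F0 in Hz against
# ECG-derived heart rate in beats per minute; purely illustrative values.
pitch = np.array([110.0, 125.0, 140.0, 160.0, 185.0])
hr = np.array([62.0, 68.0, 74.0, 83.0, 95.0])

# Empirical linear model hr ≈ a * pitch + b, fitted by least squares
a, b = np.polyfit(pitch, hr, 1)

def estimate(f0):
    """Estimate heart rate (bpm) from a mean-pitch measurement (Hz)."""
    return a * f0 + b
```

The paper's classifier stage would operate on richer voice features; this sketch only shows the linear-model component.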

ICACCI--19.4 15:15 Exploratory Test Oracle using Multi-Layer Perceptron Neural Network
Wellington Makondo (Harare Institute of Technology, Zimbabwe); Raghava Srinivasa Nallanthighal (DTU, Delhi & Bawana Road, India); Innocent Mapanga and Prudence Kadebu (Harare Institute of Technology, Zimbabwe)

In the context of exploratory testing (ET), human knowledge and intelligence are applied as a test oracle. The exploratory tester designs and executes tests on the fly and compares the actual output produced by the application under test with the expected output in the tester's mind. The shortcoming of the human oracle is that humans are fallible: exploratory testers do not always detect a failure even when a test case reveals it. Relying on human testers to assess program behaviour also has drawbacks such as cost and accuracy. Therefore, in this paper an effort has been made to explore the feasibility of using a multilayer perceptron neural network (MLP-NN) as an exploratory test oracle. The MLP-NN was improved by adding another weight on each connection to generate reliable exploratory test oracles for different transformed data formats.

ICACCI--19.5 15:30 Analysis of Compressive Sensing For Non Stationary Music Signal
Vivek Upadhyay (MNIT, India); Amit Joshi (Malviya National Institute of Technology, India)

Compressive Sensing (CS) is a key method for reconstructing a signal from very few measurements compared to conventional methods. According to the conventional Shannon-Nyquist sampling theory, a signal has to be sampled at twice its bandwidth for proper reconstruction, which requires storing a large amount of data. CS helps to resolve this issue with two important components, the measurement matrix and the basis matrix, which should satisfy two properties: the Restricted Isometry Property (RIP) and being Independent and Identically Distributed (IID). Various reconstruction algorithms are used for proper reconstruction of the signal after applying the CS technique. The work is carried out on different types of audible signals which are non-stationary in nature. For a single-tone audio signal, the SNR value is quite good, whereas the SNR values of the music and instrumental signals degrade because they are not composed of a single frequency component. We also conclude that the value of the RIP constant varies as the number of measurements is varied. The error is also measured in terms of MSE and RMSE, and these values decrease with higher values of the compression ratio.

ICACCI--19.6 15:45 LabVIEW Event Handling Using EPICS PV for ICRH DAC Software
Ramesh Joshi and HM Jadav (Institute for Plasma Research, India); Aniruddh Mali (Blazing Arrows Pvt Ltd & GTU Alumni Life Member, India); S Kulkarni (Institute for Plasma Research, India)

A programmable logic controller (PLC) based data acquisition and control (DAC) system has been designed and developed for the 45.6 MHz, 100 kW ICRH system using EPICS [1]. The DAC can monitor and acquire 16 digital input, 16 analog output, 16 digital output and 32 analog input signals. Several embedded as well as external Python scripts have been used in the development of the control system software. The graphical user interface has been developed using Control System Studio (CSS) Operator Interfaces (OPI). The WebOPI [2] module can display CSS BOY [3] OPIs in web browsers without modification to the original OPI file, and provides an interface between the CSS-based user interface and EPICS process variables (PVs) using a channel access gateway. Data acquisition needs multifunctional data acquisition hardware comprising digital input, digital output, analog output and analog input channels, with counters for synchronization of the system. LabVIEW-based software has been developed for this purpose using proprietary USB-based multifunctional DAQ hardware. The procured USB-based DAQ module can acquire data samples at a 10 kHz rate with software as well as hardware triggering, and it has been integrated with the required driver and tested with the developed user interface [4]. This paper explains how an EPICS PV, an external variable to LabVIEW, can be used to trigger an event in LabVIEW.

ICACCI--19.7 16:00 FPGA based Hardware Implementation of Automatic Vehicle License Plate Detection System
Surbhi Chhabra, Himanshu Jain and Sandeep Saini (The LNM Institute of Information Technology, Jaipur, India)

Automatic Vehicle License Plate Detection System (AVLPDS) is the extraction of vehicle license plate information from an image. Besides its safety aspects, this system is used in many applications, viz. electronic payment systems, freeway and arterial monitoring systems for traffic surveillance, etc. The purpose of this paper is to present an FPGA algorithmic model of the most efficient of three algorithms: edge-based, connected-component-based and histogram-based. Each approach is analyzed on the basis of precision and recall rates to determine its success. From this comparison, the histogram-based approach has the advantage of being simple and thus faster. Therefore, in this paper, we have used a histogram-based edge processing approach to detect the license plate and presented the corresponding FPGA implementation of AVLPDS. The whole system is implemented using MATLAB Simulink and Xilinx System Generator (XSG). The use of XSG for image processing effectively reduces the complexity of the structural design and also contributes an additional distinctive attribute for hardware co-simulation. The accuracy of the algorithm is checked for different sets of input images, and significant performance improvement has been found, thereby yielding an optimal FPGA-based hardware implementation of AVLPDS.

ICACCI--19.8 16:15 An Efficient Method of Detecting Exudate in Diabetic Retinopathy: Using Texture Edge Features
Priyadarshini M Patil (BVB College of Engineering & Technology, India); Pooja Shettar (BVB College of Engineering, Hubli, India); Prashant Narayankar (BVBhoomaraddi College of Engineering and Technology, India); Mayur Patil (Tesco Bengaluru, India)

Ophthalmologists analyze fundus images of the eye extensively as a non-invasive diagnosis tool for various internal eye defects. Diabetic retinopathy is one such defect in diabetic patients, causing damage to the retina which may lead to blindness. A major symptom of this disorder is the presence of exudates, a pus-like fluid oozed from blood vessels damaged by high blood sugar, which hardens on the patient's retina, leading to blindness. In this paper, we propose a methodology for the automated detection of exudates. We remove non-exudates such as the optic disc, blood vessels and blood clots in two phases using the Gradient Vector Flow Snake algorithm and a region-growing segmentation algorithm; this improves detection efficiency by masking false exudates. We then detect exudates using a Gabor filter texture edge detection based segmentation algorithm. To reduce computational complexity, only Gabor filters tuned to two higher frequencies and four orientations are used. We have applied the proposed methodology to 850 test images and obtained a high efficiency of 87% true exudates.
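The reduced Gabor bank (two higher frequencies times four orientations) can be sketched as follows; the kernel size, sigma, and the two frequency values are illustrative assumptions, since the abstract does not specify them.

```python
import numpy as np

def gabor_kernel(freq, theta, sigma=2.0, size=15):
    """Real (even-symmetric) Gabor kernel tuned to spatial frequency
    `freq` (cycles/pixel) and orientation `theta` (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)     # rotate coordinates
    env = np.exp(-(x**2 + y**2) / (2 * sigma**2))  # Gaussian envelope
    return env * np.cos(2 * np.pi * freq * xr)

# Bank of 2 higher frequencies x 4 orientations, as in the abstract;
# the frequencies 0.25 and 0.5 cycles/pixel are illustrative choices
bank = [gabor_kernel(f, t)
        for f in (0.25, 0.5)
        for t in np.arange(4) * np.pi / 4]
```

Texture edges would then be detected by convolving the fundus image with each of the eight kernels and combining the responses.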

ICACCI--19.9 16:30 The Relationship between IT Adoption, IS Success and Project Success
Thanh D. Nguyen (Banking University of Ho Chi Minh City, Vietnam); Tuan M. Nguyen (HCMC University of Technology, Vietnam); Thi H. Cao (Saigon Technology University, Vietnam)

This paper reviews IS project success related studies from the 1992-2016 period. The relevant academic journals and conferences during this time were scanned. The findings show that while conceptual and empirical articles are dominant, three theoretical topics, namely IT adoption, IS success and project success, are highlighted. Interestingly, although the three streams of research are distinct from each other, they share some conceptual constructs. Implications for IS research are also discussed.

ICACCI--19.10 16:45 Efficient User Assignment in Crowd Sourcing Applications
Akash Yadav (Indian Institute of Technology, Patna, India); Ashok Singh Sairam (Indian Institute of Technology Guwahati, India); Rituraj Singh (Univ Rennes/INRIA/IRISA, France)

Recent technological advancement in hand-held computing devices such as smart phones/PDA integrates cyber and physical worlds altogether. This interaction creates a new paradigm called mobile crowd sourcing which takes advantages of human intellect and a large pool on internet users to perform various task more efficiently. In such applications, users are generally recruited in ad hoc manner to do a task and often receive some incentive. Such ad hoc nature provides flexibility for users to do any job but often decreases their reliability and output quality. Quality control is generally done by increasing the workforce which increases cost. To solve this issue, we proposed a user assignment framework which pushes the task to the right set of users which can perform the task in more economically efficient way. These framework first rank users by performing semantic comparison among task's requirements and user's attributes. Then it tries to assign minimum, yet most efficient users for each task in mutually exclusive manner. This framework ensures the allocation of most efficient users for each task while keeping the reward cost as minimum as possible, hence allows application developers and end users to enjoy the power of crowd in more effective way.
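The rank-then-assign idea can be sketched as a greedy selection; the skill-overlap score standing in for the paper's semantic comparison, and all names and costs, are purely illustrative.

```python
def assign_users(task_skills, users, workers_needed):
    """Greedy sketch: rank users by overlap between the task's required
    skills and each user's attributes, break ties on lower reward cost,
    and take the top `workers_needed` users.
    users: list of (name, skills_set, reward_cost) tuples."""
    scored = []
    for name, skills, cost in users:
        match = len(task_skills & skills) / len(task_skills)
        if match > 0:
            scored.append((name, match, cost))
    # Highest semantic match first; ties broken on lower reward cost
    scored.sort(key=lambda u: (-u[1], u[2]))
    return [name for name, _, _ in scored[:workers_needed]]

users = [("alice", {"gps", "camera"}, 5),
         ("bob", {"camera"}, 2),
         ("carol", {"gps", "camera", "audio"}, 9)]
picked = assign_users({"gps", "camera"}, users, 2)
```

Mutual exclusivity across tasks, as in the paper, would additionally remove `picked` users from the pool before assigning the next task.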

ICACCI--19.11 17:00 Image Steganography Method Using K-Means Clustering and Encryption Techniques
Bhagya Pillai (Amrita Vishwa Vidyapeetham, India); Padmamala Sriram (Amrita University, India); Pooja Rao (Amrita Vishwa Vidyapeetham, Afghanistan); Mundra Mounika (Amrita Viswa Vidyapeetham, India)

Steganography involves hiding text, an image or any sensitive information inside another image, video or audio file in such a way that an attacker will not be able to detect its presence. Steganography is often confused with cryptography, as both techniques are used to secure information. The difference lies in the fact that steganography hides the data so that nothing appears out of the ordinary, while cryptography encrypts the text, making it difficult for an outsider to infer anything from it even if they do attain the encrypted text. The two are combined to increase security against various malicious attacks. Image steganography uses an image as the cover media to hide the secret message. In this paper, we propose an image steganography method which clusters the image into various segments and hides data in each segment. Various clustering algorithms can be used for image segmentation; segmentation involves a huge set of data in the form of pixels, where each pixel has three components, namely red, green and blue. We use the K-means clustering technique to obtain accurate results in a short time.
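The cluster-then-hide idea can be sketched with a minimal K-means over RGB pixels followed by least-significant-bit embedding in one segment; the LSB scheme, toy image, and per-segment payload are illustrative assumptions, since the abstract does not specify the embedding or encryption details.

```python
import numpy as np

def kmeans(pixels, k, iters=20, seed=0):
    """Minimal K-means over flattened RGB pixels, returning cluster labels."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    for _ in range(iters):
        dist = np.linalg.norm(pixels[:, None, :] - centers[None], axis=2)
        labels = dist.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels

def embed_bits(channel, bits):
    """Hide bits in the least-significant bits of a 1-D uint8 channel."""
    out = channel.copy()
    n = min(len(bits), len(out))
    out[:n] = (out[:n] & 0xFE) | bits[:n]
    return out

# Toy 4x4 RGB cover image; cluster its pixels, then hide one byte in the
# red channel of the largest segment
img = np.random.default_rng(1).integers(0, 256, (4, 4, 3), dtype=np.uint8)
flat = img.reshape(-1, 3)
labels = kmeans(flat, k=2)
segment_red = flat[labels == np.bincount(labels).argmax()][:, 0]
secret = np.unpackbits(np.frombuffer(b"A", dtype=np.uint8))
stego = embed_bits(segment_red, secret)
recovered = np.packbits(stego[:8] & 1)[0]
```

In the paper's full scheme the payload would be encrypted first and distributed across every segment rather than a single one.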

ICACCI--19.12 17:15 CMAC: Collaborative Multi Agent Computing for Multi Perspective Multi Screen Immersive e-Learning System
Ramesh Guntha (Amrita Center for Wireless Networks and Applications, Amrita Vishwa Vidyapeetham University, India); Balaji Hariharan and Venkat Rangan (Amrita University, India)

Traditional e-Learning systems do not ensure proper gaze alignment among remote participants, and our research proved that a gaze-aligned multi-perspective multi-screen e-Learning system provides a much better immersive experience. To provide multiple perspectives of the participants, we use 5 cameras at each participant location, each separated by 45 degrees and together covering a range of 180 degrees, so that the system can choose the appropriate perspective to present to the remote participants based on a set of rules. We realized that a single standard computer cannot capture and stream more than 2 HD video streams reliably, so we had to design a flexible multi-agent system to distribute the video streaming across 3 computers in each classroom. In this paper we present the architecture of our Smart classroom system.

ICACCI--19.13 17:30 Classification of Text Documents Using Association Rule Mining with Critical Relative Support Based Pruning
Saurabh Mathur and Karthik Prabhakar (VIT University, India); Chandrasekhar Uddagiri (H. No 6 2nd cross C Sector V G Rao Nagar Katpadi Vellore TN & VIT University, Vellore, India)

Text document classification is the process of assigning class labels to text documents by training a model on a sufficiently large corpus. It is more complex than other conventional data mining tasks, which deal with known attributes and a finite set of possible values for each attribute, because more time and effort are required for pre-processing textual data. Thorough pre-processing and proper representation of the cleaned and reduced data are essential for obtaining effective results from the text mining process. In this work, an approach for text classification is proposed using association rule mining (ARM) with critical relative support (CRS) based pruning. CRS is expected to reduce the size of the association rule base and improve classification performance without compromising accuracy.
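One common formulation of critical relative support for a rule A → B takes the larger of the two directional confidences, CRS = max(supp(A∪B)/supp(A), supp(A∪B)/supp(B)), and prunes rules that fall below a threshold. The sketch below assumes this formulation and invented support values, as the abstract does not give the exact definition or data.

```python
def crs(supp_ab, supp_a, supp_b):
    """Critical relative support of a rule A -> B (assumed formulation:
    the larger of the two directional confidences)."""
    return max(supp_ab / supp_a, supp_ab / supp_b)

def prune(rules, threshold):
    """Keep only rules whose CRS meets the threshold.
    rules: list of (antecedent, consequent, supp_ab, supp_a, supp_b)."""
    return [r for r in rules if crs(r[2], r[3], r[4]) >= threshold]

# Illustrative rules mined from term-class pairs, with made-up supports
rules = [("cpu", "sports", 0.04, 0.10, 0.50),   # CRS = max(0.4, 0.08) = 0.4
         ("ram", "tech",   0.09, 0.10, 0.12)]   # CRS = max(0.9, 0.75) = 0.9
kept = prune(rules, 0.6)
```

The surviving rule base would then be used to vote on the class label of a new document.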

ICACCI--22: ICACCI-22: Symposium on Emerging Topics in Circuits and Systems (SET-CAS'16)

Room: LT-7(Academic Area)
Chair: Rosy Pradhan (Veer Surendra Sai University of Technology, Burla, India)
ICACCI--22.1 14:30 Sparse Channel Estimation Using Orthogonal Matching Pursuit Algorithm for SCM-OFDM System
Rashmi N (BMS Institute of Technology and Management); Mrinal Sarvagya (Reva Institute of Technology and Management)

In this work, we analyze channel estimation for an SC-OFDM system using a compressive sensing technique on a frequency-selective multipath channel. Pilot-aided channel estimation techniques are bandwidth inefficient; compressive sensing is a bandwidth-efficient technique in which only a few pilots are used to estimate the channel state information. We simulated channel estimation using the Orthogonal Matching Pursuit method, and the obtained results show a 4% reduction in BER.
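The Orthogonal Matching Pursuit step can be sketched as follows; the measurement matrix, tap positions, and dimensions are toy assumptions standing in for the paper's pilot arrangement.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily build a k-sparse estimate x
    with y ≈ A x (here A would hold the pilot measurements)."""
    residual, support = y.astype(float), []
    for _ in range(k):
        # Column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares refit on the chosen support, then update residual
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Toy sparse channel: 2 nonzero taps out of 32, observed through 24
# random measurements; OMP typically recovers such taps exactly
rng = np.random.default_rng(0)
A = rng.standard_normal((24, 32)) / np.sqrt(24)
h = np.zeros(32)
h[3], h[17] = 1.0, -0.8
h_hat = omp(A, A @ h, k=2)
```

In the channel-estimation setting, `h_hat` would be the recovered channel impulse response used for equalisation before computing BER.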

ICACCI--22.2 14:45 Low- Power and High Performance Clocked Regenerative Comparator At 90nm CMOS Technology
Shweta Gupta (Chandigarh University, Gharuan, Punjab, India)

The low-voltage clocked regenerative comparator provides maximum speed and power efficiency and is thus required for implementing area-efficient, ultra-low-power analog-to-digital converters (ADCs). For analog and mixed-signal design, the comparator is a main component in low-power applications. Clocked regenerative comparators have features such as zero static power dissipation, high input impedance for better transconductance, good robustness against noise and low offset voltage. In this paper, we present a study of existing clocked regenerative double-tail comparators in terms of power dissipation, PDP (power-delay product) and slew rate. Based on the simulation results, a new comparator with two-stage signaling is presented for low power dissipation and high performance. The simulation results at 90nm CMOS technology support this aim, showing that in the modified comparator the power consumption is considerably reduced at a 1.2V supply.

ICACCI--22.3 15:00 Design and Implementation of Hybrid 4-bit Flash ADC
Kriti Thakur (Chandigarh University, India)

This research presents a low-power hybrid 4-bit flash ADC, designed with a resistor ladder, 2^(n-1) comparators and a MUX-based encoder. The hybrid 4-bit flash ADC was designed and simulated in 90nm technology using the Cdesigner tool with a 1.2 V supply and 1 GHz frequency. The designed flash ADC has better stability and lower power consumption. The simulation results show that the hybrid 4-bit flash A-to-D converter has a power consumption of 1.4mW and a delay of 1.8ns.

ICACCI--22.4 15:15 TFET Simulation Using Matlab and Sentaurus TCAD
Rockey Bhardwaj (Chandigarh University, India); Gurinderpal Singh (Chandigarh University, Punjab, India)

The TFET (tunnel field-effect transistor) is widely used in low-voltage electronic devices because of its ability to achieve a subthreshold swing lower than 60mV/decade. The lower subthreshold swing is also the root cause of the low-power, high-speed behavior of the device. In this paper, an analytical model based on the Kane-Sze formula is used to evaluate the drain current. Using this analytical model, the obtained drain current is 2.3mA at VDS=0.4V, the transconductance (gm) is 160 V-1, the transconductance-to-drain-current ratio is 2200 V-1, the output resistance (R0) is 1E4 ohm, and the intrinsic gain of the TFET is 1.0E5 V/V. Fabrication of the TFET is then simulated using the Sentaurus TCAD tool, and the output characteristics and tunneling phenomenon are analyzed.
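Kane-type band-to-band tunnelling current has the schematic form I ∝ E² · exp(−B/E). The toy sketch below uses this form with illustrative constants and a crude uniform-field approximation (not the paper's calibrated model) and shows how a transconductance figure can be obtained numerically:

```python
import numpy as np

A_K, B_K = 4e-3, 1.9e7     # illustrative fitting constants, arbitrary units

def drain_current(vds, t_eff=1.2e-8):
    """Kane-type tunnelling current with a crude uniform-field
    approximation E = VDS / t_eff (t_eff in metres, illustrative)."""
    if vds <= 0:
        return 0.0
    E = vds / t_eff
    return A_K * E**2 * np.exp(-B_K / E)

def gm(vds, dv=1e-4):
    """Transconductance dI/dVDS by central finite difference."""
    return (drain_current(vds + dv) - drain_current(vds - dv)) / (2 * dv)
```

A calibrated model, as in the paper, would instead fit the constants to the device's band gap and geometry before extracting gm, gm/Id and output resistance.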

ICACCI--22.5 15:30 Common Mode Implications of a Dynamic Latched Comparator
Anurag Sharma and Gurinderpal Singh (Chandigarh University, Punjab, India)

A TGC-based dynamic comparator, designed and simulated with HSPICE using a 32/28 nm integrated CMOS PDK in the SYNOPSYS environment in our recent work, is further analyzed for common-mode implications with respect to supply voltage, delay, power and dynamic range. The simulation results show that the comparator topology is well suited to an input common-mode range of 0.3V to 0.8V and is sensitive to differential input signals as weak as 0.5mV. The comparator is also simulated over supply voltages ranging from 1.2V down to 0.3V, showing a huge power reduction with only a slight rise in delay, from approximately 60ps to 75ps.

ICACCI--22.6 15:45 A Novel Design of Micro-gripper with Enhanced Displacement
Vijay Rundla (Rajasthan Technical University, India); Ajay Khunteta (Rajasthan Technical University & Kota Rajasthan India, India)

The paper presents a design simulation, performed in COMSOL 5.0, of a reduced-size micro-gripper for micro-robotics and micro-surgery applications. The performance of the proposed reduced-size bi-morph, electro-thermally actuated micro-gripper is optimized in terms of both material and dimensions, since these are the most strongly influencing factors. The investigated results show very good agreement with the electro-thermal actuation mechanism. During simulation, the operating voltage was limited to the range 0 to 2 volts.

ICACCI--22.7 16:00 Performance Analysis of MANET Routing Protocols in Smart City Message Passing
Elizabeth Jacob (Amrita Vishwa Vidyapeetham, Amrita University, India); Sivraj P (Amrita Vishwa Vidyapeetham, Amrita University & Amrita School of Engineering, India)

A city-wide communication network catering to various applications can be established using wireless sensor networks, which collect data, process it and take relevant actions to provide solutions for smart city implementation. Challenges faced in a smart city network include variation in the area covered by the network for different applications, non-uniform node distribution, an increase in the number of messages passed, use of multiple communication technologies, and heterogeneity in nodes and messages. The various domains of a smart city are studied and a few applications were implemented in each of them. Through a survey, the paper provides a summary of various sensor network solutions for the smart city. The paper brings out challenges in message passing by considering applications such as cab sharing, ambulance services and other emergency services that use the real-time traffic data service provided by the smart traffic domain to navigate efficiently. Message passing for this service is simulated in ns2 using MANET routing protocols, and their effectiveness is studied.

ICACCI--22.8 16:15 Design and Analysis of Cross-Fed Rectangular Array Antenna; an X-Band Microstrip Array Antenna, Operating At 11 GHz
Nitin M Varghese (Karunya University, India); Shweta Vincent (Manipal University); Om Prakash Kumar (Manipal Institute of Technology, Manipal Academy of Higher Education, India)

In this paper, an X-band microstrip patch antenna termed the cross-fed rectangular array antenna is presented with enhanced radiation efficiency. The proposed antenna is designed and simulated using the FEKO 5.5 suite. It is then fabricated on a 40 x 40 mm2 Rogers RT-Duroid 5880LZ dielectric board. The antenna is composed of a rectangular patch array in a cross fashion with four patches, each of dimension 12.3 mm x 9.85 mm, excited using a wire port. The antenna operates at 11.458 GHz in the X-band (8 - 12 GHz). It achieves a stable radiation efficiency of 80.71% with a gain of 7.4 dB at the operating frequency. It is thus inferred that this antenna can be used for X-band applications such as maritime navigation and airborne radar.

ICACCI--22.9 16:30 Built in Self-Test Scheme for SRAM Memories
Abhinav Sharma and Ravi V. (VIT University, India)

The continuous miniaturization and increasing complexity of SRAM circuits make SRAM memory more prone to failure due to variations in process parameters, which significantly degrade SRAM output. To enhance the consistent performance and robustness of SRAM against parametric failures, a fault detection mechanism based on various algorithms, called Built-In Self-Test (BIST), is used. In this paper, a circuit with self-test ability is implemented for the detection of faults based on the transient condition during the write operation of the SRAM cell. The effectiveness of the developed BIST circuit is presented. Simulations are performed against faults introduced in a 6T SRAM cell. The Cadence Virtuoso tool is used to design the SRAM cell, differential amplifier, level shifter and comparator circuits. All circuits are designed in 180 nm technology.

ICACCI--25: Symposium on Emerging Topics in Computing and Communications (SETCAC'16)/Symposium on Signal Processing for Wireless and Multimedia Communications (SPWMC'16)

Room: LT-10 (Academic Area)
Chairs: Maryline Chetto (Universite de Nantes CNRS & LS2N Lab, France), Sameep Mehta (IBM India Research Laboratory, India)
ICACCI--25.1 14:30 A Novel Method for Resource Discovery From a Range Based Request in a P2P Network
Krishna Prasad (Amrita Vishwavidyapeetham University, India); G. p. Sajeev (Govt Engineering College Wayanad & Amrita Vishwa Vidyapeetham, India)

Peer-to-peer (P2P) systems have increased people's curiosity about, and pathways for, discovering and sharing resources. While various methods have been proposed for the discovery of discrete-valued resources, there is also surging interest in discovering a range of resources for a given request. This work is a novel design of a P2P network that adheres to range requests and seeks to discover the resources sought in the request. The proposed model seeks to find the range of resource values within a P2P network of nodes arranged in a circular overlay structure. The validation of the design leads to the conclusion that the proposed model increases in efficiency, with respect to discovering a range of resources in the hubs, as the number of hubs increases.

ICACCI--25.2 14:45 On the Selection of Mobility Optimised Routing Protocol for City Scenario using Multi Interface Car in VANET
Kanu Priya (Guru Nanak Dev University, India)

The continuous progression of wireless communication technologies has focused researchers' attention on providing road safety and efficient transport in VANETs (Vehicular Ad hoc Networks), an integral part of ITS (Intelligent Transport Systems). For achieving robust and reliable services in VANETs, the routing protocol and the speed of the vehicle are the most significant factors. In this paper, the speed of a multi-interface car is varied to select the optimum routing protocol. For this, a comparative analysis of different routing protocols has been carried out for a multi-interface car deployed using the 802.11a interface in a city scenario, on the basis of three performance parameters, i.e. packet collision rate, packet drop rate and throughput, with the help of the GUI-based tool NCTUns 6.0.

ICACCI--25.3 15:00 An Authentication Framework for Securing Virtual Machine Migration
Santosh Kumar Majhi (Veer Surendra Sai University of Technology, India); Sunil Dhal (Sri Sri University, Cuttack, India)

Virtual Machine (VM) migration has recently received significant attention for addressing load balancing, system maintenance and intelligent energy management in data centers. However, trust and security concerns in migration still exist in VM placement. To resolve these challenging problems, we use existing authentication schemes and also propose an authentication framework as a basic and practical component of a secure VM migration process. In the proposed framework, physical machines (PMs) distributed across diverse data centers can accomplish mutual authentication and establish a shared session key through a Diffie-Hellman exchange, with the public session key between PMs protected by hash-based authentication. A comprehensive security scrutiny shows that the proposed framework can assure the desired security requirements of VM migration.
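
The combination of a Diffie-Hellman session-key exchange with hash-based authentication between PMs can be sketched as follows. This is a minimal illustration with toy parameters, not the authors' implementation; a real deployment would use a standardized large prime group and a cryptographic random source such as `secrets`:

```python
import hashlib
import random

# Toy group parameters for illustration only -- NOT secure.
# A real system would use a 2048-bit MODP group and secrets.randbelow().
P = 0xFFFFFFFB  # largest prime below 2**32
G = 5

def dh_keypair():
    # Private exponent and the corresponding public value g^priv mod p.
    priv = random.randrange(2, P - 2)
    return priv, pow(G, priv, P)

def session_key(priv, peer_pub):
    # Both PMs derive the same shared secret, then hash it into a session key.
    shared = pow(peer_pub, priv, P)
    return hashlib.sha256(str(shared).encode()).hexdigest()

def auth_tag(key_hex, pm_id):
    # Hash-based authentication of a PM identity under the session key.
    return hashlib.sha256((key_hex + pm_id).encode()).hexdigest()
```

Two PMs that exchange public values end up with identical session keys, and the hash tag binds a PM's identity to that key for mutual authentication.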

ICACCI--25.4 15:15 Decision Support System for Diagnosis and Prediction of CRF Using Random Subspace Classification
Ani R (Amrita Vishwa Vidyapeetham Amrita University Amritapuri, India); Greeshma Sasi, Resmi Sankar U and O s Deepa (Amrita School of Engineering, India)

Chronic Renal Failure (CRF) is one of the major diseases affecting human life. The stages of CRF start with loss of renal function and gradually lead to complete failure of all kidney functions. The disease is fatal at its end stage unless a kidney transplant or dialysis, an artificial filtering mechanism, is performed, so early prediction of the disease is very important to save lives. Machine learning, a branch of artificial intelligence, uses a variety of techniques to learn from complex datasets and is widely used in the medical field for disease prediction and prognosis. The objective of this work is to develop a clinical decision support system using machine learning techniques. In this paper, classification techniques such as back-propagation neural networks (BPN), probability-based Naive Bayes, the LDA classifier, the lazy learner K-Nearest Neighbour (KNN), tree-based decision trees, and random subspace classification are analyzed. The accuracies found are 81.5%, 78%, 76%, 90%, 93% and 94% respectively on a dataset collected from the UCI repository containing 25 attributes and 400 instances.
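
The random subspace idea, which scored highest here, trains each base learner on a random subset of the attributes and lets the ensemble vote. A minimal sketch with 1-NN base learners (my own toy illustration, not the paper's pipeline or dataset) might look like:

```python
import random

def nn1_predict(train, query, feats):
    # 1-NN on the chosen feature subset; train is a list of (vector, label).
    def dist(sample):
        return sum((sample[0][f] - query[f]) ** 2 for f in feats)
    return min(train, key=dist)[1]

def random_subspace_predict(train, query, n_feats, n_learners=15, subspace=2, seed=7):
    # Each learner sees a random feature subset; majority vote decides.
    rng = random.Random(seed)
    votes = {}
    for _ in range(n_learners):
        feats = rng.sample(range(n_feats), subspace)
        lab = nn1_predict(train, query, feats)
        votes[lab] = votes.get(lab, 0) + 1
    return max(votes, key=votes.get)
```

The same scheme works with any base classifier; decision trees are the usual choice in random subspace ensembles.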

ICACCI--25.5 15:30 A Random Forest Approach for Rating-based Recommender System
Ajesh A, Jijin PS and Jayashree Nair (Amrita School of Engineering, India)

Recommender systems have emerged as an integral part of online shopping sites, as they promote sales. They recommend intuitive products based on users' preferences, which solves the problem of information overload. A recommender system constitutes an information filtering mechanism that filters vast amounts of data. Algorithms such as SVD, KNN and Softmax Regression have been used in the past to form recommendations. In this paper we propose a system that uses clustering and random forests as multilevel strategies to predict recommendations based on users' ratings, while targeting users' mindsets and current trends. The results have been evaluated with the help of RMSE (Root Mean Square Error), and feasible performance has been achieved.
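
The RMSE metric used for evaluation is simply the square root of the mean squared difference between observed and predicted ratings; a small helper makes the definition concrete:

```python
import math

def rmse(actual, predicted):
    # Root Mean Square Error between observed and predicted ratings.
    assert len(actual) == len(predicted) and actual
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted))
                     / len(actual))
```

Lower values are better; a perfect predictor scores 0, and RMSE penalizes large rating errors more heavily than mean absolute error does.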

ICACCI--25.6 15:45 Attribute Reduction Based Anomaly Detection Scheme by Clustering Dependent Oversampling PCA
Asha Ashok (Amrita Vishwa Vidyapeetham, India); Smitha S and Kavya Krishna M h (Amrita School of Engineering, India)

Anomaly detection refers to the task of estimating and finding patterns that do not comply with the general behavior of data. A range of assumptions are made so as to differentiate between normal and deviated data instances. This paper describes a solution approach to this problem using two steps: an important preprocessing phase and an anomaly detection phase. For the preprocessing phase, we use two methods: the Recursive Feature Elimination (RFE) method and the Random Forest Ensemble (RF-Ensemble) method. For the anomaly detection phase, we use a clustering-based oversampling PCA (os-PCA) methodology, with k-median clustering used for this purpose. The technique was implemented and tested on various standard datasets such as Pima and Splice. The results were also compared with existing state-of-the-art methods in this field, such as online oversampling PCA, naive oversampling PCA, decremental PCA, Local Outlier Factor, angle-based outlier detection and median-based outlier detection. The test results confirm that the proposed approach outperforms all the other methods on the basis of accuracy, AUC scores, etc.

ICACCI--25.7 16:00 Challenge- Response Generation Using RO-PUF with Reduced Hardware
Abhishek Kumar (Lovely Professional University, India)

Cryptography seeks to generate secret keys from integrated circuits. A silicon device with uncontrollable yet measurable output characteristics can be turned into a physically unclonable function (PUF). A multiplexer-based PUF works on the unpredictable delay difference between two parallel paths, while a ring-oscillator-based PUF works on the frequency deviation produced by random variation in the CMOS fabrication process. Hardware-based intrinsic features must be included to generate a random response, which is fuzzified to generate a reliable secret key. Our design incorporates a PUF based on circuit variation instead of process variation; a PISO shift register reduces the hardware compared with the existing RO-PUF. A 4-bit challenge generates a 16-bit random response, with measured reliability in terms of an intra-chip Hamming distance close to the mid value.

ICACCI--25.8 16:15 Multiuser Detection over Generalized-K Fading Channels using Two-Stage Initialized Teaching Learning Based Optimization
Lakshmi Manasa Gondela (University College of Engineering Kakinada, India); Vinay Kumar Pamula (University College of Engineering Kakinada & Jawaharlal Nehru Technological University Kakinada, India); Tipparti Anil Kumar (Department of Electronics and Communication Engineering SVS Group of Institutions Hanamkonda, India 506015)

A robust multiuser detection technique for direct sequence-code division multiple access (DS-CDMA) systems over generalized-K (GK) fading channels, using a teaching learning based optimization algorithm with two-stage initialization, is proposed. In DS-CDMA systems, multiuser detection techniques are applied to combat multiple access interference (MAI) and ambient noise. Experimental results have shown that ambient noise is non-Gaussian and impulsive in nature, which degrades the performance of the system substantially. The DS-CDMA user signals are often transmitted over channels that introduce fading and shadowing. In this paper, we develop least-squares (LS), Huber and Hampel M-estimation based multiuser detection techniques for the joint detection of DS-CDMA signals in the presence of MAI, impulsive noise modeled by a Laplace distribution and channel fading modeled by a GK distribution. The teaching learning based optimization (TLBO) with two-stage initialization (TSI) method is used to minimize a penalty function that is a less rapidly increasing function of the residuals. The average bit error rate (BER) is computed to assess the performance of the TSI-TLBO detector. Simulation results show that the proposed robust technique offers significant performance gains with increasing signal-to-noise ratio (SNR) and diversity order in the presence of heavy-tailed impulsive noise.
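
The appeal of Huber-style M-estimation in impulsive noise is that it down-weights large residuals instead of squaring them. A minimal sketch of a Huber location estimate via iterative reweighting (my own illustration of the general principle, not the paper's detector) shows the effect:

```python
def huber_location(xs, k=1.345, iters=50):
    # Iteratively reweighted Huber estimate of location: residuals larger
    # than k get weight k/|r| instead of 1, clipping outlier influence.
    mu = sorted(xs)[len(xs) // 2]  # start from (an element near) the median
    for _ in range(iters):
        w = [1.0 if abs(x - mu) <= k else k / abs(x - mu) for x in xs]
        mu = sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)
    return mu
```

On data contaminated by a single impulsive sample, the Huber estimate stays near the bulk of the data while the plain mean is dragged away, which is exactly why LS detection degrades in heavy-tailed noise.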

ICACCI--25.9 16:30 MSVM-based Classifier for Cardiac Arrhythmia Detection
Nasreen Sultana (Bhoj Reddy Engineering College for Women & Vinay Nagar, Santhosh Nagar Cross Roads, Saidabad, Hyderabad); Yedukondalu Kamatham (CVR College of Engineering & Vastunagar, Mangalpally V, Ibrahimpatnam M R R Dist Hyderabad, India)

In this paper, an authentic and efficacious method for cardiac arrhythmia classification using a Multiclass Support Vector Machine (MSVM) is presented. The authors consider the classification of 6 beat types: normal sinus rhythm (N), Premature Ventricular Contraction (PVC), Right Bundle Branch Block (RBBB), Left Bundle Branch Block (LBBB), Tachycardia (TA) and Bradycardia (BR). A Radial Basis Function (RBF) kernel with 5-fold cross validation and zero offset is used for adjusting kernel values. A total of 24 ECG records are used to collect the different types of beats. The features fed to the classifier were the QRS complex, RR interval, R amplitude, S amplitude and T amplitude. The MSVM classifier's performance is measured in terms of accuracy, sensitivity and specificity. The classifier demonstrates its effectiveness and is found to be highly accurate in ECG classification.

ICACCI--25.10 16:45 An approach to localize the blind node and optimizing parameters by using PSO
Marisha Sohar (Chandigarh University, Mohali, India)

In this paper, an optimization technique is proposed to optimize various parameters such as energy and attenuation. Localization of a blind node is accomplished with minimum error under different operating environments. The node localization problem in wireless sensor networks is addressed with the help of the ML estimation technique. We consider the problem of high bandwidth utilization due to multiple links between sensors, where parameters were not optimized according to signal strength. We introduce an optimization technique to localize a blind node in both homogeneous and heterogeneous environments. Based on simulations, we compare the performance of ML estimation and the proposed optimization technique in both environments.
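
Localizing a blind node with PSO amounts to minimizing the mismatch between measured anchor distances and the distances implied by a candidate position. The following is a small sketch of that idea (standard PSO with assumed inertia/acceleration constants, not the paper's exact parameterization):

```python
import math
import random

def pso_localize(anchors, dists, iters=200, swarm=30, seed=1):
    # Minimize the squared range-residual error over candidate 2-D positions.
    rng = random.Random(seed)

    def err(p):
        return sum((math.dist(p, a) - d) ** 2 for a, d in zip(anchors, dists))

    pos = [[rng.uniform(0, 10), rng.uniform(0, 10)] for _ in range(swarm)]
    vel = [[0.0, 0.0] for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=err)[:]
    for _ in range(iters):
        for i in range(swarm):
            for dim in range(2):
                r1, r2 = rng.random(), rng.random()
                # Inertia 0.7 and acceleration 1.5/1.5 are common defaults.
                vel[i][dim] = (0.7 * vel[i][dim]
                               + 1.5 * r1 * (pbest[i][dim] - pos[i][dim])
                               + 1.5 * r2 * (gbest[dim] - pos[i][dim]))
                pos[i][dim] += vel[i][dim]
            if err(pos[i]) < err(pbest[i]):
                pbest[i] = pos[i][:]
                if err(pbest[i]) < err(gbest):
                    gbest = pbest[i][:]
    return gbest
```

With three non-collinear anchors the residual has a unique minimum at the true node position, so the swarm converges there.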

ICACCI--25.11 17:00 Reduce the effect of vignetting on MIMO optical wireless system by reducing BER of channel
Pallavi Nag (Chandigarh University, Mohali, India)

Vignetting is the effect whereby the edges of an image are darker than its center, and it can degrade the performance of MIMO optical wireless communication (OWC) systems. The effect of vignetting on a pixelated image is measured using spatial OFDM. Vignetting leads to attenuation and inter-carrier interference (ICI) in spatial orthogonal frequency division multiplexing. SACO-OFDM (spatial asymmetrically clipped optical OFDM) and SDCO-OFDM (spatial DC-biased optical OFDM) are previous techniques used to reduce the impact of vignetting on signals. Simulation shows that these are the most reliable and efficient techniques compared with earlier ones. BER performance can be enhanced by applying two schemes: vignetting estimation and vignetting equalization. Finally, it is shown that equalized SACO-OFDM with 16-QAM achieves a total data rate similar to that of equalized SDCO-OFDM with 4-QAM, but requires less optical power. This work investigates the effect of vignetting on pixelated MIMO systems with OFDM; we reduce the effect of vignetting by reducing the BER of the channel in the MATLAB simulator.

ICACCI--25.12 17:15 A New Feature Extraction Method for Identification of Affected Regions and Diagnosis of Cognitive Disorders
Vinith Rejathlal (NITC, India); Sarthaj K (National Institute of Technology Calicut, India); Lijiya A and V Govindan (NIT Calicut, India)

Cognitive disorders like Alzheimer's disease progressively disintegrate neurons and their interconnections in the brain, gradually deteriorating cognitive functions. Automated diagnosis is very important in the early diagnosis of cognitive disorders. Early diagnosis allows measures to be taken that help the person to move on. Clinical diagnosis is inefficient, as the symptoms start to manifest only after significant atrophy of the cortical structures; this makes management of the conditions difficult. Recent findings have revealed the potential of neuroimaging as a highly effective tool in the early detection of these disorders, as structural changes in the brain set in well before the appearance of clinical symptoms. The disorders are a reflection of degeneration of the cortical structures and hence can be detected by analysis of structural images of the brain. Therefore, analysis of T1-weighted MRI has become a popular method for early diagnosis of AD. This work proposes a feature extraction method that enables simultaneous identification of the afflicted cortical structures and diagnosis of disorders. The proposed method is based on sparse logistic regression and linear discriminant analysis. The results obtained were better than or comparable with many of the works reported in the literature.

ICACCI--25.13 17:30 Selection of Routing Protocols for Advanced Metering Infrastructure
Sree Lekshmi S (Amrita Vishwa Vidyapeetham, Amrita University, India); Sivraj P (Amrita Vishwa Vidyapeetham, Amrita University & Amrita School of Engineering, India); Sasi K Kottayil (Amrita School of Engineering & Amrita Vishwa Vidyapeetham, India)

Smart meter networks support various smart grid applications such as demand response, dynamic pricing and time-based tariffs, so numerous messages flow through these networks. Routing protocols are necessary for the efficient delivery of such messages and the effective implementation of these applications. This paper discusses the scope of message routing in Advanced Metering Infrastructure (AMI) and studies the possibility of using wireless sensor network routing protocols in AMI. The metrics that decide the selection of routing protocols for a futuristic AMI scenario are brought out and analysed. The performance of routing protocols such as AODV, DSR, DSDV and AOMDV is compared, based on selected metrics, in a futuristic AMI scenario through simulation in ns2.

ICACCI--25.14 17:45 Comparative Study of two Pseudo Chaotic Number Generators for Securing the IoT
Ons Jallouli and Mohammed Abutaha (University of Nantes, France); Safwan El Assad (École Polytechnique de l'Université de Nantes & IETR Laboratory, France); Maryline Chetto (Universite de Nantes CNRS & LS2N Lab, France); Audrey Queudet (University of Nantes, France); Olivier Deforges (IETR / INSA Rennes, France)

The extremely rapid development of the Internet of Things brings growing attention to the information security issue. The realization of cryptographically strong pseudo random number generators (PRNGs) is crucial in securing sensitive data; they play an important role in cryptography and in network security applications. In this paper, we carry out a comparative study of two pseudo chaotic number generators (PCNGs). The first pseudo chaotic number generator (PCNG1) is based on two nonlinear recursive filters of order one, using a skew tent map (STmap) and a piecewise linear chaotic map (PWLCmap) as nonlinear functions. The second pseudo chaotic number generator (PCNG2) consists of four chaotic maps, namely PWLCmaps, an STmap and a logistic map, coupled by means of a binary diffusion matrix [D]. A comparative analysis of the two PCNGs is carried out in terms of computation time (generation time, bit rate and number of cycles needed to generate one byte) and security.
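
To make the skew tent map concrete: it stretches and folds the unit interval around a break point p, and chaotic orbits of such maps can be quantized and combined into a byte stream. The sketch below is a deliberately simplified toy (two XOR-coupled orbits with assumed seeds and break points), not the PCNG1/PCNG2 designs of the paper:

```python
def skew_tent(x, p):
    # Skew tent map on (0,1): linear ramp up to the break point p, then down.
    return x / p if x < p else (1.0 - x) / (1.0 - p)

def pcng_bytes(n, x0=0.123456, p=0.499, y0=0.654321, q=0.507):
    # Two skew-tent orbits; XOR of their 8-bit quantizations forms the output.
    out, x, y = [], x0, y0
    for _ in range(n):
        x, y = skew_tent(x, p), skew_tent(y, q)
        out.append((int(x * 256) ^ int(y * 256)) & 0xFF)
    return out
```

The stream is reproducible from its seed, yet a tiny seed perturbation quickly yields a different sequence, reflecting the sensitivity to initial conditions that chaotic generators rely on.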

ICACCI--16: Fourth International Symposium on Women in Computing and Informatics (WCI-2016) - Regular Papers

Room: LT-9 (Academic Area)
Chair: Inderpreet Kaur (Director IGEN Edu Solutions India & Guru Nanak Dev Engineering College Ludhiana, India)
ICACCI--16.1 14:30 Diffusion Models and Approaches for Influence Maximization in Social Networks
Tejaswi V (NITK, Suratkal, India); Pv Bindu (Government College of Engineering Kannur & National Institute Technology, Karnataka, India); Santhi Thilagam P. (National Institute of Technology Karnataka & Surathkal, India)

Social Network Analysis (SNA) deals with studying the structure, relationship and other attributes of social networks, and provides solutions to real world problems. Influence maximization is one of the significant areas in SNA as it helps in finding influential entities in online social networks which can be used in marketing, election campaigns, outbreak detection, and so on. It deals with the problem of finding a subset of nodes called seeds such that it will eventually spread maximum influence in the network. This paper focuses on providing a complete survey on the influence maximization problem and covers three major aspects: i) different types of input required ii) influence propagation models that map the spread of influence in the network, and iii) the approximation algorithms suggested for seed set selection. We also provide the state of the art and describe the open problems in this domain.
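
The classic greedy approximation for seed-set selection, surveyed here, repeatedly adds the node with the largest estimated marginal spread, where spread is estimated by Monte-Carlo simulation of a propagation model such as Independent Cascade. A compact sketch (my own illustration with an assumed activation probability, not any specific paper's code):

```python
import random

def simulate_ic(graph, seeds, prob, rng):
    # One Monte-Carlo run of the Independent Cascade model; returns spread size.
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < prob:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_seeds(graph, k, prob=0.3, runs=200, seed=0):
    # Greedily pick the node with the largest estimated marginal influence.
    rng = random.Random(seed)
    seeds = []
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    for _ in range(k):
        def gain(n):
            return sum(simulate_ic(graph, seeds + [n], prob, rng)
                       for _ in range(runs))
        best = max(sorted(nodes - set(seeds)), key=gain)
        seeds.append(best)
    return seeds
```

Because the spread function is monotone and submodular under Independent Cascade, this greedy procedure carries the well-known (1 - 1/e) approximation guarantee.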

ICACCI--16.2 14:45 How Social Media Can Contribute During Disaster Events?
Neha Pandey (People's Education Society Institute of Technology, India); Natarajan S (VTU, India)

During times of crisis, millions of microblogs are generated on social media; in particular, large numbers of tweet messages are posted by users. These can be opinion-oriented or sentimental tweets, or ones that contribute important information. The latter kind plays a vital role in decision making during a crisis situation; such tweets are referred to as situation-awareness tweets. Extraction of situation-awareness information from Twitter is a non-trivial task, as the vocabulary used is usually informal and the presence of shorthand, used for ease of writing, reduces the readability of tweets. Crowdsourcing of data during such a disaster can aid the task of decision making. In this paper, we propose a technique for extracting situation-awareness information using machine learning concepts, along with the creation of an interactive map to locate vulnerable areas during a disaster and relief areas after it.

ICACCI--16.3 15:00 A Preemptive Hybrid Ant Particle Optimization (HAPO-P) Algorithm for Smart Transportation
Vinita Jindal, Vishwaraj Sharma and Punam Bedi (University of Delhi, India)

Due to an exponential increase in the number of vehicles, people face extreme congestion in developing countries like India. This congestion wastes precious time and fuel in unnecessary waiting, either at intersections due to traffic signals or while driving on congested routes. This paper proposes a novel preemptive Hybrid Ant Particle Optimization (HAPO-P) algorithm to minimize overall journey time in VANETs by both reducing waiting at intersections and avoiding congestion en route. The aim is to select a path in peak hours with minimum waiting at the intersections and the least possible congestion en route. Here the best path is selected using the HAPO algorithm and green-light allocation is done by the preemptive algorithm. The proposed HAPO-P algorithm combines the benefits of both the HAPO algorithm and the preemptive algorithm with actual road conditions. The performance of the HAPO-P algorithm is tested on a map of North-West Delhi, India. The results show that the proposed HAPO-P significantly reduces overall journey time as traffic grows, compared with the existing MACO algorithm under its assumptions, and with both the MACO and HAPO algorithms used independently under actual road conditions.

ICACCI--16.4 15:15 LASER: A Novel Hybrid Peer to Peer Network Traffic Classification Technique
G. p. Sajeev (Govt Engineering College Wayanad & Amrita Vishwa Vidyapeetham, India); Lekshmi Nair (Amrita University, India)

The popularity of peer-to-peer (P2P) applications has grown massively in recent times, and P2P traffic contributes considerably to today's Internet traffic. For efficient network traffic management and effective malware detection, P2P traffic classification is indispensable. This paper proposes LASER (LCS-based Application Signature ExtRaction), a novel hybrid network traffic classification technique that classifies P2P traffic into malicious P2P and non-P2P traffic. The proposed classifier analyzes the header information to create a communication module. Further, the signature is extracted from the payload information. We build the classifier by aggregating the information of the header and the payload. The proposed hybrid classifier is analyzed for its performance and the results are promising.
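
The LCS (longest common subsequence) at the heart of such signature extraction finds the longest ordered, not necessarily contiguous, byte pattern shared by payloads. A minimal sketch of the standard dynamic program, plus a toy grouping rule (the `min_ratio` threshold is an assumption of this illustration, not the paper's):

```python
def lcs_len(a, b):
    # Classic O(len(a) * len(b)) dynamic-programming LCS length.
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if a[i] == b[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return dp[m][n]

def common_signature(payloads, min_ratio=0.6):
    # Keep payloads whose LCS with the first payload is long relative to length,
    # i.e. ones likely produced by the same application.
    base = payloads[0]
    return [p for p in payloads[1:]
            if lcs_len(base, p) / max(len(base), len(p)) >= min_ratio]
```

Payloads from the same protocol share long invariant substrings (the signature), so their normalized LCS score stays high while unrelated traffic scores low.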

ICACCI--16.5 15:30 Preventing Packet Dropping Attack on AODV Based Routing in Mobile Ad-Hoc MANET
Neema Soliyal (G B Pant Engineering College Ghurdauri Pauri Uttarakhand, India); Harvindra Bhadauria (Uttarakhand Technical University, India)

A mobile ad hoc network (MANET) is a wireless network that forwards information in the form of packets, either data packets or control packets, from source to destination. It is a collection of mobile devices that can move in any direction, which is why the topology is not fixed, and no pre-existing infrastructure is required to configure a MANET. All devices in the network can communicate with each other within range. Packet dropping and bandwidth attacks are major concerns in mobile ad hoc networks: if sufficient security measures are not in place, attacker nodes significantly degrade the performance of the network. This paper analyses the nature of packet dropping and bandwidth attacks on the AODV routing protocol in MANETs, and proposes a node-bypassing technique to detect such attacks.

ICACCI--16.6 15:45 Traffic Analysis for Smart Grid Networks Using Markov Chains with Autocorrelation Function Settings
Christiane Borges Santos (Instituto Federal de Goiás, Brazil); Fabio da Silva Marques (Instituto Federal de Educação, Ciência e Tecnologia de Goiás, Brazil); Sérgio Araújo (UFG, Brazil); Joao Batista Pereira (Instituto Federal de Goias, Brazil)

This paper is motivated by growing interest in the applicability of power lines as an alternative means of propagating communication signals. It provides an analysis of VoIP (Voice over IP) traffic and data transfer in BPL/PLC (Broadband over Power Line / Power Line Communication) networks using Markov chains, adjusting the autocorrelation function of the traffic. A model based on MMFM (Markov Modulated Fluid Models) is proposed to characterize data and VoIP traffic in smart grid networks. Simulations and comparisons were made with other models such as Poisson and MMPP (Markov Modulated Poisson Process). The results were obtained from experiments performed on low-voltage PLC networks using bandwidth from 4.3 MHz to 20.9 MHz.

ICACCI--16.7 16:00 Parallel Context Aware Recommender System Using GPU and JCuda
Richa Singh (Central University of South Bihar, India); Punam Bedi (University of Delhi, India)

Recommender Systems (RS) are intelligent applications which predict users' preferences from their current interests. Collaborative Filtering (CF) is a widely used recommendation technique. On large-scale data, the collaborative filtering approach is very time consuming, and hence parallel processing can be useful for accelerating the task of recommendation. Context Aware Recommender Systems (CARS) help to generate precise and relevant recommendations by using the specific context of the user. This paper presents a Parallel Context Aware CF Recommender System using JCuda (PCARS) that involves the Graphics Processing Unit (GPU). The proposed algorithm works in two phases: offline and online processing. Two kernels are identified and processed for the offline phase, which reduces processing time drastically. Online processing involves the prediction calculation for items not yet seen by the target user that can be recommended to them. Offloading the prediction calculation to the GPU helps to increase overall system performance significantly. A prototype of the proposed system is developed using JCuda, CUDA and Java technologies for the restaurant domain. The performance of the proposed PCARS system is compared, in terms of processing time, to Collaborative Filtering Recommender Systems (CFRS) without parallel processing using contextual pre-filtering and contextual post-filtering. Experimental results demonstrate a tremendous speedup of the system over a single-core CPU based system.

ICACCI--16.8 16:15 Effective Web Personalization System Based on Time and Semantic Relatedness
G. p. Sajeev (Govt Engineering College Wayanad & Amrita Vishwa Vidyapeetham, India); Ramya P t (Amrita School of Engineering, India)

The key aspect in building a Web personalization system is the user's navigational pattern. However, the navigational pattern alone is insufficient to capture the user's interest and behavior. This paper proposes a novel Web personalization system that accepts timing information and semantic information along with the navigational pattern, and classifies users according to their interests and behavior on the site. The proposed model is validated by constructing a Web personalization model using real and synthetic data, and the results are promising.

ICACCI--16.9 16:30 Sequencing of Refactoring Techniques by Greedy Algorithm for Maximizing Maintainability
Sandhya Tarwani and Anuradha Chug (Guru Gobind Singh Indraprastha University, India)

Software maintainability is the ease with which a software system can be modified to correct faults, improve performance or change other attributes of the source code. Bad smells are symptoms of deeper problems that indicate the need for refactoring, which is the process of changing the internal structure of the software without affecting its external attributes. Applying different refactoring techniques to different parts of a code results in a different maintainability value every time. Therefore, the sequence in which refactorings are applied is important, so that optimal results can be obtained. In this study, we propose an approach for evaluating refactoring sequences with the help of a greedy algorithm. The algorithm selects a locally optimal solution at each stage in the hope of finding a globally optimal solution. Different sequences are generated and applied to the source code to calculate the sum of software maintainability values, and the greedy algorithm helps in finding the optimal sequence from the entire search space. We evaluate the approach on source code containing the god class, long method, feature envy, long parameter list, data clumps, data class, class hierarchy problem, empty catch block, exception thrown in finally block and nested try statement bad smells, which are detected manually. Hence our approach is able to identify a sequence of refactorings, as well as the best refactoring, that increases maintainability and enhances software quality.
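
The greedy selection step can be sketched as follows. The maintainability gains per (smell, refactoring) pair below are hypothetical placeholders; in the paper these values come from recomputing a maintainability metric on the source code, and since applying one refactoring can change the remaining gains, a faithful version would re-measure at every step.

```python
# Hypothetical maintainability gains for illustration only.
gains = {
    ("god class", "extract class"): 0.12,
    ("long method", "extract method"): 0.08,
    ("feature envy", "move method"): 0.05,
    ("data clumps", "introduce parameter object"): 0.03,
}

def greedy_sequence(gains):
    """At each step pick the refactoring with the largest remaining gain
    (locally optimal), hoping the resulting sequence is globally good."""
    remaining = dict(gains)
    sequence, total = [], 0.0
    while remaining:
        best = max(remaining, key=remaining.get)
        sequence.append(best)
        total += remaining.pop(best)
    return sequence, total

seq, total = greedy_sequence(gains)
print(seq)  # refactorings ordered by decreasing gain
```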

ICACCI--16.10 16:45 ELM Variants Comparison on Applications of Time Series Data Forecasting
Sachin Kumar and Shobha Rai (University of Delhi, India); Saibal K. Pal (DRDO, India); Ram Pal Singh (DDUC, Delhi University, India)

Extreme learning machine (ELM), which belongs to the category of randomized algorithms, is a versatile and emerging learning algorithm. ELM has been developed for diverse applications ranging from pattern recognition, function estimation, regression analysis and time series analysis to big data analysis. Unlike feed-forward neural networks, where slow convergence rate, imprecise learning parameters and the presence of local minima are major bottlenecks, this paper addresses these problems using different variants of ELM for big time series data analysis. In ELM and its variants, the hidden node parameters, such as weights and biases, are randomly generated and fixed during the learning process. These parameters together analytically determine the output parameters and their weights for single hidden layer feed-forward neural networks (SLFNs). Learning methods based on ELM tend to give good generalization performance with very fast learning speed. The experimental results obtained on a few large time series datasets show that these new algorithms mostly produce good generalization performance, exhibiting remarkable efficiency, simplicity and relatively fast speed in comparison with other conventional learning methods on benchmark datasets.

ICACCI--16.11 17:00 Comparative Analysis of Concentration Control for Nonlinear Continuous Stirred Tank Reactor
Dipti Tamboli (SVERI's College of Engineering, Pandharpur, India)

Classical linear controllers handling complex nonlinear system dynamics are limited in their applicability to a constrained range of operations if satisfactory performance is to be obtained. To tackle such problems, a nonlinear continuous stirred tank reactor (CSTR) is investigated with two intelligent controllers, namely a Self Tuning Regulator (STR) and a Model Reference Adaptive Controller (MRAC), along with a conventional PID controller and a linear Model Predictive Controller (MPC). Performance measures such as peak overshoot and settling time are evaluated for set-point concentration control of the CSTR. The MATLAB simulation results show that the adaptive controllers, specifically the Self Tuning Regulator, perform better than the other controllers.

ICACCI--16.12 17:15 A Novel and Secure Methodology for Keyless Ignition and Controlling an Automobile Using Air Gestures
Sudhir Rao Rupanagudi and Varsha G Bhat (WorldServe Education, India); Supriya R, Riya Dubey, Suma Sukumar and Srabanti Karmarkar (Atria Institute of Technology, India); Amulya K, Ankita Raman, Devika Indrani and Abhiram Srisai (BNMIT, India); Mahathy Rajagopalan, Vaishnavi Venkatesh and Nupur Jain (R V College of Engineering, India)

The past few years have seen a sudden spurt in the field of human-computer interaction. The days of using a mouse to control a computer are almost over, and people now prefer touch screens and, more recently, air gestures. However, the use of gestures is not limited to computers alone; it finds application in controlling televisions and other home appliances as well. This paper explores one such application, wherein air gestures are used to control automobiles. The paper describes a novel method to not only give directions but also password-protect and use special features of the vehicle, using gestures. The complete algorithm was developed using video processing on MATLAB 2011b and was found to be 3.6 times faster than its predecessor algorithms. It was also tested in real time using a robot prototype, and satisfactory results were obtained.

ICACCI--24: ICACCI-24: Symposium on Women in Computing and Informatics (WCI-2016) - Short Papers

Room: LT-9 (Academic Area)
Chair: Rajbir Kaur (The LNM Institute of Information Technology, India)
ICACCI--24.1 14:30 Detection of Fuzzy Duplicates in High Dimensional Datasets
Raksha N and Raj Alankar (Visvesvaraya Technological University, India)

Record duplicate detection is one of the most crucial tasks in industry today. Records may contain misspelled words, text in different formats and repeated data. To overcome this issue of fuzzy duplicates, progressive methods, namely the Progressive Sorted Neighborhood Method (PSNM), Progressive Blocking (PB) and record linkage, have been proposed. However, they face issues with respect to scalability, which is addressed here using an Improved Indexing Technique (IIT) for fast duplicate detection. The results obtained show the accuracy, precision and recall for each of these techniques and prove that the Improved Indexing Technique provides maximum efficiency.

ICACCI--24.2 14:45 Public Health Allergy Surveillance Using Micro-blogs
Kruti Nargund (VTU); Natarajan S (VTU, India)

In recent times, a lot of data is being generated on the Internet, especially in micro-blogs such as Twitter. 500 million tweets are generated every year, and the volume keeps increasing exponentially. It is observed that people tweet about a wide range of subjects, including politics, weather, sports and health. Our study focuses on analyzing health-related tweets, with the main focus on allergy. We classify the tweets into two classes: one containing tweets reporting actual incidents of allergy, and the other containing awareness tweets. Classification is done using different classifiers such as Naïve Bayes, Naïve Bayes Multinomial Model, Support Vector Machine and K-Nearest Neighbor (KNN), and it is found that the KNN classifier's precision is better than that of the other classifiers. The study also includes extracting the types of allergy from the collected tweets. The trained classifier model is then used to classify live streaming tweets from different locations, and the location-wise spread of allergy is analyzed using the geo-codes of different geographic locations.

ICACCI--24.3 15:00 Survey on Meta Heuristic Optimization Techniques in Cloud Computing
Shishira Sr, Kandasamy A and Chandrasekaran K (National Institute of Technology Karnataka, India)

Cloud computing has become a buzzword in the area of high-performance distributed computing, as it provides on-demand access to shared resources over the Internet in a self-service, dynamically scalable and metered manner. To reap its full benefits, much research is required across a broad array of topics. One of the important research issues that needs to be addressed for efficient performance is scheduling. The goal of scheduling is to map jobs to resources while optimizing one or more objectives. Scheduling in cloud computing belongs to a category of problems known as NP-hard, due to the large solution space, and thus it takes a long time to find an optimal solution. In a cloud environment, it is preferable to find a sub-optimal solution in a short period of time. Metaheuristic-based techniques have been shown to achieve near-optimal solutions within reasonable time for such problems. In this paper, we provide an extensive survey of optimization algorithms for cloud environments based on three popular metaheuristic techniques: Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO) and Genetic Algorithms (GA), and a novel technique: the League Championship Algorithm (LCA).

ICACCI--24.4 15:15 OSEECH: Optimize Scalable Energy Efficient Clustering Hierarchy Protocol in Wireless Sensor Networks
Shiv Singh and Bhumika Gupta (Gbpec Pauri, India)

Wireless Sensor Networks (WSNs) are networks in which no wires or physical links are required for communication. Nodes, also called sensor nodes, are deployed in the network, and their task is to enable communication in the network. These nodes have an energy value on which network performance depends: the higher the residual energy of the nodes, the better the performance. Network performance also depends on the round in which the first node dies and the round until which the last node remains alive. Much advancement has been made in the field of energy-efficient protocols to improve the energy efficiency of networks: the clustering process was introduced, techniques for cluster-head selection were developed, and more are still to come. The two most important factors in designing energy-efficient protocols are the distance between nodes and the energy values of nodes. Conventional protocols such as LEACH, HEED, DEEC and SEECH are designed to improve energy efficiency in WSNs. This paper proposes one such protocol, an upgrade of the SEECH protocol. A weightage factor is introduced in the proposed technique to account for both the energy value and the distance of nodes. The results are then optimized using a Genetic Algorithm, and are demonstrated using MATLAB.

ICACCI--24.5 15:30 Energy Efficient Protocol for Cognitive Sensor Networks Based on Particle Swarm Optimization
Anuja Rawat and Shashi Verma (G. B. P. E. C. Pauri Garhwal, India)

Increasing demand for bandwidth has highlighted the spectrum utilization challenges of wireless sensor networks. Recent advancements in wireless communication have introduced cognitive radio technology into wireless sensor networks in order to overcome the spectrum scarcity problem. Cognitive wireless sensor networks, coexisting with wireless sensor networks, also depend on limited energy resources. Besides typical sensing tasks, spectrum sensing and hand-off along with bursty traffic demand extra energy from sensor nodes, so efficient energy utilization is one of the major concerns for these networks. In this paper, the Fan Access Protocol (FAP) is introduced for cognitive networks. This protocol effectively minimizes energy consumption and thus prolongs network lifetime using multi-hop routing based on optimal shortest paths. Simulation results show that FAP performs better in terms of energy utilization and network lifetime than Fan-Shaped Clustering (FSC). A theoretical model for cognitive sensor networks is also presented, which uses coordinator nodes for spectrum sensing and data dissemination.
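
The optimal-shortest-path routing that FAP relies on can be illustrated with a standard Dijkstra computation. The network graph and per-hop energy costs below are hypothetical; the abstract does not specify the paper's actual cost model.

```python
import heapq

# Toy network: edge weights model per-hop energy cost (hypothetical values).
graph = {
    "sink": {"a": 2, "b": 5},
    "a": {"sink": 2, "b": 1, "c": 4},
    "b": {"sink": 5, "a": 1, "c": 1},
    "c": {"a": 4, "b": 1},
}

def dijkstra(graph, src):
    """Classic Dijkstra: minimum-cost path from src to every reachable node."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

print(dijkstra(graph, "c"))  # cheapest multi-hop route cost from node c
```

Here node "c" reaches the sink most cheaply via b and a rather than over its direct heavy links.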

Saturday, September 24

Saturday, September 24 11:00 - 13:30 (Asia/Kolkata)

ICACCI--27: ICACCI-27: Symposium on Intelligent Informatics (ISI'16)/Symposium on Advances in Applied Informatics (SAI'16)

Room: LT-9 (Academic Area)
Chair: Maryline Chetto (Universite de Nantes CNRS & LS2N Lab, France)
ICACCI--27.1 11:00 Image Convolution Optimization Using Sparse Matrix Vector Multiplication Technique
Bipin B (Amrita University, India); Jyothisha J Nair (Amrita Vishwa Vidyapeetham, India)

Image convolution is an integral operator in the field of digital image processing. Any operation on images, be it edge detection, image smoothing or image blurring, involves convolution. In mathematical terms, convolution is an overlap of two signals that produces a third signal. In image processing, convolution is generally done using a mask known as the kernel: as the values of the kernel change, the operation on the image changes, and each operation has a different kernel. In the conventional way of doing image convolution, the number of multiplications is very high, so the time complexity is also high. In this paper, a new and efficient method is proposed to perform convolution on images with lower time complexity. We exploit the sub-matrix structure of the kernel matrix and systematically assign the values to a new H matrix. Since the produced H matrix is a sparse matrix, the output is realized using the Sparse Matrix Vector Multiplication (SMVM) technique with the Compressed Row Storage (CSR) format. Using the CSR format with SMVM, the convolution process is 3.4 times and 2.4 times faster than conventional methods for image smoothing and edge detection operations, respectively.
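
A minimal sketch of the CSR-based Sparse Matrix Vector Multiplication at the heart of the proposed method is shown below. The small H matrix is a made-up stand-in for the sparse matrix the authors construct from the kernel; only the CSR storage and multiply are illustrated.

```python
def to_csr(dense):
    """Convert a dense matrix to Compressed Row Storage:
    (nonzero values, their column indices, row start pointers)."""
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))
    return values, col_idx, row_ptr

def csr_matvec(values, col_idx, row_ptr, x):
    """Sparse matrix-vector multiply: only nonzero entries are touched."""
    y = []
    for i in range(len(row_ptr) - 1):
        s = 0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            s += values[k] * x[col_idx[k]]
        y.append(s)
    return y

# A tiny sparse "H" matrix (hypothetical values) applied to a flattened patch.
H = [[1, 0, 0, -1],
     [0, 2, 0, 0],
     [0, 0, 0, 3]]
x = [4, 5, 6, 7]
print(csr_matvec(*to_csr(H), x))  # [-3, 10, 21]
```

The saving comes from skipping the zero entries entirely, which is why the sparser the H matrix, the larger the speedup over dense convolution.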

ICACCI--27.2 11:15 An Implementation and Performance Evaluation of An Improved Chaotic Image Encryption Approach
Gaytri Bakshi (KITM, India); Shelza Suri and Ritu Vijay (Banasthali University, India)

With the advent of digitization, digital data has taken over the market at a tremendous rate and turned the world digital, with digital data considered the atomic mode of communication. Digital data, which may be in the form of text or images, is transmitted over networks in countless arenas such as the armed forces, secret agencies, medical sciences, entertainment and many more. With digital data as a fundamental resource, today's recklessly growing multimedia generation has blended technology with skills to form various applications for digital devices. But due to unscrupulous elements of society, various malicious attempts are made that instil fear in people; thus security is a crucial need for today's technology. Addressing these current necessities of society, this paper formulates algorithms that can be used to protect digital images. The paper scrutinizes chaotic encryption techniques and articulates algorithms using this approach. Detailed results in terms of security exploration and implementation are provided.

ICACCI--27.3 11:30 Design of a GPS-Enabled Multi-modal Multi-controlled Intelligent Spybot
Anushka Gupta (Jaypee University of Information Technology, Waknaghat, Solan, India); Shailesh Tanwar (Mukesh Patel School of Technology Management and Engineering, NMIMS, Mumbai, Maharashtra, India); Vishvander Singh (Army Institute of Technology, Pune, Maharashtra, India); Lavanya Matta (Indian Institute of Information Technology, Chittoor, Sricity, A. P., India); Vinay Kumar Mittal (KL University & KLEF, India)

Intelligent robots need to have the ability to adjust their actions, state and behavior (i.e., response) according to inputs that may be influenced by their immediate environment and/or situations. Hence, intelligent robots have diverse applications in domains such as security, automation, industry and agriculture. Such robots, when used for surveillance or spying operations by military or police forces, are called spybots. This paper aims to highlight the need for and importance of developing systems whose functions, response and operation can be controlled remotely, using multiple controls and commands given through multiple modalities. As a live physical demonstration of the proposed concepts, a prototype smart spybot is developed. The prototype spybot can be controlled using multiple modalities such as text, touch, speech and gestures. It can be remotely operated using an application running on an Android-based smartphone, a website, or an interactive program running on a laptop, each connected by wire or wirelessly. The prototype spybot also has decision-making capability and an obstacle-avoidance feature. It is GPS-enabled and can also be used as a tracker. This intelligent spybot can directly send live video feeds and the spied data to a website, along with GPS coordinates. Since the prototype spybot is speech- and gesture-enabled, it can also be used effectively by people with restricted physical abilities. The initial performance results and trials are encouraging, and the prototype spybot can potentially be used for a range of applications.

ICACCI--27.4 11:45 Evaluation of Mobile Learning in Workplace Training
Muneer Dar (National Institute of Electronics & Information Technology, Srinagar, India); Sameer Ahmad Bhat (Kuwait College of Science and Technology, Saudi Arabia)

At present, organizations use either wired or wireless devices to exploit Internet services for effectively training their employees to achieve greater progression and production in their businesses. However, buying desktop PCs or mobile devices such as laptops or tablet computers for every employee may exceed an organization's budget. This leads us to take advantage of employee-owned personal mobile devices, which help organizations reduce hardware costs. To evaluate the effectiveness of such devices, this study evaluates the learning performance of employees who acquire workplace training through their own personal mobile devices at their workplaces. For the purpose of evaluation, an experiment was conducted on two groups of learners. The observed results indicate that learning via mobile devices significantly promotes learning value and effects, and increases approval levels for mobile technology adoption in the employees' learning process. The results are useful for researchers involved in improving the development process of mobile learning projects dedicated to organizational workplace training needs.

ICACCI--27.5 12:00 Constraint Based Recipe Recommendation Using Forward Checking Algorithm
Kirti Pawar (MUMBAI & RAIT, India)

With advanced research in medical science and technology, the quality of our life has improved and life span has increased. Lifestyle and diet habits have changed and work pressure has increased, resulting in a number of diseases such as diabetes, blood pressure and heart problems. These diseases can be controlled to a certain extent by avoiding an uneven and inappropriate diet; hence it is important to understand good diet and intake habits. Our objective is to recommend recipes that maintain the health condition of people with and without disease, and that satisfy the user's needs. To make recommendations, the recommender system uses the user's profile, favorite items and recipe details; this explicit knowledge satisfies the constraints for the recommendation. In this paper we propose a recipe recommendation system using a constraint knowledge-based recommendation method (CKRM) and the forward checking algorithm. The proposed system suggests recipes for diabetes and provides improved precision and recall, showing that it is more efficient than the existing system. A reason to choose this method is that it does not suffer from the ramp-up problem. The constraints that satisfy the user's requirements yield good recipe recommendations.

ICACCI--27.6 12:15 Transition of Indian ICT Processes to Smart E-Services - Way Ahead
Arpit Jain Singhai (National Informatics Centre, India); Danish Faizan (NIC-INDIA, India)

In the past 10 years, the use of Information and Communication Tools (ICT) in governance has increased significantly. From e-Gov 1.0 to e-Gov 2.0, and from Mission Mode Projects to the Digital India mission, India has witnessed a revolutionary change in the way technology is used to bring transparency to governance. From long queues at railway reservation counters to over 1.5 million tickets sold online via the IRCTC web portal, we are witnessing a mesmerizing transformation in Indian society. As India progresses towards an era of improved rural Internet networks and a significantly large young population readily accepting and approving modern e-governance tools, India stands on a road of rapid urbanization supplemented and supported by newer e-Governance tools and processes. In this paper, we discuss the possible progress of e-Governance in India along with the challenges it will face. The paper builds upon the existing e-Gov 2.0 framework and proposes the characteristics of the next generation of e-Governance, "e-Gov 3.0". A comparative study of the first phase and the ongoing second phase of the digital revolution, benchmarked against leading countries of the world, is also included for a better understanding of the gaps and areas of improvement in the implementation of e-Governance tools.

ICACCI--27.7 12:30 The Impact of Globalization on the Perception of Beauty Among South Indian Women
Rajiv Prasad (Amrita Vishwa Vidyapeetham University & Amrita School of Business, India); Lakshmi Unnikrishnan (Amrita Vishwa Vidyapeetham University, India)

India is a country that has been influenced significantly by foreign cultures through foreign rulers, trade interactions and other cultural exchanges. As a result of these influences, the concept of beauty in India has become intertwined with fairness of skin colour. More recently, since the economic reforms that started in 1992, globalization has influenced Indian socio-cultural life via an increased influx of foreign ideas through various media, and increased business and trade with other parts of the world without barriers. Because of this upsurge in Western influence on India, the various media spaces are filled with Western pop stars, actors, models and others who do not belong to the Indian racial stock and who present unrealistic images of beauty, which are influencing the Indian concept of beauty. People born before the economic reforms are less likely to be influenced by this trend than youngsters born after, as the latter were exposed more to these global influences in their formative period. This study examined the various factors that influence the beauty perception of South Indian women born both before and after the economic reforms started in India.

The study was conducted in South India, as the people here are mostly of a dusky skin tone and medium body size and height, and so are more likely to be affected if the cultural norm of beauty shifts towards greater fairness of skin tone as well as lean and tall body features. Also, no such studies have been conducted on this topic in the South Indian context. The data for the study was collected using an online questionnaire filled in by 333 women from different parts of South India.

ICACCI--27.8 12:45 Effect of Hofstede's Dimensions on Skin Care Advertising at the Micro Level: A Content Analysis of Olay's Indian and US Digital Ads
Parvathy Saseendran Nair (Amrita Vishwavidyapeetam, India); Chitra Ramakrishnan (Amrita Vishwa Vidyapeetham, India)

In this era of globalization, organizations are increasingly expanding overseas in search of new markets and greater profitability. Colonizing new markets also means mastering intercultural communication, and the most important tool for strategic communication used by global businesses today is advertising. Since strategies for advertisements are increasingly developed internationally, it has become imperative to understand a country's cultural characteristics such as language, lifestyle, norms, education and attitudes. In this context, Professor Geert Hofstede created a map of different national value structures and their consequences for behavior across cultural frontiers. This study examines how Hofstede's cultural dimensions affect individual ad units and how applicable the dimensions are at the micro level. Two ads from the skin care segment released in 2015 as part of a successful global campaign, one from India belonging to Hofstede's Family cluster and one from the US belonging to the Contest cluster, were chosen for analysis based on the popularity of the brand and sub-brand.

ICACCI--27.9 13:00 An Approach Towards Quality Assessment of Turmeric Rhizomes Using Surface Thermal Profiles
Navreet Saini (CSIR-Central Scientific Instruments Organisation, India); Amitava Das (CSIO, India)

Thermal imaging is an emerging non-invasive method that is increasingly being studied for quality evaluation of agricultural and food products. There is limited reported literature on the quality assessment of turmeric rhizomes using machine vision techniques. In our study, we used passive thermography to obtain surface thermal profiles of turmeric rhizomes at different time instants after the samples were heated for an hour in daylight conditions. The obtained thermal images were processed and the thermal profiles of the samples were computed. After normalization of the thermal profiles, it was observed that the healthy rhizome samples had significantly different thermal profiles from the shriveled ones. The results are promising, and further work is required to correlate the thermal profiles of the turmeric rhizomes with their chemical composition.

ICACCI--27.10 13:15 Automatic Classification of Turmeric Rhizomes Using the External Morphological Characteristics
Arshpreet Kaur (Punjabi University, India); Navreet Saini (Central Scientific Instruments Organisation); Ranjit Kaur (Punjabi University, India); Amitava Das (CSIO, India)

Machine vision is a computer-based decision-making process useful for reducing manual inspections. Accurate classification of agricultural products can be achieved using external morphological details. This study presents a machine vision system that classifies turmeric rhizomes into three defined classes based on their origin. The objective of this paper was to classify varieties of turmeric rhizomes using a neural network based classifier. Visible images of the sample rhizomes were acquired, and imaging techniques such as histogram thresholding and morphological operations were performed on them. External morphological features such as shape and size showed promising results in automatically classifying the turmeric rhizomes.

ICACCI--27.11 13:30 Identification of Causal Relationships Among Clinical Variables for Cancer Diagnosis Using Multi-Tenancy
Marasanapallekalle Krishna and Sanjay Singh (Manipal Institute of Technology, India)

Cancer causes more deaths than AIDS, tuberculosis and malaria combined. Breast cancer in particular kills more than 40,000 women and 440 men every year in the U.S.A. Over many years, various data mining studies have tried to predict cancer, but there are only a few studies on finding causal relationships among the clinical variables causing cancer. Such relationships also provide theoretical guidance for cancer diagnosis and treatment. As there are many classifiers, learners and techniques for finding causal relationships, it is very difficult to find attributes with a very strong positive relation to cancer.

In this paper, we apply a Multi-Tenancy strategy based on logical databases, where the whole database is divided into four tenants, and propose a graphical structure of the key dependency attributes that cause cancer. We use the Pearson Product Moment Correlation Coefficient (PPMCC) to measure the strength of the linear relationship between attributes, and kappa analysis to find the efficiency of each tenant; the tenant with the highest kappa measure is treated as the most efficient. The proposed algorithm applies a search over the conditional mutual information matrix to identify attributes that are dependent, and represents the relationships between attributes using a directed acyclic graph. Thus, instead of finding general relationships, the method finds very strong positive relationships, which improves the accuracy of diagnosing cancer-causing attributes.
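
The PPMCC score used to rate attribute pairs can be sketched directly from its definition. The clinical attribute names and values below are hypothetical placeholders for illustration only.

```python
from math import sqrt

def ppmcc(x, y):
    """Pearson Product Moment Correlation Coefficient between two attributes."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical clinical attribute values for illustration only.
tumor_size = [1.0, 2.0, 3.0, 4.0]
marker     = [2.1, 3.9, 6.2, 8.0]
print(round(ppmcc(tumor_size, marker), 3))
```

Values close to +1 indicate the very strong positive relationships the paper is after; values near 0 indicate no linear relationship.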

ICACCI--27.12 13:45 An Empirical Evaluation of Local Descriptors in Object Recognition
Ritu Rani (Indira Gandhi Delhi Technical University for Women, Delhi, India); Ravinder Kumar (HMR Institute of Technology & Management & GGS IPU, India); Amit Prakash Singh (Guru Gobind Singh Indraprastha University, India)

Feature descriptors play a vital role in computer vision applications and object recognition. An empirical evaluation of various local descriptors used for object recognition is presented in this paper. The COIL-100 dataset is used for the experimental analysis. The descriptors are evaluated on the basis of two parameters: the number (size) of features extracted and the run time taken for their execution. Experimental results using these descriptors are presented along with conclusions and future research directions.

ICACCI--28: ICACCI-28: Fourth International Symposium on Women in Computing and Informatics (WCI-2016) - Regular Papers

Room: LT-1 (Academic Area)
Chair: Inderpreet Kaur (Director IGEN Edu Solutions India & Guru Nanak Dev Engineering College Ludhiana, India)
ICACCI--28.1 11:00 Video Clip Retrieval Using Local Phase Quantization
K P Uma, B H Shekar and Raghurama Holla, K (Mangalore University, India)

The steady rise in the use of smartphones equipped with the latest features and access to media capturing and editing devices has caused a growth in multimedia content, of which videos form a part. To harness the information present in videos, a robust and accurate content-based video retrieval system is required. This paper presents a similar-video-clip retrieval system using a texture feature called the Local Phase Quantization (LPQ) descriptor. This texture descriptor is formed by taking the histogram of codewords, where each codeword is formed by quantizing the phase information computed within a window around each pixel in the image. The use of phase information makes the LPQ descriptor robust to illumination and blur variations. The robustness and suitability of the proposed approach are demonstrated through experiments using videos from the TRECVID 2007 dataset.

ICACCI--28.2 11:15 Bioinspired Memory Model for HTM Face Recognition
Alex James (IIITMK, India); Olga Krestinskaya (Nazarbayev University, Kazakhstan)

Inspired by the working principle of human memory, we propose a new algorithm for storing HTM features detected from images. The resulting features from the training set require less memory than the existing HTM training set. The proposed features are tested on a face recognition problem using the benchmark AR dataset. The simulation results show that the proposed algorithm gives higher face recognition accuracy in comparison with conventional methods.

ICACCI--28.3 11:30 2L-DWTS - Steganography Technique Based on Second Level DWT
Punam Bedi, Veenu Bhasin and Tarun Yadav (University of Delhi, India)

Steganography is a technique used to communicate a secret message hidden in an innocuous cover. In this paper, a new image steganography technique, 2L-DWTS, is proposed. 2L-DWTS embeds the secret message in the higher-frequency components among the second-level DWT components of the cover image. Coefficients in the low-frequency (LL) components at both DWT levels are left untouched, improving the image quality. The stego images obtained using 2L-DWTS are very similar to the corresponding cover images, making the technique secure. A message as large as 28.12% of the cover image size can be embedded using this technique. The extraction method does not need the original cover image or any separate key to extract the message. Experiments conducted on monochrome as well as colored BMP and JPEG images show that this technique produces a high-capacity, secure, less perceptible stego image. Various metrics - MSE, PSNR, NC, NCC, NAE and IF - are used to establish this fact.
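
To illustrate the embed-in-detail/keep-LL idea, here is a much-simplified sketch: a one-level, one-dimensional Haar DWT with a parity-based embedding rule. The paper works at the second DWT level of 2-D images and its actual embedding rule is not given in the abstract, so both the transform depth and the parity scheme here are assumptions for illustration.

```python
def haar_dwt(signal):
    """One-level Haar DWT: (low-pass approximation, high-pass detail)."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse one-level Haar DWT."""
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

def embed(detail, bits):
    """Embed one bit per detail coefficient by forcing integer parity."""
    out = []
    for c, b in zip(detail, bits):
        c = int(round(c))
        if c % 2 != b:
            c += 1
        out.append(float(c))
    return out + detail[len(bits):]

def extract(detail, n):
    """Read the parity of the first n detail coefficients back out."""
    return [int(round(c)) % 2 for c in detail[:n]]

cover = [10, 12, 14, 13, 16, 15, 11, 10]  # one row of a toy cover image
approx, detail = haar_dwt(cover)          # approximation left untouched
stego = haar_idwt(approx, embed(detail, [1, 0, 1]))
print(extract(haar_dwt(stego)[1], 3))     # [1, 0, 1]
```

Only the detail (high-frequency) coefficients are perturbed, which is why the low-frequency content, and hence the visible image quality, is preserved.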

ICACCI--28.4 11:45 Clustering of Words Using Dictionary-Learnt Word Representations
Remya R.K. Menon (Amrita Vishwa Vidyapeetham, Amrita University & Amrita School of Engineering, Amritapuri, India); Gargi S and Samili S (Amrita School of Engineering, India)

Language is our means of communication through words, and it helps us gain a better understanding of the world. Natural Language Processing (NLP) concentrates on building systems that allow computers to communicate with people using everyday language. One of the challenges inherent in NLP is teaching computers to recognize the way humans learn and use language. Word representations make it possible to capture the syntactic and semantic properties of words. The main purpose of this work is to find sets of words with similar semantics by matching the contexts in which the words occur. We explore a new method for learning word representations using sparse coding, a technique usually applied to signals and images, and we use an efficient sparse coding algorithm, Orthogonal Matching Pursuit, to generate the sparse codes. The input term vectors are classified by grouping terms that share the same sparse code into one class. The K-Means algorithm is also used to group the input term vectors by semantic similarity. Finally, this paper compares the two approaches to identify the better representation. The results show an improved set of similar words using sparse codes compared to K-Means, because SVD is used as part of dictionary learning, which captures the latent relationships between words.
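The grouping step the abstract describes (words sharing a sparse-code pattern fall into one class) can be sketched with a tiny greedy Orthogonal Matching Pursuit. The dictionary, vectors and parameter k below are illustrative assumptions, not the paper's learned dictionary.

```python
import numpy as np

def omp_support(D, x, k):
    """Orthogonal Matching Pursuit: greedily select k atoms of the
    dictionary D (columns, assumed unit norm) that best approximate x,
    re-fitting the coefficients on the chosen support at each step.
    Returns the sorted support (the sparse-code pattern)."""
    x = np.asarray(x, dtype=float)
    residual = x.copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))      # most correlated atom
        if j not in support:
            support.append(j)
        Ds = D[:, support]
        coef, *_ = np.linalg.lstsq(Ds, x, rcond=None)   # least-squares re-fit
        residual = x - Ds @ coef
    return tuple(sorted(support))

def cluster_by_code(D, term_vectors, k=2):
    """Group words whose term vectors share the same OMP support."""
    groups = {}
    for word, vec in term_vectors.items():
        groups.setdefault(omp_support(D, vec, k), []).append(word)
    return groups
```

Words whose vectors activate the same dictionary atoms end up in the same group, which is the comparison baseline against K-Means.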

ICACCI--28.5 12:00 Harnessing the Discriminatory Strength of Dictionaries
Remya R.K. Menon (Amrita Vishwa Vidyapeetham, Amrita University & Amrita School of Engineering, Amritapuri, India); Neethu V Kini and Athira Krishnan G (Amrita School Of Engineering, India)

Over the past few years there have been many developments in the area of classification in data mining. Classification is a supervised learning method that maps data into predefined groups or classes, and classification techniques are now used extensively in applications involving text, images and signals. The main goal of this paper is to use a dictionary-based approach to learn, represent and classify documents. We treat the dictionary as a collection of documents, each represented as a collection of vectors. An algorithm is also implemented to quickly locate a class-specific document in the dictionary and, if it is not present, to update the dictionary. The existing method is based on a dictionary learning algorithm that improves only the document representation, through Singular Value Decomposition (SVD) updates. Since a discriminating dictionary is also needed, our proposed algorithm employs Linear Discriminant Analysis (LDA) to learn one. Applied to a well-known dataset, the proposed algorithm shows a 90% improvement in accuracy overall. Its advantage is that it can be used for both representation and classification.

ICACCI--28.6 12:15 Detection of Suspicious Lesions in Mammogram Using Fuzzy C-Means Algorithm
Mukesh Kumar (Gbpec Pauri, India); Upendra Bhatt (Gbpec, India); V Thakkar (Gbpec Pauri, India); Neema Soliyal (G B Pant Engineering College Ghurdauri Pauri Uttarakhand, India)

Breast cancer is a largely incurable disease that leads to the death of women globally every year. For initial detection of a tumor in the breast, the most useful technique is mammography, an x-ray examination of the breast that can detect or diagnose a breast tumor which may lead to breast cancer. Using mammography, a small lump that may lead to breast cancer can be detected at an early stage. Sometimes very small tumors cannot be recognized because the images are noisy, blurred and fuzzy; such images therefore need to be enhanced to increase contrast for better visual perception, and denoised for better diagnosis. In this work, the Fuzzy C-Means algorithm is used to detect suspicious lesions in a mammogram. The MIAS (Mammographic Image Analysis Society) and INbreast databases are used, containing 322 and 412 images of the breast (both left and right), respectively; every image in these databases has been examined by expert radiologists. The effectiveness of the algorithm is measured in terms of MSE (Mean Square Error) and PSNR (Peak Signal to Noise Ratio).
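The clustering step at the core of the abstract can be sketched with a generic Fuzzy C-Means update on pixel intensities (a minimal NumPy version of the standard algorithm, not the paper's full pipeline; cluster count and fuzzifier are illustrative assumptions). High-membership bright clusters would then be examined as candidate lesions.

```python
import numpy as np

def fuzzy_c_means(values, c=2, m=2.0, iters=50, seed=0):
    """Minimal Fuzzy C-Means on 1-D pixel intensities. Returns the
    cluster centers and the membership matrix u, where u[i, j] is the
    degree to which pixel i belongs to cluster j (rows sum to 1)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(values, dtype=float).ravel()
    u = rng.random((x.size, c))
    u /= u.sum(axis=1, keepdims=True)             # fuzzy partition init
    for _ in range(iters):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)     # membership-weighted means
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        inv = d ** (-2.0 / (m - 1))               # standard FCM membership update
        u = inv / inv.sum(axis=1, keepdims=True)
    return centers, u
```

Unlike hard K-Means, each pixel keeps a graded membership in every cluster, which suits the fuzzy boundaries of mammographic lesions.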

ICACCI--28.7 12:30 Object Detection Using Binocular Vision
Neethu S (Amrita Vishwa Vidyapeetham, India)

Prevalent traffic conditions have made it necessary for automotive industries to conduct greater research on passenger and vehicle safety systems. The present work addresses one such obstacle detection system based on stereo vision, a cost-efficient and accurate sensor for scene reconstruction. A stereo-based obstacle detection system can provide the distance of an obstacle from the car and warn the driver based on the distance to collision. There are two different approaches to detecting obstacles using stereo vision: disparity-based and inverse-perspective-projection-based; the present method follows the disparity approach. In the proposed method, dense stereo disparity information is used for 3D reconstruction of the scene, and the triangulated information is used to build a Digital Elevation Map (DEM) for detecting obstacles. The system is implemented in MATLAB (R2010b).
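The triangulation behind the disparity approach reduces to Z = fB/d: depth is the focal length times the stereo baseline divided by the pixel disparity. The sketch below illustrates that step and a simple distance-to-collision check; the focal length, baseline and threshold values are illustrative assumptions.

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m):
    """Triangulation step used in disparity-based obstacle detection:
    depth Z = f * B / d for each pixel, with zero-disparity pixels
    (no match / at infinity) mapped to inf."""
    d = np.asarray(disparity, dtype=float)
    return np.where(d > 0, focal_px * baseline_m / np.maximum(d, 1e-9), np.inf)

def collision_warning(depth_map, threshold_m):
    """Warn the driver if any reconstructed point is closer than the
    distance-to-collision threshold."""
    return bool(np.min(depth_map) < threshold_m)
```

In the full system the depth map would first be projected into the Digital Elevation Map before obstacles are segmented.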

ICACCI--28.8 12:45 An Automated Design of Pin-Constrained Digital Microfluidic Biochip on MEDA Architecture
Pampa Howladar and Pranab Roy (Indian Institute of Engineering Science and Technology, Shibpur, India); Hafizur Rahaman (Bengal Engineering and Science University, Shibpur, India)

Pin-constrained design of digital microfluidic biochips for general-purpose assay operations is receiving much attention, as it can reduce the total product cost by simplifying chip fabrication and packaging. Most existing pin-constrained biochip designs and pin-reduction solutions apply only to specific biotechnology applications. A proficient electrowetting-on-dielectric architecture, the Microelectrode Dot Array (MEDA), has recently been introduced as a new, highly reconfigurable, scalable and field-programmable dot-array architecture capable of dynamic configuration. In this paper, we present essential pin-assignment architectures for pin-constrained MEDA-based biochips and propose a pin-assignment technique for multifunctional biochips to solve the pin-reduction problem. Random and application-specific benchmarks are used to evaluate our technique and demonstrate its effectiveness.

ICACCI--28.9 13:00 Detection and Prediction of Osteoporosis Using Impulse Response Technique and Artificial Neural Network
Tejaswini E, Vaishnavi P and Sunitha R (Amrita Vishwa Vidyapeetham)

Osteoporosis is an age-related disorder manifested by skeletal fractures and has been recognized as an important health issue, mainly in women. Low bone mineral density is its major cause, and detecting and predicting osteoporosis is a major challenge. Detection helps in determining bone density and in preventing osteoporotic fractures in high-risk populations. In this study an easy first-line method is proposed to detect and predict osteoporosis. An impulse response test was carried out on the tibial bone with the help of LabVIEW: vibrations generated by the periodic impact of a surgical hammer were captured by an accelerometer, and the recorded analog signal was examined in the frequency domain. The natural frequency of vibration was significantly decreased in osteoporotic subjects, which in turn indicates a loss of mechanical strength and bone mineral density. Prediction of osteoporosis was performed using a decision-making system, an artificial neural network (ANN) in MATLAB, where factors other than bone mineral density were considered.

ICACCI--28.10 13:15 An Automated Tool for Generating Change Report From Open-Source Software
Ruchika Malhotra and Ankita Bansal (Delhi Technological University, India); Sourabh Jajoria (Netaji Subhas Institute of Technology, India)

Classes in object-oriented software systems are continuously subject to change, and change prediction is a very important activity in software development. Change data consists of the number of lines of code added, deleted and modified for each class common to any two versions of a software system. It is important to develop tools that calculate change data and object-oriented metrics, assisting software practitioners in identifying change-prone classes in the early stages of the software development life cycle. In this paper, we develop a tool, the Change Report Generator (CRG), to generate the change report from the source code of various versions of open-source software. We also extend this tool to automate the calculation of object-oriented metrics from the source code of software systems. The generated files store the total number of changes, class-wise, and the corresponding values of different object-oriented metrics for each class common to the two versions. This paper gives an overview of some applications of the collected data, such as statistical comparison of two versions and prediction of change-prone classes.

Friday, September 23

Friday, September 23 14:30 - 18:30 (Asia/Kolkata)

ICACCI--17: ICACCI-17: Sensor Networks, MANETs and VANETs/Distributed Systems (Short Papers)

Room: LT-1 (Academic Area)
Chair: Ravi Kishore Kodali (National Institute of Technology, Warangal, India)
ICACCI--17.1 14:30 Joint Routing, Rate Adaptation And Power Control for Multi-Radio Wireless Mesh Networks
Mohammed Moin Mulla (KLE Technological University, Hubli, Karnataka); Narayan D. G. (BVB College of Engineering and Technology, Hubli, Karnataka, India)

Wireless mesh networks are emerging as a backhaul wireless technology to connect various types of networks to the Internet. The multi-radio capability of routers in these networks enhances capacity but increases interference and affects QoS. To address this, an efficient routing mechanism is needed to compute the optimal route; moreover, routing decisions depend on the transmission rate as well as on energy efficiency. Towards this, we extend our previous work on joint routing and rate adaptation with energy optimization using a cross-layer mechanism, in which cross-layer parameters are accessed from the PHY, MAC and network layers. In this work, we design an Interference Ratio Based Rate Adaptation (IRBRA) mechanism jointly with routing and energy optimization, and implement it in the OLSR protocol in NS2. The results reveal that IRBRA performs better than frameworks based on modified sample rate and LD-ARF. Performance is evaluated using parameters such as throughput, packet loss, energy consumption and delay.

ICACCI--17.2 14:45 Song Year Prediction Using Apache Spark
Prakhar Mishra, Ratika Garg and Akshat Kumar (The LNM Institute of Information Technology, India); Arpan Gupta (The LNM Institute of Information Technology, India); Praveen Kumar (GRIET, India)

In this paper, we aim to predict the year in which a particular song was officially released. Listeners often have particular affection for music from certain periods of their lives (such as high school); thus, the predicted release year of a song could be a useful basis for recommendation. Furthermore, a successful model of the variation in music characteristics through the years could throw light on the long-term evolution of popular music. In our study, different machine learning algorithms available in the Apache Spark Machine Learning library (MLlib) are applied to a sample of the Million Song Dataset (MSD) for training and prediction purposes, and the training times are compared for single-node and multi-node cluster environments using Apache Spark.

ICACCI--17.3 15:00 Implementation of a Web-based Programming Tool for Distributed, Connected Arduino Systems
Manoj Nair, Jishnu R and Rakesh K M (Amrita University, Amrita Vishwa Vidyapeetham, India); Anand Ramachandran (Amrita University, India)

Embedded systems are ubiquitous in day-to-day life. Many such systems are (1) numerous, (2) widely distributed over a large geographical area, and (3) often connected to a network. Software enhancements and bug fixes must sometimes be applied to software running on such remotely deployed embedded systems, and the large number of systems to be programmed and/or their remote, inaccessible locations often pose a major hurdle. Several industrial solutions for reprogramming distributed embedded systems do exist, viz., systems that use Remote Terminal Units, Programmable Logic Controllers or Programmable Automation Controllers; however, these systems are both complex and expensive. We propose a method to remotely program microcontroller-based distributed embedded systems, which are the systems of choice for low-cost, high-volume deployments. Our approach is simple and is most suitable when a direct network connection to each of the distributed embedded systems is available. We use a simple web-based interface to write programs in a modern browser and download the program onto a lightweight server attached to the remote embedded system; the server checks the code for errors and then updates the embedded system with the new version of the software. We believe that this model of reprogramming remote, connected embedded systems will help reduce the time to market, cost, maintenance effort and digital footprint of such systems.

ICACCI--17.4 15:15 Ensemble Learning for Network Data Stream Classification Using Similarity and Online Genetic Algorithm Classifiers
Arun Manicka Raja M (Anna University, Chennai, India); Swamynathan S (Anna University, India)

Data generation is increasing day by day from various sources such as automated data collection tools, database systems, e-commerce and social media websites, with explosive growth from terabytes to petabytes. Since such large amounts of data are available, people look for valuable knowledge within them, and several mining algorithms are used to extract interesting patterns from data stored in a repository. With the evolution of data streams, new algorithms are needed to process them. In this work, an ensemble-of-classifiers model has been developed for mining data streams by combining stream mining classifiers, namely the Similarity-based Data Stream Classifier (SimC) and the Online Genetic Algorithm (OGA) classifier. The ensemble-based classifiers show improved classification accuracy and a lower classification error rate under various circumstances.

ICACCI--17.5 15:30 Rationale Behind The Virtual Sensors and Their Applications
Atrayee Gupta and Nandini Mukherjee (Jadavpur University, India)

Every physical sensor is built for a predefined purpose. A virtual sensor, on the other hand, is driven by application needs and can provide multiple functions even from a single sensor device. The basic objective of this paper is to explore the concept of the virtual sensor with proper explanation. The paper substantiates the theory of the virtual sensor with implementations, algorithms and use cases, and presents a taxonomy of virtual sensors based on their capabilities and deployment layer. The paper also presents some case studies, along with experimental results, of virtual sensors in the e-health and environment monitoring domains.

ICACCI--17.6 15:45 A Realistic Approach for Representing and Scheduling Workflows in Cloud Computing Environment
Karuppiah Kanagaraj (MEPCO Schlenk Engineering College, India); S Swamynathan (Anna University, India)

A workflow consists of a set of tasks used to automate a scientific or business process. Workflows are usually represented as a Directed Acyclic Graph (DAG), in which nodes represent the tasks involved in the process and edges represent the dependencies between tasks. Workflow scheduling is the process of assigning tasks to suitable resources for execution. Even though a DAG is an efficient format for representing a workflow, it cannot be used directly as input to programs that perform workflow scheduling and execution; representing workflows in a format suitable for scheduling is therefore a major challenge for researchers in this area. This paper proposes a Realistic Workflow Scheduler that accepts a workflow in graphical format, converts it into a suitable text format and performs appropriate scheduling. The scheduler implements a Workflow Definition Converter (WDC) to convert the graphical workflow into a text file format that can be given directly as input to any program or simulator for scheduling and execution, and a Critical Path Finder to identify the critical path of the workflow. In addition, Deadline Constrained (DC) and Cost Constrained (CC) workflow scheduling algorithms have been implemented to schedule the workflow in a suitable cloud computing environment. The correctness of this work is verified by scheduling popular workflows such as Montage and Inspiral using the proposed scheduler.
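The critical path of a workflow DAG is its longest chain of dependent tasks, which bounds the makespan. The following is a generic sketch of such a finder (the dictionary-based input format and task names are illustrative assumptions, not the scheduler's actual text format):

```python
import functools

def critical_path(duration, deps):
    """Longest (critical) path through a workflow DAG.
    duration maps each task to its execution time; deps maps a task
    to the tasks it depends on. Returns (path length, path) ending at
    the latest-finishing sink task."""
    @functools.lru_cache(maxsize=None)
    def finish(t):
        pre = tuple(deps.get(t, ()))
        if not pre:                         # entry task
            return duration[t], (t,)
        best_time, best_path = max(finish(p) for p in pre)
        return best_time + duration[t], best_path + (t,)

    # sinks: tasks no other task depends on
    sinks = [t for t in duration
             if all(t not in deps.get(u, ()) for u in duration)]
    return max(finish(t) for t in sinks)
```

A deadline-constrained scheduler can then prioritize tasks on this path, since delaying any of them delays the whole workflow.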

ICACCI--17.7 16:00 Secrecy Performance of a Dual Hop Cognitive Relay Network with an Energy Harvesting Relay
Siddharth Raghuwanshi (NIT Durgapur, India); Pranabesh Maji (National Institute of Technology, Durgapur, India); Sanjay Dhar Roy (National Institute of Technology Durgapur, India); Sumit Kundu (National Institute of Technology, Durgapur, India)

In this work, physical (PHY) layer security is investigated in terms of secrecy outage probability (SOP) for a dual-hop decode-and-forward relaying system in a cognitive relay network (CRN) in the presence of an eavesdropper. The relay node harvests energy using a power splitting-based relaying (PSR) scheme from the RF signals of a primary user as well as a secondary user. The transmit power of the secondary user (SU), and of the relay, is limited by the maximum tolerable interference at the primary user (PU) receiver, as imposed by the quality of service (QoS) of the PU in the cognitive environment. The SOP of the network under study has been analyzed, and the impact on SOP of several parameters, such as the PU's peak power, the SU's peak power, the interference threshold for the underlay CRN, the channel mean power of the legitimate and illegitimate communication channels, and the energy harvesting conversion efficiency, has been indicated.

ICACCI--17.8 16:15 Vehicular Traffic Analysis From Social Media Data
Himanshu Shekhar (B. V. B. College of Engineering and Technology, India); Shankar Gangisetty (KLE Technological University, Hubballi, India); Uma Mudenagudi (B. V Bhoomaraddi College of Engineering and Technology, Hubli, India)

In this paper, we address the problem of vehicular traffic congestion in densely populated cities by providing a framework for optimal vehicular traffic solutions using live social media data. The traffic congestion work in the literature typically relies on dedicated traffic sensors and satellite information, which is quite expensive. However, many urban commuters post updates about traffic on various social media in the form of tweets or Facebook posts. With the copious amount of traffic-related data available on social media sites, we collect historical traffic posts from specific cities and build a sentiment classifier to monitor commuters' emotions round the clock. This knowledge is used to analyze and predict traffic patterns in a given location, and we also identify the probable cause of congestion in a particular area by analyzing the collected historical data. Through our work, we present an uncensored, economical alternative to traditional methods of monitoring traffic congestion.
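A sentiment classifier over traffic posts can be sketched with a tiny multinomial Naive Bayes over tweet words. This is a hypothetical stand-in for the paper's classifier; the labels and example tweets below are invented for illustration.

```python
import math
from collections import Counter

def train_nb(labeled_tweets):
    """Count word and class frequencies for multinomial Naive Bayes."""
    word_counts, class_counts, vocab = {}, Counter(), set()
    for text, label in labeled_tweets:
        class_counts[label] += 1
        wc = word_counts.setdefault(label, Counter())
        for w in text.lower().split():
            wc[w] += 1
            vocab.add(w)
    return word_counts, class_counts, vocab

def classify(model, text):
    """Pick the class with the highest log-posterior (Laplace-smoothed)."""
    word_counts, class_counts, vocab = model
    total = sum(class_counts.values())
    best, best_lp = None, -math.inf
    for label, n in class_counts.items():
        lp = math.log(n / total)                       # class prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.lower().split():
            lp += math.log((word_counts[label][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best
```

Aggregating such per-tweet labels over time and location yields the congestion signal the framework monitors.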

ICACCI--17.9 16:30 Work in Progress: A Proposal for a Hotspot Metric to Extend the Lifetime in Wireless Sensor Network
Mayada Mustafa (University Putra Malaysia, Malaysia); Borhanuddin M Ali (Faculty of Engineering & Universiti Putra Malaysia, Malaysia); Shaiful Hashim (UPM, Malaysia); Mohd. Fadlee A. Rasid (Universiti Putra Malaysia, Malaysia)

The rapid deployment of wireless sensor network (WSN) technology has accelerated the need for further development, and numerous research efforts from different perspectives have proposed enhancements. Through our extensive study, we have found a phenomenon with adverse side effects known as sink isolation (the sink's hotspot zone), which arises because the sink's neighbor nodes (deputy nodes) run out of energy faster than the others. This motivates our hypothesis: energy exhaustion in the sink's hotspot zone deserves more concern than that in distant zones. Accordingly, we propose a new metric with highly influential factors for forward-node selection. Proper selection ends the route of the data traffic at the deputy node with the least energy exhaustion. As a result, heavily depleted sensors in the hotspot zones are avoided, sinks are protected from isolation, and the network lifetime is extended.

ICACCI--17.10 16:45 Finding Minimum Node Density for Energy-efficient In-hop Cooperative Relaying in Industrial WSNs
Debdeep Saha (RCCIIT, India); Somprakash Bandyopadhyay (Indian Institute of Management Calcutta, India)

Industrial wireless sensor networks (IWSNs) employ Trajectory Based Forwarding (TBF) [5], a popular source routing technique in which the source precomputes the optimal trajectory, comprising active nodes only, and inserts it in the packet header. TBF ensures optimum hop distance to decrease overall energy consumption. To reduce consumption further, we propose the use of multiple cooperative relays within every single hop of TBF for energy-critical IWSNs. However, this is possible only if the node density exceeds a minimum level, which we determine in this paper. Mathematical analyses show that in-hop relaying is better than direct forwarding in terms of energy saving, latency and reliability.

ICACCI--17.11 17:00 Developing and Validating Virtualized Transactional Application of Educational Institutes Using InFraMegh
Indu Arora (Panjab University, Chandigarh, India); Anu Gupta (Panjab University, India)

Cloud Computing (CC) is considered a constructive technology for read-intensive analytical applications rather than write-intensive transactional applications. Deploying transactional applications in the cloud is not considered safe because of the stringent ACID requirements. These rules can be enforced in the cloud using middleware tools such as CloudTran and Oracle Coherence, but integrating such tools is a major challenge for a transactional application developer. We therefore developed an integrated framework for educational institutes named 'InFraMegh', which works on top of CloudTran and Oracle Coherence. InFraMegh focuses on the cloud data management layer and provides a generalized API for managing data in the cache and the database, speeding up and easing the development of transactional applications for the cloud. As a proof of concept, a transactional application for educational institutes, the Student Registration Return System (SRRS), is used to verify the functionality of InFraMegh. SRRS is virtualized using XenApp and accessed on client machines; the virtualized SRRS is tested in single-user and multi-user environments, and its performance is found to be better than that of the traditional client-server architecture. Hence, InFraMegh expedites developers' efforts in building transactional applications in the cloud, and it allows educational institutes to manage data in the cloud.

ICACCI--17.12 17:15 An Approach to Signaling Cost Reduction in Proxy MIPv6 for Mobility Management
Nitul Dutta (Marwadi University, India); Zdzislaw Polkowski (The Lower Silesian University of Entrep and Tech, Poland); Corina Savulescu (University of Pitesti, Romania); Sunil Pathak (Amity University Jaipur, India)

Proxy Mobile IPv6 (PMIPv6) has gained popularity as a network-based mobility management scheme in recent years. It manages node mobility through a Local Mobility Anchor (LMA) that works in cooperation with Mobile Access Gateways (MAGs). The original PMIPv6 architecture collocates the MAG with the Access Router (AR). However, this leads to a large signaling cost for mobility management, because mobility-management signals must traverse to the LMA even when a mobile host (MH) moves to a nearby cell. In this paper, we propose an enhancement to PMIPv6 that rearranges ARs and MAGs, allowing a single MAG to cover multiple ARs. Further, the protocol tunnels leftover packets of the MH directly from the old to the new MAG to reduce packet delivery cost. A mathematical analysis is carried out to quantify the benefits of the proposed scheme: we analyze its signaling cost, compare it with basic PMIPv6 and the Dynamic Mobility Anchoring (DMA) scheme, and investigate the impact of a few input parameters on the signaling cost. The results reveal that the proposed scheme outperforms both basic PMIPv6 and DMA.

ICACCI--17.13 17:30 Self Power Analysing Energy Efficient Protocol (SPAEEP): An Adaptive Approach
Nitin Palan (University of Poona, India); Balaji Barbadekar (Kolhapur University, India); Suhas Patil (K. B. P. College of Engineering & Polytechnic, India)

The main challenge faced by researchers in wireless sensor networks (WSNs) is battery life, i.e., the energy of a node. This paper considers the general framework of a distributed, multi-hop wireless network. Cluster-based routing protocols such as Low Energy Adaptive Clustering Hierarchy (LEACH), Hybrid Energy Efficiency Protocol (HEEP), Threshold-sensitive Energy Efficient Network protocol (TEEN) and PEGASIS manage energy usage efficiently, but these protocols must still be studied thoroughly and revised to achieve greater energy efficiency. The paper focuses on the LEACH protocol, in which the cluster head (CH) selection process is termed a 'round' (r); each round incurs a setup and a steady-state phase in the network. We propose the novel Self Power Analysing Energy Efficient Protocol (SPAEEP) with an energy model (EM). The EM is applied to the CH, which adaptively decides when to initiate the next round based on the remaining balanced energy; this adaptive decision helps reduce the number of rounds (r). Applying the same idea to the nodes leads to a fault-tolerant network with a proper handover mechanism. The concept is also extended to the 802.15.4 protocol, as it follows a standard network architecture. Our results reveal that the proposed method reduces the energy consumption of the CH and hence increases the lifespan of the network; the outcome is a fault-tolerant network.
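For context, the round-based CH election that SPAEEP adapts can be sketched with the standard LEACH threshold T(n) = p / (1 - p (r mod 1/p)): each eligible node draws a uniform number and becomes CH if it falls below T(n). This is a sketch of the baseline election rule only; SPAEEP's adaptive energy-model trigger is not reproduced here.

```python
import random

def leach_threshold(r, p=0.05):
    """Standard LEACH cluster-head threshold T(n) for round r,
    where p is the desired fraction of cluster heads."""
    return p / (1.0 - p * (r % (1.0 / p)))

def elect_cluster_heads(node_ids, r, p=0.05, seed=1):
    """Each node becomes CH for round r if its uniform draw falls
    below T(n); by the last round of an epoch T(n) reaches 1, so
    every remaining eligible node is elected."""
    rng = random.Random(seed)
    t = leach_threshold(r, p)
    return [n for n in node_ids if rng.random() < t]
```

An adaptive scheme like SPAEEP would skip or delay rounds when the CH's energy model shows the current head can continue serving.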

ICACCI--17.14 17:45 A Routing Load Balanced Trajectory Design for Mobile Sink in Wireless Sensor Networks
Amar Kaswan (Indian Institute of Technology(ISM) Dhanbad, India); Kumar Nitesh (Indian School of Mines, India); Prasanta Kumar Jana (Indian Institute of Technology(ISM) Dhanbad, India)

Unbalanced energy depletion is a well-known problem in wireless sensor networks, caused by the hot-spot problem that arises from multi-hop communication when forwarding data to the base station; this uneven energy consumption can decrease network lifetime significantly. In recent years, the mobile sink has been introduced as a solution to the hot-spot problem. In this paper, we propose an algorithm for efficient trajectory design of a mobile sink in wireless sensor networks, applicable to delay-bound applications. The algorithm is based on rendezvous points: the mobile sink periodically visits the rendezvous points of a predetermined delay-bound path, and every sensor node transmits its data to the mobile sink over a single-hop or multi-hop path. The proposed algorithm determines these rendezvous points based on the routing load. It is validated through extensive simulation, and the results show that it permits a mobile sink to collect all data within a specified deadline while keeping the energy expenditure of sensor nodes in check.

ICACCI--18: ICACCI-18: Security Informatics/SAI'16 (Short Papers)

Room: LT-2 (Academic Area)
Chair: Sandeep Saini (The LNM Institute of Information Technology, Jaipur, India)
ICACCI--18.1 14:30 Extended-HyperWall: Hardware Support for Rollback-Secure Virtualization
Priya Chandran and Shubham Shoundic (National Institute of Technology Calicut, India); Payas Krishna (National Institute of Technology Calicut); Vinod Reddy (National Institute of Technology Calicut, India); Bodasingi Jayachandra (National Institute of Technology, India); Lakshit Pande (National Institute of Technology, Calicut, India)

Virtualization is a vital part of computing today, and rollback is an important feature for virtualization to support. However, attackers can leverage rollback to pose serious security threats to systems running in a virtualized environment. The aim of this paper is to identify such security threats and propose a comprehensive solution: the Extended-HyperWall architecture for securing Virtual Machines (VMs) in a fully virtualized environment. Extended-HyperWall integrates HyperWall with Rollback Sensitive Data Memory with architectural assistance (RSDM-A). HyperWall is a system providing hardware support to ensure the confidentiality and integrity of a VM's data under the assumption that the hypervisor cannot be trusted. RSDM-A is architectural support that separates rollback-sensitive data from rollback-nonsensitive data, the mixing of which is one of the major causes of rollback-related threats. Extended-HyperWall combines the CIP-table (Confidentiality and Integrity table, ensuring the confidentiality and integrity of data) and the RSDM-table (Rollback Sensitive Data Memory table, protecting the system from rollback attacks). The paper illustrates the design of Extended-HyperWall and its implementation on the Xen hypervisor kernel for testing and analysis.

ICACCI--18.2 14:45 A Novel and Highly Secure Encryption Methodology Using a Combination of AES and Visual Cryptography
Sudhir Rao Rupanagudi, Varsha G Bhat and Sudhir Rupanagudi (WorldServe Education, India); Kavitha Y N, Meghana R and Arpitha Srinath (KSIT, India); Pooja J, Ramya R Pai, Roopashree M and Sandhya Ramesh (Sapthagiri Institute of Technology, India); Anil C, Anuradha S, Arpita V Murthy and Sridevi S (Jyothy Institute of Technology, India)

With ever-increasing human dependency on the Internet for activities such as banking, shopping and transferring money, there is an equal need for safe and secure transactions. This need translates directly into a requirement for increased network security and better, faster encryption algorithms. This paper addresses the issue by introducing a novel methodology that utilizes the AES method of encryption and further enhances it with the help of visual cryptography. The complete algorithm was designed and developed using MATLAB 2011b. The paper also discusses a hardware approach to deploying the algorithm and elaborates on an area-, speed- and power-efficient design on a Spartan 3E Xilinx FPGA platform.
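The visual-cryptography layer can be illustrated with an XOR-based share-splitting sketch: a binary image is split into two random-looking shares, and stacking (XOR-ing) them restores the original. This is a generic sketch only; the paper's AES pre-encryption stage and its exact share scheme are not reproduced here.

```python
import random

def make_shares(secret_bits, seed=7):
    """Split a binary image (flat bit list) into two shares: share1 is
    random, share2 = share1 XOR secret, so neither share alone reveals
    the secret but XOR-stacking both recovers it exactly."""
    rng = random.Random(seed)
    share1 = [rng.randint(0, 1) for _ in secret_bits]
    share2 = [a ^ b for a, b in zip(share1, secret_bits)]
    return share1, share2

def stack(share1, share2):
    """Recover the secret by stacking (XOR-ing) the two shares."""
    return [a ^ b for a, b in zip(share1, share2)]
```

In the combined scheme, the bits being split would be AES ciphertext rather than the plaintext image, adding a second layer of protection.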

ICACCI--18.3 15:00 Mathematical Analysis of Novel Compression Algorithm Used in OFDM for Security
Alka Sawlikar (Nagpur University & Rajiv Gandhi College of Engineering Research and Technology, India); Zafar Khan (Rajiv Gandhi College of Engineering Research and Technology, India); Sudhir Akojwar (Senior Member IEEE, India)

Orthogonal Frequency Division Multiplexing (OFDM) is today's most promising modulation technique and has been adopted in both wireless and wired communication standards. A number of carriers are spread regularly over a frequency band so that the available bandwidth is utilized with maximum efficiency. Sending data wirelessly requires a lot of resources; compressing the data reduces its size, and hence fewer resources are required to send it. In this paper a new compression algorithm based on bit quantization is proposed, which can compress data to approximately 50%, requires little encoding and decoding delay, and is highly efficient and simple to implement.

ICACCI--18.4 15:15 Multi-Factor Authentication Using Threshold Cryptography
Vishnu Venukumar and Vinod Pathari (National Institute of Technology Calicut, India)

Multi-factor authentication is used as a foolproof solution to various issues involved in present-day critical authentication systems. However, it comes with the overhead of employing multiple authentication programs to complete the process. Moreover, current multi-factor authentication schemes require all intermediate One-Time Passwords (OTPs) to be stored for the lifetime of the authentication process. They also involve security risks whenever an authentication process requires the user's password at a public place such as a Point-of-Sale terminal or an open ATM booth. This work proposes a more secure, efficient, convenient and flexible multi-factor authentication technique using threshold cryptography.
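The abstract does not specify which threshold scheme is used; a minimal sketch of the classic building block, Shamir's (t, n) secret sharing over a prime field, illustrates how any t authentication factors could jointly reconstruct a secret while fewer reveal nothing:

```python
import random

P = 2**127 - 1  # a Mersenne prime, used as the field modulus

def split(secret: int, n: int, t: int):
    """Split `secret` into n shares; any t of them reconstruct it.
    The secret is the constant term of a random degree t-1 polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # Modular inverse via Fermat's little theorem (P is prime).
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

A production scheme would use a cryptographically secure RNG; `random` is used here only to keep the sketch short.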

ICACCI--18.5 15:30 Multiple Integer Packing for Optimizing Partial Homomorphic Encrypted Databases
Mythily AS (National Institute of Technology, Calicut, India); Vinod Pathari (National Institute of Technology Calicut, India)

Database compromises are increasing nowadays, and database encryption is gaining importance. Database encryption increases the size of the data and the time needed to process it. Optimizing the underlying encryption schemes helps to improve the performance of encrypted databases. Processing queries over encrypted data without decrypting it remained a holy grail until CryptDB. CryptDB is the first practical system that supports almost all queries over an encrypted database. It leverages partial homomorphic encryption and SQL-aware encryption schemes for encrypted query processing. However, CryptDB suffers from storage and time overheads. We propose packing of integers into a single field as a solution to reduce the storage overhead of CryptDB. Any system that uses additive homomorphic schemes for encryption can adopt this method for storage optimization.
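As a hedged sketch of the general idea (not necessarily the authors' exact layout), several small integers can be packed into fixed-width slots of one large plaintext; under an additive homomorphic scheme such as Paillier, a single homomorphic addition of two packed plaintexts then adds all slots at once, provided no slot overflows:

```python
SLOT_BITS = 32        # illustrative slot width: each value gets 32 bits
NUM_SLOTS = 4         # illustrative number of values per field

def pack(values):
    """Pack several small non-negative integers into one big integer,
    i.e. one plaintext field of the encrypted database."""
    assert len(values) <= NUM_SLOTS
    packed = 0
    for i, v in enumerate(values):
        assert 0 <= v < 2**SLOT_BITS
        packed |= v << (i * SLOT_BITS)
    return packed

def unpack(packed, count):
    """Recover the individual values from a packed integer."""
    mask = 2**SLOT_BITS - 1
    return [(packed >> (i * SLOT_BITS)) & mask for i in range(count)]
```

The demonstration below adds packed plaintexts directly; under Paillier the same slot-wise sums would be obtained by multiplying the corresponding ciphertexts, as long as each slot's running sum stays below 2^32.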

ICACCI--18.6 15:45 Ecosystem of Spamming on Twitter: Analysis of Spam Reporters and Spam Reportees
Pooja Sinha (IGDTUW, India); Oshin Maini, Gunjan Malik and Rishabh Kaushal (Indira Gandhi Delhi Technical University for Women, India)

Lately, there has been a growing trend in the Internet space of online social networks (OSNs) and online social media (OSMs) like Twitter, Facebook etc., which act as a huge repository of information. This information, by design, is posted by users of these websites and hence is vast, unorganized, unreliable and dynamic. A lot of this unreliability comes from spammers, i.e. users with the intent of spreading malicious or irrelevant content. Twitter, being one of the most popular social networking platforms, has been chosen by us to conduct our research on spamming. In this paper, we have collected the data of various suspected spammers, i.e. reportees, as well as of the users who reported them, i.e. reporters, classified them into various categories, and tried to study the ecosystem of these reportees and reporters. We have used three data mining techniques: decision tree classification, k-nearest neighbors and random forest classification. Finally, we compare these three algorithms on the basis of their accuracy.

ICACCI--18.7 16:00 Design, Implementation and Security Analysis of Bluetooth Pairing Protocol in NS2
Samta Gajbhiye, Sanjeev Karmakar and Monisha Sharma (CSVTU Bhilai, India); Sanjay Sharma (CSVTU, Bhilai)

Secure Simple Pairing (SSP), a feature of the Bluetooth Core Version 2.1 specification [1], was created to address two major concerns among the Bluetooth user community: security and simplicity of the pairing process. It employs the Elliptic Curve Diffie-Hellman (ECDH) protocol for generating keys, the first use of ECDH in Bluetooth pairing. The time complexity of solving the underlying problem is exponential, and the problem is believed to be hard [2, 3]. The protocol provides the security properties of known-session-key security, forward secrecy, resistance to key-compromise impersonation and unknown key-share attacks, and key control. This paper presents the simulation and security analysis of the Bluetooth pairing protocol for numeric comparison using ECDH in NS2. The implementation employs SageMath for cryptographic functions.

ICACCI--18.8 16:15 Experimental Analysis of DDoS Attack and Its Detection in Eucalyptus Private Cloud Platform
Ashaq Hussain (NIELIT SRINAGAR, India); Beenish Habib (NIT SRINAGAR, India); Farida Khursheed (N I T SRINAGAR, India); M. Tariq Banday (University of Kashmir, India)

The term cloud refers to the Internet, and cloud computing is the new trend of delivering computing resources via the Internet. The cloud is afflicted by many security concerns, among which Distributed Denial of Service (DDoS) has an excessive impact. A DDoS attack affects systems by infusing huge amounts of traffic. The cloud provides virtual machine images which run as instances. One such cloud platform is Eucalyptus, an open-source cloud platform whose name stands for Elastic Utility Computing Architecture for Linking Your Programs to Useful Systems. This paper gives a review of an experimental evaluation of DDoS performed in Eucalyptus as a cloud. The experiments were performed using the open-source Eucalyptus platform as a private cloud model. Kali, a Linux-based hacking and penetration testing distribution, acts as the master attacker, and two cloned Ubuntu-based virtual machines act as bots. All of these are set up in VMware Workstation and thus form the attacking front. The DDoS attack tools used are hping3 and slowhttptest, which flood the cloud with TCP/IP packets and HTTP packets respectively. Finally, to analyse the traffic pattern, different cloud-based traffic analysis tools have been used.

ICACCI--18.9 16:30 Lightweight Security Algorithm for Low Power IoT Devices
Tarun Goyal (Malaviya National Institute of Technology, India); Vineet Sahula (MNIT Jaipur, India)

In today's technology, more and more electronics applications require secure communication, for example Internet of Things devices. The Elliptic Curve Diffie-Hellman (ECDH) algorithm has emerged as an attractive and effective public-key cryptosystem. Elliptic curves are widely used in various key exchange techniques, including the Diffie-Hellman key agreement scheme. When contrasted with conventional cryptosystems like RSA, ECC offers equivalent security with smaller key sizes, which results in lower power consumption, speedier calculations, and lower memory and transmission capacity (bandwidth) requirements. This is particularly valid and helpful for applications such as IoT gadgets, which are regularly constrained in terms of CPU processing speed, power, and area. Our work includes the software and hardware implementation of the Diffie-Hellman, Elliptic Curve Diffie-Hellman (ECDH), and RSA algorithms. Our work also involves analysis of power, performance and area, and comparisons thereof. The comparison is based on metrics obtained after implementing the algorithms in Synopsys using the 90 nm UMC Faraday library. The ECDH algorithm is found to be better than the others as far as power and area are concerned.
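The key agreement being compared can be illustrated with a toy finite-field Diffie-Hellman exchange; ECDH follows the same pattern with modular exponentiation replaced by elliptic-curve scalar multiplication. The modulus below is chosen only for readability and is not a secure parameter choice:

```python
import secrets

# Toy parameters for illustration only: 2**127 - 1 is prime, but real
# systems use standardized safe-prime groups or elliptic curves.
p = 2**127 - 1
g = 3

def keypair():
    """Generate a private exponent and the matching public value g^priv mod p."""
    priv = secrets.randbelow(p - 2) + 1
    pub = pow(g, priv, p)
    return priv, pub

# Each side combines its own private key with the peer's public value.
a_priv, a_pub = keypair()
b_priv, b_pub = keypair()
shared_a = pow(b_pub, a_priv, p)
shared_b = pow(a_pub, b_priv, p)
assert shared_a == shared_b  # both sides derive the same shared secret
```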

ICACCI--18.10 16:45 Feature Selection for Novel Fingerprint Dynamics Biometric Technique Based on PCA
Ishan Bhardwaj and Narendra D Londhe (National Institute of Technology Raipur, India); Sunil Kumar Kopparapu (Tata Consultancy Services, India)

Fingerprint dynamics is a recently introduced behavioral biometric technique based on time-derived parameters from multi-instance finger scan actions. Various related features can be extracted from the recorded time stamps. However, not all of them contribute to improved classification accuracy, and using them all may result in high dimensionality of the data. High dimensionality leads to higher computation cost for calculating the features and a low classification rate. Thus, it is crucial to select the best features for efficient system performance. Principal Component Analysis (PCA) is a popular technique for dimensionality reduction and has been applied to a wide range of applications. However, conventional PCA-based methods have the disadvantage of using all the features when transforming to a lower-dimensional space. In this paper, we follow a PCA-based method which selects the most dominant feature subset out of the feature pool at hand, without transforming the original features. The performance of the selected features is assessed using various classification paradigms. The results confirm successful selection of dominant feature subsets of fingerprint dynamics using PCA.
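The paper's exact selection criterion is not given in the abstract; one common way to select original features via PCA without transforming them is to rank features by their eigenvalue-weighted loadings on the leading principal components, sketched here as an assumption-laden illustration:

```python
import numpy as np

def pca_feature_select(X, k, n_components=2):
    """Rank the original features by their loadings on the top principal
    components and keep the k most dominant, without projecting X."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalue order
    top = eigvecs[:, -n_components:]             # leading components
    weights = eigvals[-n_components:]            # their explained variance
    score = (np.abs(top) * weights).sum(axis=1)  # per-feature dominance score
    return np.argsort(score)[::-1][:k]           # indices of the top-k features
```

The weighting by eigenvalue is one of several plausible dominance criteria; the selected indices refer to the original, untransformed features.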

ICACCI--18.11 17:00 Energy Efficient Multipath Routing for Wireless Sensor Networks: A Genetic Algorithm Approach
Suneet Kumar Gupta (Bennett University Gr Noida, India); Pratyay Kuila (National Institute of Technology Sikkim, India); Prasanta Kumar Jana (Indian Institute of Technology(ISM) Dhanbad, India)

Energy efficiency and fault tolerance are the two most important factors that must be considered in the deployment of any Wireless Sensor Network (WSN). Multipath routing is an efficient solution for the fault tolerance of WSNs. In this paper, we propose an algorithm for multipath routing in WSNs which is also energy efficient. The proposed algorithm is based on the popular meta-heuristic technique of Genetic Algorithms (GA). In the proposed algorithm, the routing schedule is prepared at the base station (BS), which shares it with all the nodes of the network. The proposed algorithm has an efficient fitness function derived from various parameters, such as the distance between sender and receiver nodes, the distance from the next-hop node to the BS, and the number of hops needed to send data from the next-hop node to the BS. The proposed algorithm is tested through simulation and evaluated with various performance metrics.
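The fitness ingredients named in the abstract (sender-receiver distance, next-hop-to-BS distance, hop count) can be combined in many ways; the form and weights below are purely illustrative and are not the authors' function:

```python
import math

def dist(a, b):
    """Euclidean distance between two node positions (x, y)."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def fitness(route, bs):
    """Lower-is-better cost of one candidate route (a GA chromosome):
    hop-by-hop distance, plus distance of each relay to the BS, plus a
    hop-count penalty. The 0.5 and 2.0 weights are illustrative only."""
    hop_cost = sum(dist(route[i], route[i + 1]) for i in range(len(route) - 1))
    bs_cost = sum(dist(n, bs) for n in route[1:])
    return hop_cost + 0.5 * bs_cost + 2.0 * (len(route) - 1)
```

A GA would evolve a population of such routes, selecting and recombining the lowest-cost chromosomes.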

ICACCI--20: ICACCI-20: Embedded Systems/Computer Architecture and VLSI/Adaptive Systems/ICAIS'16 (Short Papers)

Room: LT-4 (Academic Area)
Chairs: Vivek A Bohara (Indraprastha Institute of Information Technology, Delhi (IIIT-Delhi), India), Sunil Kumar (The LNM Institute of Information Technology (LNMIIT), India)
ICACCI--20.1 14:30 A Synthesis Approach for ESOP-based Reversible Circuit
Chandan Bandyopadhyay and Shalini Parekh (Indian Institute of Engineering Science and Technology (IIEST)Shibpur, India); Hafizur Rahaman (Bengal Engineering and Science University, Shibpur, India)

In recent years, the design of reversible quantum circuits has received immense priority in the nano-scale industry, and chip-level implementation of such circuits is under investigation. Hence, efficient synthesis schemes for reversible quantum circuits have become a significant challenge for the research community. In this work, we propose a reversible circuit synthesis scheme that finds the best neighbor among the multiple output functions to share its own functional data with that neighbor and design improved circuits. This approach is best suited for reversible circuits based on the ESOP representation. We successfully tested our approach on large functions and significant improvement is achieved. A comparative analysis with related works is presented to validate the proposed scheme.

ICACCI--20.2 14:45 Antenna Selection and Transmit Beamforming in MIMO Systems Using Delayed Channel Information At the Transmitter
Yogesh N Trivedi (Nirma University, India)

A multiple input multiple output (MIMO) system, with $N$ transmit and two receive antennas, is considered with transmit beamforming and transmit antenna selection. We select the best two out of $N$ antennas using delayed channel state information (CSI). Then, transmit beamforming is carried out using the same delayed CSI. A closed-form expression for the bit error rate is derived in simple form for BPSK modulation. The expression is derived as a function of the correlation coefficient ($\rho$) between the perfect CSI available at the receiver and its delayed feedback, which is used for antenna selection and transmit beamforming. We observe that the bit error rate (BER) performance degrades when the correlation $\rho$ decreases from one. We present simulation results and compare them with the analytical results. We also derive some special cases of the proposed system and compare them with results available in the literature.

ICACCI--20.3 15:00 Design and Performance Analysis of Quasi-Asynchronous SC-FDMA-CDMA System Using Quasi Complementary Sequence Sets
Shikha Singh (Indian Institute of Technology Patna, India); Avik Ranjan Adhikary (Southwest Jiaotong University, China); Abdus Samad (Indian Institute of Technology Patna, India); Sudhan Majhi (Indian Institute of Science, India & Indian Insitute of Technology Patna, India)

In this paper, we propose a modified hybrid technique, called SC-FDMA-CDMA, which is a combination of single carrier frequency division multiple access (SC-FDMA) and multicarrier code division multiple access (MC-CDMA). The system is designed based on quasi complementary sequence sets (QCSS) over a quasi-asynchronous environment. SC-FDMA-CDMA provides a lower peak-to-average power ratio than MC-CDMA. The QCSS code is generated over the complex roots of unity and has a low correlation property, which makes SC-FDMA-CDMA suitable for the quasi-asynchronous environment. In addition, QCSS can support more users than perfect complementary sequence sets. A complete transceiver structure is provided in detail. The system performance has been derived from simulation and analytical studies. It has been observed that quasi-asynchronous SC-FDMA-CDMA based on QCSS performs better than quasi-asynchronous SC-FDMA-CDMA based on Walsh-Hadamard or complete complementary codes.

ICACCI--20.4 15:15 Blind Symbol Rate Estimation by Exploiting Cyclostationary Features in Wavelet Domain
Sushant Kumar (IIIT Delhi, India); Vivek A Bohara (Indraprastha Institute of Information Technology, Delhi (IIIT-Delhi), India); Sumit Jagdish Darak (IIIT-Delhi, India)

In multi-standard wireless communication receivers, estimation of the symbol rate is critical to blindly demodulate the received signal. Symbol rate estimation at high signal-to-noise ratio (SNR) has been studied extensively in the literature, and many computationally efficient methods have been proposed. However, symbol rate estimation in low SNR environments is still a challenging task. In this paper, a new method for accurately estimating the symbol rate of a received signal is proposed. To the best of our knowledge, the proposed method is the first to exploit cyclostationary features of the received signal in the wavelet domain. Simulation results validate the superiority of the proposed method over others, especially at low SNR values. Finally, a detailed complexity analysis based on the total number of gate counts is presented.

ICACCI--20.5 15:30 Effortless Exchange of Personal Health Records Using Near Field Communication
Sujadevi VG (Amrita Vishwa Vidyapeetham, India); Twintu Kumar and Arunjith A S (Amrita University, India); Hrudya P (Research Associate at Amrita Center for Cyber Security, India); Prabaharan Poornachandran (Amrita University, India)

Personal health records (PHRs) either reside in the hospital or with the patient in the form of paper-based records. A PHR in printed form is bulky and inconvenient for the user to carry. Since smartphones have become ubiquitous and a necessity in day-to-day life, and most of them are equipped with contactless near field communication (NFC), we propose to exploit the inherent advantages of NFC to transfer personal health records from healthcare settings to the patient's smartphone and vice versa. This empowers patients to always carry their personal health records with them, which comes in handy during medical emergencies, when quick access to the PHR is essential for immediate first aid and medical intervention. In this paper we show that NFC's peer-to-peer mode, which combines NFC and Bluetooth technologies, can be leveraged for secure and seamless transfer of personal health records from one healthcare network to another. We have analyzed and presented the advantages and disadvantages of this method and demonstrated the effectiveness of a novel method that uses NFC for sharing personal health records.

ICACCI--20.6 15:45 Mitigation of Negative Delay Via Half CP Shift
Ajay Sharma (Parallel Wireless, India); Somasekhar Pemmasani (Parallel Wireless)

The Cyclic Prefix (CP) is a vital necessity in today's 4G technology. 4G technologies such as Long Term Evolution (LTE) and Worldwide Interoperability for Microwave Access (WiMAX) function on the principle of Orthogonal Frequency Division Multiplexing (OFDM). When transmitted signals arrive at the receiver by more than one path of different length, the received signals are staggered in time; this is multipath propagation. To mitigate the effect of dispersive channel distortion caused by random channel delay spread, the Cyclic Prefix (CP) is introduced to eliminate Inter-Symbol Interference (ISI). In a scenario of positive delay the CP does prove to be useful, but in the case of negative delay, the CP does not aid in the mitigation of ISI, considering that the FFT window starts at the CP and OFDM symbol boundary. This can cause spillover of the OFDM symbols, degrading system performance. This paper highlights the impact of negative delay in such systems and proposes a method to mitigate the issue.
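The CP mechanics the paper builds on can be sketched briefly: because the prefix is a copy of the symbol's tail, starting the FFT window anywhere inside the CP yields only a cyclic rotation of the symbol, which the equalizer can absorb as a per-subcarrier phase ramp. The `shift` parameter below is an illustrative stand-in for a shifted window start such as the half-CP shift, not the paper's exact scheme:

```python
import numpy as np

def add_cp(symbol, cp_len):
    """Prepend the last cp_len samples of an OFDM symbol as a cyclic prefix."""
    return np.concatenate([symbol[-cp_len:], symbol])

def remove_cp(rx, cp_len, shift=0):
    """Strip the CP. `shift` moves the FFT window start earlier into the CP
    (e.g. shift = cp_len // 2 for a half-CP shift); because of the cyclic
    structure this produces a rotated, not corrupted, copy of the symbol."""
    start = cp_len - shift
    return rx[start:start + len(rx) - cp_len]
```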

ICACCI--20.7 16:00 Design of CPW to CPS Printed Balun for Wideband Applications
Darshan Shah (Universal, India); Ravish Singh (Thakur Educational Trust, India); Shailendra Shastri (Thakur College of Engineering, India)

Development in the field of broadband access technologies is on the rise due to increasing demand for high-speed data services for portable devices. Despite recent advances in broadband wireless technologies, a number of critical issues remain to be resolved. One of the major concerns is the implementation of compact antennas that can operate in a wide frequency band. Another important problem affecting antenna performance is selecting a proper wideband feed network. The balun provides not only balanced fields but also impedance matching to the antenna. Different kinds of baluns have been developed over the past decades. Since 1994, several CPW (Coplanar Waveguide) to CPS (Coplanar Stripline) baluns have been reported. However, these baluns are band limited. To overcome these issues, two designs using CPW to CPS wideband baluns are proposed in this paper. The first design uses a symmetric CPW to CPS balun, which uses bond wires and radial slots to do away with the discontinuities between CPW and CPS; we obtain a bandwidth of 2.7 GHz, from 1 to 3.7 GHz. The second design uses an asymmetric CPW to CPS balun, which does away with the discontinuities without using radial slots and bond wires; we obtain a bandwidth of 5.8 GHz, from 2.2 to 8 GHz.

ICACCI--20.8 16:15 Predicting Performance of Applications on Multicore Platforms
Priti Ranadive (KPIT Technologies Ltd. India, Principal Scientist, India); Vinay G Vaidya (KPIT Technologies Ltd., India)

Porting sequential applications to multicore platforms is time-consuming and costly. Hence, it is desirable to predict what performance benefits one would get if an application were ported to a multicore platform, before undertaking the migration. In this paper, we present a method to mathematically model and then predict the performance of sequential applications on real homogeneous multicore platforms. The model is created from several executions of motivational code with varying parameters. The results are validated by predicting the execution cycles for a few benchmark codes; the predictions are 92% accurate on a real homogeneous multicore platform.

ICACCI--20.9 16:30 Soft Sensor for Inferential Control in Non-Isothermal CSTR
Sucheta Singh (NSIT, Delhi, India); Vijander Singh (Netaji Subhas Institute of Technology, Delhi University, India); Jyoti Yadav (Delhi University & Netaji Subhas University of Technology, India); Asha Rani (NSIT, University of Delhi, New Delhi, India)

The idea of this work is to design a soft sensor to be used for inferential control of the concentration of a Continuous Stirred Tank Reactor (CSTR) under time-varying circumstances. The approach is based on artificial intelligence, in which various types of Artificial Neural Network (ANN)-based soft sensors are designed. Two types of controllers are used: a proportional controller for temperature control and a proportional-integral controller with a soft sensor for product concentration control. The proposed technique offers numerous advantages, such as improvement in product quality, online data collection, analysis based on previous results, and real-time knowledge of concentration values. The simulation results show that the soft sensor adequately controls the concentration of the CSTR.

ICACCI--20.10 16:45 LabVIEW Based Four DoF Robotic Arm
Sagar Giri and A Ravishankar (Savitribai Phule Pune University, India); Rahul Shivaji Pol (Pune University & VIIT Pune, India); V Ghode (Savitribai Phule Pune University, India)

The aim of this research paper is to elaborate the design, development and implementation steps involved in building a superior four degrees of freedom (DoF) robot arm with control that is more organized and low cost. A four DoF robotic arm is a kind of robot part, usually programmable, with functions similar to a human arm. The robotic arm is designed with four degrees of freedom to perform various associated tasks, such as material handling and shifting, and can serve as an assistant in industry. The robot arm is built with a number of servomotors that perform arm movements concurrently. The controlling action of the robotic arm is managed through a graphical coding interface, LabVIEW. LabVIEW communicates the appropriate movement angles to the robotic arm, which drives position-controllable servomotors. The robotic arm runs in three different modes: manual mode, semi-autonomous mode and autonomous mode. The paper elaborates all steps involved in the design, realization, testing, and validation of the proposed robot arm.

ICACCI--20.11 17:00 Performance Evaluation of Fuzzy Logic Controlled Voltage Source Inverter Based Unified Power Quality Conditioner for Mitigation of Voltage and Current Harmonics
Anupam Kumar (NIT Srinagar, India); Abdul Bhat (National Institute Of Technology, Srinagar, India); Shubhendra Pratap (NIT Srinagar, India)

This paper presents a comprehensive performance analysis of a fuzzy-logic-controlled Voltage Source Inverter (VSI) based Unified Power Quality Conditioner (UPQC) to improve electric power quality at the distribution level. In recent years the Unified Power Quality Conditioner (UPQC) has been used as a universal active power conditioning device to mitigate both voltage and current harmonics in a polluted power system network. A UPQC can be fabricated employing either current-source or voltage-source inverters. In the present paper, reference and switching signals are derived using a fuzzy logic controller and robust hysteresis-band PWM techniques. A fuzzy-logic-based control scheme has been developed for the production of compensating voltages. The resultant compensation system eliminates voltage as well as current harmonics with good dynamic response. Extensive simulation results using Matlab/Simulink and SimPowerSystems software for an R-L load connected through an uncontrolled bridge rectifier are presented for performance evaluation.

ICACCI--20.12 17:15 Outage Analysis of Two-way Cooperative Spectrum Sharing Protocol under Nakagami-m Fading
Saloni Mittal (Indraprastha Institute of Technology, Delhi, India); Vivek A Bohara (Indraprastha Institute of Information Technology, Delhi (IIIT-Delhi), India)

This paper illustrates the performance of a two-way cooperative spectrum sharing (CSS) protocol under Nakagami-m fading. In the CSS protocol, primary and secondary systems operate over the same frequency band, albeit with different priorities. The primary system, which has higher priority, seeks the assistance of the low-priority secondary system to improve its quality of service (QoS) in exchange for allowing the secondary system to access its spectrum. In our proposed framework, we have two primary systems that are required to communicate under deteriorating channel conditions. As a consequence, the secondary system acts as a half-duplex relay, assisting in relaying information between the primary systems. The secondary system in return benefits from opportunistic spectrum access (OSA). Closed-form expressions for the outage probability of the primary and secondary systems are derived by varying the shape parameter (m) and spread control parameter (Ω) of the Nakagami-m fading channels. To validate the proposed analysis, comparisons between simulation and theoretical results are also presented.
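For integer shape parameter m, the outage probability of a Nakagami-m link has a simple closed form, since the channel power gain is Gamma-distributed. The sketch below is a generic single-link check (not the paper's two-way protocol analysis) comparing the closed form against Monte Carlo:

```python
import math
import random

def outage_closed_form(m, omega, gamma_th, snr):
    """P[snr * h^2 < gamma_th] under Nakagami-m fading: the power gain h^2
    is Gamma(m, omega/m), so for integer m the outage probability is the
    regularized lower incomplete gamma function in closed form."""
    x = m * gamma_th / (omega * snr)
    return 1 - math.exp(-x) * sum(x**k / math.factorial(k) for k in range(m))

def outage_monte_carlo(m, omega, gamma_th, snr, trials=200_000, seed=1):
    """Estimate the same outage probability by sampling the Gamma power gain."""
    rng = random.Random(seed)
    fails = sum(snr * rng.gammavariate(m, omega / m) < gamma_th
                for _ in range(trials))
    return fails / trials
```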

ICACCI--20.13 17:30 Performance Evaluation of Circuit Level Approaches for Radiation Hardened Primitive Gates
Vaibhav Sharma and Arvind Rajawat (Maulana Azad National Institute of Technology, India)

Integrated circuit technology has made progress by leaps and bounds in general, but in the space environment, radiation from ionized particles still remains an Achilles' heel for error-free working of integrated circuits. With continuously shrinking feature sizes, combinational logic (CL) is becoming more susceptible to soft errors. This work presents a simulation study of several circuit-based approaches adopted for hardening primitive gates, as they are the essential elements of any CL. Further, the vital performance parameters, Power-Area-Delay (PAD), are calculated for the primitive gates designed in CMOS 0.18 µm technology using all the approaches considered. Moreover, possible extensions of these techniques are discussed briefly.

ICACCI--20.14 17:45 Human Energy Interacting Interface (HEII) (Interface to Transform Environmental Energy Into Human Acquired Energy)
Anil K Dubey (ABES Engineering College Ghaziabad, Uttar Pradesh, India); Mohan Kolhe (University of Agder, Norway); Vikash Singh (I G National Tribal University Amarkantak, MP, India)

The human body acquires energy in many forms to manage its activities. This energy enters the human body in the form of food, beverages or other external sources. A few of the vital nutrients are introduced into the human body via injections, capsules, medicines etc. A few are absorbed by the body automatically from the environment; e.g., vitamin D is absorbed from sunlight. The human body absorbs limited energy from the environment due to the lack of proper transformation of energy into forms the body can use. Hence there is a requirement for an interface that transforms environmental energy into human-usable energy. To resolve the above-mentioned problem we propose an interface that has the capability to transform environmental energy into the various forms of energy required by the human body.

ICACCI--20.15 18:00 Experimental Study of a Novel Variant of Fiduccia-Mattheyses (FM) Partitioning Algorithm
Mitali Sinha (Veer Surendra Sai University of Technology, Burla, India); Suchismita Pattanaik (Sambalpur University Institute of Information Technology, Burla, Odisha, India); Rakesh Mohanty (Veer Surendra Sai University of Technology Burla); Prachi Tripathy (Veer Surendra Sai University of Technology Burla, India)

Partitioning is a well-studied research problem in the area of VLSI physical design automation. In this problem, the input is an integrated circuit and the output is a set of disjoint blocks of almost equal size. The main objective of partitioning is to assign the components of the circuit to blocks so as to minimize the number of inter-block connections. A linear-time partitioning algorithm using hypergraphs was proposed by Fiduccia and Mattheyses, popularly known as the FM algorithm. Most of the hypergraph-based partitioning algorithms proposed in the literature are variants of the FM algorithm. In this paper, we propose a novel variant of the FM algorithm using a pairwise swapping technique. We have performed a comparative experimental study of the FM algorithm and our proposed algorithm using two benchmark datasets, ISPD98 and ISPD99. Experimental results show that the performance of our proposed algorithm is better than that of the FM algorithm on these datasets.
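The details of the proposed variant are in the paper; as a simplified illustration of cut cost and pairwise swapping on a hypergraph (nets given as sets of cells, partition as a cell-to-block map), a greedy refinement pass might look like:

```python
def cut_size(nets, part):
    """Number of nets that span both blocks of the bipartition."""
    return sum(len({part[v] for v in net}) > 1 for net in nets)

def pairwise_swap(nets, part, passes=3):
    """Greedy refinement: try swapping every cross-block pair of cells and
    keep any swap that reduces the cut. Swapping (rather than moving one
    cell, as classic FM does) preserves the block-size balance."""
    cells = list(part)
    for _ in range(passes):
        improved = False
        for i in range(len(cells)):
            for j in range(i + 1, len(cells)):
                a, b = cells[i], cells[j]
                if part[a] == part[b]:
                    continue
                before = cut_size(nets, part)
                part[a], part[b] = part[b], part[a]
                if cut_size(nets, part) < before:
                    improved = True
                else:
                    part[a], part[b] = part[b], part[a]  # revert the swap
        if not improved:
            break
    return part
```

Real FM implementations avoid recomputing the full cut per move by maintaining incremental gain buckets; the brute-force recomputation here is only for clarity.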

ICACCI--20.16 18:15 Removal of Random-Valued Impulse Noise Using Detection Filters and Group Sparse Modeling
Divya Velayudhan (Khalifa University, Abu Dhabi, United Arab Emirates); Salim Paul (SCT College of Engineering, India)

High-quality noise-free images constitute an integral part of all image processing applications. Image acquisition or transmission stages may corrupt images with impulse noise, which is of two main types: salt-and-pepper noise and random-valued noise. Of the two, random-valued noise is the more difficult to remove due to its randomness, and this problem is addressed in the paper. The proposed method takes advantage of two important properties of images: local sparsity and self-similarity. The technique for random-valued impulse noise removal has two stages. The first stage, called the impulse detection stage, identifies the outlier candidates affected by impulse noise. The second stage reconstructs the image from the unaffected partial random samples. A robust split Bregman iterative algorithm is used to solve the optimization problem. Experimental results support the effectiveness of the algorithm.
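The paper's detection filters are not specified in the abstract; a minimal illustrative detector for random-valued impulses flags pixels that deviate strongly from their 3x3 neighbourhood median (threshold chosen arbitrarily for the sketch):

```python
def detect_impulses(img, threshold=40):
    """Flag interior pixels whose value deviates from the median of their
    3x3 neighbourhood by more than `threshold` -- a simple outlier
    detector for random-valued impulse noise."""
    h, w = len(img), len(img[0])
    noisy = set()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            if abs(img[y][x] - window[4]) > threshold:  # window[4] = median of 9
                noisy.add((y, x))
    return noisy
```

A reconstruction stage would then inpaint only the flagged pixels from the remaining clean samples.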

ICACCI--21: ICACCI-21: Artificial Intelligence and Machine Learning/Data Engineering/Biocomputing (Short Papers)

Room: LT-6(Academic Area)
Chair: Punam Bedi (University of Delhi, India)
ICACCI--21.1 14:30 Multi-Agent Framework for Automatic Deployment and State Restoration in Iterative Software Development Process
Ramesh Guntha (Amrita Center for Wireless Networks and Applications, Amrita Vishwa Vidyapeetham University, India); Balaji Hariharan and Venkat Rangan (Amrita University, India)

Building and deploying software for testing and bug fixing happens quite frequently in the software development life cycle. Commonly, client-server software needs just one client machine and one server machine to be redeployed before every testing cycle begins. Our smart classroom [1, 2] e-Learning system requires 3 computers for a given classroom to capture and stream HD video from 5 video cameras placed at different angles or perspectives. This is done to achieve gaze alignment across all the remote participants by showing the appropriate perspective at each remote classroom display, based on the current teaching mode of either lecturing or interaction. Most test cases require us to test across the full sample setup consisting of one teacher classroom and three remote student classrooms, thus requiring us to install the client software on 12 client machines each time. Our measurements showed that it takes 2-4 minutes to walk to a system, stop the running application, insert the USB hard drive, uninstall the software, reinstall the software, log in to the software, choose the camera and audio publishing settings, publish audio/video, and arrange the video screens. So for the 12 client systems it would take close to 30 to 40 minutes for each test cycle, which causes major disruption to the flow of the development and testing process, which happens hundreds of times in a 3- to 6-month release cycle. In this paper we present our automatic deployment and restoration framework, which refreshes all the client software and restores the current publishing and screen settings in less than 30 seconds for all the computers, resulting in huge productivity improvements in our software development life cycle.

ICACCI--21.2 14:45 Predictive Maintenance for Wind Turbine Diagnostics using Vibration Signal Analysis based on Collaborative Recommendation Approach
Gopi Krishna Durbhaka (Research Scholar); Barani S. (Sathyabama University, Chennai, India)

Decision-making, knowledge-based methods are applied to analyze a system and identify its in-depth diagnosis and fault behavior by simulating expert knowledge from a similar domain. Rule-based systems with predefined conditions have been replaced or upgraded to expert knowledge-based systems, and further upgraded by applying machine learning techniques in which association rules, reasoning and decision-making processes are treated as expert knowledge for resolving diagnostics during critical scenarios. Condition monitoring techniques have been widely applied to analyze the behavior pattern of a system. In this paper, vibration signal analysis is performed to study and extract the behavioral pattern of the bearings. Machine learning models such as k-Nearest Neighbor (k-NN), Support Vector Machine (SVM) and k-Means are then applied to classify the type of fault. Finally, a Collaborative Recommendation Approach (CRA) is applied to analyze the similarity of all the model results, so as to suggest in advance the replacement and correction of deteriorating units and prevent severe system breakdowns and disruptions.

ICACCI--21.3 15:00 EDA Wavelet Features as Social Anxiety Disorder (SAD) Estimator in Adolescent Females
Vivek Sharma (Lovely Professional University, India); Neelam Prakash (PEC University of Technology, Chandigarh, India); Parveen Kalra (PEC University of Technology, India)

Social Anxiety Disorder (SAD) affects an individual's social behaviour and results in excessive self-consciousness, negative judgmental thoughts and uncontrollable fear. It is visible not only in behavior but also in the patterns of physiological signals (such as electrodermal activity), as these are associated with the autonomic nervous system (ANS). Previous studies have used various features of Electrodermal Activity (EDA) such as Mean SCR, Min SCR, Range, Slope and Max SCL to distinguish between anxious and control subjects during rest and anxious tasks/situations. This research explores the use of EDA wavelet features to estimate the social anxiety disorder of female subjects via a Multi Layer Perceptron (MLP). Joint time-frequency domain features of the EDA signal were extracted via wavelet analysis. A backward regression model with p<0.05 was used for feature selection. The machine learning algorithm developed in this research was able to classify SAD with an accuracy of 82.3% during training, 85.7% during testing and 80% on holdout cases.

ICACCI--21.4 15:15 Evaluating Student Performance Using Fuzzy Inference System in Fuzzy ITS
Pooja Asopa, Sneha Asopa, Nisheeth Joshi and Iti Mathur (Banasthali University, India)

The concept of intelligent agents has emerged from artificial intelligence and cognitive science. These intelligent agents can act as tutors and support students in problem solving across various domains. Agents based on fuzzy logic are termed fuzzy agents. They can be used to model the uncertain behavior of various complex problems and to predict the uncertainty level of students. Systems that provide learners with step-by-step instructions, tailored to their own learning status through computer-based instruction, are called Intelligent Tutoring Systems (ITS). An ITS with fuzzy characteristics is called a fuzzy ITS. In this paper, a fuzzy inference system for a fuzzy ITS is developed and evaluated in MATLAB, which will help students enhance their learning skills.

ICACCI--21.5 15:30 Feasibility Assessment of Neural Network Based Expert System Prototype for Evaluating Motivational Strategies
Viral Nagori (GLS Institute of Computer Technology (MCA) & GLS University, India); Bhushan H Trivedi (GLS Institute of Computer Technology (MCA), GLS University)

The main objective of the study is to check the feasibility of designing and implementing a neural network based expert system for evaluating motivational strategies from employees' perspectives in ICT human resources. If feasibility exists, the second objective is to provide a proof of concept that full-fledged development of such an expert system can be carried out with the desired results. The motivation for the study is that very few expert systems have been built for the HR domain, and we found no existing expert system for the domain we are targeting. To check operational feasibility, we initially implemented the prototype in C++. After the initial success of the prototype, we switched to MATLAB to provide a proof of concept; the reasons for switching from C++ to MATLAB are discussed in the paper. We used the back propagation algorithm to implement the neural network based expert system, and we compare the results of the prototypes implemented in C++ and MATLAB. Based on this comparison, we decided to develop and implement the full-fledged prototype in MATLAB. Having successfully implemented the prototype in two different languages, we conclude that operational and technical feasibility exists for the development of a neural network based expert system prototype. The prototype shows that our approach can help HR managers determine the right set of employee-centric motivational strategies and may help them reduce the attrition rate.

ICACCI--21.6 15:45 Altruistic Decision Making Approach to Resolve the Tragedy of the Commons
Avadh Kishor, Tarun Garg and Rajdeep Niyogi (Indian Institute of Technology Roorkee, India)

The Tragedy of the Commons (TOC) is a social dilemma in which rational, self-interested agents utilizing a shared resource of fixed capacity lead to inefficient utilization of the resource. In any society, two crucial aspects, individual and social concerns, are responsible for the ineffective performance of both the system and the individual. These two aspects create a dilemma for an agent: whether to contribute or to exploit (enjoy its own profit without caring about society). A proper balance between individual and social concerns is therefore needed to avoid the dilemma. To solve this problem, we propose a decentralized approach based on the altruistic behavior of agents, known as the \emph{Altruistic Decision Making Approach} (ADA). In ADA, agents communicate with each other and adjust their load according to the current context, i.e., the agents dynamically vary their load to balance individual and social considerations and also work in a resource-bounded fashion. To judge the efficacy of ADA, it is compared with another state-of-the-art decentralized approach under different social conditions and is found to outperform its competitor. We thus observe that ADA is a simple, efficient and powerful decentralized approach for solving the TOC problem.

ICACCI--21.7 16:00 Automatic Expression Recognition and Expertise Prediction in Bharatnatyam
Pooja Venkatesh (International Institute of Information Technology - Bangalore, India); Dinesh Babu Jayagopi (IIIT Bangalore, India)

Bharatnatyam is an ancient Indian classical dance form consisting of complex postures and expressions. One of the main challenges in this dance form is to perform expression recognition and use the resulting data to predict the expertise of a test dancer. In this paper, expression recognition is carried out for the 6 basic expressions in Bharatnatyam using the iMotions tool. The intensity values obtained from this tool for 4 distinct expressions, Joy, Surprise, Sad and Disgust, are used as our feature set for classification and predictive analysis. The recognition was performed on our own dataset consisting of 50 dancers with varied expertise ratings. Logistic Regression performed best for the Joy, Surprise and Disgust expressions, giving an average accuracy of 80.78%, whereas a Support Vector Machine classifier with a radial basis kernel performed best for the Sad expression, giving an accuracy of 71.36%. A separate analysis of positive and negative emotions is carried out to determine the expertise of each rating on the basis of these emotions.

ICACCI--21.8 16:15 A Fast Chromatic Correlation Clustering Algorithm
Jaishri Gothania (Rajasthan Technical University, India)

Emerging sources of information like social networks, bibliographic data, and protein interaction networks exhibit complex relations among data objects and need to be processed differently from traditional data analysis. Correlation clustering is one such new way of viewing and analyzing data to detect patterns and clusters. Being a new field, it offers plenty of scope for research. This paper discusses a heuristic method for the chromatic correlation clustering problem, in which data objects, as nodes of a graph, are connected through color-labeled edges representing relations among objects. The proposed heuristic performs better than previous works.

ICACCI--21.9 16:30 A Novel K-Means Based Clustering Algorithm for Big Data
Ankita Sinha (Indian School of Mines, India); Prasanta Kumar Jana (Indian Institute of Technology(ISM) Dhanbad, India)

Data generation has seen tremendous growth in the past decade, and managing such a huge amount of data is a big challenge. Clustering can serve as a solution: it divides the data into smaller groups based on the level of similarity among the objects. K-Means is one of the most popular and robust clustering algorithms. However, its major drawback is that the number of clusters must be supplied as input, which is not known in advance, particularly for real-world data sets. In this paper, we propose a K-Means based clustering algorithm for big data that automates the choice of the number of clusters. The algorithm is implemented using Spark, a better programming framework than MapReduce. The proposed algorithm is simulated extensively with large-scale synthetic data sets as well as real-life data on a 4-node cluster. The simulation results demonstrate better performance of the proposed algorithm over the scalable K-Means++ implemented in the MLlib library of Spark.
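The abstract does not spell out the rule used to automate the number of clusters, but the general idea (grow k until the gain in cluster compactness levels off) can be sketched in plain Python. The elbow-style criterion and the `drop` threshold below are illustrative assumptions, not the authors' method:

```python
import random


def dist2(a, b):
    """Squared Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))


def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's k-means; returns (centroids, labels, sse)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: dist2(p, centroids[j]))
                  for p in points]
        for j in range(k):
            members = [p for p, lab in zip(points, labels) if lab == j]
            if members:  # keep old centroid if the cluster emptied out
                centroids[j] = tuple(sum(c) / len(members)
                                     for c in zip(*members))
    labels = [min(range(k), key=lambda j: dist2(p, centroids[j]))
              for p in points]
    sse = sum(dist2(p, centroids[lab]) for p, lab in zip(points, labels))
    return centroids, labels, sse


def auto_k(points, k_max=6, drop=0.5):
    """Grow k until the relative SSE improvement falls below `drop`."""
    prev = kmeans(points, 1)[2]
    for k in range(2, k_max + 1):
        sse = kmeans(points, k)[2]
        if prev > 0 and (prev - sse) / prev < drop:
            return k - 1
        prev = sse
    return k_max
```

On well-separated data, `auto_k` stops at the smallest k after which further splits barely reduce the within-cluster sum of squared errors.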

ICACCI--21.10 16:45 Cognitive State Classification Using Clustering-Classifier Hybrid Method
J Siva Ramakrishna (M. S. Ramaiah University of Applied Sciences, India); Hariharan Ramasangu (Research, India)

Classification is a familiar technique used to classify objects, while clustering techniques segment data into multiple groups in which objects within a cluster exhibit similar characteristics. Machine learning classifiers applied to Functional Magnetic Resonance Imaging (fMRI) data help classify cognitive states. Clustering methods such as K-means, Hierarchical, Spectral, and Consensus Clustering Evidence Accumulation (CCEAC) are used to segment brain regions effectively from fMRI data. Selecting voxels as features is a challenging task in cognitive state classification. In this paper, a new hybrid method for classification is proposed by combining clustering and a classifier. K-means, Hierarchical, Spectral, and CCEAC are applied to the StarPlus fMRI data to obtain clusters. The dissimilarity metrics Normalized Mutual Information (NMI), Variation of Information (VI) and Rand Index (RI) are used in CCEAC to select optimal clusters. The main focus of the present work is to choose a small number of voxels as features using clustering methods and then build a classifier on those voxels. The proposed method has been evaluated on the standard StarPlus data and shows good classification accuracy with a small number of voxels.

ICACCI--21.11 17:00 Measuring Stock Price and Trading Volume Causality Among Nifty50 Stocks Using the Toda-Yamamoto Method
Abinaya P (Amrita School of Business, India); Varsha Suresh Kumar (Amrita University, India); Balasubramanian P (Amrita School of Business, Amrita University, India); Vijay Krishna Menon (GadgEon Smart System Pvt. Ltd., India)

This paper analyzes the existence of a Granger causality relationship between stock prices and trading volume using minute-by-minute data (transformed from tick-by-tick data) of Nifty 50 companies traded at the National Stock Exchange, India, for a one-year period from July 2014 to June 2015. Since the time series are not integrated of the same order, the Toda-Yamamoto methodology was applied to test for causality. The results show that 29 of the 50 companies have two-way (bi-directional) causality between price and volume, 15 companies have a one-way (unidirectional) causal relationship in which price causes volume but volume does not cause price, and the other 6 companies have no causal relationship in either direction. The study suggests that the Efficient Markets Hypothesis does not hold true for these 29 companies during the period of this study.
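The Toda-Yamamoto idea is to fit a VAR in levels with d_max extra lags and then Wald-test only the original p lags of the candidate cause. A minimal bivariate ordinary-least-squares sketch (NumPy) is shown below; the variable names and the simple F-form of the Wald test are illustrative assumptions, not the authors' exact setup:

```python
import numpy as np


def ty_wald_f(y, x, p=1, d_max=1):
    """Lag-augmented Granger test (Toda-Yamamoto idea, bivariate sketch):
    regress y on p+d_max lags of y and x, F-test the first p lags of x."""
    L = p + d_max
    rows, targets = [], []
    for t in range(L, len(y)):
        rows.append([1.0]
                    + [y[t - i] for i in range(1, L + 1)]
                    + [x[t - i] for i in range(1, L + 1)])
        targets.append(y[t])
    X, Y = np.array(rows), np.array(targets)

    def rss(cols):
        b, *_ = np.linalg.lstsq(X[:, cols], Y, rcond=None)
        r = Y - X[:, cols] @ b
        return float(r @ r)

    full = list(range(X.shape[1]))
    # restricted model drops the first p lags of x, keeping the d_max
    # augmentation lags (the Toda-Yamamoto trick)
    restr = [c for c in full if not (1 + L <= c < 1 + L + p)]
    rss_u, rss_r = rss(full), rss(restr)
    dof = len(Y) - len(full)
    return ((rss_r - rss_u) / p) / (rss_u / dof)
```

A large `ty_wald_f(price, volume)` together with a small value in the reverse direction would indicate one-way causality from volume to price.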

ICACCI--21.12 17:15 Analysis of Fluidized Catalytic Cracking Process Using Fuzzy Logic System
Aparna Nair (K J Somaiya College of Engineering, India); Rajashree Daryapurkar (KJSCE, Vidyavihar, India)

This paper investigates a procedure for non-linear modelling of the highly complex and dynamic Fluidized Catalytic Cracking process using a fuzzy logic controller. Fuzzy logic systems model complex non-linear systems with a heuristic approach: they use a human knowledge base to define the rule set, which is the backbone of the fuzzy system. The myriad applications of fuzzy logic make it an efficient tool for modelling complex chemical processes such as fluidized catalytic cracking, a process that converts heavy petroleum fractions into lighter products such as gasoline and LPG. The main aim of this paper is to analyze the various process parameters involved in the complex catalytic cracking process using a fuzzy logic controller. The results are simulated using the MATLAB Fuzzy Logic Toolbox R2013a. The real-time process data used in the paper were obtained from the FCCU division of the BPCL refinery, Mumbai.

ICACCI--21.13 17:30 A Neural Network Based Breast Cancer Prognosis Model with PCA Processed Features
Smita Jhajharia (Banasthali University, Banasthali & Delhi Technological University, Delhi, India); Harish Varshney (Malaviya National Institute of Technology, India); Seema Verma (University Banasthali Vidyapith, India); Rajesh Kumar (Malaviya National Institute of Technology, India)

Accurate identification of the diagnosed cases is extremely important for a reliable prognosis of breast cancer. Data analytics and learning based methods can provide an effective framework for prognostic studies by accurately classifying data instances into relevant classes based on the tumor severity. Accordingly, a multivariate statistical approach has been coupled with an artificial intelligence based learning technique to implement a prediction model. Principal components analysis pre-processes the data and extracts features in the most relevant form for training an artificial neural network that learns the patterns in the data for classification of new instances. The diagnostic data of the original Wisconsin breast cancer database accessed from the UCI machine learning repository has been used in the study. The proposed hybrid model shows promising results when compared with other classification algorithms used most commonly in the literature and can provide a future scope for creation of more sophisticated machine learning based cancer prognostic models.
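The PCA step that feeds the neural network can be sketched briefly; the NumPy snippet below is illustrative only and shows the feature-extraction stage, not the authors' full prognosis model:

```python
import numpy as np


def pca_fit_transform(X, k):
    """Project rows of X onto the k principal components of largest variance."""
    Xc = X - X.mean(axis=0)            # center each feature
    cov = np.cov(Xc, rowvar=False)     # feature covariance matrix
    vals, vecs = np.linalg.eigh(cov)   # eigh returns ascending eigenvalues
    order = np.argsort(vals)[::-1][:k]  # take the top-k components
    return Xc @ vecs[:, order]
```

The reduced matrix returned by `pca_fit_transform(X, k)` would then be the training input for the neural network classifier.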

ICACCI--21.14 17:45 Accelerative Gravitational Search Algorithm
Aditi Gupta and Nirmala Sharma (Rajasthan Technical University, India); Harish Sharma (Rajasthan Technical University, Kota, India)

Gravitational search algorithm (GSA) is a metaheuristic search algorithm based on Newton's law of gravity. In this article, a new variant of GSA is introduced, namely the Accelerative Gravitational Search Algorithm (AGSA). In AGSA, an acceleration coefficient is introduced into the velocity update equation to control the acceleration of each individual. In the proposed position update process, individuals are allowed to explore the search space in early iterations and to exploit it in later iterations. The reliability, robustness and accuracy of the proposed algorithm are measured through various statistical analyses over 12 complex test problems. To show the competitiveness of the proposed strategy, the reported results are compared with those of GSA, Biogeography Based Optimization (BBO), and Differential Evolution (DE).
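The abstract gives the idea but not the exact schedule of the acceleration coefficient, so the plain-Python sketch below is an illustrative GSA variant only: the gravitational constant decays over iterations, and a coefficient `c` scales the acceleration term in the velocity update (the decreasing linear schedule for `c` is an assumption, not the authors' formula):

```python
import math
import random


def agsa(f, dim, n=10, iters=100, g0=100.0, seed=1):
    """Minimal accelerative-GSA sketch minimizing f over [-5, 5]^dim."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    best = min(X, key=f)[:]
    for t in range(iters):
        fits = [f(x) for x in X]
        worst, top = max(fits), min(fits)
        m = [(worst - fi) / (worst - top + 1e-12) for fi in fits]
        M = [mi / (sum(m) + 1e-12) for mi in m]   # normalized masses
        G = g0 * math.exp(-20.0 * t / iters)      # decaying gravity
        c = 2.0 - 1.5 * t / iters                 # acceleration coefficient:
        for i in range(n):                        # large early (exploration),
            a = [0.0] * dim                       # small late (exploitation)
            for j in range(n):
                if i == j:
                    continue
                r = math.dist(X[i], X[j]) + 1e-12
                for d in range(dim):
                    a[d] += rng.random() * G * M[j] * (X[j][d] - X[i][d]) / r
            for d in range(dim):
                V[i][d] = rng.random() * V[i][d] + c * a[d]
                X[i][d] += V[i][d]
        cand = min(X, key=f)
        if f(cand) < f(best):
            best = cand[:]
    return best
```

Since the recorded best only improves, the routine returns a point at least as good as the best random initial agent.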

ICACCI--23: Signal/Image/Video/Speech Processing/Computer Vision; and Pattern Recognition & Analysis (Short Papers)

Room: LT-8(Academic Area)
Chair: Anil K Dubey (ABES Engineering College Ghaziabad, Uttar Pradesh, India)
ICACCI--23.1 14:30 Efficient Signal Processing Algorithms for Radar and Telecommunication Systems
Rostislav Sokolov, Denis Dolmatov and Renat Abdullin (Ural Federal University, Russia)

A quasi-optimal RF-pulse signal receiver algorithm is developed based on the Markov theory of nonlinear filtering. The efficiency of the synthesized algorithm is evaluated through a semi-natural experiment processing the signal mixed with Johnson or white Gaussian noise. The developed algorithm has been simulated in Simulink MATLAB, and the experiment was carried out on a National Instruments PXIe-1075 wireless communication installation. The gain in signal-to-noise ratio of the nonlinear Markov filtering algorithm over the adaptive algorithm is from 2 to 5 dB for white Gaussian noise and various Johnson noises, at a correct-reception error probability of 0.1. The SL Johnson noise has the best masking effect, and the SB Johnson noise the worst.

ICACCI--23.2 14:45 A Quantum Parallel Bi-Directional Self-Organizing Neural Network (QPBDSONN) Architecture for Extraction of Pure Color Objects From Noisy Background
Debanjan Konar (SRM University - AP & Indian Institute of Technology Delhi, India); Udit Chakraborty (Sikkim Manipal Institute of Technology, India); Siddhartha Bhattacharyya (RCC Institute of Information Technology, India); Tapan Gandhi (Indian Institute of Technology Delhi, India); Bijaya Panigrahi (Indian Institute of Technology - Delhi, India)

This paper proposes a real-time pure color image denoising procedure using a self-supervised network referred to as the Quantum Parallel Bi-directional Self-Organizing Neural Network (QPBDSONN) architecture. The proposed QPBDSONN replicates the Parallel Bi-directional Self-Organizing Neural Network (PBDSONN) architecture while exploiting the power of quantum computation. To process the three basic color components (Red, Green and Blue) of a noisy color image, QPBDSONN comprises three Quantum Bi-directional Self-Organizing Neural Network (QBDSONN) architectures operating in parallel at the input layer. Each constituent QBDSONN comprises input, intermediate (hidden) and output layers of qubit-represented neurons interconnected in an 8-connected neighborhood topology. Each QBDSONN updates its weighted interconnections, in the form of quantum states, through counter-propagation between the hidden and output layers, obviating quantum back-propagation. Rotation gates represent the interconnection weights and activation values. Finally, a quantum measurement is performed at the output layer of each constituent QBDSONN, and a subsequent fusion operation at the sink layer of QPBDSONN concatenates the processed color components into the true output. Synthetic and real-life pure color spanner images affected by different degrees of uniform and Gaussian noise are used to demonstrate the superiority of QPBDSONN over its classical counterpart in terms of processing time and shape preservation.

ICACCI--23.3 15:00 Detection of Tumor in Brain MRI Using Fuzzy Feature Selection and Support Vector Machine
Amiya Halder and Oyendrila Dobe (St. Thomas College of Engineering and Technology, India)

This paper proposes a technique to categorize a brain MRI as normal, in the absence of a brain tumor, or abnormal, in the presence of one. The proposed method is divided into two steps. First, a set of features is generated for accurately differentiating between normal and abnormal MR scan images; these features are then reduced using the fuzzy c-means (FCM) algorithm. Second, a Support Vector Machine (SVM) is used to classify the scan images into two groups, namely tumor-free and tumor-affected. The proposed method aims to produce higher specificity and sensitivity than previous methods.
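To give a feel for the clustering step, the core fuzzy c-means updates (memberships, then weighted centers) can be written compactly. This 1-D toy version is illustrative only, not the paper's feature-reduction pipeline:

```python
import random


def fcm(xs, k=2, m=2.0, iters=100, seed=0):
    """Fuzzy c-means on 1-D data; returns (centers, memberships)."""
    rng = random.Random(seed)
    centers = rng.sample(xs, k)
    u = [[0.0] * k for _ in xs]
    for _ in range(iters):
        # membership update: u_ij = 1 / sum_l (d_ij / d_il)^(2/(m-1))
        for i, x in enumerate(xs):
            d = [abs(x - c) + 1e-9 for c in centers]
            for j in range(k):
                u[i][j] = 1.0 / sum((d[j] / d[l]) ** (2.0 / (m - 1.0))
                                    for l in range(k))
        # center update: weighted mean with weights u_ij^m
        for j in range(k):
            num = sum(u[i][j] ** m * x for i, x in enumerate(xs))
            den = sum(u[i][j] ** m for i in range(len(xs)))
            centers[j] = num / den
    return centers, u
```

On well-separated data the centers converge near the two group means, and each sample's memberships sum to one.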

ICACCI--23.4 15:15 An Improved Nucleus Segmentation for Cervical Cell Images Using FCM Clustering and BPNN
Bharti Sharma (Punjabi University Regional Center for IT & Mgmt., India); Kamaljeet Mangat (Punjabi University Patiala, India)

The Pap smear test plays an important role in the early diagnosis of cervical cancer, in which human cells taken from the cervix of a patient are analysed for pre-cancerous changes. The manual analysis of these cells by an expert cytologist is a labor-intensive and time-consuming job. In this paper, an improved nucleus segmentation algorithm is proposed using FCM clustering and a BPNN. The existing FCM-based algorithm is improved by finding the optimum number of clusters instead of a fixed number. Further, shape-based features are extracted from each region and act as input to a Back Propagation Neural Network (BPNN) that classifies regions as nucleus or non-nucleus; falsely detected regions are thereby removed to produce accurate segmentation of nucleus regions. The proposed work is evaluated on the publicly available Herlev dataset. Experimental results show improvements in the precision, recall and Dice coefficient of nucleus segmentation of 1%, 7% and 5% respectively compared to existing work.

ICACCI--23.5 15:30 Recognition of Handwritten Devanagiri Numerals by Graph Representation and SVM
Mohammad Idrees Bhat and Sharada B. (University of Mysore, India)

Feature-based representation is inadequate for modeling the different writing styles, irregularity in size, complicated structural relationships and cursiveness present in unconstrained handwritten Devanagari numerals. In this paper, this insufficiency is eliminated by adopting a graph-based representation; moreover, graphs are robust to similarity deformations. Graph Dissimilarity Space Embedding is explored to extract features from the numeral graphs, and the generated feature vectors are trained on an SVM with an RBF kernel. The efficacy of the method is corroborated by extensive experiments on a benchmark dataset. From this study, graph-based representation appears robust and powerful for representing handwritten Devanagari numerals.

ICACCI--23.6 15:45 Similar Handwritten Devanagari Character Recognition by Critical Region Estimation
Mahesh Jangid and Sumit Srivastava (Manipal University Jaipur, India)

The recognition of similar-shaped handwritten characters is still a challenging problem in HOCR systems, and almost all scripts suffer from it. This manuscript considers the similar-shape character problem in the Devanagari script. An algorithm is first proposed to estimate the similar character pairs in Devanagari, and 7 pairs are identified by investigating the confusion matrix. Similar-shaped characters differ only slightly in shape, which is why the classifier confuses them during the recognition (classification) phase. This problem can be solved by estimating that minor difference, called the critical region, and using it to extract additional features before the classification phase. The critical region is estimated by the Fisher discrimination function. A new masking technique is used to extract features from ISIDCHAR (a standard Devanagari database), and a 95.17% recognition rate is finally obtained, with an 81.94% improvement in similar-character recognition.
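The use of a Fisher discrimination function to locate the most discriminative region can be illustrated per feature (e.g., per pixel of a size-normalized character image). The snippet below is a generic sketch, not the authors' exact masking scheme:

```python
def fisher_ratio(class_a, class_b):
    """Per-feature Fisher discriminant ratio between two sample sets."""
    def stats(rows):
        n, d = len(rows), len(rows[0])
        mu = [sum(r[j] for r in rows) / n for j in range(d)]
        var = [sum((r[j] - mu[j]) ** 2 for r in rows) / n for j in range(d)]
        return mu, var

    mu_a, var_a = stats(class_a)
    mu_b, var_b = stats(class_b)
    # large ratio = means far apart relative to within-class spread
    return [(mu_a[j] - mu_b[j]) ** 2 / (var_a[j] + var_b[j] + 1e-12)
            for j in range(len(mu_a))]


def critical_region(class_a, class_b, top=1):
    """Indices of the `top` most discriminative features."""
    f = fisher_ratio(class_a, class_b)
    return sorted(range(len(f)), key=lambda j: -f[j])[:top]
```

Features inside the critical region would then be weighted or re-extracted before the final classification stage.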

ICACCI--23.7 16:00 Soft Proofing of Images Using Bacteria Foraging Optimization of Color Gamuts
Nayan Malpani (Rajasthan Technical University, India); Roshan Jain (RCEW Jaipur, India); Saurabh Maheshwari (Government Women Engineering College Ajmer & Student Member IEEE, India)

Soft proofing is the method of visualizing on an electronic display how an image will look after printing with a particular printer on a specific paper. It avoids unnecessary frequent prints, saving paper and thus money. The display on which a particular image is presented may support fewer colors than the original image contains, so we treat soft proofing as a color quantization problem. Swarm intelligence methods like Ant Colony Optimization and Bacteria Foraging Optimization are very efficient and produce good quantized images, but they are complex to implement, so the main target is to reduce the complexity of color image quantization. This paper proposes an efficient technique for soft proofing of images using Bacteria Foraging Optimization, in which frequently occurring colors in the histogram are selected for an image. The main aim is to minimize and optimize the total number of colors in the original image according to the target gamut. This method is then compared with older strategies in which each pixel of the image is compared with every other pixel to check whether it belongs to the same color cluster. The method used in this paper is much less complex yet near-optimal in comparison to other Bacteria Foraging Optimization based color image quantization methods. The processing time, histogram comparison and number of colors in the resultant images are compared with previous works. This is the first attempt at soft proofing of images using Bacteria Foraging Optimization, and it also ensures that the size of the original and final image does not change even after low-resolution printing.

ICACCI--23.8 16:15 A Noncontact Vital Sign Monitoring Algorithm Using a Camera
Anusree R, Ranjith R and Ramachandran K I (Amrita Vishwa Vidyapeetham, India)

Currently, contact probes are widely used to determine vital signs, but as technology develops, non-invasive methods become more feasible. We use a new method of determining pulse rate (PR) commonly known as video plethysmography. In this method, face video is captured using a web camera, and from the captured video frames, photoplethysmogram (PPG) signals are obtained from different regions of the face based on pixel intensity variation. These PPG signals are pre-processed by filtering, noise-compensated using dynamic weight factoring based on power spectral density, and the pulse rate is then determined by applying the Fast Fourier Transform (FFT). In this paper, we present a vital sign measuring algorithm that can easily be implemented on a portable device or a standard PC. This method of non-contact detection and monitoring of the cardiac pulse is useful in both hospitals and telemedicine.
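The final step — reading the pulse rate off the dominant spectral peak of the PPG trace — can be sketched with a plain discrete Fourier transform. The band limits below (0.75-4 Hz, i.e. 45-240 BPM) are conventional choices, not necessarily the authors':

```python
import math


def pulse_rate_bpm(samples, fps, lo=0.75, hi=4.0):
    """Return the pulse rate (BPM) at the strongest spectral peak
    of an intensity trace sampled at `fps` frames per second."""
    n = len(samples)
    mean = sum(samples) / n
    x = [s - mean for s in samples]  # remove the DC component
    best_f, best_p = 0.0, -1.0
    for k in range(1, n // 2):
        f = k * fps / n              # frequency of DFT bin k
        if not (lo <= f <= hi):
            continue
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        p = re * re + im * im        # spectral power at bin k
        if p > best_p:
            best_f, best_p = f, p
    return 60.0 * best_f
```

A 1.2 Hz intensity oscillation sampled at 30 fps, for example, maps to 72 beats per minute.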

ICACCI--23.9 16:30 Multilevel RDH Scheme Using Image Interpolation
Geetha Ramalingam (VIT University, India); Geetha S (VIT University Chennai Campus, India)

In image processing, image interpolation is considered a very vital branch of study and is popularly used in digital imaging processes. In Reversible Data Hiding (RDH), the original image can be scaled up or down as required without compromising the originality of the input image, and the secret data can be hidden in the interpolated image. After recovery of the secret data, the original image can be recovered by discarding the interpolated points. In the proposed scheme, more than one bit (a maximum of 4 bits) can be embedded in the same pixel with a change of at most +/-1 in the pixel value. Even though the additional information involved in restoring the cover image and the payload is larger, the distortion (Peak Signal to Noise Ratio, PSNR) introduced by embedding the undisclosed message is kept to a minimum in the marked image, since the embeddable points change only by +/-1. Experimental results with four test images demonstrate that the proposed scheme provides a larger payload and superior image quality than several advanced prior schemes based on image interpolation.
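A stripped-down, one-bit-per-interpolated-pixel version of interpolation-based RDH (on a 1-D pixel row) illustrates the reversibility and the at-most-one-level change; the paper's actual scheme packs up to 4 bits per pixel and differs in detail:

```python
def embed(cover, bits):
    """Interleave interpolated pixels, each carrying one bit (illustrative)."""
    stego, bi = [], 0
    for i in range(len(cover) - 1):
        stego.append(cover[i])
        v = (cover[i] + cover[i + 1]) // 2   # interpolated pixel
        if bi < len(bits):
            v += bits[bi]                    # +0 or +1: at most one gray level
            bi += 1
        stego.append(v)
    stego.append(cover[-1])
    return stego


def extract(stego):
    """Recover the original cover pixels and the hidden bits exactly."""
    cover = stego[::2]                       # originals sit at even positions
    bits = []
    for i in range(len(cover) - 1):
        base = (cover[i] + cover[i + 1]) // 2
        bits.append(stego[2 * i + 1] - base)
    return cover, bits
```

Because the original pixels are untouched, the interpolated base value can be recomputed at the receiver, making both the message and the cover fully recoverable.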

ICACCI--23.10 16:45 Satellite Image Resolution Enhancement Using DTCWT and DTCWT Based Fusion
Vineet Naik (V. E. S. Institute of Technology, India); Saylee Gharge (University of Mumbai, India)

To increase the resolution of an image, interpolation is performed, but the high-frequency components of the low-resolution (LR) image are lost when it is interpolated. To overcome this problem, a new satellite image resolution enhancement algorithm based on the Dual Tree Complex Wavelet Transform (DTCWT) and its rotated version is proposed. DTCWT and rotated DTCWT give 24 subbands, providing 12 different directions of angular information about the LR image, which are interpolated by Lanczos interpolation to preserve the high-frequency content of the image. Non-Local Means (NLM) filtering is used to eliminate the artifacts generated by DTCWT and rotated DTCWT. Inverse transforms are performed over the respective subbands to obtain two enhanced high-resolution images, which are then fused using DTCWT-based fusion to give the resolution-enhanced HR image. To evaluate the performance of the proposed algorithm, three performance parameters, namely PSNR, SSIM and Q-Index, are computed for a database of 60 grayscale images of resolution 256x256. The subjective and objective results are compared with existing techniques to demonstrate the superiority of the proposed algorithm.

ICACCI--23.11 17:00 Using FPGA-SoC Interface for Low Cost IoT Based Image Processing
Shivank Dhote (Vidyalankar Institute Of Technology, India); Pranav Charjan, Aditya Phansekar and Aniket Hegde (Vidyalankar Institute of Technology, Mumbai, India); Jonathan Joshi (Vanmat Technologies. Pvt. Ltd, India); Sangeeta Joshi (Vidyalankar Institute of Technology, India)

Multifunction image processing systems are typically deployed at the application site, but with the advent of the Internet of Things (IoT), such systems need to be accessible remotely by applications over the internet. Being designed for data-heavy applications, these systems require a novel architecture for image filtering and processing. This paper presents a multi-function image processing system that is accessible over the internet and is prototyped using a System on Chip (SoC) and FPGA interface. A pipelined approach, inspired by a shift-register based Random Access Memory design, has been implemented for on-the-fly computation and minimal use of on-chip resources. The system was realized using a low-cost Spartan 6 FPGA and a Raspberry Pi B+, with data transfer between the FPGA and SoC over the UART protocol. Computation times for different frame sizes are documented for the system and for standard image processing software tools, and chip utilization and delays are also reported.

ICACCI--23.12 17:15 Image Retrieval Using Extended Bag-of-Visual-Words
Nandita Bhattacharya (Indian Institute of Engineering Science and Technology Shibpur, India); Jaya Sil (Indian Institute of Engineering Science and Technology, Shibpur, India)

Bag of Visual Words (BoVW) is a popular visual content based image retrieval method, applied successfully over the years. However, the standard BoVW representation takes into consideration only the quantitative information of visual words, discarding the semantic and spatial information of the images, which is crucial for retrieval. Several techniques have been applied by researchers to improve BoVW using the semantics of the images. In this paper, we extend the original BoVW by capturing image semantics using eigenvectors obtained from patches of the training images. The spatial information of the eigenvectors is utilized to build the extended BoVW (EBoVW), represented as a projection vector whose dimension equals the number of selected eigenvectors. For each training image, the projection of the patches along each eigenvector is evaluated and the nearest eigenvector is identified; this information is used to modify the respective element of the projection vector, which is appended to the original BoVW to obtain the EBoVW. Retrieval performance is improved using EBoVW compared to state-of-the-art image retrieval techniques, as demonstrated in this paper on the Coil-100 dataset.

ICACCI--23.13 17:30 An Integrated approach of Radon Transform and Blockwise Binary Pattern for Shape Representation and Classification
Bharathi Pilar (Mangalore University & University College Mangalore, India); B H Shekar (Mangalore University, India)

In this paper, we propose the Radon transform for shape representation and classification. The Radon transform is robust to noise, needs no normalization, and is capable of capturing region information; the resulting features are matched using Euclidean distance. We also propose an integrated approach combining the Radon transform with our Block-based Binary Pattern (BBP) to enhance the accuracy of shape representation and classification. The BBP takes the local neighborhood of each pixel and replaces the neighborhood block by a single pixel whose value is the decimal equivalent of the binary stream of the neighborhood block. The BBP is invariant to rotation, shift and uniform scaling of the object, and BBP features are matched using the Earth Mover's Distance (EMD) metric. The decision-level fusion of the two approaches gives better classification accuracy and proves experimentally to be a suitable choice for shape representation. Extensive experiments have been conducted to exhibit the performance of the proposed approach on the publicly available shape databases Kimia-99, Kimia-216 and MPEG-7. A Precision-Recall graph has been drawn for the MPEG-7 dataset, presenting the retrieval accuracy. The experimental results demonstrate that the proposed approach yields significant improvements over baseline shape matching algorithms, representing and classifying binary objects more accurately.
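The neighborhood-encoding step the abstract describes can be sketched as below: each 3x3 neighborhood is binarised (here, against the centre pixel, an LBP-like assumption since the abstract does not specify the binarisation rule) and the 8-bit stream is read as a decimal value.

```python
import numpy as np

def block_binary_pattern(img):
    """Hypothetical sketch of a block-based binary pattern: each 3x3
    neighbourhood is binarised against the centre pixel and the resulting
    8-bit stream is read as a decimal value, as the abstract describes."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=int)
    # clockwise order of the 8 neighbours around the centre
    offs = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            c = img[i, j]
            bits = [1 if img[i+di, j+dj] >= c else 0 for di, dj in offs]
            out[i-1, j-1] = int("".join(map(str, bits)), 2)
    return out

codes = block_binary_pattern(np.arange(25).reshape(5, 5))
```

Each output pixel is a code in [0, 256); histograms of such codes are what distance metrics like EMD compare.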

ICACCI--23.14 17:45 A Segmented-Mean Feature Extraction Method for Glove-based System to Enhance Physiotherapy for Accurate and Speedy Recuperation of Limbs
Andrews Samraj (Mahendra Engineering College, India); Kalvina Rajendran (Enability Foundation for Rehabilitation, India); Ramaswamy Palaniappan (University of Kent, United Kingdom (Great Britain))

It is always desirable to have an accurate system that allows fast recovery of patients undergoing physiotherapy, in terms of both integrated health and cost benefits. Caregivers and medical personnel, too, gain expertise through the innovations involved in treatment methodology. The system proposed here was developed with a straightforward Segmented-Mean feature construction method that enables its portability to smart biomedical devices. In this work, four different exercises were completed by four different subjects in two sessions, and feedback was generated for every single trial via a visual display on a smartphone. The accuracy of the system's output depends on the precise representation of two parameters: correct gesture and timing. These have to be captured from the signals generated by the hand glove during manual physiotherapy, as guided by experts during the teaching (i.e. training) phase. Any deviation from the model should also be captured and reflected in the feedback, to align the physio-movements towards perfection and minimise adverse effects. The feature therefore has to be constructed with complete representation and, obviously, as fast as possible. The proposed Segmented-Mean method periodically calculates the mean of the data arriving from the significant electrodes, thus preserving the subject's performance, and is found suitable for accurately estimating the enactment of exercises and any required deviations. The method constructs features more easily than conventional methods by reducing computational complexity and therefore response time, shifting the emphasis to actual physiotherapy monitoring with an accurate system built on simple feature construction.
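The Segmented-Mean idea lends itself to a very small sketch: split each electrode's signal into equal segments and keep the mean of each segment as a feature. Segment count and the toy signal are illustrative assumptions.

```python
import numpy as np

def segmented_mean(signal, n_segments):
    """Split a 1-D electrode signal into (near-)equal segments and keep the
    mean of each segment as one feature (a sketch of the Segmented-Mean idea)."""
    segs = np.array_split(np.asarray(signal, dtype=float), n_segments)
    return np.array([s.mean() for s in segs])

feat = segmented_mean(np.arange(12), 4)
```

This reduces a long glove-signal window to a handful of values, which is why the abstract emphasises low computational complexity and fast response.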

Saturday, September 24

Saturday, September 24 11:00 - 13:30 (Asia/Kolkata)

ICACCI--29: ICACCI-29: Sensor Networks, MANETs and VANETs/Distributed Systems (Short Papers)

Room: LT-2 (Academic Area)
Chair: Sriram Sankaran (Amrita University, India)
ICACCI--29.1 11:00 Web Service Selection with Global Constraints Using Modified Gray Wolf Optimizer
Manik Chandra, Ashutosh Agrawal, Avadh Kishor and Rajdeep Niyogi (Indian Institute of Technology Roorkee, India)

Web service composition is the process of assembling new services from existing individual services. Due to the abundance of web services that exhibit similar functionality but differ in their Quality-of-Service (QoS), selecting an optimal service is a crucial task. Consequently, composing web services with optimal QoS has become an important and challenging problem. In this paper, a population-based optimization technique, the modified gray wolf optimizer (MGWO), is used for selecting the optimal set of services. We test the MGWO on a test case from a public repository of 2500 web services with nine different QoS attributes. The experimental results show that: (a) the MGWO is an effective and powerful approach for extracting optimal services in diverse situations, and (b) the MGWO performs better than two other state-of-the-art algorithms, GA and GWO, in finding services with optimized QoS.
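For readers unfamiliar with the baseline the paper modifies, the standard Gray Wolf Optimizer can be sketched as follows: candidate solutions (wolves) move toward the three best solutions found so far (alpha, beta, delta). This is the unmodified GWO, not the paper's MGWO; the objective, bounds and parameters are illustrative.

```python
import numpy as np

def gwo_minimise(f, dim, n_wolves=20, iters=100, lb=-5.0, ub=5.0, seed=0):
    """Minimal standard Grey Wolf Optimizer: wolves are attracted toward
    the three current best solutions alpha, beta and delta."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(n_wolves, dim))
    for t in range(iters):
        fit = np.apply_along_axis(f, 1, X)
        order = fit.argsort()
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
        a = 2 - 2 * t / iters              # a decreases linearly from 2 to 0
        new = np.empty_like(X)
        for i in range(n_wolves):
            cand = []
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                cand.append(leader - A * np.abs(C * leader - X[i]))
            new[i] = np.clip(np.mean(cand, axis=0), lb, ub)
        X = new
    fit = np.apply_along_axis(f, 1, X)
    return X[fit.argmin()], fit.min()

best_x, best_f = gwo_minimise(lambda x: float(np.sum(x**2)), dim=3)
```

For service selection, `f` would score a candidate set of services against the nine QoS attributes and global constraints.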

ICACCI--29.2 11:15 Physical Access Control Based on Biometrics and GSM
Diptadeep Addy and Poulami Bala (St. Thomas' College of Engineering & Technology, Kolkata, India)

In the 21st century, security and privacy have become major challenges for all of us. Whether in the private sector or in government agencies, there remains a constant threat to privacy and security: bank accounts are hacked, private information is stolen, and large sums of money and valuable documents change hands overnight. Technology is changing and evolving constantly, and attackers have become more ruthless in their methods, making their way into systems around the world. There are many vaults across the country where important valuables and secret documents are kept that are important for the safety and security of the nation. To answer these concerns, we propose an idea that integrates several biometric features with GSM communication to build a multi-layered, 4-layer biometric security system. Biometrics has emerged as one of the most convenient, accurate, and cost-effective forms of security. Biometric techniques automate personal recognition based on physical attributes, including face, fingerprint, hand geometry, handwriting, iris, retina, and voice, with each technique customized for specific applications. Biometric data are considered different and distinct from personal information because they cannot be reverse-engineered to recreate personal information and cannot be stolen to attempt theft.

ICACCI--29.3 11:30 Network Condition and Application Adaptive Fitness Based Vehicular Intelligent Routing Protocol
Rajesh Purkait (Indian School Of Mines, India)

The growing use of vehicles in cities has increased the number of accidents and reduced passenger safety. Alerting drivers about road conditions, traffic and related aspects is crucial to safety and to the regulation of vehicle flow, and achieving this in a timely manner requires accurate information. An efficient, robust and scalable VANET (Vehicular Ad-hoc Network) routing protocol therefore plays a key role; such a protocol has the potential to protect passengers by incorporating new-generation wireless networks into vehicles. This paper presents a fitness-based VANET routing protocol that operates under dynamic network conditions and is compatible with both V-I (vehicle-infrastructure) and V-V (vehicle-vehicle) communication modes. It supports both multi-path and single-path multi-hop routing for delay-sensitive and delay-non-sensitive data in dense and sparse environments. It also provides a cost-efficient, intelligent routing strategy that assures safety-packet delivery by optimally measuring fitness in terms of distance, velocity vector, and a link-quality metric in V-V and V-I communication. The performance of the proposed protocol is evaluated with the Ns2 simulator. Comparison with existing protocols such as AODV (Ad-hoc On-demand Distance Vector Routing) and GPSR (Greedy Perimeter Stateless Routing) demonstrates the safety and efficiency of our protocol in terms of road safety assurance and packet delivery ratio.

ICACCI--29.4 11:45 Throughput and Outage of a Wireless Energy Harvesting Based Cognitive Relay Network
Binod Prasad (IIT Guwahati, India); Sankararao Adduru U G (NIT Durgapur, India); Sanjay Dhar Roy (National Institute of Technology Durgapur, India); Sumit Kundu (National Institute of Technology, Durgapur, India)

This paper evaluates the throughput and outage performance of a secondary user (SU) in a cognitive relay network with an energy-harvesting decode-and-forward (DF) relaying scheme. A potential relay, selected from a set of relays using a proactive relaying scheme, harvests energy and then uses the harvested energy to forward the decoded source information to the destination node. The power transmitted by the source and relay nodes is constrained by a tolerable interference threshold at the primary receiver. Two relaying schemes, time-switching based relaying (TSR) and power-splitting based relaying (PSR), are studied. Considering wireless energy harvesting and the cognitive constraint at the relay node, we analyse the achievable throughput and outage performance of a cognitive DF relaying network. We study the impact of several system parameters, such as energy harvesting time, power splitting ratio, number of relays and the tolerable interference threshold of the PU, on the throughput and outage performance of the SU.
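To make the energy-harvesting-time trade-off concrete, here is a sketch of one common TSR formulation from the energy-harvesting relaying literature (not the paper's exact model): a fraction alpha of each block is spent harvesting, and the rest is split equally between source-to-relay and relay-to-destination transmission. All symbols (eta: conversion efficiency, g: channel power gains) and numbers are illustrative assumptions.

```python
import math

def tsr_throughput(P_s, g_sr, g_rd, eta, alpha, noise=1e-3, T=1.0):
    """Throughput sketch for time-switching relaying (TSR)."""
    E_h = eta * P_s * g_sr * alpha * T          # energy harvested at the relay
    P_r = E_h / ((1 - alpha) * T / 2)           # relay transmit power
    snr_rd = P_r * g_rd / noise                 # relay->destination SNR
    return (1 - alpha) / 2 * math.log2(1 + snr_rd)

# Sweep the harvesting fraction: too little alpha starves the relay of
# power, too much alpha leaves too little time for transmission.
rates = {a: tsr_throughput(1.0, 0.5, 0.5, 0.7, a) for a in (0.1, 0.3, 0.5, 0.7, 0.9)}
```

The non-monotone behaviour of `rates` over alpha is the kind of trade-off the paper's throughput analysis studies.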

ICACCI--29.5 12:00 Towards Automated Real-Time Detection of Misinformation on Twitter
Suchita Jain and Vanya Sharma (IGDTUW, India); Rishabh Kaushal (Indira Gandhi Delhi Technical University for Women, India)

Online Social Media (OSM) in general, and the micro-blogging site Twitter in particular, have outpaced conventional news dissemination systems. News stories are often first broken on Twitter and then taken up by the electronic and print media. However, Twitter's distributed structure and lack of moderation, compounded with the temptation to post a newsworthy story early, make the veracity of information (tweets) a major issue. Our work attempts to solve this problem by providing an approach to detect misinformation/rumors on Twitter automatically and in real time. We define a rumor as any information circulating in the Twitter space that is not in agreement with information from a credible source. For establishing credibility, our approach is based on the premise that verified news channel accounts on Twitter furnish more credible information than naive unverified user accounts (the public at large). Our approach has four key steps. First, we extract live streaming tweets corresponding to Twitter trends, identify the topics being talked about in each trend by clustering on hashtags, and collect tweets for each topic. Second, we segregate the tweets for each topic according to whether the tweeter is a verified news channel or a general user. Third, we calculate and compare the contextual and sentiment mismatch between tweets on the same topic from verified news channel accounts and from unverified (general) users, using semantic and sentiment analysis. Lastly, we label the topic as a rumor based on the mismatch ratio, which reflects the degree of discrepancy between the news and the public on that topic. Results show that a large number of topics can be flagged as suspicious using this approach without any manual inspection.
To validate our proposed algorithm, we implemented a prototype called The Twitter Grapevine, which targets rumor detection in the Indian domain. The prototype shows how a user can leverage this implementation to monitor detected rumors using an activity timeline, maps and a tweet feed. Users can also report a detected rumor as incorrect, which can then be updated after manual inspection.
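The mismatch-ratio step can be sketched with a toy lexicon-based sentiment score standing in for the paper's semantic and sentiment analysis. The word lists, threshold and example tweets are illustrative assumptions.

```python
POS = {"good", "great", "safe", "true", "confirmed"}
NEG = {"bad", "fake", "false", "hoax", "dead"}

def polarity(tweet):
    """Toy lexicon sentiment: +1 per positive word, -1 per negative word."""
    words = tweet.lower().split()
    return sum(w in POS for w in words) - sum(w in NEG for w in words)

def mismatch_ratio(news_tweets, user_tweets):
    """Fraction of general-user tweets whose polarity sign disagrees with
    the average polarity of verified news-channel tweets on the topic."""
    news_avg = sum(polarity(t) for t in news_tweets) / len(news_tweets)
    disagree = [t for t in user_tweets if polarity(t) * news_avg < 0]
    return len(disagree) / len(user_tweets)

ratio = mismatch_ratio(
    ["celebrity confirmed safe", "reports confirmed true"],
    ["celebrity dead hoax", "so sad he is dead", "glad he is safe"])
is_rumor = ratio > 0.5   # flag the topic when discrepancy is high
```

A high ratio means the public narrative diverges from the verified-source narrative, which is the paper's signal for a suspicious topic.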

ICACCI--29.6 12:15 Performance Analysis of Cooperative Communication in Wireless Sensor Network
Ankit Kumar Jain (National Institute of Technology, Kurukshetra, India); Kaushlendra Pandey (Indian Institute of Technology, Kanpur, India); Soumitra Mehrotra (Jaypee University of Information Technology, India)

Multiple-input multiple-output (MIMO) technology is difficult to adopt in a wireless sensor network (WSN) due to its distinct characteristics. By allowing neighboring nodes of a WSN to cooperate in accordance with the principles of MIMO technology, the result may be referred to as a virtual MIMO (V-MIMO) technique. This work presents a novel scheme that develops cooperation in a heterogeneous WSN. Under the considered simulation framework, the results indicate that the total energy consumption in a WSN with 2 cooperative nodes is better than that of a single-input single-output (SISO) system by 9.5 µJ per unit distance. However, the trend does not continue when the number of cooperative nodes is increased further.

ICACCI--29.7 12:30 Automated Classification of Security Requirements
Rajni Jindal, Ruchika Malhotra and Abha Jain (Delhi Technological University, India)

Requirement engineers are often unable to clearly elicit and analyze the security requirements that are essential for developing secure and reliable software. Proper identification of the security requirements present in the Software Requirement Specification (SRS) document has been a problem faced by developers; as a result, they are not able to deliver software free from threats and vulnerabilities. In this paper, we mine the descriptions of security requirements present in the SRS document and then develop classification models. The security-based descriptions are analyzed using text mining techniques and classified into four types of security requirements, viz. authentication-authorization, access control, cryptography-encryption and data integrity, using the J48 decision tree method. Corresponding to each type of security requirement, a prediction model has been developed. The effectiveness of the prediction models is evaluated against requirement specifications collected from 15 projects developed by MS students at DePaul University. The result analysis indicates that all four models perform very well in predicting their respective type of security requirements.
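The core criterion behind J48-style trees, splitting on the word feature with the highest information gain, can be shown in miniature (a single stump over bag-of-words features; real J48 applies this recursively). The example requirement texts and vocabulary are invented for illustration.

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def best_stump(docs, labels, vocab):
    """Pick the single word whose presence/absence best splits the
    requirement descriptions, by information gain."""
    base = entropy(labels)
    best, best_gain = None, -1.0
    for w in vocab:
        yes = [l for d, l in zip(docs, labels) if w in d.lower().split()]
        no = [l for d, l in zip(docs, labels) if w not in d.lower().split()]
        gain = (base
                - (len(yes) / len(labels)) * entropy(yes)
                - (len(no) / len(labels)) * entropy(no))
        if gain > best_gain:
            best, best_gain = w, gain
    return best, best_gain

docs = ["user must login with password",
        "encrypt data with AES cipher",
        "password login required for admin",
        "all records encrypted at rest"]
labels = ["authentication", "cryptography", "authentication", "cryptography"]
word, gain = best_stump(docs, labels, ["password", "encrypt", "user", "data"])
```

Here "password" perfectly separates the two classes, so it would be chosen as the root split.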

ICACCI--29.8 12:45 Feedback-based Adaptive Speedy Transmission (FAST) Control Protocol to Improve the Performance of TCP Over Ad-Hoc Networks
Aniket Deshpande and Ashok Kaushal (Mewar University, India)

The main purpose of this research is to develop a performance improvement solution for the use of TCP over ad-hoc networks, implemented in Network Simulator 2. The research analyzes the major techniques proposed in the past by different researchers, including TCP-DOOR, the End-to-End Approach, the Feedback Scheme and the Adaptive Back-off Response approach. Based on the review of existing work, the authors propose to combine the feedback and adaptive approaches to make the solution adaptable to most major scenarios. To further enhance overall performance, it is proposed to change the underlying behavior of standard TCP to HS-TCP or its variant MX-TCP, depending on network conditions. The findings show that MX-TCP is more appropriate for lossy links, such as highway driving, 2G access, or public hotspots, while HS-TCP is more appropriate and useful in high-bandwidth scenarios such as fourth-generation mobile data networks with lower latency. Thus, the modified approach of combining the Feedback Scheme with the Adaptive Back-off based Response was observed to help TCP perform better than either protocol in isolation. Finally, HS-TCP/MX-TCP addressed the throughput limitations, delivering maximum performance and an enhanced end-user experience. Future work could add intelligence so that the protocol detects network conditions, as well as external environmental factors such as power, and then selects whether to use MX-TCP or HS-TCP.

ICACCI--30: ICACCI-30: Symposium on Women in Computing and Informatics (WCI-2016) - Short papers

Room: LT-3 (Academic Area)
Chair: Anil K Dubey (ABES Engineering College Ghaziabad, Uttar Pradesh, India)
ICACCI--30.1 11:00 Abnormal Activity Detection for Bank ATM Surveillance
Rajvi Rajesh Nar (The LNM Institute of Information Technology, India); Alisha Singal (The LNM Institute of InformationTechnology, India); Praveen Kumar (GRIET, India)

Posture recognition is an interesting field in computer vision because of its numerous applications. The limitations of a simple camera can be overcome by using a 3D camera. In this project, we explore a technique that uses the skeleton information provided by the Kinect 3D camera for posture recognition, enabling effective real-time intelligent ATM monitoring. The Kinect is used to track joints and their positions; by analyzing this position information, the system detects abnormal behaviors.

ICACCI--30.2 11:15 Energy Efficient Approach to Send Data in Cognitive Radio Wireless Sensor Networks(CRSN)
Ashish Semwal (GBPEC, PAURI & Uttarakhand, India)

The adoption of cognitive radio wireless sensor networks is expected to keep growing, especially in fields such as healthcare, logistics, and scientific and military applications. Yet sensor size represents a significant constraint, mainly in terms of energy autonomy, since battery lifetime is very short. This is why concentrated research is currently being conducted on how to control the energy consumption of sensors within a network, with communication treated as a priority. To this end, we propose an algorithm to send data from source to sink with lower energy consumption within cognitive radio wireless sensor networks, taking into account the number of nodes, the data flow rate, and the distance between them. Moreover, we have succeeded in reducing energy consumption within a linear sensor network whose nodes feature differing data flow rates.

ICACCI--30.3 11:30 Robotics and Programming: Attracting Girls to Technology
Christiane Borges Santos (Instituto Federal de Goiás, Brazil); Deller Ferreira (Federal University of Goias, Brazil); Maria Carolina Borim do Nascimento R Souza and Andressa Rodrigues Martins (MNT, Brazil)

This article presents the Metabotix project, Teaching Robotics and Programming for Girls, which has been under development since 2014 at the Luziânia Campus of Instituto Federal de Goiás, using free/open hardware and software and recyclable electronic materials. The project relies on an interdisciplinary methodology for learning robotics: students apply the knowledge acquired in high school classes, with the aim of growing girls' interest in STEM (science, technology, engineering and mathematics) areas.

ICACCI--30.4 11:45 Image Hashing by SDQ-CSLBP
Varsha Patil (University of Mumbai & SIES GST, India); Tanuja K. Sarode (Thadomal Shahani Engineering College, India)

One approach to image hashing is to use a powerful feature descriptor that captures the essence of an image. Applications of image hashing lie in content authentication, structural tampering detection, retrieval and recognition. A hash is compact, summarized information about an image. The Center-Symmetric Local Binary Pattern (CSLBP) is a powerful texture feature descriptor that captures the smallest amount of change, and a compact hash code can be obtained from it. If the CSLBP feature is weighted by a boost factor, the success rate of image hashing is enhanced. The proposed SDQ-CSLBP method extracts texture features using CSLBP with the standard deviation as the weight factor; the standard deviation, which represents local contrast, is itself a powerful descriptor. The resulting CSLBP histogram has 16 bins for each block of an image, which can be compressed to 8 bins using the flipped-difference concept. Without a weight factor, compressed CSLBP has low discrimination power. Experimental results show that the proposed method is robust against content-preserving manipulation and sensitive to content changes and structural tampering.
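The basic CSLBP descriptor (without the paper's standard-deviation weighting) can be sketched as follows: each of the 4 centre-symmetric neighbour pairs in a 3x3 window contributes one bit, giving codes in [0, 16) and hence the 16-bin histogram the abstract mentions. The threshold and toy image are illustrative.

```python
import numpy as np

def cslbp(img, thresh=0.0):
    """Centre-symmetric LBP: compare opposite neighbours across the centre."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=int)
    # the 4 centre-symmetric pairs of a 3x3 window
    pairs = [((-1,-1),(1,1)), ((-1,0),(1,0)), ((-1,1),(1,-1)), ((0,1),(0,-1))]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            code = 0
            for b, ((d1i, d1j), (d2i, d2j)) in enumerate(pairs):
                if img[i+d1i, j+d1j] - img[i+d2i, j+d2j] > thresh:
                    code |= 1 << b
            out[i-1, j-1] = code
    return out

def cslbp_histogram(img):
    """16-bin histogram of CSLBP codes for one image block."""
    return np.bincount(cslbp(img).ravel(), minlength=16)

hist = cslbp_histogram(np.arange(16).reshape(4, 4))
```

SDQ-CSLBP would weight such per-block histograms by the block's standard deviation before concatenating them into the hash.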

ICACCI--30.5 12:00 Statistical Feature Extraction for Small Infrared Target
Neha Pokhriyal and Shashikant Verma (Aicte, India)

Feature extraction and selection have become necessary steps for low-loss dimensionality reduction, a methodology used across machine learning, data mining and pattern recognition. In small-target infrared imagery, many false alarms can occur due to different clutters. In machine learning, preprocessing requires a set of relevant features of the target, and for dimensionality reduction the most appropriate feature subset is selected for classification. To reduce the false alarm rate, this paper focuses on extracting relevant features of small infrared targets: each feature is analyzed statistically, the relevant feature subset is selected using a forward feature selection approach, and the false alarm rate is reduced by a factor of 2.3.
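Greedy forward feature selection, the selection strategy the abstract names, can be sketched generically: repeatedly add the feature that most improves a user-supplied subset score, stopping when nothing helps. The feature names and toy scoring function below are invented for illustration.

```python
def forward_select(features, score, k):
    """Greedy forward selection of at most k features."""
    selected = []
    remaining = list(features)
    for _ in range(k):
        best = max(remaining, key=lambda f: score(selected + [f]))
        if score(selected + [best]) <= score(selected):
            break   # no remaining feature improves the subset
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy score: sum of individual feature merits minus a redundancy penalty
# for every pair of features sharing the same first letter.
merit = {"contrast": 3.0, "mean": 2.0, "entropy": 2.5, "energy": 1.0}
def score(sub):
    pen = sum(1.0 for i in range(len(sub)) for j in range(i + 1, len(sub))
              if sub[i][0] == sub[j][0])
    return sum(merit[f] for f in sub) - pen

picked = forward_select(list(merit), score, k=3)
```

In the paper's setting, `score` would be the classifier's false-alarm performance on a validation set rather than this toy merit function.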

ICACCI--31: ICACCI-31: NLP'16/Natural Language Processing and Machine Translation (Short Papers)

Room: LT-4 (Academic Area)
Chair: Shakti Awaghad (J. D. College of Engineering, India)
ICACCI--31.1 11:00 Use of Fuzzy Logic and WordNet for Improving Performance of Extractive Automatic Text Summarization
Jyoti Yadav (Malaviya National Institute of Technology, Jaipur, India); Yogesh Kumar Meena (Malaviya National Institute of Technology, India)

Text summarization produces a shorter version of large text documents by selecting the most relevant information. Text summarization systems are of two types, extractive and abstractive; this paper focuses on extractive summarization, in which important sentences are selected based on certain important features. Some extractive features matter more than others, so the features should be weighted accordingly in the computations. The purpose of this paper is to use fuzzy logic and WordNet to handle the ambiguity and imprecision inherent in traditional two-valued or multi-valued logic and to consider the semantics of the text. Three different methods, a fuzzy logic based method, a bushy path method, and a WordNet based method, are used to generate three summaries. The final summary is generated by selecting sentences common to all three summaries; from the remaining sentences in the union of the summaries, selection is based on sentence location. The proposed methodology is compared with the three individual methods (fuzzy logic based summarizer, bushy path summarizer, and WordNet based summarizer) by evaluating each on 95 documents from the standard DUC 2002 dataset using ROUGE evaluation metrics. The analysis shows that the proposed method gives better average precision, recall, and F-measure.
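The fusion step described above can be sketched as follows: sentences common to all three candidate summaries go in first, then the rest of the union is added in order of sentence position until a target length is reached. Representing sentences as (position, text) tuples is an assumption of this sketch.

```python
def combine_summaries(s1, s2, s3, target_len):
    """Fuse three candidate summaries: common sentences first, then fill
    from the union in order of sentence position."""
    common = [s for s in s1 if s in s2 and s in s3]
    union = []
    for summ in (s1, s2, s3):
        for s in summ:
            if s not in union:
                union.append(s)
    final = list(common)
    for s in sorted(union, key=lambda x: x[0]):   # x[0] = sentence position
        if len(final) >= target_len:
            break
        if s not in final:
            final.append(s)
    return sorted(final, key=lambda x: x[0])

doc = [(0, "A"), (1, "B"), (2, "C"), (3, "D"), (4, "E")]
fuzzy, bushy, wordnet = [doc[0], doc[2]], [doc[0], doc[3]], [doc[0], doc[4]]
summary = combine_summaries(fuzzy, bushy, wordnet, target_len=3)
```

Here sentence A, chosen by all three summarizers, is kept unconditionally, and the remaining slots are filled by position.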

ICACCI--31.2 11:15 Online Recognition of Malayalam Handwritten Scripts - A Comparison Using KNN, MLP and SVM
KB Baiju and Sabeerath K (University of Calicut, India)

This paper experiments with writer-independent OHCR for Malayalam script using KNN, MLP and SVM classifiers. The key aspect of the work is to identify the best classifier, based on recognition accuracy, for a given set of features. The system is trained using a database of 44 character classes with 100 samples per class. Accurate dominant points, aspect ratio, start-end octants and intersections are used as features, and the feature vector has been reduced to minimize training and testing time. The system reported an average recognition rate of 95.12 for SVM with an RBF kernel, 93.17 for ANN with MLP and 90.39 for KNN. The results indicate that SVM with RBF is the best classifier for the features selected in this work.

ICACCI--31.3 11:30 Ranking Scholarly Work Based on Author Reputation
Kajol Kardam, Akshita Kejriwal and Kirti Sharma (IGDTUW, India); Rishabh Kaushal (Indira Gandhi Delhi Technical University for Women, India)

Who the authoritative authors are in a particular field is a common question that every learner or researcher wants answered. The scholarly work widely available over the Internet from authors in different fields is indexed on various platforms, such as Google Scholar and DBLP, each using ranking algorithms based on criteria of relevance, time and citation counts. Our first contribution in this paper is a novel method of ranking authors based not just on their citation count but also on the reputation of the authors who cite their work. Intuitively, an author ought to have a better rank (reputation or authority) if he or she is cited by other reputed authors rather than by less reputed authors. In other words, the citation index alone is not sufficient to rank an author's authority or to ascertain their reputation in a field. Our second contribution is that the proposed author rank is used to rank scholarly articles retrieved for a search keyword, which until now has been done using citation counts and other metrics: a scholarly work is ranked higher if it is written by reputed authors. Results show that authors with a higher citation count may receive a lower rank than those with a lower citation count, owing to the quality of the citations received (the reputation of the citing authors). Our work can greatly help students, learners and researchers find authoritative authors and papers in a field through the proposed ranking metric based on the reputation of the authors who cite a research work.
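Reputation-weighted ranking of this kind is naturally expressed as a PageRank-style iteration over the author citation graph; the following is a generic sketch of that idea, not the paper's algorithm, and the graph and damping factor are illustrative.

```python
def author_rank(citations, iters=50, d=0.85):
    """PageRank-style author reputation: citations[a] lists the authors
    that a cites; a citation from a highly ranked author is worth more
    than one from a low-ranked author."""
    authors = list(citations)
    n = len(authors)
    rank = {a: 1.0 / n for a in authors}
    for _ in range(iters):
        new = {a: (1 - d) / n for a in authors}
        for a in authors:
            cited = citations[a]
            if not cited:
                continue
            share = d * rank[a] / len(cited)   # a's reputation is shared out
            for b in cited:
                new[b] += share
        rank = new
    return rank

# C is cited by both A and B; D is cited only by B.
graph = {"A": ["C"], "B": ["C", "D"], "C": [], "D": []}
rank = author_rank(graph)
```

C ends up ranked above D because C attracts more (and less diluted) reputation, even though both A and B start from equal standing.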

ICACCI--31.4 11:45 Bayesian Learner Based Language Learnability Analysis of Hindi
Sandeep Saini, Nitin Gupta, Shivin Bhogal and Shubham Sharma (The LNM Institute of Information Technology, Jaipur, India); Vineet Sahula (MNIT Jaipur, India)

Language acquisition is one of the first tasks performed by the human brain. Linguists have long debated whether learning a native language is a language-dependent process or whether universal rules can be applied to learn all languages. In this work, we analyse the learnability of Hindi using Bayesian learner models. Considering the relation between theories of knowledge representation and theories of knowledge acquisition, we develop Bayesian models that can be modified to suit either universal rules or language-specific rules. We verify the learnability of Hindi using unigram and bigram models of learning. We find that Hindi is better learnable using constrained learner models and shows better results with the bigram approach. We discuss implications for both theories of representation and theories of acquisition.

ICACCI--31.5 12:00 Extraction of Syllabically Rich and Balanced Sentences for Tigrigna Language
Hafte Abera (Addis Ababa University, Ethiopia); Climent Nadeu (Universitat Politècnica de Catalunya, Barcelona, Spain); Sebsibe H/Mariam (Addis Ababa University, Ethiopia)

The Tigrigna language lacks the text and speech corpora needed for developing speech technologies. In this work, after considering the phonetic nature of Tigrigna, we have gathered and pre-processed an initial, relatively large corpus of sentences. Using the syllable as the basic phonetic unit, two sub-corpora are developed from that initial text corpus: one that is phonetically rich and one that is balanced. Two different methods, variants of already published methods, are used for this purpose. Finally, the frequencies of occurrence of the syllabic units in the balanced corpus are contrasted with a previously reported study of Tigrigna phonetics, and consistency between the two is observed.

Index Terms: Speech corpus production, sentence selection, syllabically rich, syllabically balanced.

ICACCI--31.6 12:15 A Comparative Study of Noise Reduction Techniques for Automatic Speech Recognition Systems
Kanika Garg and Goonjan Jain (Jawaharlal Nehru University, India)

Automatic speech recognition systems are greatly influenced by noise. Noise generated in the environment or the channel tends to degrade the performance of speech recognition systems: unwanted noise signals may alter the main characteristic features of the voice signal and corrupt both the quality of the speech and the information it contains, causing significant harm to human-computer interactive systems. Noise processing for speech recognition is generally formulated as a digital filtering process in which noisy speech is passed through a linear filter to obtain a clean speech estimate. This paper focuses on noise estimation, noise removal and speech enhancement techniques. Our initial findings support Gammatone filters over conventional Wiener filters and line enhancers; spectral subtraction also showed promising results.
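Spectral subtraction, one of the techniques the paper compares, can be sketched in a few lines: subtract an estimate of the noise magnitude spectrum from the noisy spectrum, floor negative values, and resynthesise with the noisy phase. The sinusoidal "speech" stand-in, noise level and flooring constant are illustrative assumptions.

```python
import numpy as np

def spectral_subtraction(noisy, noise_est, floor=0.01):
    """Classic magnitude spectral subtraction with flooring to limit
    musical-noise artefacts."""
    spec = np.fft.rfft(noisy)
    mag, phase = np.abs(spec), np.angle(spec)
    clean_mag = np.maximum(mag - noise_est, floor * mag)
    return np.fft.irfft(clean_mag * np.exp(1j * phase), n=len(noisy))

rng = np.random.default_rng(0)
t = np.arange(256) / 256.0
tone = np.sin(2 * np.pi * 10 * t)                 # stand-in for clean speech
noisy = tone + 0.3 * rng.standard_normal(256)
# estimate the noise magnitude spectrum from a noise-only stretch
noise_mag = np.abs(np.fft.rfft(0.3 * rng.standard_normal(256)))
enhanced = spectral_subtraction(noisy, noise_mag)
err_before = np.mean((noisy - tone) ** 2)
err_after = np.mean((enhanced - tone) ** 2)
```

In practice the noise estimate comes from non-speech frames of the same recording, and the flooring constant trades residual noise against distortion.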

ICACCI--31.7 12:30 Optimizing Feature Extraction Techniques Constituting Phone Based Modelling on Connected Words for Punjabi Automatic Speech Recognition
Arshpreet Kaur Chauhan (Chitkara University, Punjab, India)

Punjabi phoneme sounds are tonal in nature and differ across most regions of Punjab. Recent research reveals that little significant work has been done towards developing a speech recognition system for Punjabi. This work brings out the variability in the correctness and accuracy of various feature extraction techniques. The paper applies automatic speech recognition to connected words using the HTK toolkit, modelled on the Hidden Markov Model (HMM). The back end of the system was trained on 150 distinct Punjabi words from 16 distinct speakers for the noise-free corpus, with 12 speakers (both male and female) involved in the collection of the noisy corpus. For feature extraction, the proposed front end uses power-normalized cepstral coefficients (PNCC), Mel-frequency cepstral coefficients (MFCC) and Perceptual Linear Prediction (PLP), followed by a statistical comparison based on the accuracy and correctness of the results attained. To attain a higher accuracy level, 34 phones for the Punjabi language are used to break each word into small sound frames. The comparison, based on the nature of the training and testing environments, will aid in framing a viable speech recognition system for the Punjabi language.

ICACCI--31.8 12:45 An Efficient English to Hindi Machine Translation System Using Hybrid Mechanism
Jayashree Nair (Amrita School of Engineering, India); Amrutha K and Deetha R (Amrita Vishwa Vidyapeetham Amritapuri, India)

Machine translation from English to Hindi is a research area that can enhance India's knowledge society by removing language barriers. India is a polyglot country; different states have different territorial languages. English is a universal language and Hindi is the national language of India, but not everyone understands English well enough for speakers to convey their ideas to listeners. Machine translation is a challenging field of Natural Language Processing that focuses on translating text from one language to another. We propose a system design that uses English as the source language and Hindi as the target, and we describe and compare statistical and rule-based machine translation approaches for English to Hindi. We adopt a Hybrid Machine Translation (HMT) approach, a combination of rule-based and statistical techniques, for translating English text to Hindi using declensions.

ICACCI--31.9 13:00 Named Entity Recognition in Assamese: A Hybrid Approach
Padmaja Sharma (Tezpur University, Tezpur, Assam, India)

Most NER systems have been developed using one of two approaches, rule-based or machine learning (ML), each with its own strengths and weaknesses. In this paper, we propose a hybrid NER approach which combines the rule-based and ML approaches to improve overall system performance for a resource-poor language like Assamese. Our proposed hybrid approach is capable of recognizing four types of NEs: Person, Location, Organization and Miscellaneous. The empirical results obtained indicate that the hybrid approach outperforms both the rule-based and ML approaches when applied independently. The hybrid Assamese NER obtains an F-measure of 85%-90%.
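The rule-plus-ML combination can be sketched generically (the toy gazetteers, suffix rules and dictionary "model" below are illustrative stand-ins, not the paper's actual Assamese rules or trained classifier):

```python
# Toy gazetteer and suffix rules standing in for a hand-crafted rule base
LOCATIONS = {"Guwahati", "Tezpur", "Assam"}
ORG_SUFFIXES = ("University", "Ltd", "Institute")

def rule_tag(token):
    """Rule-based stage: gazetteer lookup and suffix rules."""
    if token in LOCATIONS:
        return "LOC"
    if token.endswith(ORG_SUFFIXES):
        return "ORG"
    return None

def ml_tag(token, model):
    """ML stage: fall back to a learned per-token label
    (here a toy dictionary standing in for a trained classifier)."""
    return model.get(token, "O")

def hybrid_tag(tokens, model):
    """Hybrid NER: rules take precedence; the ML stage fills the gaps."""
    return [rule_tag(t) or ml_tag(t, model) for t in tokens]
```

In a real system the ML fallback would be a statistical tagger trained on annotated Assamese text; the control flow stays the same.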

ICACCI--32: ICACCI-32: Embedded Systems/Computer Architecture and VLSI/Adaptive Systems/ICAIS'16 (Short Papers)

Room: LT-6(Academic Area)
Chairs: G Sharma (LNM Institute of Information Technology, Jaipur, India), Jitesh Ramdas Shinde (Vaagdevi College of Engineering & TMVLSI, India)
ICACCI--32.1 11:00 Low Power and Temperature Compatible FinFET Based Full Adder Circuit with Optimised Area
Jiwanjot Kahlon (Amity University, Uttar Pradesh, India); Pradeep Kumar (Amity University Uttar Pradesh, India); Anubhav Garg (Amity University, India); Ashutosh Gupta (Amity University Uttar Pradesh, India)

This paper presents an implementation of a low power and temperature compatible FinFET based full adder circuit with optimized area. The proposed circuit is designed and implemented using FinFETs at 45 nm technology. The proposed adder circuit consists of 9 transistors and is hence called a 9-T adder cell. FinFETs are emerging transistors which can work in the nanometer range and overcome short channel effects. The simulation of the proposed circuit is done using the Tanner tool, version 13.0, with FinFET model files. On the basis of the obtained simulation results, power and power-delay product have been compared across supply voltages. The result is also checked for temperature points from -5 to 35 degrees at 0.3 V. Finally, the performance of the proposed circuit has been compared with existing circuits reported in the literature, and a significant power reduction has been observed.

ICACCI--32.2 11:15 Multi-objective Optimization Domino Techniques for VLSI Circuit
Jitesh Ramdas Shinde (Vaagdevi College of Engineering & TMVLSI, India); Suresh Salankar (G H Raisoni College of Engineering Nagpur, India); Shilpa Shinde (Researcher, India)

Domino logic circuits are often preferred in high performance designs because of the high speed and low area advantages they offer over static CMOS logic design. However, in integrated circuits the power consumed by clocking gradually takes a dominant part. The research work in this paper is therefore mainly focused on studying the comparative performance of various domino logic based techniques proposed in the last decade, and on evaluating the performance of the different domino techniques in terms of delay, power and their product (figure of merit) on the BSIM4 model, using the Agilent Advanced Design System tool on 0.18 µm CMOS process technology. The main focus of this work was to find the domino logic based technique that provides the best possible trade-off in optimizing multiple goals, viz. area, power and speed, at the same time, so as to meet the multi-objective optimization goal for VLSI circuits.

ICACCI--32.3 11:30 Electro-Thermo-Mechanical FE Simulations of OFHC Cu Material for Electric Contact with DC/AC Currents
Atul S Takalkar (VIT University, India); Lenin babu M C (VIT University)

This paper investigates the effect of coupled structural, electrical and thermal field phenomena on the electric contacts used in electric switches. An axisymmetric two-dimensional elastic-plastic finite element (FE) model of a sphere and a plane is developed to predict the contact behavior with a smooth surface. Structural finite element analysis is performed, and the FE model is coupled with an electric-thermal analysis model through the COMSOL Multiphysics software. The model is simulated with different boundary conditions and contact areas to analyze the temperature drop at the point of contact. The electro-thermo-mechanical coupled field simulation is carried out for alternating current (AC) with a frequency of 50 Hz. Analytical and numerical results are compared to predict the behavior of the electric contact for the two different types of current, DC and AC, for proper selection of material for electric switches.

ICACCI--32.4 11:45 Characterization of Valveless Micropump for Drug Delivery by Using Piezoelectric Effect
Atul S Takalkar (VIT University, India); Lenin babu M C (VIT University)

This paper deals with the design of the nozzle/diffuser and the use of the piezoelectric effect for actuation of the diaphragm of a valveless micropump, which has applications in the medical field for drug delivery. A three-dimensional FE model of the nozzle/diffuser and actuator is used for numerical simulation. Fluid flow analysis of the nozzle/diffuser is performed to calculate their efficiency and frequency. The simulation is performed for variable converging and diverging angles, by varying their length and width, to calculate the steady flow rate. Analysis of the actuator unit is also carried out using the COMSOL Multiphysics software. The simulation of the actuator unit depends on mechanical properties of the material such as Young's modulus and Poisson's ratio. The numerical results are used to predict the actual behavior of the actuator unit over a higher frequency range, which helps in proper selection of material. Analytical and numerical results are compared, which helps in predicting the flow rate and actual working of the micropump.

ICACCI--32.5 12:00 Simulation & Design of Maximum Current Point Tracking Controller for 2 MHz RF H- Source
Priya Khachane (MPSTME NMIMS & RRCAT, India); Dharmraj Ghodke and Vinod Senecha (Scientific Officer, India); Vaishali Kulkarni (NMIMS University, India); Satish Joshi (O S, India)

This paper presents a Maximum Current Point Tracking (MCPT) controller for a SiC MOSFET based high power solid state 2 MHz RF inverter for an RF driven H- ion source. The RF inverter is based on a class-D, half-bridge, series-resonant LC topology, operating slightly above the resonance frequency (near 2 MHz). Since plasma systems have a dynamic behavior which affects the RF antenna impedance, the RF antenna voltage and current change according to changes in the plasma parameters. In order to continuously yield maximum current through the antenna, the inverter has to operate at its maximum current point despite the inevitable changes in the antenna impedance due to changes in the plasma. An MCPT controller is simulated using LTspice, in which the antenna current is sensed and tracked to the maximum current point in closed loop by varying the frequency of a voltage controlled oscillator. The impedance matching network is thus made redundant for maximum RF power coupling to the antenna.

ICACCI--32.6 12:15 Multi-Agent Trajectory Control Under Faulty Leader: Energy-Level Based Leader Election Under Constant Velocity
B. K. Swathi Prasad and Aditya G Manjunath (M. S. Ramaiah University of Applied Sciences, India); Hariharan Ramasangu (Research, India)

A multi-agent flocking control algorithm, consisting of a leader and agents moving at constant velocity, together with a method for leader election, is proposed in this paper. Leader election is found to be essential because loss of connectivity between the leader and the agents leads to divergence from the trajectory path. A leader election algorithm is developed to replace a faulty leader and thereby regain control of the agents. The leader election algorithm utilizes the energy level of each agent: all agents and the leader have a certain energy based on theoretical and measured control inputs, and the elected leader is the agent with the least utilized energy.
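The energy-based election criterion can be sketched as follows (the sum-of-squared-control-inputs energy metric is an illustrative assumption, not the paper's exact formula):

```python
def utilized_energy(control_inputs):
    """Energy utilized by an agent, taken here as the sum of squared
    control inputs applied so far (an illustrative metric)."""
    return sum(u * u for u in control_inputs)

def elect_leader(agent_inputs):
    """Elect as the new leader the agent with the least utilized energy.
    `agent_inputs` maps agent id -> history of applied control inputs."""
    return min(agent_inputs, key=lambda a: utilized_energy(agent_inputs[a]))
```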

ICACCI--32.7 12:30 System Analysis for Optical Inter-Satellite Link with Varied Parameter and Pre-Amplification
Guddan Kumari (Government Women Engineering College Ajmer, India); Chetan Selwal (Government Women Engineering College Ajmer, India)

Inter-satellite optical wireless communication (Is-OWC) provides high data rate, long distance outdoor links free of licensing issues. It is a line-of-sight communication, which is affected by the alignment of the satellites in space. In this paper, the system is designed with post-amplification in the inter-satellite link to combat the losses. The system bit error rate (BER) and Q-factor are tracked for performance evaluation by varying parameters such as power, bit rate and distance. For inter-satellite OWC, a BER as low as 10^-9 and a Q-factor greater than 7 are typically aimed for.
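The quoted targets are consistent with the standard Q-factor to BER mapping for direct detection, which can be checked numerically (this is the textbook relation, not a result from the paper):

```python
import math

def ber_from_q(q):
    """Standard Q-factor to BER mapping for on-off-keyed detection:
    BER = 0.5 * erfc(Q / sqrt(2)). A Q of about 6 corresponds to
    BER ~ 1e-9; Q > 7 gives a comfortable margin below that."""
    return 0.5 * math.erfc(q / math.sqrt(2))
```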

ICACCI--32.8 12:45 Electrocardiogram Waveform Denoising Using a Modified Multi-Resolution Filtered-X LMS Algorithm Based on Discrete Wavelet Transform
Roop Kanwal and Sangeet Pal Kaur (Punjabi University Patiala, India)

This paper presents an adequate method that uses a modified multi-resolution filtered-x LMS algorithm based on the wavelet transform to remove noise from ECG signals contaminated by various noises, namely power line interference, electromyogram (EMG) noise and synthetic interferences. The results are evaluated qualitatively and quantitatively by applying the proposed method to some of the MIT-BIH Arrhythmia Database waveforms. The simulation results show that the proposed method effectively removes the noise from the polluted ECG signal, with faster convergence and lower computational complexity of the deployed algorithm.

ICACCI--32.9 13:00 Bluetooth RSSI Based Collision Avoidance in Multi-robot Environment
Lijina P and Nippun Kumaar Aa (Amrita Vishwa Vidyapeetham, Amrita School of Engineering, India)

Multi-robot systems are gaining importance in robotics research. One critical issue in a multi-robot system is collision among the mobile robots while sharing the same workspace. This paper deals with collision-free path planning for multiple mobile robots using the Bluetooth RSSI value. In the proposed collision avoidance algorithm, a decentralized approach with fixed priority levels for the robots is considered. A variable speed technique based on the RSSI value between robots is used, and obstacle avoidance is implemented based on a state-based obstacle avoidance algorithm. The proposed algorithm is implemented and tested using the Webots 3D simulator.
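One simple way to realize the variable-speed idea is to scale each robot's speed by the RSSI to its nearest neighbour, letting the higher-priority robot keep full speed (the thresholds and linear ramp below are illustrative assumptions, not the paper's tuned values):

```python
def speed_from_rssi(rssi_dbm, my_priority, other_priority,
                    v_max=0.5, near=-50.0, far=-80.0):
    """Scale a robot's speed by the Bluetooth RSSI to a nearby robot.
    Stronger RSSI (closer robot) -> slower speed; the higher-priority
    robot (smaller number) keeps full speed and never yields."""
    if my_priority < other_priority:   # higher priority: do not slow down
        return v_max
    if rssi_dbm >= near:               # very close: stop and yield
        return 0.0
    if rssi_dbm <= far:                # far away: full speed
        return v_max
    # Linear ramp between the two RSSI thresholds
    return v_max * (near - rssi_dbm) / (near - far)
```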

ICACCI--32.10 13:15 A Suboptimal Scheduling Scheme for MIMO Broadcast Channels with Quantized SINR
Ashok Panda (International Institute of Information Technology, Bhubaneswar, India); Swadhin Mishra (National Institute of Science and Technology, Berhampore, India); Sasanka Sekhar Dash (Amdocs Development Center, India); Neha Bansal (NIST, Berhampur, India)

A sub-optimal scheduling scheme for a multiuser MIMO (MU-MIMO) system is considered, where the main objective is to reduce the feedback load. In the proposed scheduling scheme, the transmitter selects the Mt most favorable users in terms of the received signal-to-interference-plus-noise ratio (SINR) at each user. To reduce the feedback load on the reverse channel, we quantize the SINR values into a few bits, and the quantized bits are fed back to the transmitter instead of the real SINR values. We consider two approaches to using thresholds in determining the quantization levels. In the first approach we consider a fixed threshold for all sets of users, whereas in the second an adaptive threshold technique is used, in which the threshold values are determined depending on the number of users. The achievable throughputs using full SINR feedback and quantized SINR feedback are compared, and the variation of the achieved throughput with the number of quantization bits is presented. The proposed quantized SINR based scheduling scheme is also compared with the full SINR feedback scheme in terms of feedback load. Although the proposed scheme compromises on the achieved throughput, it has a remarkable advantage in terms of reduced feedback load.
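The quantize-then-select step can be sketched as follows (fixed thresholds assumed; the adaptive-threshold variant would recompute `thresholds` from the number of users):

```python
import numpy as np

def quantize_sinr(sinr, thresholds):
    """Map each user's SINR to a quantization index, so only a few
    bits per user need to be fed back to the transmitter."""
    return np.digitize(sinr, thresholds)

def schedule(sinr_q, mt):
    """Transmitter selects the Mt users with the highest quantized SINR."""
    return np.argsort(sinr_q)[::-1][:mt]
```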

ICACCI--32.11 13:30 Design and Simulation of Microstage Having PZT MEMS Actuator for 3D Movement
Kiran Junagal (Rajasthan Technical University Kota Rajasthan India, India); R. s. Meena (UCE, KOTA, India)

This paper presents the design and simulation of a silicon and microstacked-PZT hybrid XYZ microstage, whose base material is silicon while PZT serves as the actuator. A microstacked PZT MEMS actuator, integrated into a silicon microstage with dimensions of 20 mm × 20 mm × 3.16 mm, is proposed. The microstage is capable of providing a large stage area and considerable displacement with high accuracy. The silicon microstage contains a moonie structure, which helps further amplify the actuation of the PZT actuator. The three directional movements are carried out by respective microstacked PZT actuators, which have the benefit of high precision and high resonance frequency. The structure is designed and simulated in the COMSOL Multiphysics software based on the finite element method.

ICACCI--32.12 13:45 An Analytical Model for a Resource Constrained QoS Guaranteed SINR Based CAC Scheme for LTE BWA Het-Nets
Arijeet Ghosh and Iti Misra (Jadavpur University, India)

This paper proposes an effective signal-to-interference-plus-noise ratio (SINR) based Call Admission Control (CAC) scheme with constrained resources that guarantees the Quality of Service (QoS) requirements of admitted users in a 3GPP LTE Heterogeneous Broadband Wireless Access network (Het-BWA-Net) scenario. It also develops an analytical model based on a Continuous Time Markov Chain (CTMC) to evaluate the performance of the proposed CAC scheme in terms of popular QoS parameters such as New Call Blocking Probability (NCBP), Handoff Call Dropping Probability (HCDP), Connection Outage Probability (COP) and Bandwidth Utilisation (BU). The analytical model, along with the CAC scheme, is implemented on the MATLAB platform to generate results. The novelty of the CAC scheme is that it considers the interference level for each connection type, which helps to obtain the COP with the required QoS. It is seen that the proposed SINR-CAC scheme in 3GPP LTE Het-BWA-Nets not only generates commendable improvements in NCBP, HCDP and COP but also provides a 76.70% improvement in the overall BU of the system.
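In its simplest form, a CTMC-based blocking analysis reduces to the classical Erlang-B formula for an M/M/c/c system, which can serve as a baseline for the NCBP (the paper's model with handoffs, outage and interference is much richer; this is only the textbook special case):

```python
from math import factorial

def erlang_b(traffic, channels):
    """Erlang-B blocking probability from the M/M/c/c CTMC steady
    state: probability that all `channels` are busy when offered
    `traffic` Erlangs, i.e. the new-call blocking probability."""
    num = traffic ** channels / factorial(channels)
    den = sum(traffic ** k / factorial(k) for k in range(channels + 1))
    return num / den
```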

ICACCI--32.13 14:00 Functional Flow Diagram(FFD): Semantics for Evolving Software
Vaishali Chourey (Medicaps Institute of Technology and Management, India); Meena Sharma (IET-DAVV Institute of Engineering & Technology, DAVV, Indore, India)

In the current scenario, component-based development approaches have led to reuse-based development and the adoption of large-scale software systems. Analyzing the design of such systems and assessing their performance is not a trivial task. Model-based testing tools have convincingly demonstrated the success of functional testing of such systems; deriving tests of non-functional behavior from the development models is a natural next step, and reliability assessment is one such aspect. However, design models have not been employed in the prevailing non-functional assessment techniques. Given the strength of such models and the need to devise assessment measures, an attempt to formalize architectural models in this context is presented in this paper. Our paper focuses on deriving an intermediate notation for the non-functional evaluation of software systems. Modeling, annotating constraints, and visualizing component interaction patterns form the scope of the work. The new model thus generated has features similar to the Systems Engineering "Functional Flow Diagram" and will be used with the same definitions for the software components.

ICACCI--33: ICACCI-33: Signal/Image/video/speech Processing/Computer Vision; and Pattern Recognition & Analysis (Short Papers)

Room: LT-7(Academic Area)
Chair: J Siva Ramakrishna (M. S. Ramaiah University of Applied Sciences, India)
ICACCI--33.1 11:00 Human Face Detection and Recognition in Videos
Swapnil Tathe (Sinhgad College of Engineering, Pune); Abhilasha Sandipan Narote (Smt. Kashibai Navale College of Engineering, University of Pune, India); Sandipann Pralhad Narote (Government Residence Women Polytechnic, Tasgaon, Sangli)

Advancements in computer technology have made it possible to develop new video processing applications dealing with human faces. Examples of these applications are face detection and recognition applied to surveillance systems, gesture analysis applied to user-friendly interfaces, etc. The first step in practical face analysis systems is real-time searching for faces in sequential images containing faces and background. In this paper, a system is proposed for human face detection and recognition in videos. Efforts are made to minimize the processing time of the detection and recognition process. To reduce human intervention and increase overall system efficiency, the system is divided into three steps: motion detection, face detection and recognition. Motion detection reduces the search area and the processing complexity of the system.
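The motion-detection step that narrows the face-search area can be sketched with simple frame differencing (a generic technique; the paper does not specify this exact method or threshold):

```python
import numpy as np

def motion_mask(prev, curr, thresh=25):
    """Frame differencing for motion detection: pixels whose absolute
    intensity change exceeds a threshold are flagged as moving."""
    diff = np.abs(curr.astype(int) - prev.astype(int))
    return diff > thresh

def bounding_box(mask):
    """Tight bounding box (ymin, ymax, xmin, xmax) of the moving
    region, or None if nothing moved; the face detector then only
    searches inside this box."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return ys.min(), ys.max(), xs.min(), xs.max()
```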

ICACCI--33.2 11:15 Optimistic Bilateral MAP-MBD Based Color Restoration Using GA
Manavjot Dhaliwal (Guru Nanak Dev University, India)

Restoration of images using the maximum a posteriori multichannel blind deconvolution (MAP-MBD) strategy is found to be a powerful restoration approach among the available techniques. It improves the quality of the input image in such a way that the output image has fewer artifacts than with earlier techniques. Although MAP-MBD has shown substantial benefits over other available techniques, it has neglected several issues: owing to its transform domain strategies it ignores the effect of noise in images, and it has been applied to medical images only, not to color images. A Genetic Algorithm (GA) reproduces the operations needed for evolution and performs an effective search of the global space to obtain an optimal solution, which makes it well suited to contrast enhancement. In this work, the genetic algorithm is employed to improve the image quality of color images to a good extent, and bilateral filtering is used for removing noise. In terms of PSNR, MSE, RMSE and BER, the results of the proposed method are considerably better.
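The GA search loop can be sketched in minimal form (a generic real-valued GA with tournament selection, blend crossover and Gaussian mutation; the paper's encoding and fitness function for contrast enhancement are not specified in the abstract, so a toy fitness is used here):

```python
import random

def ga_optimize(fitness, bounds, pop_size=20, gens=40, mut=0.1, seed=0):
    """Minimal real-valued genetic algorithm, maximizing `fitness`
    over a single parameter in `bounds`."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        new = []
        for _ in range(pop_size):
            # Tournament selection of two parents
            p1 = max(rng.sample(pop, 3), key=fitness)
            p2 = max(rng.sample(pop, 3), key=fitness)
            a = rng.random()
            child = a * p1 + (1 - a) * p2        # blend crossover
            if rng.random() < mut:               # Gaussian mutation
                child += rng.gauss(0, 0.1 * (hi - lo))
            new.append(min(max(child, lo), hi))
        pop = new
    return max(pop, key=fitness)
```

For contrast enhancement the fitness would score an enhancement parameter (e.g. a gamma value) by the contrast of the resulting image.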

ICACCI--33.3 11:30 Optimized Multi-Layered Unequal Error Protection of SPIHT Coded Images Using 64-HQAM
Anam Mobin, Athar Ali Moinuddin, Ekram Khan and Mirza S. Beg (Aligarh Muslim University, India)

In view of the increasing use of video transmission over wireless channels, a lot of research effort has been made to trade off error resiliency against transmission bandwidth requirements. This in turn has led to the development of various unequal error protection (UEP) schemes. In this paper, the performance of optimized 2-layered and 3-layered UEP schemes using hierarchical 64-QAM (64-HQAM) is investigated for wavelet-transformed images coded with the set partitioning in hierarchical trees (SPIHT) algorithm and transmitted over an additive white Gaussian noise (AWGN) channel. A look-up table (LUT) based approach to selecting the optimal values of the 64-HQAM modulation parameters for different channel conditions is suggested so as to reduce the computational complexity. Simulation results show that the 3-layered optimized 64-HQAM-based UEP outperforms the corresponding non-optimized UEP. Also, the 3-layered optimized UEP scheme using 64-HQAM performs slightly better than the optimized 2-layered UEP scheme using 64-HQAM.

ICACCI--33.4 11:45 Cascaded Speech Enhancement Technique in Highly Nonstationary Noise Environment
Deeleia S (Visvesvaraya Technological University & Bangalore Institute of Technology, India); J C Narayana Swamy (Bangalore Institute of Technology, India); K D N V S Prasad (M-(SRS), CRL, Bharat Electronics Limited, Bangalore, India); D Seshachalam (BMS College of Engineering, India)

This paper discusses the enhancement of noisy speech in highly nonstationary noise environments. Minimum Mean Square Error (MMSE) inspired data-driven noise power (DDNP) estimation and a geometric approach to spectral subtraction (GASS) are used in cascade in order to obtain a better segmental signal-to-noise ratio (SSNR) and a good Perceptual Evaluation of Speech Quality (PESQ) score. The proposed method gives better results for low SNR speech signals compared to iterative enhancement with the individual GASS and DDNP algorithms.
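The spectral-subtraction stage can be illustrated in its basic magnitude form (a simpler stand-in for GASS, with the noise spectrum estimated from a noise-only reference rather than by DDNP):

```python
import numpy as np

def stft_frames(x, frame, hop):
    """Windowed FFT frames of a 1-D signal."""
    n = 1 + (len(x) - frame) // hop
    win = np.hanning(frame)
    return [np.fft.rfft(x[i*hop:i*hop+frame] * win) for i in range(n)], win

def spectral_subtract(noisy, noise_ref, frame=256, hop=128, floor=0.05):
    """Basic magnitude spectral subtraction: subtract the average noise
    magnitude from each frame, keep the noisy phase, and resynthesize
    by weighted overlap-add."""
    specs, win = stft_frames(noisy, frame, hop)
    noise_specs, _ = stft_frames(noise_ref, frame, hop)
    noise_mag = np.mean([np.abs(s) for s in noise_specs], axis=0)
    out = np.zeros(len(noisy))
    norm = np.zeros(len(noisy))
    for i, spec in enumerate(specs):
        # Spectral floor avoids negative magnitudes (musical noise control)
        mag = np.maximum(np.abs(spec) - noise_mag, floor * np.abs(spec))
        seg = np.fft.irfft(mag * np.exp(1j * np.angle(spec)), frame)
        out[i*hop:i*hop+frame] += seg * win
        norm[i*hop:i*hop+frame] += win ** 2
    return out / np.maximum(norm, 1e-8)
```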

ICACCI--33.5 12:00 Use of OBIA for Extraction of Cadastral Parcels
Ganesh Khadanga and Kamal Jain (Indian Institute of Technology, Roorkee, India); Suresh Merugu (CMR College of Engineering and Technology & Indian Institute of Technology, Roorkee, India)

With the increased availability of High Resolution Satellite Imagery (HRSI), Object Based Image Analysis (OBIA) has become an indispensable tool for analysis and modeling in remote sensing technology. OBIA basically consists of image segmentation, object attribution and classification. OBIA gives better results than traditional pixel-based analysis because it is based not only on the spectral signature of the pixels but also on the statistical, geometric and topographical features of the objects. High resolution imagery of the study area was taken up; segmentation, object attribution and classification were performed; and the result was exported as a vector layer in .shp format. The layer was further imported into QGIS, and the .shp file clearly indicates the shapes of the individual land parcels. The extracted parcels were comparable with the original cadastral vector layer of parcels. Thus, OBIA can be used as an automated procedure to extract cadastral land parcels from high resolution imagery.

ICACCI--33.6 12:15 Modeling Basal Ganglia Microcircuits Using Spiking Neurons
Chaitanya Medini (Amrita Vishwa Vidyapeetham ( Amrita University), India); Anjitha Thekkekuriyadi and Surya Thayyilekandi (Amrita Vishwa Vidyapeetham, Amrita University, India); Manjusha Nair (Amrita Vishwa Vidyapeetham, Amritapuri, India); Bipin Nair (Amrita Vishwa Vidyapeetham ( Amrita University), India); Shyam Diwakar (Amrita Vishwa Vidyapeetham, India)

The basal ganglia and cerebellum have been implicated in critical roles related to the control of voluntary motor movements, action selection and cognition. The basal ganglia primarily receive inputs from cortical areas as well as thalamic regions, and their functional architecture is parallel in nature, linking several brain regions such as the cortex and thalamus. The striatum, substantia nigra and pallidum form distinct neuronal populations in the basal ganglia circuit, which are functionally distinct, supporting sensorimotor, cognitive and emotional-motivational brain functions. In this paper, we have modelled and simulated basal ganglia neurons as well as the basal ganglia circuit using integrate-and-fire neurons. The firing behaviour of the subthalamic nucleus and globus pallidus externa shows how they modulate spike transmission in the circuit, and the model could be used to study circuit dysfunctions in Parkinson's disease.
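A single leaky integrate-and-fire unit of the kind used in such circuit models can be sketched as follows (generic textbook dynamics with illustrative parameters, not the paper's fitted values):

```python
import numpy as np

def lif_simulate(i_ext, dt=0.1, tau=10.0, r=1.0, v_rest=0.0,
                 v_th=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: tau * dv/dt = -(v - v_rest) + R*I.
    Crossing the threshold emits a spike and resets the membrane.
    Returns the membrane trace and the spike times (in ms for dt in ms)."""
    v = v_rest
    trace, spikes = [], []
    for n, i in enumerate(i_ext):
        v += dt / tau * (-(v - v_rest) + r * i)
        if v >= v_th:
            spikes.append(n * dt)
            v = v_reset
        trace.append(v)
    return np.array(trace), spikes
```

A circuit model connects many such units, with each neuron's `i_ext` composed of external drive plus weighted spikes from its presynaptic populations.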

ICACCI--33.7 12:30 Performance Improvement in HEVC Using Contrast Sensitivity Function
Sini Simon M (NIT Calicut, India); Abhilash Antony (Muthoot Institute of Technology and Science, India); Sreelekha G (National Institute Of Technology, India)

High Efficiency Video Coding (HEVC), the most recent video compression standard, offers about double the compression ratio of its immediate predecessor H.264/AVC at the same level of video quality, or substantially higher video quality at the same bit-rate. Careful refinement of existing tools, as well as the introduction of a variety of parallel processing tools, helps HEVC attain this. In HEVC, quantization is one of the key processes that decide the coding efficiency and the quality of the reconstructed video. Adaptive quantizers based on human visual system (HVS) models can be incorporated into the HEVC anchor model to improve its performance; however, the major limitation of such schemes is the complexity involved. This paper presents an encoder which uses a mathematical model of the contrast sensitivity function (CSF) to remove visually insignificant information before quantization, without much impact on the visual quality of the video. The proposed method provides an average bit rate reduction of 2.75% for the intra main configuration at a quantization parameter (QP) value of 22.
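One widely used analytic CSF model is the Mannos-Sakrison function, which weights spatial frequencies by their visibility (whether the paper uses this particular model is not stated in the abstract; it is shown here as a representative candidate):

```python
import numpy as np

def csf_mannos_sakrison(f):
    """Mannos-Sakrison contrast sensitivity function,
    A(f) = 2.6 * (0.0192 + 0.114 f) * exp(-(0.114 f)^1.1),
    with f in cycles/degree. Sensitivity peaks at mid frequencies,
    so coefficients at very low and very high frequencies can be
    quantized more coarsely."""
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)
```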

ICACCI--33.8 12:45 Shot Boundary Detection Using Correlation Based Spectral Residual Saliency Map
B H Shekar, K P Uma and Raghurama Holla, K (Mangalore University, India)

The information present in videos is used by a variety of applications such as surveillance, intelligent business analysis and high-tech education. Segmenting a video into its basic architectural units is required to harness this vital information. This paper proposes an approach for shot identification based on the spectral residual. The spectral residual of a video frame is obtained by analysing the log spectrum of the frame. The ability of the spectral residual to represent the innovative, or novel, part of the frame and to provide a consolidated representation of the scene makes it promising for video segmentation. The spatial domain representation of the spectral residual gives the saliency map. At shot boundaries there will be variations in the innovative regions of adjacent frames, so shot boundaries are detected using the correlation between the saliency maps of adjacent frames. Experiments conducted on videos from the TRECVID 2001 and TRECVID 2007 test datasets show that the proposed approach is simple, fast, reliable and robust for shot boundary detection.
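The spectral-residual saliency computation and the correlation test can be sketched as follows (based on the standard Hou-Zhang formulation; the box-filter size and correlation threshold are illustrative assumptions):

```python
import numpy as np

def saliency_map(frame):
    """Spectral residual saliency: the residual of the log amplitude
    spectrum (log spectrum minus its local average), recombined with
    the original phase, highlights the 'novel' parts of a frame."""
    h, w = frame.shape
    f = np.fft.fft2(frame)
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    # Local average of the log spectrum via a 3x3 box filter
    pad = np.pad(log_amp, 1, mode='edge')
    avg = sum(pad[i:i+h, j:j+w] for i in range(3) for j in range(3)) / 9.0
    residual = log_amp - avg
    return np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2

def is_shot_boundary(sal_a, sal_b, thresh=0.5):
    """Declare a boundary when adjacent saliency maps decorrelate."""
    corr = np.corrcoef(sal_a.ravel(), sal_b.ravel())[0, 1]
    return corr < thresh
```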

Wednesday, September 21

Wednesday, September 21 15:50 - 17:30 (Asia/Kolkata)

ICACCI--08: ICACCI-08: Poster Session I

Room: Lawns(Academic Area)
Chairs: Anil K Dubey (ABES Engineering College Ghaziabad, Uttar Pradesh, India), Santosh Kumar Majhi (Veer Surendra Sai University of Technology, India)
ICACCI--08.1 Analysis of Nutritional Deficiency in Citrus Species Tree Leaf Using Image Processing
Chitra Anil Dhawale (Amravati University, India); Sanjay Misra (Suite 11, Covenant University & Covenant University, Nigeria); Sonika Thakur (S. G. B. Amravati University, India); Navin Dattatraya Jambhekar (SGB Amravati University Amravati & S. S. S. K. R. Innani Mahavidyalaya Karanja Lad, India)

Citrus trees provide nutritious food for humans as well as animals. However, due to uncertain climatic conditions, they are prone to different pathological disorders caused by nutritional deficiency. In the Vidarbha region, citrus suffers from deficiencies of certain essential elements which plants obtain from the soil. The segmentation of disease symptoms in citrus leaf images can be a valuable aid for the detection of nutritional deficiencies and disorders. In this research, different digital image segmentation techniques have been employed to analyse the regions of the citrus leaf affected by diseases such as spots and wavy structures. This paper investigates the abnormalities in citrus leaves caused by diseases using segmentation methodologies. The nutritional deficiency of a citrus tree is directly reflected in its leaves. If any part of a symptom appears disconnected, it can be merged back with its original part using the clustering technique. The disease spots are identified by clustering, whereas the wavy disorders are segmented by the Kirsch operator. The proposed system analyzes the disorders of a citrus tree by analyzing its leaf, using a segmentation technique with the integrated use of clustering and the Kirsch operator.
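The Kirsch operator mentioned above is a standard compass edge detector: the image is correlated with eight 45-degree rotations of one mask and the maximum response is kept per pixel. A minimal NumPy sketch (generic operator, not the paper's full pipeline):

```python
import numpy as np

def kirsch_edges(img):
    """Kirsch compass edge operator: correlate with 8 rotated 3x3
    masks and keep the maximum response at each interior pixel."""
    k = np.array([[5, 5, 5], [-3, 0, -3], [-3, -3, -3]], float)
    kernels = []
    for _ in range(8):
        kernels.append(k)
        # Rotate the compass mask 45 degrees by shifting its border
        b = [k[0, 0], k[0, 1], k[0, 2], k[1, 2],
             k[2, 2], k[2, 1], k[2, 0], k[1, 0]]
        b = b[-1:] + b[:-1]
        k = np.array([[b[0], b[1], b[2]],
                      [b[7], 0,    b[3]],
                      [b[6], b[5], b[4]]])
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for kk in kernels:
        resp = sum(kk[i, j] * img[i:i + h - 2, j:j + w - 2]
                   for i in range(3) for j in range(3))
        out = np.maximum(out, resp)
    return out
```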

ICACCI--08.2 Factors Affecting the Consumer Preference of Non-Packaged Non-Branded Rice in South India
Rajiv Prasad (Amrita Vishwa Vidyapeetham University & Amrita School of Business, India); Umesh Sivasubramanian (Amrita Vishwa Vidyapeetham University, India)

Indians eat many cereals as part of their staple food, and rice is one of them. Rice is available in the market in two forms, namely loose unpacked form and packaged form in fixed quantities, and it is amongst the most consumed commodities in South India. It has been observed that many South Indian families purchase rice in loose quantities. The purpose of this study was to find the reasons why South Indian rice consumers choose loose unpacked rice over branded packaged rice. A structured survey questionnaire was used to collect data from 171 respondents within South India about their pattern of rice purchase and consumption, as well as the reasons behind their preferences for loose unpacked versus packaged and branded rice. The factors found to be important for preferring loose unpackaged rice were perceived safety, the convenience of purchasing in needed quantities, and easy availability; the quality of the rice was another important parameter while purchasing. Some interesting findings also emerged from the study, such as a perception among some respondents of the presence of chemicals in packaged rice, lower availability in remote areas, and own cultivation by many rural families. Based on these findings, some important suggestions are made to the packaged and branded rice players.

ICACCI--08.3 Internal Crack Detection in Kidney Bean Seeds Using X-ray Imaging Technique
Surbhi Sood (National Institute of Technical Teachers Training and Research (NITTTR), Chandigarh & Panjab University, Chandigarh, India); Shveta Mahajan (AcSIR, CSIR-CSIO, India); Amit Doegar (National Institute of Teachers Training and Research, India); Amitava Das (CSIO, India)

Seed quality testing is a contemporary research area that is motivated towards increasing agricultural productivity. For accurate quality assessment of seeds, internal morphological characteristics should be thoroughly examined in addition to the external examination. The soft X-ray imaging technique enables the visualization of the internal morphological attributes of agricultural seeds and grains in a non-destructive manner. The objective of this paper was to study the efficacy of using the X-ray imaging technique to detect the internal cracks in kidney bean seeds. The X-ray images of the sample seeds were acquired and image processing techniques such as histogram thresholding and morphological operations were applied on them. The segmented seed images were further processed and features were extracted. The extracted features were utilized for automatic detection of internal cracks, if present. The obtained results clearly indicated the usability of X-ray imaging techniques for automatic non-destructive detection of internal cracks in kidney bean seeds, as an essential component of their quality assessment.
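A common choice for the histogram-thresholding step is Otsu's method, which picks the grey level maximizing the between-class variance (the paper does not name its exact thresholding rule; this is a representative implementation):

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's histogram thresholding for an 8-bit image: return the
    grey level that maximizes the between-class variance, separating
    seed/crack foreground from background."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                 # class-0 probability
    mu = np.cumsum(p * np.arange(256))   # class-0 cumulative mean
    mu_t = mu[-1]                        # global mean
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b[np.isnan(sigma_b)] = 0
    return int(np.argmax(sigma_b))
```

Morphological opening/closing would then clean the binary mask before crack features are extracted.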

ICACCI--08.4 A Planning Based Approach For Satisfying User Request in Context Aware Settings
Sujata Swain and Rajdeep Niyogi (Indian Institute of Technology Roorkee, India)

In order to meet a user's request, more than one service may be required. In real-world scenarios, satisfying a user's request depends on the context, and the services change according to the context. In this paper we consider the Cricket Commentary Domain (CCD) as a context-aware application. Service composition in this setting is modeled as a planning problem, and we use the Blackbox planner to obtain the plan.

ICACCI--08.5 Named Entity Recognition and Classification for Gujarati Language
Komil Vora (V. V. P Engineering College, India); Avani Vasant (Babaria Institute of Technology & Science, India); Rachit Adhvaryu (Gujarat Technological University, India)

Named Entity Recognition (NER) is a method to search for a particular Named Entity (NE) [1] in a file or an image, recognize it, and classify it into specified entity classes such as Name, Location, Organization, Numbers and Other categories. It is a most useful element of Natural Language Processing (NLP), which makes text extraction very easy [2]. In this paper, we focus on using Hidden Markov Model (HMM) based techniques to recognize Named Entities for the Gujarati language. The main reason for using an HMM is that it provides good performance and can be easily implemented for any language. A remarkable amount of work has been carried out for many languages like English, Greek and Chinese, but a wide scope is still open for languages of Indian origin like Hindi and Gujarati. Gujarati is not only an Indian language but also the most widely spoken language in Gujarat. Thus, in this paper, we propose an HMM-based NER scheme for the Gujarati language.
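The decoding step of an HMM tagger is the Viterbi algorithm, sketched here with a toy model (the states, transition and emission probabilities are illustrative, not estimated from Gujarati data):

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Viterbi decoding for an HMM tagger: the most likely NE-tag
    sequence for an observed token sequence. Unknown tokens get a
    small smoothing emission probability."""
    v = [{s: start_p[s] * emit_p[s].get(obs[0], 1e-6) for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        v.append({})
        back.append({})
        for s in states:
            prob, prev = max((v[t - 1][p] * trans_p[p][s] *
                              emit_p[s].get(obs[t], 1e-6), p)
                             for p in states)
            v[t][s], back[t][s] = prob, prev
    # Backtrack from the best final state
    last = max(v[-1], key=v[-1].get)
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]
```

In a real NER system the probabilities would be estimated from an annotated Gujarati corpus.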

ICACCI--08.6 An automated system for measuring hematocrit level of human blood from total RBC count
Ratnadeep Dey, Kaushiki Roy, Debotosh Bhattacharjee and Mita Nasipuri (Jadavpur University, India); Pramit Ghosh (RCC Institute of Information Technology, India)

Measuring the hematocrit level of human blood is a crucial pathological test used to diagnose anemia, bone marrow disorders, colon cancer and ulcers. In the pathological laboratory, many techniques are available to measure the hematocrit level of human blood. However, those processes have limitations such as manual counting errors and the need for expert pathologists. In this paper, an automated system for measuring the hematocrit level is proposed. The system uses a new approach in which the hematocrit level is estimated from the total red blood cell count. A comparison between the results obtained from the traditional system and the proposed system is included, showing that the results are nearly equal. The proposed approach can therefore be used to measure the hematocrit level accurately.
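The paper does not state its exact mapping from RBC count to hematocrit; as a hedged illustration, the standard haematological identity Hct (%) = RBC (10⁶ cells/µL) × MCV (fL) / 10 shows how an automated RBC count can yield a hematocrit estimate once a mean corpuscular volume (MCV) is assumed.

```python
# Illustrative use of the standard Hct/RBC/MCV identity; the assumed
# default MCV of 90 fL is a typical adult value, not a value from the paper.

def hematocrit_from_rbc(rbc_millions_per_ul, mcv_fl=90.0):
    """Estimate hematocrit (%) from total RBC count and an assumed MCV."""
    return rbc_millions_per_ul * mcv_fl / 10.0

print(hematocrit_from_rbc(5.0))  # 5.0 M/uL at MCV 90 fL -> 45.0 %
```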

ICACCI--08.7 Summarization of Customer Reviews for a Product on a website using Natural Language Processing
Akkamahadevi Hanni (B V Bhoomaraddi College of Engineering and Technology, India); Mayur Patil (Tesco Bengaluru, India); Priyadarshini M Patil (BVB College of Engineering & Technology, India)

E-commerce sites have grown rapidly in the recent past. Thousands of products are sold across various websites, and with the massive growth in the number of reviews and the advent of opinion-rich review forums, choosing the right product has become difficult for users. HELP-ME-BUY is an Android application that assists buyers in online shopping. Buyers need to verify the genuineness and quality of products, and what better way is there than to ask people who have already bought them? This is where customer reviews come into the picture. The major hitch is that popular products have thousands of reviews, and users have neither the time nor the patience to read them all. Our application eases this task by analyzing and summarizing all reviews, helping the user learn what other buyers experienced with the product. The process is carried out by a number of modules, including feature extraction and opinion extraction, which improve the analysis and help form an efficient summary.

ICACCI--08.8 Distributed threshold k-means clustering for privacy preserving data mining

Privacy preservation is important when data mining becomes a cooperative task among participants. K-means clustering is one of the most capable and frequently used data mining techniques. In this paper, we propose an efficient distributed threshold privacy-preserving k-means clustering algorithm that uses code-based threshold secret sharing as its privacy-preserving mechanism. The code-based construction allows the data to be divided into multiple shares and processed separately at different servers. Our protocol takes fewer iterations to converge than the protocol proposed by Upamanyu [8] and does not require any trust among the servers or users. We also provide experimental results with comparisons and a security analysis of the proposed scheme.
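The privacy idea can be illustrated with simple additive secret sharing (a simplification, not the paper's code-based threshold scheme): each value is split into random shares held by different servers, per-cluster sums are aggregated on the shares, and only the recombined total is revealed for the centroid update.

```python
import random

def share(value, n_servers):
    """Split a value into n additive shares that sum back to the value."""
    shares = [random.uniform(-100, 100) for _ in range(n_servers - 1)]
    shares.append(value - sum(shares))
    return shares

def recombine(shares):
    return sum(shares)

data = [4.0, 6.0, 10.0]
n = 3
all_shares = [share(x, n) for x in data]  # one row of shares per data point
# Each server locally sums its own share of every point in a cluster...
server_sums = [sum(row[s] for row in all_shares) for s in range(n)]
# ...and only the combined cluster sum is reconstructed for the centroid.
cluster_sum = recombine(server_sums)
centroid = cluster_sum / len(data)
print(round(cluster_sum, 6), round(centroid, 6))
```

A threshold scheme additionally lets any sufficiently large subset of servers reconstruct the sum, which simple additive sharing does not provide.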

ICACCI--08.9 A Survey on Extracting Frequent Subgraphs
Susanna Thomas (Amrita School of Engineering, Amritapuri, Amrita Vishwa Vidyapeetham, Amrita University, India); Jyothisha J Nair (Amrita Vishwa Vidyapeetham, India)

Mining on graphs has become quite popular because of the increasing use of graphs in real-world applications. Given the importance of graph applications, the problem of finding frequent itemsets in transactional databases can be transformed into mining frequent subgraphs in a single graph or a set of graphs. The objective of frequent subgraph mining is to extract interesting and meaningful subgraphs that occur frequently. The research goals in frequent subgraph discovery are (i) mechanisms that effectively generate candidate subgraphs without duplicates, and (ii) processing techniques that generate only the candidate subgraphs necessary to discover the useful frequent subgraphs. In this paper, our prime focus is to give an overview of the state-of-the-art methods in frequent subgraph mining.

ICACCI--08.10 Internet of Things: A Survey Related to Various Recent Architectures and Platforms Available
Sharwari Solapure (GIT Belgavi, VTU, India); Harish H Kenchannavar (Gogte Institute of Technology & Visveswaraya Technological University, India)

The IP-based Internet is the largest network in the world; consequently, major steps are being taken towards connecting Wireless Sensor Networks (WSNs) to the Internet, an effort popularly known as the IoT (Internet of Things). The IETF has developed a suite of protocols and open standards for accessing applications and services in wireless resource-constrained networks such as the IoT. Application development requires a standardized architecture and platform for the design and analysis of new ideas. This paper provides a brief overview of recent IoT architectures and platforms. It also discusses some gaps in the usability of these platforms. This helps researchers select a particular platform according to need.

ICACCI--08.11 Customizable Holter Monitor Using Off-The-Shelf Components
Neena Goveas, Sreekesh S and Abhimanyu Zala (BITS Pilani K K Birla Goa Campus, India); Keerthi Chavan G (BITS PILANI & Goa Campus, India)

A Holter monitor is a device worn by patients to collect electrocardiography (ECG) data for around 24 hours. The device is then connected by a physician to a computer to download the data. Analysis of this data can reveal abnormalities that may not appear in a clinical measurement of shorter duration. The cost of a Holter monitor is a crucial parameter, as it is used by patients for longer durations, unlike other clinical instruments. Limited availability of devices restricts the number of patients who can be monitored and treated by hospitals. In addition, patients require customization of measurements, which most available devices do not offer. The high cost of the device also deters doctors from long-term monitoring uses such as tracking the athletic performance of sportsmen, supporting independent living for elderly patients, and monitoring patients taking medications with side effects. In this paper we present a low-cost Holter monitor that can be assembled from off-the-shelf components. It can be customized for individual needs and is extendable. The software architecture and communication framework developed can be used for any data-intensive measurement and transmission task.

ICACCI--08.12 DEPTA: An Efficient Technique for Web Data Extraction and Alignment
Rahull Lokhande (SRTMU Nanded & SGGSIE&T NANDED, India)

Many web databases contain data in structured, semi-structured and unstructured formats. This paper studies the issue of extracting these data records from online web databases. The main goal is to recognize the data region containing the data records, segment the records, mine the data values from them and store the extracted records in a structured format. This arrangement of extracted data is useful for many applications, such as knowledge discovery. Existing systems have a record-arrangement problem and do not arrange dynamically generated web data properly. The proposed system is based on identifying data records, extracting data values and arranging these values in a database. It uses the partial tree alignment method to achieve better alignment results.

ICACCI--08.13 Usage of Lowpass Filters for Miniaturization of Microstrip Branch-Line Hybrid Couplers
Denis Letavin, Yury E Mitelman and Victor Chechetkin (Ural Federal University, Russia)

This article presents the use of low-pass filters for creating miniaturized microstrip-line-based microwave devices. Numerical simulation and experimental research were carried out for a branch-line hybrid coupler with 3 dB power division at 2000 MHz. The isolation bandwidth at the 20 dB level is 800 MHz. The dimensions of the miniaturized device are 15.5 mm × 23.9 mm = 370.45 mm², which is 66% smaller than the conventional design.

ICACCI--08.14 Miniature Microwave Bandpass Filter with Two Circular Spiral Resonators
Denis Letavin and Victor Chechetkin (Ural Federal University, Russia)

A novel design of a microstrip bandpass filter with two spiral resonators is presented in this paper. The proposed filter has good frequency selectivity, and its dimensions are smaller than those of conventional designs. A numerical simulation of the filter with a central frequency of 1500 MHz was performed to obtain the S-parameters. A prototype with dimensions of 21 × 11.1 mm² was implemented on a substrate with a dielectric constant ε = 4.4, dielectric loss tangent tgδ = 0.02 and thickness h = 1.5 mm. The measured characteristics are in good agreement with the results of the electromagnetic simulation.

ICACCI--08.15 Statistical Analysis of Load Demand Distribution At Banaras Hindu University, India
Manish Kumar and Cherian Samuel (Indian Institute of Technology (BHU), Varanasi, India)

Electricity is generated from various energy sources such as coal, oil, gas and renewables. Because they are carbon-free, the utilization of renewable energy resources is increasing exponentially, and renewable distributed generation technologies are among the most in demand. Optimal utilization of renewable energy sources and continuous electricity supply at a location require statistical analysis to predict that location's randomly distributed electricity load demand. For this analysis, we collected hourly load demand data for the Banaras Hindu University (BHU), India region for the year 2014 from the Electricity & Water Supply Service (EWSS) centre, BHU. In this work, we applied Lognormal, Weibull and Rayleigh probability distributions to model the randomly distributed load demand data of the BHU region, using STATISTICA software for the statistical analysis and the comparative study. With the help of the Kolmogorov-Smirnov, Anderson-Darling and Chi-Square goodness-of-fit tests, we identified the Lognormal distribution as the best fit.
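The fit-and-test step can be sketched with the standard library alone: a maximum-likelihood lognormal fit followed by the Kolmogorov-Smirnov statistic. The synthetic "load" sample below is a stand-in for the BHU data; a full study would also fit and test Weibull and Rayleigh distributions.

```python
import math, random

def lognorm_cdf(x, mu, sigma):
    return 0.5 * (1 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2))))

def fit_lognormal(sample):
    """MLE for a lognormal: mean and std of the log-data."""
    logs = [math.log(x) for x in sample]
    mu = sum(logs) / len(logs)
    sigma = math.sqrt(sum((l - mu) ** 2 for l in logs) / len(logs))
    return mu, sigma

def ks_statistic(sample, cdf):
    """Max distance between empirical and model CDFs."""
    xs = sorted(sample)
    n = len(xs)
    return max(max((i + 1) / n - cdf(x), cdf(x) - i / n)
               for i, x in enumerate(xs))

random.seed(1)
demand = [math.exp(random.gauss(1.0, 0.4)) for _ in range(500)]  # synthetic load
mu, sigma = fit_lognormal(demand)
d = ks_statistic(demand, lambda x: lognorm_cdf(x, mu, sigma))
print(round(mu, 2), round(sigma, 2), round(d, 3))
```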

ICACCI--08.16 Allocating Resources in Cloud Computing When Users Have Strict Preferences
Anjan Bandyopadhyay and Sajal Mukhopadhyay (NIT DURGAPUR, India); Uttam Ganguly (Mallabhum Institute of Technology, India)

The economic settings of allocating resources in cloud computing have so far been studied through mechanism design with money. However, in some applications resources can be disseminated by service providers to users free of cost. The problem of allocating resources in this charitable environment becomes particularly challenging when users try to pick their favorite service provider. To the best of our knowledge, this highly plausible framework, in which every user has a strict preference ordering over the service providers, has not been studied in the cloud computing environment. This paper studies the framework for the first time and presents a DSIC mechanism that satisfies core allocation.

ICACCI--08.17 Identification of Escalations During Product Maintenance
Shivam Dhar (PES Institute of Technology, India); Pooja Tata (PES Institute Of Technology, India); Sachit Nayak (PES Institute of Technology, India); Subramaniam Kalambur (PES University, India); Dinkar Sitaram (Pes University, India); Atanu Dasgupta (Hewlett Packard Enterprise, India)

Software defects have a major impact on the market acceptance and profitability of computer systems. Identifying bugs or issues that could lead to escalations during the maintenance phase of a product is vital and helps increase customer satisfaction. The objective of this paper is to identify possible escalations and alert the engineering team to pay attention to them even before the impact of the issue is experienced. We analyze all customer-raised issues for a Hewlett Packard Enterprise product to identify potential escalations, and we demonstrate that, using machine learning techniques, it is possible to identify these escalations in the product maintenance cycle with 60% accuracy.

ICACCI--08.18 A Framework to Identify Influencers in Signed Social Networks
Vanita Jaitly (Chitkara University, India); Pradeep Chowriappa (Louisiana Tech University, USA); Sumeet Dua (Lousiana Tech University, USA)

Social networks can be defined as a graphical data structure that captures complex social interactions between users. Signed social networks are weighted representations of the social network that capture both positive and negative interactions (edges) between actors. Ad-hoc communities in a social network, as a corollary, can be treated as logical groupings of social actors that share common interests, ideas, or beliefs. In this work, we leverage these known constructs to effectively identify influencers (i.e., a subset of actors that exert their influence over a community), also known as seeds. Traditional approaches largely rely on degree of connectivity to identify the influencers of a community; we hypothesize that other measures can identify them as well. Our objective is therefore to explore and propose a technique using Principal Component Analysis (PCA) to identify the smallest set of influencers that maximizes the possibility of product adoption. Furthermore, we validate our findings by evaluating the potential of these influencers to identify positive communities in a social network. We believe our approach to choosing influencers (seeds) is novel; using these seeds, positive and negative edges are established, which we exploit to mine ad-hoc communities of interest.

ICACCI--08.19 Blind and Adaptive Reconstruction Approach for Non-Uniformly Sampled Wideband Signal
Himani Joshi and Sumit Jagdish Darak (IIIT-Delhi, India); Yves Louet (CentraleSupelec, France)

In 5G, wideband communication receivers (WCRs) capable of digitizing signals ranging from 400 MHz to a few GHz are desired to support various data-intensive services. The design of such WCRs is a challenging task due to the large area, high cost, limited speed and dynamic range of analog-to-digital converters (ADCs), as well as the poor reconfigurability of the analog front-end. Recently, non-uniform (or sub-Nyquist) sampling techniques have been envisioned to digitize sparse wideband signals using existing ADCs. However, subsequent digital reconstruction works well only when the number of active users in the received signal is known, i.e., the methods are not blind. To overcome this, a new Adaptive Orthogonal Matching Pursuit (AOMP) blind reconstruction approach is proposed in this paper. The term adaptive means that the parameters of AOMP are dynamically tuned based on learned spectral occupancy and channel quality statistics. The novelty of AOMP is the use of an online learning algorithm to estimate spectrum occupancy (or sparsity). Extensive simulation and complexity results indicate the superiority of the proposed AOMP approach over existing approaches for a wide range of SNRs and different levels of sparsity. Numerically, AOMP offers up to 64.5% improvement in normalized mean square error over existing approaches, with a slight penalty in computational complexity. Finally, a performance comparison of various reconstruction approaches for the automatic modulation classification application is presented.
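A simplified sketch of the reconstruction idea: orthogonal matching pursuit over an orthonormal dictionary, where the sparsity level is not given a priori and iteration stops when the residual energy falls below a threshold. The paper's AOMP instead tunes its parameters adaptively from learned occupancy statistics; the identity dictionary and noiseless "spectrum" below are purely illustrative.

```python
def omp_blind(signal, dictionary, energy_tol=1e-6):
    """Greedy sparse recovery; `dictionary` is a list of orthonormal atoms."""
    residual = list(signal)
    coeffs = {}
    while sum(r * r for r in residual) > energy_tol:
        # Pick the atom most correlated with the residual...
        idx = max(range(len(dictionary)),
                  key=lambda j: abs(sum(r * a for r, a in
                                        zip(residual, dictionary[j]))))
        c = sum(r * a for r, a in zip(residual, dictionary[idx]))
        coeffs[idx] = coeffs.get(idx, 0.0) + c
        # ...and subtract its contribution from the residual.
        residual = [r - c * a for r, a in zip(residual, dictionary[idx])]
    return coeffs

n = 8
dictionary = [[1.0 if i == j else 0.0 for i in range(n)] for j in range(n)]
true = {2: 3.0, 5: -1.5}                    # sparse "spectrum": 2 active bands
signal = [true.get(i, 0.0) for i in range(n)]
rec = omp_blind(signal, dictionary)
print(rec)
```

The residual-energy stopping rule is what makes the recovery blind: the number of active bands is discovered rather than supplied.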

ICACCI--08.20 RFID-Cloud Smart Cart System
Alex James (IIITMK, India); Yerlan Berdaliyev (Nazarbayev University, Kazakhstan)

The main purpose of this work is to reduce queuing delays in major supermarkets and other shopping centers by means of an electronic Smart Cart System, which introduces an intelligent approach to the billing process through RFID technology. The Smart Cart System is a cooperative combination of three separate systems: a website developed for the shopping market, the electronic smart cart device, and anti-theft RFID gates. This project focuses on developing the electronic smart cart device itself. It involves embedded electronic hardware consisting of an OLED display, an Arduino Mega 2560 board, a specifically designed PCB, a Wi-Fi module, a 13.56 MHz HF RFID reader, a power supply and a shopping cart.

ICACCI--08.21 A Real Time Speech to Text Conversion System Using Bidirectional Kalman Filter in Matlab
Neha Sharma and Shipra Sardana (Chandigarh University Gharuan, Mohali, India)

A real-time speech-to-text conversion system converts spoken words into text exactly as the user pronounces them. We created a real-time speech recognition system that was tested in a noisy real-time environment. We used a bidirectional non-stationary Kalman filter to enhance the system, as the bidirectional Kalman filter has proved to be an excellent noise estimator in non-stationary noisy environments. The system converts uttered words to text instantly after the utterance. The purpose of this project was to introduce a speech recognition system that is computationally simple and more robust to noise than HMM-based speech recognition systems. We used our own database for flexibility and the TIDIGIT database for accuracy comparison with an HMM-based system. MFCC features of each speech sample were calculated, and words were distinguished by feature matching. The system was tested in different noise conditions and achieved an overall word accuracy of 90%.
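The paper's exact filter design is not given here; a common "bidirectional" construction is a forward Kalman pass followed by a backward (Rauch-Tung-Striebel) smoothing pass. The sketch below runs that construction on a 1-D random-walk stand-in for a speech envelope, with illustrative noise parameters.

```python
import random

def kalman_smooth(obs, q=0.01, r=1.0):
    """Forward Kalman filter + backward RTS smoother (random-walk model)."""
    n = len(obs)
    xf, pf = [0.0] * n, [0.0] * n          # filtered mean / variance
    x, p = obs[0], r
    xf[0], pf[0] = x, p
    for t in range(1, n):                  # forward pass
        p_pred = p + q                     # predict
        k = p_pred / (p_pred + r)          # Kalman gain
        x = x + k * (obs[t] - x)           # update with the new sample
        p = (1 - k) * p_pred
        xf[t], pf[t] = x, p
    xs = list(xf)
    for t in range(n - 2, -1, -1):         # backward (smoothing) pass
        g = pf[t] / (pf[t] + q)
        xs[t] = xf[t] + g * (xs[t + 1] - xf[t])
    return xs

random.seed(0)
truth = 2.0
noisy = [truth + random.gauss(0, 1.0) for _ in range(200)]
smoothed = kalman_smooth(noisy)
err_raw = sum((y - truth) ** 2 for y in noisy) / len(noisy)
err_smooth = sum((y - truth) ** 2 for y in smoothed) / len(smoothed)
print(round(err_raw, 3), round(err_smooth, 3))
```

Because the backward pass uses future samples, the smoothed estimate is markedly less noisy than the forward-only filter output.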

ICACCI--08.22 Electrooculogram-based Virtual Reality Game Control Using Blink Detection and Gaze Calibration
Devender Kumar and Amit Sharma (IIT Kanpur, India)

This paper describes an electrooculogram (EOG) and gaze-based hands-free natural interaction system for virtual reality (VR) games that enhances the immersive VR experience. Traditional interfaces such as joysticks, mice, keyboards and hand-worn data gloves are obtrusive when used with VR HMD peripherals. This work is a step towards see-and-play user interaction in VR games: the natural interface provides an enhanced gaming experience in which the virtual environment responds to the user's eye movements. EOG-based online eye blink detection and gaze calibration were carried out with average efficiencies of 96% and 80%, respectively. Based on the calibrated eye movements, the virtual environment in "VRrailSurfer" is adjusted and interactions with virtual game objects are carried out. Ten subjects (8 male, 2 female) were asked to play 5 trials of our prototype VR game "VRrailSurfer". The average real-time game control accuracy across subjects was 78%. The feasibility of obtaining EOG-based VR game controls and a subjective analysis of the users' immersive VR experience are also discussed.

ICACCI--08.23 Emotion Recognition on the Basis of Audio Signal Using Naive Bayes Classifier
Sagar Bhakre (Vishvakarma Institute of Information Technology pune, India); Arti Bang (University of Pune, India)

In this paper we study and implement the classification of audio signals into four basic emotional states. We consider different statistical features of pitch, energy, ZCR (zero crossing rate) and MFCC (Mel-frequency cepstral coefficients) computed from 2000 utterances in a purpose-built audio database. The pitch feature is extracted by the AMDF (average magnitude difference function) method, and energy is calculated as the sum of the squared absolute values of the magnitude spectrum. MFCCs are calculated by taking the DCT of the log filter-bank energies, keeping DCT coefficients 1-14 and discarding the rest. In statistical modeling, regression analysis estimates relationships among variables and comprises many techniques for modeling and analyzing them. In this paper, a Naïve Bayes classifier is used to classify the audio signal into four different emotions. Since speech is a random signal and the Naïve Bayes classifier is purely probability-based, it is well suited to accurate prediction in speech analysis. While many recognizers require very large datasets, an advantage of the Naïve Bayes classifier is that it can recognize the signal with a minimal dataset.
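The AMDF pitch step can be illustrated with standard-library code on a synthetic 100 Hz tone (not real speech): the pitch period is the lag minimizing the average magnitude difference.

```python
import math

def amdf_pitch(frame, fs, f_lo=60, f_hi=400):
    """Estimate pitch as the lag minimizing the average magnitude difference."""
    lo, hi = int(fs / f_hi), int(fs / f_lo)
    best_lag, best_d = lo, float("inf")
    for lag in range(lo, hi + 1):
        d = sum(abs(frame[n] - frame[n + lag])
                for n in range(len(frame) - lag)) / (len(frame) - lag)
        if d < best_d:
            best_d, best_lag = d, lag
    return fs / best_lag

fs = 8000
frame = [math.sin(2 * math.pi * 100 * n / fs) for n in range(640)]  # 80 ms tone
print(round(amdf_pitch(frame, fs), 1))
```

For voiced speech the AMDF valley at the true period is less sharp than for a pure tone, so practical systems add windowing and valley-picking heuristics.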

ICACCI--08.24 Gesture recognition based relative intensity profiler for different light sources
Sabarna Choudhury and Shreyasi Bandyopadhyay (St. Thomas' College of Engineering and Technology, Kolkata, India); Kanik Palodhi (University of Calcutta, India)

In this paper, a novel intensity-profile measuring system controlled by gesture recognition is presented. This low-cost device works quickly and can be used to measure the profiles of a wide variety of light sources oriented in elevation and azimuth. At present it provides relative intensity measurements, but it could be used for absolute intensity if suitably modified. From the profiles of standard sources, faulty light sources can be clearly identified, saving considerable effort.

ICACCI--08.25 Improved GA using population Reduction for Load Balancing in Cloud computing
Ronakkumar R Patel (Sardar Patel College of Engineering & Gujarat Technological University, India); Swachilkumar Patel and Dhaval Patel (Sardar Patel College of Engineering, India); Tushar T Desai (Parul Polytechnic Institute)

Cloud computing is a recent trend in the computer industry, viewed differently by different researchers. Beyond these views, the cloud also has limitations that need more focus. The cloud is fundamentally based on a pay-per-use model for user services, and delivering those services requires meeting predefined conditions that affect parameters such as response time, resource utilization, load balancing, and the indexing of resources and jobs. Many soft-computing techniques, such as genetic algorithms, honey bee, stochastic hill climbing, ant colony, throttled and other algorithms, are used to satisfy these parameters and improve the scheduling of resources and jobs in the cloud environment. Our proposed work focuses on resource utilization and response time using a genetic algorithm, modified with a partial population reduction method that helps satisfy user service requests.
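A hedged sketch of the population-reduction idea: a plain GA whose population shrinks each generation, here minimizing a toy load-imbalance cost (spread of per-VM load) rather than a real cloud workload. The operators, rates and shrink factor are illustrative choices, not the paper's.

```python
import random

def imbalance(assign, loads, n_vms):
    """Toy fitness: spread between the most and least loaded VM."""
    per_vm = [0.0] * n_vms
    for task, vm in enumerate(assign):
        per_vm[vm] += loads[task]
    return max(per_vm) - min(per_vm)

def ga_schedule(loads, n_vms, pop_size=40, gens=30, shrink=0.95):
    random.seed(42)
    pop = [[random.randrange(n_vms) for _ in loads] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda a: imbalance(a, loads, n_vms))
        size = max(6, int(len(pop) * shrink))   # partial population reduction
        survivors = pop[:size // 2]             # elitist selection
        children = []
        while len(survivors) + len(children) < size:
            p1, p2 = random.sample(survivors, 2)
            cut = random.randrange(1, len(loads))
            child = p1[:cut] + p2[cut:]         # one-point crossover
            if random.random() < 0.2:           # mutation
                child[random.randrange(len(loads))] = random.randrange(n_vms)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda a: imbalance(a, loads, n_vms))

loads = [5, 3, 8, 2, 7, 4, 6, 1]
best = ga_schedule(loads, n_vms=2)
print(best, imbalance(best, loads, 2))
```

Shrinking the population concentrates later generations on fewer, fitter candidates, trading exploration for lower evaluation cost.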

ICACCI--08.26 Image Enhancement of Ultrasound Images using Multifarious Denoising Filters and GA
Prabhpreet Kaur (Gndu, India); Gurvinder Singh (G N D U, India); Parminder Kaur (Guru Nanak Dev University, Amritsar, India)

Medical images are often of low contrast and noisy (lacking clarity) due to the circumstances in which they are captured. Denoising ultrasound images is more difficult than for other medical images because of noise, blurring of edges and artifacts. The Bayesian shrinkage method was selected for thresholding based on its sub-band dependency property. Spatial-domain denoising filters using soft thresholding are compared with the proposed method, which uses a Genetic Algorithm (GA). The proposed technique provides enhanced visual clarity for diagnosing medical images and demonstrates better performance on quantitative metrics such as Peak Signal-to-Noise Ratio (PSNR) and fitness value. Overall, the simulated results show that the proposed technique outperforms the prevailing denoising methods in terms of edge preservation and visual quality.
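The Bayesian shrinkage (BayesShrink) thresholding step can be sketched with the standard rule T = σ_n² / σ_x, where σ_n is the noise level and σ_x the estimated signal deviation in a sub-band. The synthetic coefficient band below is illustrative; real use applies this per wavelet sub-band of the ultrasound image, and the paper additionally tunes the result with a GA.

```python
import math, random

def soft(c, t):
    """Soft-thresholding: shrink towards zero, killing small coefficients."""
    return math.copysign(max(abs(c) - t, 0.0), c)

def bayes_shrink(coeffs, sigma_noise):
    """BayesShrink: threshold T = sigma_n^2 / sigma_x per sub-band."""
    var_noisy = sum(c * c for c in coeffs) / len(coeffs)
    sigma_x = math.sqrt(max(var_noisy - sigma_noise ** 2, 1e-12))
    t = sigma_noise ** 2 / sigma_x
    return [soft(c, t) for c in coeffs]

random.seed(3)
clean = [0.0] * 90 + [4.0] * 10                 # mostly-sparse "detail" band
noisy = [c + random.gauss(0, 0.5) for c in clean]
den = bayes_shrink(noisy, sigma_noise=0.5)
mse_noisy = sum((a - b) ** 2 for a, b in zip(noisy, clean)) / len(clean)
mse_den = sum((a - b) ** 2 for a, b in zip(den, clean)) / len(clean)
print(round(mse_noisy, 3), round(mse_den, 3))
```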

ICACCI--08.27 Unmanned Aerial Vehicle based Bomb Detection
Jasmine Priyadarsini (VIT University, Vellore, INDIA)

In any bomb rescue operation, time plays a vital role. As a bomb can explode at any moment, our task is to reduce the time needed to detect it compared with manual detection, which is highly risky and very time-consuming. In this paper we propose a new technique using a quadcopter, an unmanned aerial vehicle (UAV) with multiple rotors (four in our case) controlled by an RC transmitter (joysticks). We detect the bomb using a Geiger counter module, which can detect radioactive radiation emitted from the bomb, and send the bomb's location to the user via GPS and GSM modules. We integrated the Geiger module, GPS module, GSM module and an Arduino UNO as the quadcopter's payload. The quadcopter is flown over the area the operator wants to examine; if a bomb is present, the Geiger tube detects it and the location is immediately sent to the operator's mobile phone through the GPS and GSM modules. The main objective of this paper is to ensure the safety of the operator and to reduce detection time compared with manual detection.

Thursday, September 22

Thursday, September 22 16:15 - 17:30 (Asia/Kolkata)

ICACCI--15: ICACCI-15: Poster Session II

Room: Lawns(Academic Area)
Chairs: Kusum Lata (The LNM Institute of Information Technology, Jaipur, India), J Siva Ramakrishna (M. S. Ramaiah University of Applied Sciences, India), Santosh Kumar Majhi (Veer Surendra Sai University of Technology, India)
ICACCI--15.1 A study of peak to average power ratio for different companding techniques in VLC-OFDM system
Arushi Singh (Rajiv Gandhi Prodyogiki Vishwavidyalaya, Bhopal, India); Anjana Jain (Shri G. S. Institute of Technology & Science, India); Prakash Vyavahare (S G S Institute of Technology and Science, India)

VLC-OFDM is proposed as a transmission technique for 5G mobile communication. It suffers from non-linear distortion due to the non-linear characteristics of LEDs and the high PAPR of multi-sub-carrier modulation. Mitigating non-linear distortion in VLC systems is currently a focus of many researchers, as it degrades system performance. Since the VLC system uses OFDM, it inherits the drawbacks of both OFDM and VLC, so the non-linearity in VLC-OFDM is quite severe. Three companding techniques (A-law, Mu-law and advanced A-law) can be implemented to address the PAPR issue; comparatively, advanced A-law provides the lowest PAP ratio. This paper applies these three non-linear companding techniques to reduce PAPR in a VLC-OFDM system.
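The companding effect on PAPR can be illustrated with Mu-law on a toy multicarrier envelope; A-law and advanced A-law follow the same pattern with different compressor curves. The signal below (sum of aligned cosines) is a synthetic worst case, not a VLC-OFDM waveform.

```python
import math

def papr_db(x):
    """Peak-to-average power ratio in dB."""
    peak = max(v * v for v in x)
    avg = sum(v * v for v in x) / len(x)
    return 10 * math.log10(peak / avg)

def mu_law(x, mu=255.0):
    """Mu-law compressor: boosts small amplitudes, preserving the peak."""
    vmax = max(abs(v) for v in x)
    return [math.copysign(
                vmax * math.log(1 + mu * abs(v) / vmax) / math.log(1 + mu), v)
            for v in x]

n, k = 256, 16                                  # samples, sub-carriers
signal = [sum(math.cos(2 * math.pi * (c + 1) * t / n) for c in range(k)) / k
          for t in range(n)]
p0 = papr_db(signal)
p1 = papr_db(mu_law(signal))
print(round(p0, 2), round(p1, 2))
```

Boosting the many small-amplitude samples raises the average power while leaving the peak untouched, which is exactly why companding lowers PAPR (at the cost of distortion that the expander must undo).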

ICACCI--15.2 Need of Integration of Second Generation Wavelets in Medical Image Compression
Harshal Gosavi and Rajendra Talware (Vishwakarma Institute of Information Technology, Pune, India)

Medical image compression plays a vital role in tele-pathology and remote expert diagnosis. Medical images are huge in size and their content is complex. Current lossy and lossless wavelet-based compression techniques such as EZW, SPIHT, EBCOT and SPECK, though they compress well and offer good reconstruction, lack biological relevance and image-adaptive compression. To address this issue, the paper demonstrates an implementation of state-of-the-art compression algorithms with performance measures as a proof of concept. It further underlines the need to integrate content-preserving pre-processing and second-generation wavelets for better split, predict and update steps. Integration of the lifting algorithm with EZW and SPIHT is proposed, which may also improve performance in terms of PSNR and MSSIM while preserving the critical medical data contained in the image.
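The split/predict/update pipeline referred to above can be illustrated with the simplest lifting wavelet, the Haar transform; second-generation constructions replace the fixed predict/update operators with content-adaptive ones. The sketch below is generic, not tied to the paper's EZW/SPIHT integration.

```python
def haar_lift(signal):
    even, odd = signal[0::2], signal[1::2]              # split
    detail = [o - e for e, o in zip(even, odd)]         # predict odd from even
    approx = [e + d / 2 for e, d in zip(even, detail)]  # update: preserve mean
    return approx, detail

def haar_unlift(approx, detail):
    """Invert the lifting steps in reverse order for perfect reconstruction."""
    even = [a - d / 2 for a, d in zip(approx, detail)]
    odd = [e + d for e, d in zip(even, detail)]
    out = []
    for e, o in zip(even, odd):
        out.extend([e, o])
    return out

x = [9.0, 7.0, 3.0, 5.0, 6.0, 10.0, 2.0, 6.0]
a, d = haar_lift(x)
print(a, d)                       # pairwise averages and differences
assert haar_unlift(a, d) == x     # lifting is exactly invertible
```

Because each lifting step is trivially invertible, any predictor (including an adaptive one) still yields a perfectly reconstructible transform, which is the appeal for lossless medical compression.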

ICACCI--15.3 Low Power Circuit Techniques for Optimizing Power in High Speed SRAM
Navneet Kaur Saini (IIT DELHI & Solid State Physics Laboratory under DRDO, India); Aniruddha Gupta (IIT DELHI & NXP Semiconductors, India); Ravija Prashar and Parul Gupta (IIT DELHI, India)

As supply voltages scale down, threshold and supply voltage fluctuations have a growing impact on the speed and power specifications of SRAMs. Here, we present techniques that minimize the effect of operating-condition variability on SRAM speed and power. A 2 MB SRAM is designed in UMC 90 nm technology with a 1 V power supply. First, the SRAM floor plan uses a hierarchical, divided-word-line approach that reduces power by switching on only the part of the SRAM being accessed. Second, since a major share of SRAM power is consumed by sense amplifiers, replica-based circuits are used: replica memory cells and bitlines create a reference signal whose delay tracks that of the bitlines. This signal generates the sense clock with minimal slack time and controls wordline pulse widths to limit bitline swings. We implemented the replica circuits using bitline capacitance ratioing and compared them with the standard chain-of-inverters technique. Furthermore, a power-down technique is implemented in the local word driver, further reducing power. The SRAM is also tested at various process corners.

ICACCI--15.4 Alarm Notification to Tower Monitoring System in oneM2M Based IOT Networks
Pankaj Kumar Dalela, Arun Yadav and Smriti Sachdev (C-DOT, India); Saurabh Basu (C-DoT, India); Anurag Yadav (CDOT, India); Vipin Tyagi (C-DOT, India)

The Internet of Things (IoT) is a new paradigm of connected devices whose architecture ensures that devices can communicate with each other as and when required. oneM2M is an IoT standard that defines the architecture of connected devices. The C-DOT Tower Monitoring Solution (TMS) [1] monitors the status of the different power sources in a telecom tower and is now being updated to the oneM2M standard. Under this new architecture, a bidirectional connection is required to send data and alarm notifications from the centralized server/gateway to TMS devices. However, for GSM-GPRS SIM-based devices, the device's IP address changes every time it connects to the network, so the server does not know the current address and cannot push data to the device. Moreover, our oneM2M network connects devices of different capabilities, so a single approach does not work for all of them. In this paper we discuss different methods that GSM-GPRS devices and devices of varying capabilities can use to establish two-way communication with a centralized server, and we propose an algorithm-based solution to this challenge.

ICACCI--15.5 Robust Features for Spoofing Detection
Sathya Ashok (Amrita Vishwa Vidyapeetham, India); Swetha Jayaprakash (Amrita Vishwa Vidyapeetham); Arun Das K, Kuruvachan K George, Santhosh C Kumar and Aravinth J (Amrita Vishwa Vidyapeetham, India)

It is very important to enhance the robustness of Automatic Speaker Verification (ASV) systems against spoofing attacks. One of the recent research efforts in this direction is to derive features that are robust against spoofed speech. In this work, we experiment with the use of Cosine Normalised Phase-based Cepstral Coefficients (CNPCC) as inputs to a Gaussian Mixture Model (GMM) back-end classifier and compare its results with systems developed using the popular short term cepstral features, Mel-Frequency Cepstral Coefficients (MFCC) and Power Normalised Cepstral Coefficients (PNCC), and show that CNPCC outperforms the other features. We then perform a score level fusion of the system developed using CNPCC with that of the systems using MFCC and PNCC to further enhance the performance. We use known attacks to train and optimise the system and unknown attacks to evaluate and present the results.

ICACCI--15.6 Improving K-Means Through Better Initialization and Normalization
Akanksha Choudhary and Prashant Sharma (Gurukul Institute of Engineering and Technology, Kota, India); Manoj Singh (Gurukul Institute of Engineering and Technology, India)

K-means remains a popular clustering algorithm and an active research area. Research is mainly focused on improving the efficiency and effectiveness of the method. This paper proposes a combined approach of ranked initialization and normalization of data values with k-means. Three variations of a score-based initialization approach are proposed. Experiments are performed on normalized data to demonstrate the superiority of the proposed algorithm.
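
The abstract does not spell out the scoring rule, so the following is only a plausible sketch: min-max normalization combined with a ranked (farthest-point style) seeding, where each candidate point is scored by its distance to the seeds chosen so far. The `ranked_init` helper and its scoring are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def min_max_normalize(X):
    """Scale each feature to [0, 1] so no attribute dominates the distance."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)
    return (X - lo) / span

def ranked_init(X, k):
    """Score-based seeding (assumed): start with the point nearest the data
    mean, then repeatedly pick the point whose distance score to the seeds
    chosen so far is highest."""
    centers = [X[np.argmin(np.linalg.norm(X - X.mean(axis=0), axis=1))]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    return np.array(centers)

def kmeans(X, k, iters=100):
    """Standard Lloyd iterations starting from the ranked seeds."""
    C = ranked_init(X, k)
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(X[:, None] - C[None, :], axis=2), axis=1)
        newC = np.array([X[labels == j].mean(axis=0) if np.any(labels == j) else C[j]
                         for j in range(k)])
        if np.allclose(newC, C):
            break
        C = newC
    return C, labels
```

Spreading the seeds across the data in this way tends to avoid the empty-cluster and poor-local-minimum problems of purely random initialization.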

ICACCI--15.7 Discrete Event System Framework for Analysis and Modeling of Job Shop Scheduling System
Om Shukla (Malaviya National Institute of Technology Jaipur, Rajasthan, India); Sujil A (MNIT JAIPUR, India); Gunjan Soni (Malaviya National Institute of Technology Jaipur, Rajasthan, India); Rajesh Kumar (Malaviya National Institute of Technology, India)

Job shop scheduling is one of the most important industrial activities in manufacturing. To solve this type of complex scheduling problem, various approaches incorporating discrete event simulation methodology have been proposed. The purpose of this paper is to optimize the job schedule in a discrete event job-shop scheduling system by modeling and simulating the manufacturing system. A multi-machine job shop manufacturing system model created with the SimEvents toolbox of MATLAB is illustrated as an example.

ICACCI--15.8 Facial Recognition Using Discrete ABC
Akshay Kallianpur, Aditya Kalyani and Subramanya Naligay (MSRIT, India); Jagadish S Kallimani (Associate Professor, Department of Computer Science and Engineering, M S Ramaiah Institute of Technology, Bangalore)

Facial recognition is a topic of ongoing research interest, as there is still room for improvement in recognition accuracy. To achieve this, either the recognition algorithm is modified or more efficient preprocessing techniques are used. This paper proposes a novel, optimized Artificial Bee Colony (ABC) algorithm to perform facial recognition. Although the primary database used here is Labeled Faces in the Wild (LFW), the method is also tested on Carnegie Mellon databases to ensure consistent results. Applying the concept to facial recognition yields satisfactory results. The discretization of the ABC algorithm serves many applications in the fields of pattern recognition and image analysis. This version of the ABC algorithm incorporates elements of Particle Swarm Optimization (PSO), yielding a hybrid algorithm that combines the best of both its contributors. This paper primarily focuses on applying the proposed technique to facial recognition. Standard data sets are used to test and quantify the efficacy of the algorithm. Given the requirement of extremely swift pattern recognition that does not compromise recognition accuracy, the proposed algorithm upholds both criteria and is a robust technique.

ICACCI--15.9 Best Offer Recommendation Service
Kiran Javkar and Siddharth Vora (Samsung R&D Institute India - Bangalore, India); Joy Bose (Ericsson, Bangalore, India); Amit Rodge and Hitesh Sharma (Samsung R&D Institute India - Bangalore, India)

There are multiple online offer aggregators, which can aggregate deals, coupons and offers from multiple parties. However, these aggregators generally cannot determine the best deal(s) available among the existing deals, or which is best suited for a given customer's requirements. Many offers are region specific or user profile specific. Moreover, many deals or coupons specify the percentage of discount or cashback on purchase of the item using the user's credit card, where the percentage varies between different cards. This makes it difficult to determine the best offer for a given credit card. Moreover, multiple offers could be combined while purchasing a given item, and the system should be able to identify the best offers accordingly. In this paper we propose a service that can determine which offers would be relevant for a user with a given profile and/or online payment mechanism. The cloud server extracts and stores relevant data about available offers from sellers and aggregators using crawlers and publicly available APIs, and, given a desired product, determines the best set of coupons or offers available for a user profile and payment mechanism such as a credit card. This enables the service to recommend the best deals to the user, as well as ways to split the purchase efficiently so as to gain the most from the available offers. We have implemented a simple proof of concept for our service using a cloud server component and a component that is part of the web browser application on a device. We also discuss a revenue sharing model for our service.

ICACCI--15.10 Classification of Pulmonary Crackles and Pleural Friction Rubs Using MFCC Statistical Parameters
Sibghatullah I Khan (Sreenidhi Institute of Science and Technology Hyderabad & JNTU Hyderabad, India); Vasif Ahmed (Babasaheb Naik College of Engg., Pusad, India)

Pulmonary crackles and pleural friction rubs are adventitious lung sounds that provide valuable information on underlying lung diseases. Due to similarities in their time-domain characteristics, there is a need to distinguish these two sounds, both to help novice medical doctors and to aid automated diagnosis in telemedicine applications. This paper focuses on analyzing these two sounds using the Mel frequency cepstral coefficient (MFCC) speech analysis technique. The MFCCs were calculated for crackle and pleural friction rub lung sounds, and four basic statistical parameters of the MFCCs were computed. The standard deviation of the MFCCs shows maximum linear separability among these parameters. Therefore, the standard deviation of the MFCCs can be used as a potential feature for classifying adventitious lung sounds pertaining to pulmonary crackles and pleural friction rubs.
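
The four statistical parameters are not listed in the abstract; the sketch below assumes the common choice of mean, standard deviation, skewness and kurtosis, computed per coefficient over the frames of a precomputed MFCC matrix (the MFCC extraction itself is left to a standard library).

```python
import numpy as np

def mfcc_statistics(mfcc):
    """Per-coefficient statistics over time for an MFCC matrix of shape
    (n_frames, n_coeffs): mean, standard deviation, skewness and
    (excess) kurtosis -- an assumed set of 'four basic parameters'."""
    mu = mfcc.mean(axis=0)
    sd = mfcc.std(axis=0)
    z = (mfcc - mu) / np.where(sd > 0, sd, 1.0)
    skew = (z ** 3).mean(axis=0)
    kurt = (z ** 4).mean(axis=0) - 3.0  # excess kurtosis
    return {"mean": mu, "std": sd, "skew": skew, "kurt": kurt}

def std_feature(mfcc):
    """The abstract reports the standard deviation of the MFCCs as the most
    linearly separable statistic, so use it as the feature vector."""
    return mfcc_statistics(mfcc)["std"]
```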

ICACCI--15.11 Intelligent Imputation Technique for Missing Values
Tahani Aljuaid and Sreela Sasi (Gannon University, USA)

Missing values are a widespread data quality problem because most statistical procedures require a value for each variable. Missing values may lead to biased parameter estimates and may result in degradation of data quality. Imputation has been used to replace missing data with plausible estimates. This research combines Hot-Deck and Expectation Maximization imputation with the C5.0 classifier to estimate missing values and improve data quality. It fits numeric, categorical, and continuous data sets. Hot-Deck imputation deals with categorical and mixed data types. Expectation Maximization imputation is best suited for numerical data and increases association with other variables. C5.0 classifies the data in less time, with minimal memory usage, and has higher accuracy compared to other classifiers. This new embedded 'Intelligent Imputation Technique for Missing Values' is used for the main process of acquiring knowledge from data. The technique is compared with the original C5.0 algorithm and the results are presented.
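
As an illustration of the Hot-Deck step, the sketch below fills a missing categorical value with the value from the most similar fully observed "donor" record; the overlap similarity used here is a common choice, not necessarily the paper's exact metric.

```python
def hot_deck_impute(rows, missing=None):
    """Fill missing cells with the values from the most similar complete
    donor row.  Similarity = number of matching non-missing fields
    (a simple overlap measure; the paper's metric may differ)."""
    donors = [r for r in rows if missing not in r]
    out = []
    for r in rows:
        if missing not in r:
            out.append(list(r))
            continue
        def score(d):
            return sum(1 for a, b in zip(r, d) if a is not missing and a == b)
        best = max(donors, key=score)
        out.append([b if a is missing else a for a, b in zip(r, best)])
    return out
```

Hot-deck methods like this keep imputed values within the observed domain of each attribute, which is why they suit categorical and mixed data.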

ICACCI--15.12 DTMOS based low voltage high performance FVF-OTA and Its application in MISO Filter
Niharika Narang (IIT DELHI HAUZ KHAS & IIT Delhi, India); Bhawna Aggarwal (Netaji Subhas Institute of Technology, India); Maneesha Gupta (Netaji Subhas Institute of Technolgy & University of Delhi, India)

This paper presents a high-performance Dynamic Threshold MOSFET (DTMOS) based low-voltage Flipped Voltage Follower Operational Transconductance Amplifier (FVF-OTA). The proposed DTMOS-based FVF-OTA combines the low voltage and high transconductance properties of DTMOS with the high input impedance and wide output current range characteristics of the FVF-OTA. Furthermore, high linearity along with high bandwidth is achieved through the DTMOS technique. To show the enhancement in the transconductance of the proposed circuit, small-signal analysis is carried out. Moreover, a Multi Input Single Output (MISO) filter is realized using the proposed DTMOS-based FVF-OTA. Simulations of the proposed OTA and its MISO filter application are done in Eldo SPICE (Mentor Graphics) in TSMC 0.18 μm technology. In the proposed circuit, a transconductance of 314.73 μS is achieved at an operating voltage of ±0.5 V and a power dissipation of 70.19 μW. Moreover, the MISO filter realized using the proposed OTA attains a bandwidth of 43.91 MHz. To validate the robustness of the proposed circuit against variations in temperature and MOSFET aspect ratios, temperature and Monte Carlo analyses have been performed.

ICACCI--15.13 Cloud Service Orchestration Based Architecture of OpenStack Nova and Swift
Pragya Jain (University of Delhi, India); Aparna Datt (University of Delhi & PGDAV College, India); Anita Goel (Dyal Singh College, University of Delhi, India); Suresh C Gupta (Institute of Information Technology (IIT), Delhi, India)

OpenStack Nova and Swift are responsible for provisioning and management of compute and storage resources, respectively, in a need-based manner. This cloud software is widely adopted by several popular organizations for deploying their compute and storage services. To have a cloud platform that is easily upgradable, modifiable and maintainable, a well-defined, detailed architecture is needed, which in turn requires identifying the internal components and processes involved in the functioning of the software. In this paper, we present the architecture of OpenStack Nova and Swift in conformance with the layered architecture defined for the service orchestration component of the NIST cloud computing reference architecture. The proposed architecture identifies the components and sub-components and their interaction at the layers defined for the service orchestration component, and provides insight into the internal working and associated processes of Nova and Swift. The proposed architecture benefits cloud providers and system administrators in maintaining and improving the quality of the system.

ICACCI--15.14 Design and Characterization of Analog Multiplexer for Data Acquisition System in Satellites
Venkata Sai krishna Vallury, K S Saikiran and Guditi Nagaraja (Amrita School of Engineering, Bangalore, India); Padmapriya K (ISRO Satellite Centre, Bangalore, India); Kavitha N Pillai (Amrita School of Engineering, Bangalore, India)

The design and characterization of an improved analog multiplexer using CMOS analog switches is proposed in this paper. Suitable analog multiplexer architectures are designed and compared by selecting a proper analog switch. Test circuits are proposed for the on-resistance of an analog switch, crosstalk, and the break-before-make switching condition of an analog multiplexer. Further, parametric analysis is performed to study the effect of temperature and process variation. The design procedures have been applied using CMOS transmission gates with a 20 kHz analog input and a 3.3 V supply in CMOS 180 nm technology. Cadence Virtuoso Analog Design Environment is used to implement the test circuits, along with the HSPICE simulator. The major contribution of this paper is the implementation of break-before-make switching logic with only 4 transistors per switch.

ICACCI--15.15 Development of Low Cost EMG Data Acquisition System for Arm Activities Recognition
Sidharth Pancholi (Malaviya National Institute of Technology, India); Ravinder Agarwal (Thapar University, India)

Electromyography (EMG) signals are becoming increasingly important in many fields, including biomedical/clinical applications, prosthetics, human-machine interaction and rehabilitation devices. In the present study, to meet the requisites of EMG data acquisition systems, a high-resolution, highly competitive eight-channel system has been developed, which is cost efficient and compact compared to commercially available systems. To validate the developed system, EMG signals were acquired from various muscles for different arm activities, and machine learning techniques were utilized for activity recognition. For the current study, 8 male and 4 female healthy subjects were selected. For classification, various time- and frequency-domain features were extracted, and a comparative study of different classification techniques is presented. The classification accuracy ranges from 43.64% to 92.61% across the different classification algorithms. For this work, MATLAB 15a was utilized for signal processing and machine learning.

ICACCI--15.16 Multi-objective Moth Flame Optimization
Vikas Choudhary and Satyasai Jagannath Nanda (Malaviya National Institute of Technology Jaipur, India)

In 2015, Mirjalili proposed a new nature-inspired meta-heuristic, Moth Flame Optimization (MFO). It is inspired by the behaviour of a moth in the dark night, which either flies straight towards the moon or flies in a spiral path to arrive at a nearby artificial light source. It aims to reach a brighter destination, which is treated as the global solution of an optimization problem. In this paper, the original MFO is suitably modified to handle multi-objective optimization problems; the result is termed MOMFO. Concepts such as the introduction of an archive grid, coordinate-based distance for sorting, and non-dominance of solutions make the proposed approach different from the original single-objective MFO. The performance of the proposed MOMFO is demonstrated on six benchmark mathematical function optimization problems, achieving superior accuracy and lower computational time compared to the Non-dominated Sorting Genetic Algorithm-II (NSGA-II) and Multi-objective Particle Swarm Optimization (MOPSO).
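
The non-dominance filtering that maintains a multi-objective archive can be sketched as follows (for minimization); the archive-grid and coordinate-based sorting details of the paper are omitted here.

```python
def dominates(a, b):
    """For minimization: a dominates b if a is no worse in every objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Keep only the Pareto-optimal objective vectors, as an archive
    update step in a multi-objective meta-heuristic would."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```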

ICACCI--15.17 A Comparative Study of QoS Ranking Prediction Techniques in Cloud Services
Shirish Nagar (Manipal University, Jaipur & Poornima College of Engineering, India); Jyotirmoy Karjee (Indian Institute of Science, India)

Quality of Service (QoS) is emerging as an important parameter for describing web services to cloud users and service providers. The analysis of QoS issues in web services by service providers and users is essential for optimization and improvement. As static prediction approaches (like arithmetic and average value methods) are incapable of capturing non-linearity in QoS data, dynamic prediction methods like collaborative filtering, similarity measures and multi-dimensional weighting are used. This paper presents a comparative study of different intelligent techniques employed to improve the prediction of QoS ranking for a cloud service. The simulation of the experiment and the validation of the observations and results are conducted through the MATLAB® toolbox ANFISEdit.

ICACCI--15.18 A Secure Software Design Methodology
Rajat Goel, Mahesh Chandra Govil and Girdhari Singh (Malaviya National Institute of Technology Jaipur)

Present times demand that security be an inevitable part of almost any software. To achieve this, the security requirements of the software ought to be efficiently elicited and modeled. The modeling languages available today, like the Unified Modeling Language (UML), are not efficient enough to model such requirements. In this research work, a new modeling scheme is presented that first gathers and analyzes the security concerns of all stakeholders and then represents them through novel diagrams.

ICACCI--15.19 Driver's Distraction Detection Based on Gaze Estimation
Shweta Maralappanavar (B. V. Bhoomraddi College Of Engineering and Technology, India); ReenaKumari Behera (KPIT Technologies Ltd., India); Uma Mudenagudi (B. V Bhoomaraddi College of Engineering and Technology, Hubli, India)

Drivers easily get distracted by activities happening around them, such as texting, talking on a mobile phone or talking to a neighbouring person. All these activities take the driver's attention away from the road, which may lead to accidents and cause harm to the driver, pedestrians and other vehicles on the road. In this paper, a method is proposed to estimate the gaze of the driver and determine whether the driver is distracted. The driver's gaze direction is estimated as an indicator of attentiveness: the gaze is detected with the help of the face, eyes, pupils and eye corners, and the detected gaze is then categorized as distracted or not distracted. The algorithm is developed in OpenCV and tested on a CPU platform (Intel Core with 4 GB RAM). The processing time for a single frame is around one second. The gaze detection accuracy obtained is 75%.

ICACCI--15.20 Minimally Supervised Sound Event Detection Using a Neural Network
Aditya Agarwal, Syed Munawwar Quadri and Savitha Murthy (P. E. S. University, India); Dinkar Sitaram (Pes University, India)

This paper proposes a sound event detection system that is trained using a minimally annotated data set of single sounds to identify and separate the components of polyphonic sounds. The system uses a feed-forward neural network with a single hidden layer that is pre-trained using an autoencoder. Single sounds, represented as Mel Frequency Cepstral Coefficient (MFCC) feature vectors, are used to train the neural network using back propagation. Polyphonic sounds are preprocessed using Principal Component Analysis (PCA) and Non-negative Matrix Factorization (NMF) to obtain source-separated sounds. These source-separated sounds are then classified using the feed-forward network. Our system achieves reasonable source separation and detection accuracy with a minimal training set. The ultimate goal of our system is to bootstrap from minimal data and learn new sounds, leading to a better sound detection system.
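
The NMF preprocessing step can be sketched with the standard Lee-Seung multiplicative updates on a non-negative matrix such as a magnitude spectrogram; the rank, iteration count and Euclidean cost below are illustrative choices, not the paper's settings.

```python
import numpy as np

def nmf(V, rank, iters=500, eps=1e-9, seed=0):
    """Factor a non-negative matrix V (freq x time) into W (freq x rank)
    and H (rank x time) with Lee-Seung multiplicative updates for the
    Euclidean cost.  Each rank-1 term W[:, k] * H[k, :] can then be
    treated as one separated source component."""
    rng = np.random.default_rng(seed)
    F, T = V.shape
    W = rng.random((F, rank)) + eps
    H = rng.random((rank, T)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

The multiplicative form keeps W and H non-negative throughout, which is what makes the factors interpretable as additive source components.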

ICACCI--15.21 An Advanced Authentication System for Multi Server Environment with Snort
Matta Divya Sai (St Martins Engineering College, India); R. Ch A Naidu (JNTU Hyderabad, India); Voni Sudharani V (ST MARTIN'S COLLEGE OF ENGG, India); M Sai Krishna Murthy (JNTUH, India); Meghana K (JNTUK, India)

The usage of computer systems has increased rapidly; given people's busy schedules, everyone performs transactions over the internet. Authentication plays a major role on the internet in authorising transactions. When we perform transactions over a network, hackers and intruders may attempt to compromise our systems. In this paper we implement a new scheme that provides authentication for users and resource providers through a registration server. We aim to provide an advanced authentication scheme for a multi-server environment with better security and efficiency. Our scheme not only resists potential attacks but also satisfies various additional requirements.

ICACCI--15.22 Role of Codec Selection on the Performance of IPsec Secured VoIP
Emmanuel Antwi-Boasiako (University of Electronic Science and Technology of China, China & Ghana Institute of Management and Public Administration, Ghana); Eric Kuada (Ghana Institute of Management and Public Administration, Ghana); Kwasi Boakye-Boateng (Canadian Institute of Cybersecurity (CIC), University of New Brunswick, Canada)

Current research works have looked at improving IPsec-secured VoIP by arbitrarily increasing bandwidth, which is a very limited resource and cannot simply be increased in real environments outside of laboratory conditions. In most earlier works, the codec has been kept constant and the IPsec impact analysed. The results of such works undoubtedly show the devastating impact IPsec has on VoIP, but they cannot be generalized because little attention has been given to the bandwidth utilization of IPsec-secured VoIP through the proper choice of codecs. In this paper, we quantitatively assess the impact of IPsec on VoIP in terms of packet loss, jitter and MOS percentages in scenarios where a high-bandwidth codec such as G.711 is used, as well as scenarios with Speex, a low-bandwidth codec. Our results show that irrespective of the impact of IPsec on a VoIP network in terms of packet loss and jitter, a better choice of codec enhances the quality of the voice output and removes the need to arbitrarily increase bandwidth.

ICACCI--15.23 Measurement Results for Direct and Single Hop Device-to-Device Communication Protocol
Vibhutesh Kumar Singh (University College Dublin, Ireland); Hardik Chawla (BITS Goa, India); Vivek A Bohara (Indraprastha Institute of Information Technology, Delhi (IIIT-Delhi), India)

In this paper, we present measurement results for a direct and single-hop device-to-device (D2D) communication protocol. The measurement results will further augment the development of D2D communication and will also help in understanding some of the intricate design issues that are overlooked in theoretical or computer simulations. The measurements were taken on a proof-of-concept experimental testbed emulating a cellular scenario in which a network control authority (NCA) and many D2D-enabled devices coordinate and communicate with each other to select an optimum communication range, transmit parameters, etc. Through the measurement results, the relationship between RSSI and distance has been analyzed. It has also been observed that D2D communication can significantly reduce the power consumption of cellular networks.
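
The RSSI-distance relationship is commonly fitted with the log-distance path-loss model; the sketch below uses that standard model with illustrative parameter values (reference RSSI, reference distance and path-loss exponent), not the values measured in the paper.

```python
import math

def rssi_from_distance(d, rssi_d0=-40.0, d0=1.0, n=2.7):
    """Log-distance path-loss model: RSSI(d) = RSSI(d0) - 10*n*log10(d/d0).
    rssi_d0 (dBm at reference distance d0 metres) and the path-loss
    exponent n are illustrative values, typically fitted to measurements."""
    return rssi_d0 - 10.0 * n * math.log10(d / d0)

def distance_from_rssi(rssi, rssi_d0=-40.0, d0=1.0, n=2.7):
    """Invert the model to estimate range from a measured RSSI."""
    return d0 * 10.0 ** ((rssi_d0 - rssi) / (10.0 * n))
```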

ICACCI--15.24 Mid-infrared Supercontinuum Generation in Ge11.5As24Se64.5 Based Chalcogenide Photonic Crystal Fiber
Sandeep Vyas (Jaipur Engineering College & Research Centre, India)

In this paper, we numerically investigate a Ge11.5As24Se64.5 based chalcogenide photonic crystal fiber and simulate 1-10 µm mid-infrared supercontinuum generation. This mid-infrared broadband supercontinuum is achieved in a 100 mm long photonic crystal fiber pumped with 85 femtosecond laser pulses at 3.1 µm and a peak pulse power of 3 kW. The broad and flat dispersion profile with two zero-dispersion wavelengths of the Ge11.5As24Se64.5 photonic crystal fiber, combined with its high nonlinearity, generates an ultra-flat broadband supercontinuum.

ICACCI--15.25 Analysis of Outage Performance of Opportunistic AF OFDM Relaying in Nakagami-m Channels
Piyush Kumar (Indian Institute of Technology Patna, India); Sudhan Majhi (Indian Institute of Science, India & Indian Insitute of Technology Patna, India); Youssef Nasser (American University of Beirut, USA)

In this paper, the outage probability of opportunistic amplify-and-forward (AF) orthogonal frequency-division multiplexing (OFDM) relaying is analyzed over Nakagami-m fading channels. A closed-form outage probability of the proposed system is derived in the high-SNR regime. Results show that increasing the number of relay nodes, the number of multipath components (L) and the Nakagami fading parameter (m) improves the outage performance. The asymptotic outage probability is validated through Monte-Carlo simulations, and the result is compared with other existing relaying schemes.

ICACCI--15.26 Performance Evaluation of LTE Network: An Energy Saving and Capacity Gain Perspective
Sapna Thapar (Indian Institute of Technology Jammu, India); Purnendu Karmakar (The LNM Institute of Information Technology, India)

Efforts to increase capacity gain and energy saving in LTE networks have recently attracted substantial interest from researchers. The idea behind the LTE heterogeneous network is to enhance the system capacity of the network in order to cope with the exponentially increasing demand for mobile services. Using a combination of macro and micro base stations, a heterogeneous network is believed to provide a consistent broadband experience to mobile subscribers in the network. Besides capacity improvements, lowering the power consumption of the network is also essential from an environmental and economic perspective. In this paper, we focus on the development of a base station deployment strategy that can provide a novel solution towards more capacity and less power consumption. We introduce a random cellular network deployment model with a base station positioning algorithm that follows a K-means clustering approach for deploying macro base stations and a user-density-based approach for deploying micro base stations, and we compare the performance of the proposed network architecture with a regular network deployment model using power consumption and system capacity as performance metrics.

ICACCI--15.27 Design And Gain Enhancement Of A CPW-Fed Dual Band Slot Antenna Using A Metamaterial Inspired Superstrate
Sk Islam (Indian Institute of Engineering Science and Technology, Shibpur, India)

The design of a Dual band slot antenna using complementary LC resonators is presented. Two complementary dual band LC resonators are used to achieve the response. A metamaterial inspired superstrate is then used to achieve gain enhancement at both the operating frequencies. The paper discusses the steps followed to realize the final design and presents the simulated results for the proposed structure. The novelty of this work lies in the simultaneous gain enhancement performance observed at both the bands of the dual band slot antenna structure using the superstrate.

ICACCI--15.28 Network Processor - A simplified approach for transport layer offloading on NIC
Geetanjali Gadre, Shreeya Badhe and Kedar Kulkarni (C-DAC, India)

A high-performance network interconnect is the most important component in High Performance Computing systems. The network interconnect mainly consists of three components: the Network Interface Card, the switch fabric, and a lightweight protocol stack. In order to achieve high bandwidth and low latency, the transport protocol is offloaded to the Network Interface Card. In the transport offload model, the Network Interface Card is involved not only in data transfer but also in protocol processing. Protocol processing is a very complex task that involves multiple variables and intricate functionality. Therefore, it would be very beneficial if the Network Interface Card could provide enough flexibility to support multiple protocols. Enhancing support for multiple functionalities empowers the Network Interface Card to support multiple applications. Traditionally, fixed custom logic in an FPGA or ASIC was used for protocol processing, but it is very difficult to achieve flexibility in fixed custom logic. To overcome this issue, the Network Processor, which provides a lot of flexibility, has emerged as an alternative to conventional hardwired logic. In this paper, we present a novel architecture that uses a RISC processor as the Network Processor for high-speed network interfaces. We focus on the use of the Network Processor in protocol processing. We also share how useful the Network Processor was for supporting additional features, even after the complete NIC architecture had been finalized. We also explain the debugging strategy, which is very helpful for debugging complex protocol processing.

ICACCI--15.29 DESH: Database Evaluation System With Hibernate ORM Framework
Gunasingh Gabriel (SSN College of Engineering); Chitra Babu (SSN Engineering College, India)

Relational databases have been the predominant choice for back-ends in enterprise applications for several decades. JDBC, a Java API used for developing such applications and persisting data on the back-end, requires enormous time and effort. JDBC causes the application logic to become tightly coupled with the database and consequently is inadequate for building enterprise applications that need to adapt to dynamic requirements. Hence, ORM frameworks such as Hibernate became prominent. However, even with ORM, the relational back-end often suffers from a lack of scalability and flexibility. In this context, NoSQL databases are increasingly gaining popularity. Existing research works have benchmarked Hibernate either with an RDBMS or with one of the NoSQL databases. However, testing both an RDBMS and a NoSQL database for performance within the context of a single application developed using the features of Hibernate has not been attempted in the literature. Such a study provides insight into how the Hibernate ORM solution helps develop database-independent applications and how much performance gain can be achieved when an application is ported from an RDBMS back-end to a NoSQL back-end. The objective of this work is to develop a business application using the Hibernate framework that can individually communicate with an RDBMS as well as a specific NoSQL database, and to evaluate the performance of both databases.

ICACCI--15.30 An Effective Emitter-Source Localisation-based PUEA Detection Mechanism in Cognitive Radio Networks
Dikita Salam and Amar Taggu (North Eastern Regional Institute of Science and Technology, India); Ningrinla Marchang (North Eastern Regional Institute of Science and Technology, Arunachal Pradesh, India)

Cognitive Radios (CRs) aim at improving the efficiency of spectrum utilisation by making use of spectrum holes in the licensed radio spectrum, which would otherwise remain unused, leading to underutilisation of spectrum resources. The idea is to utilise the unused licensed radio frequencies in the absence of the licensed users, also called Primary Users (PUs), and leave them when the licensed users return. However, several security threats hold back the successful realisation of CR networks. The Primary User Emulation Attack (PUEA) is one such major threat in CR networks, in which the attackers disguise themselves as PUs by imitating the signal characteristics of the PUs, in order to make the CRs erroneously identify the attackers as PUs and vacate the spectrum. Such attacks cause many CRs to lose access to network services, leading to overall performance degradation in the entire network. In this paper, we propose a hyperbolic transmitter localisation technique using TDOA to defend against PUEA. The results show successful detection of the PUEA attackers along with accurate positions of the attackers.
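
The hyperbolic TDOA idea can be sketched numerically: each time-difference of arrival constrains the emitter to one branch of a hyperbola relative to a sensor pair, and the position estimate is the point that best satisfies all of them. The grid-search solver below is a simple stand-in for a closed-form hyperbolic estimator; the sensor layout, search area and unit propagation speed are illustrative assumptions.

```python
import numpy as np

def tdoa_localize(sensors, tdoas, c=1.0, grid=None):
    """Locate an emitter from time-differences of arrival.  tdoas[i] is the
    arrival-time difference between sensor i+1 and sensor 0.  Candidate
    positions are scored by the squared mismatch between their predicted
    range differences and the measured ones, and the best grid point wins."""
    sensors = np.asarray(sensors, float)
    if grid is None:
        xs = np.linspace(-10.0, 10.0, 201)
        grid = np.array([(x, y) for x in xs for y in xs])
    # Distance from every candidate point to every sensor.
    d = np.linalg.norm(grid[:, None, :] - sensors[None, :, :], axis=2)
    pred = d[:, 1:] - d[:, :1]            # predicted range differences
    resid = pred - c * np.asarray(tdoas)  # mismatch with measurements
    return grid[np.argmin((resid ** 2).sum(axis=1))]
```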

ICACCI--15.31 Adopting Ant Colony Optimization for Supervised Text Classification
Mohammed Wajeed (Jyothi Engineering College & Kerala, India); T Adilakshmi (Vasavi College of Engineering, India)

Different electronic gadgets have become an indispensable part of human life in the era of information technology; as a result, abundant data is generated, growing at an exponential rate. The generated data is usually stored in dumped repositories, with the sole purpose of verification as a proof. If the data is stored in a classified repository, accessing the required data at a later time, or navigating it, becomes easy. Treating the classified repository as a resource, efficient decision making can be achieved easily. Ant Colony Optimization belongs to the meta-heuristic class of optimization algorithms. An individual ant plays no role, but as a colony ants are very powerful in solving optimization problems based on probabilistic techniques. This paper attempts to classify textual documents using Ant Colony Optimization in a supervised learning paradigm. The results obtained are encouraging.

ICACCI--15.32 The Current State of Voice Over Internet Protocol in Wireless Mesh Networks
Mohammad Tariq Meeran (Tallinn University of Technology & Kabul University, Estonia); Yannick Le Moullec (Tallinn University of Technology (TalTech), Estonia); Paul Annus (Tallinn University of Technology, Estonia)

This paper focuses on the current state of voice over Internet Protocol in wireless mesh networks. Mesh network formation is dynamic, unpredictable and self-healing. Thus, transporting voice traffic in such networks requires special care in each mesh node and in the overall mesh network infrastructure, so as to prevent voice quality from degrading, possibly to the point where voice over Internet Protocol services become unusable. Worldwide, researchers are actively working to improve the quality and reliability of voice over Internet Protocol in such networks; as a result, various solutions have been proposed in the scientific literature. In this paper we survey some of the latest developments related to routing protocols, quality of service mechanisms, packet aggregation techniques, and new wireless standards that have been proposed for dealing with the above issue. We conclude with suggestions regarding new directions to address voice quality in the wireless mesh network domain.

ICACCI--15.33 Detection of Distributed Denial of Service Attacks in Software Defined Networks
Lohit Barki, Amrit Shidling and Nisharani Meti (B V Bhoomaraddi College of Engineering and Technology, India); Narayan D. G. (BVB College of Engineering and Technology, Hubli. Karnataka, India); Mohammed Moin Mulla (KLE Technological University, Hubli, Karnataka, India)

Software Defined Network (SDN) architecture is a novel approach to network management. In SDN, switches do not process incoming packets themselves: they look up incoming packets in their forwarding tables and, if no match is found, forward the packets for processing to the controller, which acts as the operating system of the SDN. A Distributed Denial of Service (DDoS) attack is one of the biggest threats to cybersecurity in an SDN. The attack occurs at the network layer or the application layer of the compromised systems that are connected to the network. In this paper we detect DDoS attacks from traces of the traffic flow. We use different machine learning algorithms such as Naive Bayes, K-Nearest Neighbour, K-means and K-medoids to classify the traffic as normal or abnormal. These algorithms are then evaluated using parameters such as detection rate and efficiency. The algorithm with the highest accuracy is chosen to implement a signature IDS, whose results are then processed by an advanced IDS that detects anomalous behaviour based on open connections and accurately identifies which hosts are involved in the DDoS attack.
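To make the classification step concrete, here is a minimal k-nearest-neighbour sketch over hand-made flow features. The features (packets per second, distinct source IPs, mean packet size) and the sample values are assumptions for illustration; the paper's actual feature set is not specified here:

```python
import math
from collections import Counter

def knn_classify(flows, labels, query, k=3):
    """Label a traffic-flow feature vector by majority vote of its
    k nearest labelled flows (Euclidean distance)."""
    nearest = sorted(range(len(flows)),
                     key=lambda i: math.dist(flows[i], query))
    votes = Counter(labels[i] for i in nearest[:k])
    return votes.most_common(1)[0][0]

# Hypothetical features: (packets/sec, distinct source IPs, mean packet size)
flows  = [(50, 3, 900), (60, 4, 850), (5000, 400, 60), (4500, 380, 64)]
labels = ["normal", "normal", "ddos", "ddos"]
print(knn_classify(flows, labels, (4800, 390, 62), k=3))
```

The same labelled-flow matrix could feed the other classifiers mentioned (Naive Bayes, K-means, K-medoids) for the accuracy comparison the abstract describes.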

ICACCI--15.34 Beacon Controlled Campus Surveillance
Gaurav Saraswat (Guru Gobind Singh Indraprastha University & Maharaja Agrasen Institute of Technology, India); Varun Garg (Guru Gobind Singh Indraprastha University, India)

In academic institutions, handling administrative tasks and interacting with students is an arduous task. Using proximity analysis of Bluetooth beacons, the developed tool provides real-time surveillance of an institution, automating administrative operations while establishing working discipline; it can also send web page links and notifications in order to communicate with students.

Friday, September 23

Friday, September 23 15:55 - 17:30 (Asia/Kolkata)

ICACCI--26: ICACCI-26: Poster Session III

Room: Lawns(Academic Area)
Chairs: Poonam Gera (The LNMIIT, India), Viral Nagori (GLS Institute of Computer Technology (MCA) & GLS University, India)
ICACCI--26.1 RFID and Android Based Smart Ticketing and Destination Announcement System
Prasun Chowdhury (St. Thomas' College of Engineering & Technology, Kolkata); Poulami Bala, Diptadeep Addy, Sumit Giri and Aritra Ray Chaudhuri (St. Thomas' College of Engineering & Technology, Kolkata, India)

In India, the most widely used public transport system is the ready-to-go bus facility. However, this 'ready-to-go' facility is not as smooth as the need of the hour demands, particularly in today's congested metropolitan cities. Standing in long queues at bus stands and quarrelling with conductors over trifling matters make the journey uncomfortable for passengers. That is why we have proposed an idea for implementing smart card technology for ticketing passengers travelling by bus. The smart card is based on Radio Frequency Identification (RFID) technology. For this purpose, an interface is built between the RFID setup and the driver's mobile phone using a specifically developed Android app, "SwipeNgo". The interface sends the passenger ID from the RFID reader to the driver's mobile phone via Bluetooth. The "SwipeNgo" app, installed on the driver's mobile phone, receives the passenger ID from the RFID card reader via this interface when a passenger gets into the bus. Along with the passenger ID, "SwipeNgo" also records the stoppage name/number in a database mapped to Global Positioning System (GPS) coordinates. The exact fare between source and destination is calculated and deducted from the balance when the passenger gets off the bus. This balance information is also sent back to the RFID setup, where the fare is displayed. There is a separate announcement system which alerts passengers prior to the next halt.
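The fare computation from GPS coordinates can be sketched with a great-circle distance and a hypothetical tariff. The base charge, per-kilometre rate and the coordinates below are illustrative assumptions, not the system's actual tariff or stop locations:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in kilometres."""
    R = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def fare(board, alight, base=5.0, per_km=2.0):
    """Fare = flat base charge plus a hypothetical per-kilometre rate."""
    return base + per_km * haversine_km(*board, *alight)

# Two illustrative stops in Kolkata (boarding, destination)
boarding, destination = (22.5726, 88.3639), (22.6200, 88.4000)
print(round(fare(boarding, destination), 2))
```

A real deployment would more likely use a stage/slab fare table keyed by the recorded stoppage numbers, with the GPS distance only as a fallback.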

ICACCI--26.2 Impact of Algorithm Complexity on Energy Utilization of Wireless Sensor Nodes
VM Lekshmy (Amrita Center for Wireless Networks and Applications, India)

Nowadays wireless sensor networks are deployed in a variety of fields to obtain real-time measurements. These networks are composed of small, low-cost devices called wireless sensor nodes. Different types of wireless sensor nodes are available in the market, and nodes can be selected for each application based on its requirements. Power consumption is a major concern in developing wireless sensor applications. In this paper, the power consumption of different sensor nodes is analyzed using algorithms of different complexities. The experimental results show that, at a particular input current limit, the Waspmote consumes 15% less power than the MICAz mote for O(1) algorithms, 11.04% less for O(n), 7.6% less for O(n²), 3.9% less for O(log n) and 18.06% less for O(m+n) algorithms.

ICACCI--26.3 Feasibility and Performance Evaluation of VANET Techniques to Enhance Real-time Emergency Healthcare Services
Adwitiya Mukhopadhyay (Amrita Vishwa Vidyapeetham, India); Raghunath S and Kruti M (Amrita Vishwa Vidyapeetham, Amrita University, India)

Advancement in wireless technologies to improve telemedicine is one of the major goals of recent times. Wireless telemedicine for emergency primary healthcare provides mobile healthcare and the exchange of medical data from ambulances or rural healthcare centers to hospitals, helping hospitals understand patients' medical condition before they arrive and be prepared in advance to respond to such cases. This work focuses on creating a vehicular ad hoc network scenario for telemedicine, in which an attempt is made to identify an optimal solution using the 802.11 family of networking standards. A vehicle-to-vehicle connection is created and evaluated for various node densities using 802.11n, 802.11p and 802.11b with the AODV routing protocol. Constant bit-rate traffic is used between the ambulance and the hospital. The standards are validated on the parameters PLR, delay and throughput, considering blood pressure, video and audio transmission. The performance results are analyzed for all three standards based on mobility and varying vehicular speeds. We compare the results of the various parameters for each scenario and attempt to identify the better performing standard. NS3 has been used for network simulation, whereas SUMO is used for traffic simulation.

ICACCI--26.4 Implementation of Ignition Control with On-Board Diagnosis
Rishi Dev and Shanmughasundaram R (Amrita University, India)

The innovations in digital technology and the semiconductor industry have helped the development of the automotive industry. Control actions that were performed mechanically in past decades have been successfully replaced by electronic systems. The development of the Engine Control Unit (ECU) was a breakthrough in automotive electronic systems: the ECU is a microcontroller which processes the inputs from sensors and performs the control action in real time. This paper describes the design of a fully programmable, low-cost system for ignition control, based on an ARM core, together with an On-Board Diagnosis device for continuously monitoring the ignition parameters. The ECU is programmed to ignite the air-fuel mixture in a way that maximizes the efficiency of the engine. This is achieved by reading values from the primary ignition map (PIM). The ECU provides the user access to the map and allows full customization, giving the user the capability to adjust the engine's performance quickly and easily as required. On-Board Diagnosis helps to identify faults occurring in the engine management system and to alert the user.

ICACCI--26.5 Localization of Sensor Nodes Using Modified Particle Swarm Optimization in Wireless Sensor Networks
Neelam Barak (National Institute of Technology Delhi, India); Neha Gaba (IGDTUW, India); Shipra Aggarwal (Indira Gandhi Delhi Technical University for Women, India)

An efficient global optimization technique based on particle swarm intelligence for locating the nodes in a wireless sensor network is proposed in this work. The objective function used is the range-estimation error with respect to neighboring anchor nodes in the environment. Particle Swarm Optimization is a widely used optimization tool that works on the swarm intelligence principle; the algorithm ensures high convergence rates and avoids the problem of trapping in local minima. The proposed work uses a modified particle swarm optimization technique which is computationally efficient, and simulation results demonstrate the better convergence of the algorithm over the traditional particle swarm optimization technique.
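A bare-bones PSO localization loop of the kind described above (without the paper's modifications) can be sketched as follows. The anchor positions, the true node location and the noiseless ranges are synthetic assumptions:

```python
import random

def pso_locate(anchors, ranges, iters=200, n=30, seed=1):
    """Estimate (x, y) of a sensor node by minimising the squared
    range-error to known anchor nodes with a basic PSO."""
    rng = random.Random(seed)
    def err(p):
        return sum((((p[0] - ax) ** 2 + (p[1] - ay) ** 2) ** 0.5 - r) ** 2
                   for (ax, ay), r in zip(anchors, ranges))
    pos = [[rng.uniform(0, 100), rng.uniform(0, 100)] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=err)[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(2):  # inertia + cognitive + social terms
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if err(pos[i]) < err(pbest[i]):
                pbest[i] = pos[i][:]
                if err(pos[i]) < err(gbest):
                    gbest = pos[i][:]
    return gbest

anchors = [(0, 0), (100, 0), (0, 100), (100, 100)]
true_node = (40, 60)
ranges = [((true_node[0] - ax) ** 2 + (true_node[1] - ay) ** 2) ** 0.5
          for ax, ay in anchors]
print(pso_locate(anchors, ranges))  # converges near (40, 60)
```

A "modified" PSO variant would typically alter the inertia schedule or restart stagnant particles; the loop structure stays the same.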

ICACCI--26.6 A Study on the Impact of Macroeconomic Factors on S&P BSE Bankex Returns
Shilpa Sudhakaran (Amrita Vishwa Vidyapeetham University, India); Balasubramanian P (Amrita School of Business, Amrita University, India)

The macro environment plays a major role, along with the micro environment, in shaping the performance of the stock market in India. This study examines whether Money Supply, Foreign Direct Investments (FDI), Inflation Rate, Index of Industrial Production (IIP), Foreign Exchange Reserves and Foreign Portfolio Investments (FPI) have any significant impact on BSE Bankex returns. Previous studies have taken different variables as macroeconomic factors in order to measure their impact on the Bankex; this study focuses on macroeconomic factors that have not been considered earlier. Monthly data was collected over a period of 10 years, from April 2005 to March 2015, from the websites of the Bombay Stock Exchange, the Reserve Bank of India and other sources. A unit root test, multiple regression and a multicollinearity test were conducted for the analysis. The analysis revealed that FDI and Foreign Exchange Reserves have a significant impact on BSE Bankex returns and that no multicollinearity exists between the variables in the model.

ICACCI--26.7 The Terrain Identification by the Pulse Radar Altimeter
Artem Sorokin, Vladimir Vazhenin and Lubov Lesnaya (Ural Federal University, Russia)

This paper concentrates on processing of the signal reflected from the terrain. The terrain type is identified by comparing two statistical distributions of the reflected signal amplitude: the current distribution and a reference distribution whose type is known. The maximum a posteriori probability corresponds to the detected terrain type. The identification algorithm could be used for correcting an inertial navigation system.

ICACCI--26.8 Power Aware Scheduling on Real-time Multi-core Systems
Amit Hanamakkanavar (BVB College of Engineering and Technology, PG Student); Vidya Handur (BVB College of Engineering and Technology, Associate Professor); Priti Ranadive (KPIT Technologies Ltd. India, Principal Scientist, India); Venkatesh Kareti (KPIT Technologies Ltd. India, Sr Research Associate)

Multi-core systems are being used in real-time systems for fast computation, and with the increasing complexity of real-time systems, effective power management is becoming a challenge. To address this challenge, the scheduling strategy becomes an important factor in utilizing power efficiently. This paper proposes a new priority (NP) calculation for task scheduling based on task parameters such as user priority, duration, deadline and slack. Considering a multi-core system with each core assigned a different frequency, two scheduling algorithms are proposed: Slowest Cores Available (SCA) and Fastest Cores Available (FCA). The algorithms are analyzed in terms of execution time, idle time, power consumption and processor utilization.

ICACCI--26.9 Multi-Functional Secured Smart Home
Vinay Kumar Mittal (KL University & KLEF, India); C. V. Raghavendra Dharma Savarni, Gajjala Viswanatha Reddy and G. Rambabu Yadav (Indian Institute of Information Technology, Sricity, Chittoor, India); Md Shariq Suhail (Indian Institute of Information Technology, Sricity, India)

Enhancing home security by means of remote control is a cutting-edge research area in the domain of the Internet of Things (IoT). The need for security is increasing these days, covering thefts, burglary, accidents, LPG gas leakage and fire detection, all of which are important aspects of a home security system. In this paper, a prototype Multi-Functional Secured Smart Home (SSH) model is developed. Generally, a home security system signals intruder detection by means of an alarm. The proposed Multi-Functional Secured Smart Home instead uses a mobile communication (GSM) based home security system, which provides better security through globally connected systems. In the proposed system a text message is sent whenever an event is detected by any sensor, so that the home owner can take immediate action. The proposed SSH sends SMS using a GSM module and mail through a Raspberry Pi; the prototype also uses an Arduino microcontroller board for command processing and control. The use of GSM technology provides global access to the smart home security system. The prototype SSH, developed at low cost, can be used for converting existing homes into smart homes at a relatively affordable cost and with convenience. The performance testing results of the prototype SSH are encouraging.

ICACCI--26.10 Information Retrieval in Web Crawling: A Survey
Chandni Saini (Thapar University, India); Vinay Arora (Patiala, Punjab & Thapar University, India)

In today's scenario, the World Wide Web (WWW) is flooded with a huge amount of information. Due to the growing popularity of the internet, finding meaningful information among billions of information resources on the WWW is a challenging task. Information retrieval (IR) provides end users with documents that satisfy their information needs. Search engines are used to extract valuable information from the internet, and the web crawler is the principal part of a search engine: an automatic script or program which browses the WWW in an automated manner, a process known as web crawling. This paper reviews strategies for information retrieval in web crawling, classified into four categories: focused, distributed, incremental and hidden web crawlers. Finally, a comparative analysis of the various IR strategies is performed on the basis of user-customized parameters.

ICACCI--26.11 Design of a Wide Output Range and Reduced Current Mismatch Charge Pump PLL with Improved Performance
Sujata Pandey (AMITY University, Noida, India); Monika Bhardwaj (Amity University, India)

This paper deals with the design of a charge pump circuit after analyzing the current mismatch problem. The charge sharing problem is eliminated by using transistors, and a cascode-type current mirror circuit is used to enhance the matching characteristics. The design provides a technique by which the values of IUP and IDN are matched so that there is no time mismatch between the UP and DN signals. The proposed design has the added advantage of low power consumption. All simulations are done at a 1.8 V power supply.

ICACCI--26.12 Net Energy Meter with Appliance Control and Bi-directional Communication Capability
Tania Tony (Amrita Vishwa Vidyapeetham, Amrita University, India); Sivraj P (Amrita Vishwa Vidyapeetham, Amrita University & Amrita School of Engineering, India); Sasi K Kottayil (Amrita Vishwa Vidyapeetham, Amrita University)

Energy management strategies include distributed generation of electricity using renewable resources, distributed storage, and effective control of smart appliances leading to energy conservation. Concepts like net zero energy buildings, rooftop renewable generation and affordable storage schemes demand intelligent devices with net metering, appliance control and bi-directional communication capabilities. This paper proposes a smart home controller capable of net metering, smart appliance control and bi-directional communication with the utility and the user. The Smart Home Controller (SHC) is implemented using an LPC2148 microcontroller, in which the net meter value is computed by offsetting the energy consumed from the grid against the energy sent back to the grid. An RF communication module is interfaced to the controller to communicate with the room controllers for appliance control. The utility as well as consumers can access the device through GSM messages, providing consumer integration with the grid.

ICACCI--26.13 An Intelligent Controller for Smart Home
Vishnu Babu (Amrita Vishwa Vidyapeetham, Amrita University, India); Ashwin Kumar U (Amrita Vishwa Vidyapeetham, India); Priyadarshini Ramasamy (Amrita School Of Engineering, India); Krithika Premkumar and Nithin S (Amrita Vishwa Vidyapeetham, India)

The Home Area Network (HAN) forms an integral part of Smart Grid technology, providing a smarter solution to the energy crisis. This article proposes a remote interface controller for smart homes as part of home automation and demand-side management. The intelligent controller presents the user with the individual energy consumption profiles of the appliances in the home on an Android mobile application and provides two appliance control (on/off) options: manual or automatic. The former requires manual control of the appliances, while the latter lets the controller make the decision based on dynamic tariff rates using fuzzy logic. Being user friendly, this system can be adopted for the Smart Grid and smarter homes.

ICACCI--26.14 Design of Linear Phase Low Pass FIR Filter Using Restart Particle Swarm Optimization
Himanshu Gupta (SGSITS, Indore, India); Deepak Mandloi, Anand Jat and Arpit Gupta (SGSITS, India); Prabhakar Ojha (CDGI, India)

Finite Impulse Response (FIR) filters are very popular in signal processing, image processing and communication systems. Designing an FIR filter that satisfies all the specified conditions is a challenging task; in fact, FIR filter design is a multimodal optimization problem, and digital filters cannot be designed efficiently by traditional gradient methods. Metaheuristic optimization algorithms have proved to be efficient in solving multimodal optimization problems where traditional gradient-based methods fail. Particle Swarm Optimization (PSO) is a population-based metaheuristic algorithm widely used for optimization problems. In this paper we use a variant of PSO called Restart PSO for the design of a linear phase low pass FIR filter.
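The objective such a PSO variant minimises can be sketched as the squared deviation of a candidate filter's magnitude response from an ideal low-pass template. The cut-off frequency, frequency grid and tap count below are arbitrary illustrations, not the paper's design specification:

```python
import cmath
import math

def magnitude_response(h, w):
    """|H(e^{jw})| of FIR coefficients h at radian frequency w."""
    return abs(sum(hk * cmath.exp(-1j * w * k) for k, hk in enumerate(h)))

def lowpass_error(h, wc=0.4 * math.pi, grid=64):
    """Sum of squared deviations from the ideal low-pass response
    (1 in the passband, 0 in the stopband) over a frequency grid --
    the kind of fitness a PSO particle would be scored with."""
    err = 0.0
    for i in range(grid + 1):
        w = math.pi * i / grid
        desired = 1.0 if w <= wc else 0.0
        err += (magnitude_response(h, w) - desired) ** 2
    return err

# A crude 11-tap truncated-sinc low-pass filter as a starting particle
wc, N = 0.4 * math.pi, 11
h = [wc / math.pi if n == 5 else math.sin(wc * (n - 5)) / (math.pi * (n - 5))
     for n in range(N)]
print(lowpass_error(h))
```

Restart PSO would repeatedly perturb or reinitialise the swarm when this fitness stagnates, keeping the symmetric-coefficient constraint needed for linear phase.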

ICACCI--26.15 Demographic Analysis of Twitter Users
Geeta Singh and Rajdeep Niyogi (Indian Institute of Technology Roorkee, India)

In recent times the popularity of social media has grown greatly. Because a great number of people use these platforms and voice their thoughts on them, they can be used to determine public opinion. In this paper, we extract this opinion by analyzing tweets collected from Twitter on major events such as the T20 World Cup, the Paris attacks, the Oscars, the Olympics and the Formula 1 championship. We use demographic analysis: we first analyze the opinions of users and then calculate their sentiments on different events. In this way, we determine how users' opinions and their positive and negative sentiments differ demographically. We have performed this analysis on millions of Twitter users residing in different locations and have demonstrated the findings using graphs and pie charts.

ICACCI--26.16 Textual Content Based Information Retrieval From Twitter
Waseem Ahmad (Aligarh Muslim University, Aligarh, India); Rashid Ali (AMU Aligarh, India)

Nowadays the micro-blogging site Twitter is rapidly gaining popularity among politicians, celebrities, businessmen, academicians and even ordinary people. Many users want to collect useful information from Twitter for possible future use. They therefore require a system that lets them store tweets and find them again with a high degree of relevance to their queries. In this paper, we propose a framework for tweet retrieval from Twitter. The system acquires information from Twitter using the Twitter search API and builds a corpus of user content by removing noisy and ambiguous elements from the retrieved collection of tweets. We then pose queries to obtain results from the system and find that it returns useful documents to the user in order of decreasing relevance score.

ICACCI--26.17 Distributed Image Processing Using Hadoop and HIPI
Swapnil Arsh and Abhishek Bhatt (The LNM Institute of Information Technology, India); Praveen Kumar (VNIT, Nagpur, India)

In the present era, a huge amount of data is produced every single day, and a significant portion of this big data is contributed by images. Besides the amount of data, the size and resolution of individual images are also increasing at a very fast pace, leading to more and more complex image processing algorithms which in turn place great demands on computation power. This paper provides a solution for one such image processing application: it analyzes the image processing kernels from an industrial application, Organic Light-Emitting Diode (OLED) printing, for OLED center detection. The application uses Hadoop and the Hadoop Image Processing Interface (HIPI) to parallelize the processing. Hadoop provides the parallel processing paradigm, which when used along with HIPI can provide significant performance improvements for processing images.

ICACCI--26.18 A Correlation Based SVM-Recursive Multiple Feature Elimination Classifier for Breast Cancer Disease Using Microarray
Kavitha KR (Amrita Vishwa Vidyapeetham & Amritapuri, India); Syamili Rajendran G (Amrita School of Engineering, Amritapuri, India); Varsha J (Amrita School of Engineering, Amritapuri, Kerala, India)

The Support Vector Machine (SVM) is a widely popular learning algorithm used for the classification of large datasets. Our work aims to generate a classifier for a breast cancer gene microarray using a modified SVM-RFE algorithm. The breast cancer microarray contains a large number of genes and their expression values, so it is necessary to reduce the number of genes before applying classification. An efficient algorithm for microarray classification is SVM-RFE, an embedded method that performs backward single-gene elimination as well as classification of the dataset. We propose a new modified method for the breast cancer classifier with less computation than SVM-RFE. SVM-RFE ranks each feature (gene) and eliminates the single lowest-ranked, i.e. most irrelevant, feature in each iteration; since our microarray contains 47,294 genes, reducing the dimension this way incurs a heavy computational overhead. We therefore propose a modified algorithm which removes more than one irrelevant gene per iteration. In addition, correlated genes are considered before elimination: we identify pairs of correlated genes and extract a new gene, called a virtual gene, from each pair, and then apply recursive elimination on the new set of genes. The proposed method, Correlation based Support Vector Machine Recursive Multiple Feature Elimination (CSVM-RMFE), thus first extracts virtual genes from correlated gene pairs and then applies SVM-RMFE to generate a classifier. Because SVM-RMFE eliminates more than one feature at a time, the computation time is reduced and the accuracy is increased.
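The multiple-elimination idea can be illustrated independently of the SVM machinery. The sketch below ranks features with a simple correlation score (a stand-in for the SVM weight magnitudes the paper uses) and drops several low-ranked features per round; the toy data is invented:

```python
import statistics

def rank_features(X, y):
    """Score each feature by |Pearson correlation| with the label --
    a stand-in for the SVM weight ranking used in SVM-RFE."""
    scores = []
    for j in range(len(X[0])):
        col = [row[j] for row in X]
        mx, my = statistics.mean(col), statistics.mean(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(col, y))
        var = (sum((a - mx) ** 2 for a in col)
               * sum((b - my) ** 2 for b in y)) ** 0.5
        scores.append(abs(cov / var) if var else 0.0)
    return scores

def rmfe(X, y, keep=2, drop_per_iter=2):
    """Recursive *multiple* feature elimination: drop the drop_per_iter
    lowest-ranked features each round instead of one at a time."""
    active = list(range(len(X[0])))
    while len(active) > keep:
        sub = [[row[j] for j in active] for row in X]
        scores = rank_features(sub, y)
        order = sorted(range(len(active)), key=lambda i: scores[i])
        n_drop = min(drop_per_iter, len(active) - keep)
        for i in sorted(order[:n_drop], reverse=True):
            del active[i]
    return active  # surviving feature indices

# Toy data: features 0 and 1 track the label; features 2-5 are constants
X = [[0, 0, 1, 5, 2, 7], [1, 1, 1, 5, 2, 7],
     [0, 0, 1, 5, 2, 7], [1, 1, 1, 5, 2, 7]]
y = [0, 1, 0, 1]
print(rmfe(X, y))
```

With 47,294 genes, dropping a batch per round instead of a single gene is what cuts the number of retraining iterations the abstract refers to.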

ICACCI--26.19 T-Slotted Microstrip Patch Antenna for 5G WI-FI Network
Ravi Kumar Goyal (RTU & GOVT. ENGG. COLLEGE AJMER, India); Kamlesh Kumar Sharma (MNIT, India)

A compact planar T-slotted microstrip antenna suitable for 5G wireless communication at millimeter wave frequencies is presented. The antenna is simulated using the CST (Computer Simulation Technology) software, which is used to study the radiation characteristics of the antenna; simulated results such as return loss, radiation pattern and polar gain plot are shown. The simulation results meet the IEEE 802.11ad standard operating in the 60 GHz millimeter wave frequency band suitable for 5G wireless communication. The measured results show a lowest return loss of -38 dB, an antenna gain of 6.3 dB and a voltage standing wave ratio (VSWR) near 1 at 23 GHz and 60 GHz, indicating that the antenna is a good candidate for very high speed WLAN applications.

ICACCI--26.20 Backpropagation Artificial Neural Network for Determination of Glucose Concentration From Near-Infrared Spectra
Bilal Malik (University of Kashmir, India)

This paper proposes the Backpropagation Artificial Neural Network algorithm as a calibration technique for the prediction of glucose concentration. For the experimental work, simulated blood was formed by mixing only three constituents, glucose, triacetin and urea, in a phosphate buffer solution. Spectra of this solution were obtained from a near-infrared spectrophotometer in the region of 2100 nm to 2400 nm at a spectral resolution of 1 nm. The improvement in the standard error of prediction and in the correlation coefficients, compared to Principal Component Regression and Partial Least Squares regression, demonstrates that the Backpropagation Artificial Neural Network algorithm could be used as an alternative calibration technique for non-invasive glucose measurement.
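As a minimal illustration of the calibration idea, the sketch below trains a one-hidden-layer network by plain batch backpropagation on synthetic data standing in for the NIR spectra. The three-feature linear target is invented (real spectra span hundreds of wavelengths), and the layer sizes and learning rate are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for NIR absorbances: 3 features per sample, with a
# noiseless linear "concentration" target (purely illustrative).
X = rng.uniform(0, 1, (200, 3))
y = X @ np.array([0.5, 1.5, -0.7])

# One hidden tanh layer, trained by full-batch backpropagation on 0.5*MSE
W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)                 # forward pass
    pred = (h @ W2 + b2).ravel()
    err = pred - y                           # dLoss/dpred (up to 1/N)
    gW2 = h.T @ err[:, None] / len(X)        # output-layer gradients
    gb2 = err.mean(keepdims=True)
    dh = (err[:, None] * W2.T) * (1 - h ** 2)  # backprop through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
print(round(rmse, 4))
```

The paper's calibration model would instead take wavelength-binned absorbances as inputs and report the standard error of prediction on held-out spectra.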

ICACCI--26.21 Pattern Generation and Symmetric Key Block Ciphering Using Cellular Automata
Rajat Mehta and Rajneesh Rani (Dr B R Ambedkar NIT Jalandhar, India)

This paper presents a symmetric key block cipher technique using Cellular Automata (CA). The proposed system deals with the theory and application of cellular automata. Cellular automata have the state-transition property that forms the basis for defining the fundamental transformations for encryption and decryption in the enciphering system. First, the patterns are generated using MATLAB (v2011), and then the cryptographic system is implemented in the C language, although it can be implemented in other languages too. Different CA rule configurations are used to form hybrid reversible cellular automata for encrypting and decrypting data in the form of strings or character arrays.
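A compact way to see how CA state transitions yield a reversible cipher is the second-order construction, where each new row is the rule image of the current row XOR-ed with the previous row; running the recurrence backwards decrypts. This is a generic textbook construction for illustration, not the paper's specific hybrid rule set (rule 30 and the toy key are arbitrary):

```python
def ca_step(row, rule=30):
    """One elementary-CA update with periodic boundaries; `rule` is the
    Wolfram rule number whose bits form the transition table."""
    n = len(row)
    return [(rule >> ((row[(i - 1) % n] << 2) | (row[i] << 1)
                      | row[(i + 1) % n])) & 1
            for i in range(n)]

def second_order_encrypt(bits, key, steps=16, rule=30):
    """Reversible second-order CA: next = ca_step(cur) XOR prev.
    The key seeds the 'previous' row; the final row pair is the ciphertext."""
    prev, cur = key, bits
    for _ in range(steps):
        prev, cur = cur, [a ^ b for a, b in zip(ca_step(cur, rule), prev)]
    return prev, cur

def second_order_decrypt(prev, cur, steps=16, rule=30):
    """Run the same recurrence backwards to recover the plaintext bits."""
    for _ in range(steps):
        prev, cur = [a ^ b for a, b in zip(ca_step(prev, rule), cur)], prev
    return cur

key  = [1, 0, 1, 1, 0, 0, 1, 0]
bits = [0, 1, 1, 0, 1, 0, 0, 1]
c1, c2 = second_order_encrypt(bits, key)
print(second_order_decrypt(c1, c2) == bits)  # True: decryption inverts
```

A hybrid CA, as in the paper, would apply a different rule number per cell position; the XOR-with-previous-row trick is what makes any such rule assignment invertible.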

ICACCI--26.22 An Efficient Spatial Domain based Image Watermarking using Shell Based Pixel Selection
Shubham Mathur (VIT University, India); Akshay Dhingra (VIT University, India); Manoharan Prabukumar and Agilandeeswari Loganathan (VIT University, India); Muralibabu K (Lord Ayyapa Institute of Engineering and Technology)

In this paper, we implement a new spatial-domain algorithm with shell based pixel selection for watermark embedding and extraction. The watermark is first converted into a binary image by local thresholding and then into a logical matrix. Before embedding, each value of the logical matrix is XOR-ed with a random 8-bit key to generate a modified logical matrix. Next, pixels of the host image are selected by the shell-based technique along rows and columns alternately, starting from position (2, 2) and moving diagonally; two direct-address tables are maintained to prevent duplicate selection of pixels. Each pixel is sliced into red, green, blue and alpha components, bits from the modified logical matrix are embedded into the LSB of each component, and finally an extraction key is generated. To detect tampering in an image, the watermark is extracted using the key and compared with the original watermark. The proposed method is evaluated on a benchmark dataset and obtains favorable results in terms of PSNR and BER. We report results for various kinds of image manipulation to assess the performance of the proposed method, comparing the original watermark with the watermark extracted from a manipulated image. Shell based pixel selection gives tamper sensitivity, and converting the watermark to a logical matrix stored in each component of a pixel gives higher capacity than traditional methods.
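The key-XORed LSB embedding at the heart of the scheme can be sketched on a flat list of 8-bit channel values. The key, pixel values and single-channel simplification are illustrative; the paper embeds across R, G, B and alpha components chosen by shell-based pixel selection:

```python
def embed(pixels, wm_bits, key=0b10110010):
    """XOR each watermark bit with a key bit, then write the result
    into the least significant bit of successive channel values."""
    out = list(pixels)
    for i, bit in enumerate(wm_bits):
        b = bit ^ ((key >> (i % 8)) & 1)
        out[i] = (out[i] & ~1) | b
    return out

def extract(pixels, n, key=0b10110010):
    """Read n LSBs back and undo the key XOR to recover the watermark."""
    return [(pixels[i] & 1) ^ ((key >> (i % 8)) & 1) for i in range(n)]

pixels = [120, 33, 250, 17, 64, 99, 201, 5, 180, 77]
wm = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
stego = embed(pixels, wm)
print(extract(stego, len(wm)) == wm)                    # True: round trip
print(max(abs(a - b) for a, b in zip(pixels, stego)))   # distortion <= 1
```

Because only the LSB of each channel changes, the per-channel distortion is at most 1, which is what keeps the PSNR of the stego image high.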

ICACCI--26.23 Anomaly Detection Based Multi Label Classification Using Association Rule Mining (ADMLCAR)
Prathibhamol Cp (Amrita University, India); Amala G s and Malavika Kapadia (Amrita School of Engineering, India)

Multi label classification involves multiple labels in the label space: a multi label classification (MLC) problem deals with numerous class labels associated with each data instance. Correct prediction of the labels of a test instance therefore remains a challenge in this field. In this paper, Anomaly Detection based Multi Label Classification using Association Rule Mining (ADMLCAR) is used for solving the MLC problem. Conventionally, most multi label classification problems are solved by one of two methods, problem transformation or algorithm adaptation, but the method discussed in this paper is a novel departure from these traditional solutions. ADMLCAR uses the k-means algorithm for clustering and the vertical data format for association rule mining. To predict the labels of a test instance the method finds the nearest cluster; once the cluster is identified, it applies oversampling principal component analysis (PCA) within the nearest cluster with respect to the test instance. Oversampling PCA is used to ensure that the test instance's label set is not confined to the label set of its nearest cluster: the test instance is assigned to a nearest cluster by the Euclidean distance measure, but because clustering is unsupervised, the nearest cluster may contain entities with many different label sets.

ICACCI--26.24 Multi Label Classification Based on Logistic Regression (MLC-LR)
Prathibhamol Cp (Amrita University, India); Jyothy K v and Noora B (Amrita School of Engineering, India)

Numerous class labels associated with each data instance is the defining feature of any multi-label classification (MLC) problem, and correct prediction of the class labels of a test instance is a big challenge in this domain. MLC can be applied in many fields such as personality prediction, cancer prognosis prediction and image annotation. In this paper we propose MLC-LR, which employs the problem transformation method for solving MLC. The proposed method initially applies clustering in the feature space, followed by the FP-growth algorithm for finding relationships between labels. Once the desired clusters are obtained, the data associated with each cluster is normalized, and logistic regression is applied over the normalized data of each cluster for every label. When a new instance arrives in the testing phase, the nearest cluster is identified using the Euclidean distance metric. The rules over the label space extracted for that cluster are checked against the hypothesis for each antecedent label; if the calculated value is higher than a predefined threshold, both the antecedent and consequent labels are taken as estimated labels for the test instance.
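The per-label regression step can be sketched with plain gradient-descent logistic regression. The toy data, learning rate and threshold are illustrative assumptions; in the proposed scheme one such model would be fit per cluster for each label:

```python
import math

def train_logistic(X, y, lr=0.5, epochs=500):
    """Plain stochastic-gradient-descent logistic regression for a
    single binary label."""
    w = [0.0] * len(X[0]); b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1 / (1 + math.exp(-z))       # sigmoid
            g = p - yi                        # gradient of log-loss w.r.t. z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, xi, threshold=0.5):
    """Emit the label when the estimated probability clears a threshold,
    mirroring the predefined-threshold check in the abstract."""
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return int(1 / (1 + math.exp(-z)) >= threshold)

# Toy normalized features for one cluster, with one binary label
X = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
y = [0, 0, 1, 1]
w, b = train_logistic(X, y)
print([predict(w, b, xi) for xi in X])
```

Repeating this fit once per label turns the multi-label problem into a set of independent binary problems, which is the problem-transformation view the abstract takes.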

ICACCI--26.25 ECSim: A Simulation Tool for Performance Evaluation of Erasure Coded Storage Systems
Ojus Thomas Lee (NIT Calicut, India); S D Madhu Kumar and Priya Chandran (National Institute of Technology Calicut, India)

Simulation environments provide a comprehensive set of advantages to users, such as cost effectiveness and the ability to understand the shortcomings of a system under design without a physical implementation. Simulation platforms help industry and academia document and publish their research outputs in a timely and cost-efficient manner. The ECSim tool presented here is meant for academic use in the initial stages of research. The platform provides an environment where the performance of erasure-coded storage systems can be tested with little effort. The main highlight of the simulator is that it provides a very simple environment that can run on a standalone system. The environment does not require the user to be a programmer, since it offers an interactive command-line interface. The ability to simulate data centers, clusters, master nodes, storage nodes with computing power, storage devices, bandwidth usage, and disk I/O are notable features of ECSim.

ICACCI--26.26 Real Time Performance Evaluation of Energy Detection Based Spectrum Sensing Algorithm Using WARP Board
H Yerranna, Samrat Sabat, Sunil Devanahalli Krishnamurthy and Siba Kumar Udgata (University of Hyderabad, India)

This paper presents a real-time performance evaluation of a Neyman-Pearson (NP) criterion based energy detection algorithm for spectrum sensing in cognitive radio. We have implemented the energy detection algorithm on a Wireless Open-Access Research Platform (WARP). The algorithm is validated using bursts of a QPSK signal; each burst has four frames of 1024 samples, and for validation only one frame in each burst is occupied by the signal. We have compared the implementation results with algorithm simulation results. The experimental results reveal that the algorithm can detect the signal down to an SNR of -4 dB in real time and -7 dB in simulation, with a probability of detection of 0.9 and a probability of false alarm of 0.1. The detection time for performing the sensing operation on the WARP board is evaluated as 4.7 μs.
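The core energy-detection decision can be sketched as follows (a minimal illustration; the frame contents and threshold are hypothetical, not the WARP implementation):

```python
def energy_detect(samples, threshold):
    """Classic energy detector: compare the average signal energy in a
    frame against a threshold chosen from the Neyman-Pearson criterion."""
    energy = sum(s * s for s in samples) / len(samples)
    return energy > threshold, energy

# a frame carrying a constant-envelope signal vs. a near-silent frame
signal_frame = [0.9, -0.9] * 512   # 1024 samples, average energy 0.81
noise_frame = [0.05, -0.05] * 512  # average energy 0.0025
print(energy_detect(signal_frame, 0.1)[0], energy_detect(noise_frame, 0.1)[0])
```

In practice the threshold is derived from the noise variance and the target false-alarm probability rather than fixed by hand.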

ICACCI--26.27 Design of Bioimpedance Spectrometer
Abhijit S. Patil (Vishwakarma Institute Of Information Technology ( Affiliated To Savitribai Phule Pune University), India); Rajesh Bhaskar Ghongade (Bharati Vidyapeeth University College of Engineering, India)

Patient health is monitored by invasive as well as non-invasive methods. Because invasive methods are harmful to the patient's body, medical science requires more non-invasive methods. Bioimpedance spectroscopy (BIS) provides information regarding patient health non-invasively, in clinics as well as at home. This paper describes the design and implementation of a bioimpedance spectrometer. It uses the magnitude-ratio and phase-difference detection method as its basic technique. The developed bioimpedance spectrometer measures impedance over the 10 kHz to 1 MHz frequency range, the β-dispersion range, in which pathological changes generally occur.

ICACCI--26.28 Is Sentiment Analysis an Art or a Science? Impact of lexical richness in training corpus on machine learning
Sanchit Garg, Aashish Saini and Nitika Khanna (HMR Institute of Technology & Management, India)

Social media is exploding with data that, if processed and analyzed in a timely fashion, can help derive an optimal marketing strategy, engage with an audience on the fly, and protect a reputation from smear campaigns. Digital marketing analysts and data scientists rely on social media analytics tools to deduce customer sentiment from countless opinions and reviews. While numerous attempts have been made to improve their accuracy, we still know surprisingly little about how accurate their results are. We present an unbiased study of users' tweets and of methods that leverage the available tools and technologies for opinion mining. Our prime focus is on improving the consistency of the text classifiers used for linguistic analysis. We also measure the impact of the lexical richness of the sample data on the trained algorithm. This paper attempts to improve the reliability of the sentiment classification process through the creation of a custom vote classifier using natural language processing techniques and various machine learning algorithms.

ICACCI--26.29 Performance Evaluation of Different Color Models Used in Color Iris Authentication
Abhilasha Sandipan Narote (Smt. Kashibai NAvale College Of Engineering, University of Pune, India); Laxman Madhavrao Waghmare (SGGS College of Engineering and Technology, SRMTU Nanded, India)

This paper presents an iris authentication system using different color models that is able to cope with the various types of noise in color iris images. The experiments reveal that the HSV and YIQ color spaces are optimal for iris authentication compared with other color spaces. The proposed method is validated on the publicly available UBIRIS noisy iris database. Using color histogram processing and fusion at the matching-score level, the method achieves a classification accuracy of 92.1%, an equal error rate of 0.072, and a computational time of 0.039 s. It has better accuracy than related works and performs well under noisy conditions.

ICACCI--26.30 An E-business Chatbot using AIML and LSA
Thomas N T (Amrita University & Amrita Vishwa Vidyapeetham, India)

A successful e-business model needs cooperation and support in all its aspects, from production to service. E-business has completely changed the way products are sold; e-commerce is a type of e-business model that does business mostly over the internet. The major drawback in this field is the limited quality of customer service. In every e-business model, customers have to wait a long time for a response from a customer service representative; in live chat especially, agents talk to multiple customers at a time, and responses may not be relevant because agents copy and paste pre-written answers. The slow responses and long waits for a service agent are the biggest headaches in online services. As a solution to this problem, we propose a chatbot that automatically gives immediate responses to users based on a data set of Frequently Asked Questions (FAQs), using the Artificial Intelligence Markup Language (AIML) and Latent Semantic Analysis (LSA). Template-based questions such as greetings and general questions are answered using AIML, while other service-related questions use LSA to generate responses.
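The retrieval step behind such FAQ matching can be sketched with plain bag-of-words cosine similarity (a simplification: LSA would first project the vectors into a reduced semantic space via SVD; the FAQ entries below are hypothetical):

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

faqs = {
    "How do I track my order?": "You can track your order from the Orders page.",
    "What is the return policy?": "Items can be returned within 30 days.",
}
query = Counter("where can i track the order".split())
# answer with the FAQ whose question vector is most similar to the query
best = max(faqs, key=lambda q: cosine(query, Counter(q.lower().split())))
print(faqs[best])
```

A production system would also apply tokenization, stop-word removal, and the SVD projection before comparing vectors.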

ICACCI--26.31 Classification of Mammogram Images by using CNN Classifier
Ketan Sharma (Chandigarh University, Mohali, India)

Classification of breast tissue into benign and malignant classes is a difficult task for computerized algorithms and trained radiologists alike. The experimental results are obtained from a data set of 40 images of different types taken from MIAS. We extract GLCM, GLDM, and geometrical features from the mammogram images. In this paper we apply a convolutional neural network (CNN) as a classifier on the mammogram images to improve the accuracy of CAD, and we use Receiver Operating Characteristic (ROC) curves to measure the performance of the different classifiers. In the training stage, the proposed method achieved an overall classification accuracy of 73%, with 71.5% sensitivity and 73.5% specificity, for dense tissue, and an accuracy of 79.23%, with 73.25% sensitivity and 74.5% specificity, for fatty tissue. The CNN classifier boosts classification performance: it is more accurate than the other classifiers and has a lower rate of misclassifying normal mammograms as abnormal. The approach also performs well on the overlapping-tissue problem, and overlapped tissues are likewise detected by this classifier. This method is fundamentally different from approaches that identify normal mammograms by detecting cancers.

ICACCI--26.32 A novel technique for LED dot-matrix text detection and recognition for non-uniform color system
Vandana Jhatwal (Chandigarh University, India)

Nowadays, LED dot-matrix displays play an increasing role in many application areas for showing messages and content. These messages consist of characters, each displayed by a matrix with a particular number of rows and columns. LED text is very hard to detect because it is discontinuous. This paper proposes a method to solve the problem of LED text detection and recognition. It addresses the non-uniform color system by first defining the region of the LED board using rectangular region extraction and then applying gray scaling to it. With the single-character extraction method, even a single character can be detected.

ICACCI--26.33 An Integrated Speech Processing Method Utilizing General Kalman Filter
Vijay Kiran Battula and Appala naidu Gottapu (University College of Engineering Vizianagaram, JNTU KAKINADA, India)

Speech processing has two major applications: speech enhancement, which deals with extracting clean speech from a noisy signal, and speaker identification, the process of providing authentication based on human speech. Since speech is a non-stationary signal, many adaptive algorithms have been developed for these applications; one highly popular algorithm with many variants is the Kalman filter. The purpose of the present work is to study the use of the General Kalman Filter (GKF), a form derived from the Kalman filter, together with the Expectation Maximization (EM) algorithm. The first step is to understand the GKF and EM algorithms, the second is to use them for speech enhancement, and the third is to use them in the preprocessing step of speaker identification, which shows their significance. Simulation results show that the combination of GKF and EM provides better speech enhancement and acts as a more efficient preprocessing step for speaker identification than the conventional method.

ICACCI--26.34 Vehicle Trajectory Prediction using a Catadioptric Omnidirectional Camera
Vigneshram Krishnamoorthy (National Institute of Technology Tiruchirappalli, India); Saksham Agarwal (Indian Institute of Technology Kanpur, India); K. s. Venkatesh (Indian Institute of Technology, Kanpur, India)

A practical method is presented to predict the future spatio-temporal trajectories of multiple vehicles at road intersections in real time using a catadioptric omnidirectional camera equipped with an equiangular mirror. Tracking is done using the CamShift algorithm running alongside a Kalman filter to handle occlusions. A geometrical model transforms the tracked objects' locations and velocities from image space to the real world. A computationally efficient model for trajectory prediction is presented along with the experimental results obtained using it. Applications such as collision prediction and vehicle tracking, or monitoring any other event of interest using a dual-camera system, are also discussed briefly.

ICACCI--26.35 ECG Signal Analysis using Wavelet Coherence and S-Transform for Classification of Cardiovascular Diseases
Saksham Agarwal (Indian Institute of Technology Kanpur, India); Vigneshram Krishnamoorthy (National Institute of Technology Tiruchirappalli, India); Sawon Pratiher (Indian Institute Of Technology Kharagpur, India)

The spontaneous classification of cardiovascular diseases is a challenging task and can be made more feasible with proper analysis of ECG fluctuations. In this paper we perform a qualitative analysis of ECG data using complex Gaussian wavelets to investigate multi-scale, self-similar behaviour and its deviations via phase plots of the wavelet cross spectrum of ECG signals. We further analyze the ECG signals using the S-transform to overcome the limitations of the continuous wavelet transform and make the results more consistent and reliable. The results obtained are promising, and the inferences drawn to aid disease classification from ECG signals are also discussed.

ICACCI--26.36 Investigation of CO-OFDM system using VCSEL for long reach systems employing symmetrical dispersion compensation modules
Garima Chouhan (Chandigarh University, India)

Orthogonal frequency division multiplexing (OFDM) is a technology comprising laser modulation and data transmission over optical fiber. The major purpose of this study is to evaluate VCSEL performance in coherent OFDM systems. The performance of the OFDM system has been analyzed for an optimized VCSEL and a conventional VCSEL, with the two lasers studied at distances varying from 60 km to 540 km. The improved and conventional VCSELs are compared in terms of signal-to-noise ratio at 100 Gbps, and the error vector magnitude of the improved VCSEL is analyzed against the conventional laser. The constellation error is smaller at shorter distances, where signals deviate less from their ideal positions; increasing the distance causes more deviation of the signal placement from its correct slot, so errors occur at longer distances.

ICACCI--26.37 Sensible Approach for Soil Fertility Management Using GIS Cloud
Leena H u (Siddaganga Institute of Technology, India); Premasudha BG (Siddaganga Institute of Technology, Tumkur, Karnataka, India)

GIS-embedded cloud storage of spatial data plays a significant role in today's agriculture. As an alternative to the traditional method of soil fertility management, decision making using a GIS (Geographic Information System) Cloud is a novel idea for e-governance and m-governance. In recent years, many researchers have studied the importance of cloud computing in various sectors for storing large amounts of data, and of GIS as a separate concept for viewing spatial maps. The main objective of this study is to suggest a complete Decision Support System (DSS) using GIS-enabled cloud technologies to administer the agricultural data required for soil fertility management based on criteria such as crop, season, soil type, and seed variety. The proposed DSS includes the development of geo-referenced soil fertility maps showing the distributions of soil nutrients and their spatial variability, in order to provide fertilizer recommendations using Soil Test Crop Response (STCR) equations in a targeted-yield approach for different crops in India.

Saturday, September 24

Saturday, September 24 10:30 - 11:45 (Asia/Kolkata)

ICACCI-34: ICACCI-34: Late Breaking Results Posters

Room: Lawns(Academic Area)
Chair: Shweta Pandey (The LNM Institute of Information Technology (LNMIIT), India)
ICACCI-34.1 Performance Analysis of Detection Techniques of Sink Hole Attack and QOS for MANET using AODV
Rakhi Khandelwal (GWEC Ajmer, India); Sandeep Kumar Gupta (Malaviya National Institute of Technology, India); Pankaj Sharma (Rajasthan Technical University, India); Shubhlakshmi Agrwal (ICFAI University, India)

Wireless mobile ad-hoc networks are quite vulnerable to many security-compromising attacks because of their open deployment architecture. These attacks include wormhole attacks, message replay or tampering, identity spoofing, black hole attacks, eavesdropping, and so on. In a sinkhole attack, a malicious node tries to attract the network's data packets by advertising fake routing information; the attack can then be used to drop data packets and alter routing information. Because ad-hoc networks can be exploited through their routing protocol design, methods are needed to make MANET routing protocols resistant to sinkhole attacks. This research presents an individual trust managing technique to prevent sinkhole attacks. The sinkhole attack is implemented over AODV to analyze its effects on network performance as mobility and the probability of attack increase, and a detection technique for the effective detection and removal of the attacker node is analyzed. The analysis is simulated using the NS3 network simulator. The prevention technique is significantly successful in handling the attack, restoring network performance and reducing the effect of the attack.

ICACCI-34.2 A hybrid technique for LED dot-matrix text recognition
Vandana Jhatwal (Chandigarh University, India)

Nowadays, LED dot-matrix displays play an increasing role in many application areas for showing messages and content. These messages consist of characters; a single character is displayed by a matrix with a particular number of rows and columns, and any message on an LED display board is shown by combining characters. The message shown by the LEDs is the LED text, which is very hard to detect because of its discontinuity. This paper proposes a method to solve the problem of LED text detection and recognition. The method first extracts the board region from a natural image: the input image is converted to gray scale and then segmented to extract the rectangular region. With the single-character extraction method, even a single character can be detected. The method recognizes characters with 88.57% accuracy.

ICACCI-34.3 Data Throughput Gains in LTE Advanced over LTE for FDD Networks
Gautam Thakur (BITS Pilani Hyderabad Campus, India)

LTE networks are being planned for upgrade to LTE Advanced along with the increasing availability of Release 10 UEs and of spectrum for carrier aggregation. LTE Advanced brings a major step increase in downlink and uplink throughputs and the related spectral efficiencies. This paper explains the evolution of FDD throughputs for different channel bandwidths. The maximum theoretical throughputs, and the expected actual throughputs after removing the overheads of the broadcast, signaling, synchronization, and random access channels, are explained together with an impact analysis. Throughput gains for upcoming LTE Advanced deployments have been computed for the FDD frame structure at channel bandwidths of 1.4 MHz, 3 MHz, 5 MHz, 10 MHz, 15 MHz, and 20 MHz, with carrier aggregation of 2 and 5 component carriers. The impact of increased signaling channels, retransmissions, and code rate on throughput is also explained.

ICACCI-34.4 Hybrid IWT-DCT Image Compression Technique
Sandeep Kumar Gupta (Malaviya National Institute of Technology, India); Deeksha Choudhary (Govt Engineering College for Woman, India); Meeta Sharma (Govt Engineering College for Woman, Ajmer, India); Shubhlakshmi Agrwal (ICFAI University, India)

Image compression reduces the size of an image so that it can be stored in less disk space and transmitted faster in communication. The research issues in image compression are increasing the quality of the decompressed image at higher compression ratios and robustness against visual attacks. Discrete wavelet transform (DWT) based image compression is a lossy technique; its disadvantage is the fractional loss in embedding, which increases the mean square error and thereby decreases PSNR, and the quality of the decompressed image is proportional to PSNR. The proposed approach uses the integer wavelet transform to overcome this fractional loss. The paper presents a hybrid integer wavelet transform (IWT) and discrete cosine transform (DCT) based compression technique that yields higher decompressed-image quality than the DWT+DCT based technique. The combined IWT+DCT compression reduces the fractional loss compared with DWT-based compression, resulting in better decompressed-image quality at high compression ratios.
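The lossless property that motivates replacing the DWT with an IWT can be illustrated with one level of the integer Haar transform via lifting (a generic sketch, not the paper's exact transform; the pixel values are hypothetical):

```python
def iwt_haar(x):
    """One level of the integer Haar wavelet transform via lifting.
    Integer arithmetic only, so the inverse reconstructs exactly."""
    approx, detail = [], []
    for i in range(0, len(x), 2):
        d = x[i + 1] - x[i]   # predict step: difference
        a = x[i] + (d >> 1)   # update step: floor-average
        approx.append(a)
        detail.append(d)
    return approx, detail

def iwt_haar_inverse(approx, detail):
    x = []
    for a, d in zip(approx, detail):
        lo = a - (d >> 1)
        x.extend([lo, lo + d])
    return x

pixels = [10, 12, 14, 200, 50, 48, 7, 9]
a, d = iwt_haar(pixels)
assert iwt_haar_inverse(a, d) == pixels  # exact round trip, no fractional loss
```

Because every step uses integers, there is no rounding residue to accumulate, which is exactly the fractional loss that floating-point DWT coefficients introduce.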

ICACCI-34.5 Security Enhancement in OCDMA System Using Multi-code Keying and Data Randomisers
Upinderjit Kaur (Chandigarh University, Punjab, India)

In this work, a highly secure OCDMA system is presented, consisting of a data randomiser, an XOR gate, and a cipher-text key generator. A new technique is demonstrated for designing a data randomiser in the optical domain using inexpensive components. Confidentiality is increased by using inexpensive, all-optical components to generate keys from the coded signal, generally referred to as multi-code key transfer, together with a data scrambler. The XOR gate is based on SOA amplifiers, whereas reported works rely on the phase-sensitive SOA-MZI. To evaluate performance, the data sequence 10001110011 from the CMUX is fed into the scrambler. A secure transmitter is designed to prevent signal interception by an eavesdropper. The randomiser is placed after the cipher text to scramble bits and remove runs of three or more consecutive 0's and 1's, enhancing performance and security. Its basic model is based on the XOR operation and bit shifting: it removes long runs by flipping bits. An advantage of the randomiser is that it can be used with both continuous and digital data, whereas encryption can only be applied to digital data. An eavesdropper finds it difficult to tap the correct data, obtaining only shuffled data, so the security of OCDMA is enhanced.

ICACCI-34.6 Non-invasive Estimate of Blood Glucose Level: Using Photoplethysmograph and Neural Network
Sucheta Kalunge, Rajesh Ghondade and Shraddha Habbu (Vishwakarma Institute of Information Technology, Pune, India)

Glucose monitoring and regulation are important for diabetic patients, but all commercially available systems are invasive. This paper describes a non-invasive technique for blood glucose level estimation that uses photoplethysmography (PPG) and an artificial neural network (ANN). A photoplethysmograph detects the change in blood volume with each cardiac beat. Features extracted from the photoplethysmograph waveform are used to train the artificial neural network, which establishes a relation between those features and the actual blood glucose level. A pilot dataset of 100 samples covering all age groups, both diabetic and non-diabetic, was created. Promising results are achieved; the results report the correlation factor between the target and the estimated values.

ICACCI-34.7 Secured LSB modification using Dual Randomness
Pavit Sapra and Himanshu Mittal (Jaypee Institute of Information Technology, India)

Steganography is used to communicate confidential information across covert channels, as it is less prone to attacks than conventional cryptography, and it is increasingly applied to images and audio files. LSB modification is the most popular method for steganography in audio files; however, it is vulnerable to steganalysis. In this paper, a novel Secured LSB Modification using Dual Randomness method is presented, in which RSA, the XOR operation, and the most significant bits are used for embedding the secret bits in each sample. The performance of the proposed method has been compared with the Enhanced LSB method and tested on 8-bit audio samples of different sizes. The experimental results validate that the proposed method offers enhanced capacity, robustness, and transparency.
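The plain LSB embedding step that the paper hardens with RSA and XOR randomness can be sketched as follows (an unsecured baseline for illustration only; the sample and bit values are hypothetical):

```python
def embed_bits(samples, bits):
    """Replace the least significant bit of each 8-bit sample with a secret bit."""
    return [(s & ~1) | b for s, b in zip(samples, bits)] + samples[len(bits):]

def extract_bits(samples, n):
    """Read back the first n embedded bits."""
    return [s & 1 for s in samples[:n]]

cover = [200, 131, 54, 77, 90, 65]  # 8-bit audio samples
secret = [1, 0, 1, 1]
stego = embed_bits(cover, secret)
assert extract_bits(stego, 4) == secret
assert all(abs(a - b) <= 1 for a, b in zip(cover, stego))  # distortion <= 1 LSB
```

The fixed, predictable embedding order shown here is what makes plain LSB open to steganalysis; the paper's dual-randomness scheme is meant to remove that predictability.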

ICACCI-34.8 Phishing Page Detection Tool: Shield from Cyber Frauds
Gaurav Saraswat (Guru Gobind Singh Indraprastha University & Maharaja Agrasen Institute of Technology, India); Varun Garg (Guru Gobind Singh Indraprastha University, India)

Phishing is one of the greatest frauds occurring in areas of high economic growth such as India. Although large companies are aware of the risk and take preventive measures, small and medium companies are not able to adopt security practices; security tools are expensive, and the losses due to internet fraud keep increasing. We therefore present a phishing detection tool that can minimize the risk of phishing through simple yet effective filters developed through careful research.

ICACCI-34.9 A Poly-Character Substitution based Enhancement In Vigenere Cipher
Dhananjay Radhanpura (Rajasthan Technical University, India)

Cryptography helps solve the major problem of data security. In the current era, with much important data transferred from one end of the world to the other, encryption techniques are needed that can encrypt any message with a stronger key. The proposed method enhances the Vigenere cipher technique. The current Vigenere cipher uses only alphabetic characters for the key and the plain text, which prevents encryption of the full UTF-8 character set. The method presented here removes this limitation so that all UTF-8 characters can be encrypted using a strong key. The implementation is done using the Java programming language.
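The proposed extension can be sketched by shifting Unicode code points instead of alphabet positions (shown in Python for brevity, though the paper's implementation is in Java; the key string and modulus choice are illustrative assumptions):

```python
def vigenere(text, key, decrypt=False):
    """Vigenere extended to arbitrary Unicode: shift each character's
    code point by the matching key character's code point, modulo the
    size of the Unicode code space."""
    MOD = 0x110000  # number of Unicode code points
    sign = -1 if decrypt else 1
    out = []
    for i, ch in enumerate(text):
        k = ord(key[i % len(key)])
        out.append(chr((ord(ch) + sign * k) % MOD))
    return "".join(out)

msg = "Hello, Vigenere! 123"
enc = vigenere(msg, "sEcret#Key9")
assert vigenere(enc, "sEcret#Key9", decrypt=True) == msg
```

The key can now mix letters, digits, and symbols, so the effective key space is far larger than with a letters-only Vigenere key.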

ICACCI-34.10 Dental Extraction & Matching using Contour Algorithm on JPG and DICOM Images for Human Identification
Deven Trivedi (C u shah University, India); Nimit Shah (C u Shah University, India); Ashish Kothari (C u shah University, India)

In this paper, dental radiographs in JPG and DICOM formats are collected from authenticated sources, and two identical images are taken to be matched against all images. The proposed algorithm generates matching patterns from adjacent pixels: the differences between neighboring pixel values are taken as matching points, and after applying this to the entire image matrix, the points are connected and a pattern is generated. The pattern's coordinate values are then matched, using the Euclidean distance, against all other images processed by the same procedure. From the results, the FAR (False Acceptance Rate) and FRR (False Rejection Rate) are derived, and the optimum value of the EER (Equal Error Rate) is determined. A GUI for the whole workflow has also been prepared, so that when a practical application is needed, the software can be connected directly to any security system.
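The EER computation from swept FAR/FRR values can be sketched as follows (the rate curves below are hypothetical, not the paper's measurements):

```python
def equal_error_rate(thresholds, far, frr):
    """Pick the threshold where FAR and FRR are closest; report the
    midpoint of the two rates there as the (approximate) EER."""
    i = min(range(len(thresholds)), key=lambda j: abs(far[j] - frr[j]))
    return thresholds[i], (far[i] + frr[i]) / 2.0

# hypothetical rates swept over matching-score thresholds
t = [0.1, 0.2, 0.3, 0.4, 0.5]
far = [0.40, 0.25, 0.12, 0.05, 0.01]  # falls as the threshold tightens
frr = [0.01, 0.04, 0.10, 0.22, 0.35]  # rises as the threshold tightens
best_t, eer = equal_error_rate(t, far, frr)
print(best_t, eer)
```

With a finer threshold sweep, or interpolation between adjacent points, the crossing point (and thus the EER) can be located more precisely.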

ICACCI-34.11 32 bit reconfigurable RISC processor design for BETA ISA with inbuilt Matrix Multiplier using Verilog HDL
Raj Singh (Bangalore Institute of Technology, India); Ankit Vashishtha and Krishna R. (Bangalore Institute Technology, India)

A 32-bit reconfigurable RISC processor design is proposed in this paper. The design is based on the BETA Instruction Set Architecture introduced by MIT, USA, which offers a concise set of instructions for high-speed computing on general-purpose RISC processors. In the proposed design, a new non-pipelined, reconfigurable data path provides built-in matrix multiplication functionality in addition to the BETA ISA. The incorporated matrix multiplier makes this processor a strong option for DSP applications with signal and image processing requirements. A von Neumann architecture is followed for the memory implementation, with separate 32-bit address and data lines, so the processor can support up to 4 GB of external memory. The design has thirty 32-bit internal general-purpose registers for direct instruction fetch. It is implemented in Verilog HDL, and synthesis and functional verification have been carried out on a Xilinx Virtex-6 using the Xilinx tool suite.

ICACCI-34.12 Performance Analysis of Detection Technique for Select Forwarding Attack on WSN
Divya Acharya (Govt. Women Engineering College, Ajmer, India); Pankaj Sharma (Rajasthan Technical University, India)

A wireless sensor network (WSN) measures environmental data at remote locations, processes large volumes of data, and sends them to a central location called the base station. WSN applications are important for collecting data from remote locations where permanent infrastructure is not possible, such as military applications, environmental monitoring, weather prediction, and humidity measurement. Each sensor node is battery operated, and batteries are difficult to replace or recharge in remote areas, so routing protocols for WSNs should be as energy efficient as possible. WSNs are also quite vulnerable to many security-compromising attacks, such as wormhole attacks, message replay or tampering, identity spoofing, black hole attacks, and eavesdropping. One impact of the selective forwarding attack is that it can be used to drop some of the data packets. LEACH (Low Energy Adaptive Clustering Hierarchy) uses randomized cluster rotation to distribute the energy load among all sensor nodes. In this paper, the selective forwarding attack is created, detected, and removed on LEACH in heterogeneous WSNs, and the effect on network performance of varying parameters such as the energy level and the number of malicious nodes is analyzed. The performance of LEACH is evaluated in terms of packet delivery ratio under the selective forwarding attack and after the detection and removal of the malicious node. The analysis is simulated using the NS2 network simulator. The prevention technique is significantly successful in handling the attack, restoring network performance and reducing the effect of the attack.

ICACCI-34.13 Application Of COSA In Subspace Clustering For High Dimensional Data
Kahkashan Kouser (Birla Institute Of Technology Mesra, India)

Clustering high-dimensional data is a crucial task because of the inherent sparsity of the points. In conventional clustering, every dimension is equally weighted when computing the distance between points. Most of these algorithms perform well on low-dimensional data sets; in higher-dimensional feature spaces, however, their efficiency and effectiveness deteriorate considerably because of the high dimensionality. Subspace clustering is an extension of conventional clustering that seeks to find clusters in distinct subspaces within a dataset. In high-dimensional data, many dimensions are irrelevant and may mask existing clusters in noisy data. Feature selection eliminates irrelevant and redundant dimensions by analyzing the entire dataset, whereas subspace clustering algorithms localize the search for relevant dimensions, allowing them to find clusters that exist in multiple, possibly overlapping subspaces.

ICACCI-34.14 Application of Random Forest Algorithm on Feature Subset Selection and Classification and Regression
Jitendra Jaiswal (VIT University Vellore, India); Rita Sammikannu (VIT University, India)

Feature subset selection becomes quite important for data sets that contain a large number of variables. It discards insignificant variables and produces efficient, improved prediction performance on the class variable, which is more cost effective and yields a more reliable understanding of the data. Random forest has emerged as an efficient and robust algorithm that can handle the feature selection problem even with a large number of variables. It is also very efficient for missing-data imputation, classification, and regression problems, and it handles outliers and noisy data well. In this paper we apply the random forest algorithm to feature subset selection, classification, and regression, in order to study it comparatively from these different perspectives.

ICACCI-34.15 Low-Cost Soil Moisture Sensors and their Application in Automatic Irrigation System
Suruchi Chawla (Shaheed Rajguru College, India); Shakshi Bachhety (University of Delhi, India); Veni Gupta (DELHI University, India); Shalu Sharma (Shaheed Rajguru College, India); Shivani Seth and Tanya Gandhi (University of Delhi, India); Sheetal Varshney (Shaheed Rajguru College of Applied Sciences for Women, India); Saloni Mehta and Ruchika Jha (University of Delhi, India); Amita Kapoor (University of Delhi & Shaheed Rajguru College of Applied Sciences for Women, India)

This paper presents the design of a low-cost soil moisture sensor and its application in an automatic irrigation system. We develop self-made capacitive sensors using resources available at home and interface them with a microcontroller. An algorithm is developed to detect threshold moisture levels and control the water inflow for efficient use of water. This system can thus replace human labor and save water while ensuring that plants get an optimum level of water, thereby increasing crop productivity.
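
The threshold-based control loop described above can be sketched as follows. The moisture thresholds and the hysteresis band are hypothetical placeholders, not the authors' actual hardware values.

```python
# Minimal sketch of a threshold-based irrigation control loop.
# Thresholds are assumed values for illustration only.

DRY_THRESHOLD = 30.0   # % moisture below which watering starts (assumed)
WET_THRESHOLD = 55.0   # % moisture above which watering stops (assumed)

def control_step(moisture, pump_on):
    """One control step: hysteresis keeps the pump from rapidly toggling."""
    if moisture < DRY_THRESHOLD:
        return True          # soil too dry: open the valve
    if moisture > WET_THRESHOLD:
        return False         # soil wet enough: close the valve
    return pump_on           # in between: keep the previous state

# Simulated sequence of sensor readings (% moisture)
readings = [45.0, 28.0, 33.0, 50.0, 58.0, 40.0]
pump = False
states = []
for m in readings:
    pump = control_step(m, pump)
    states.append(pump)
```

The two-threshold hysteresis is what lets the pump run until the soil is properly wet instead of chattering around a single set-point.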

Wednesday, September 21

Wednesday, September 21 14:30 - 18:30 (Asia/Kolkata)

ISTA-01: ISTA- Image Processing and Artificial Vision/Applications Using Intelligent Techniques (Regular Papers)

Room: LT-8(Academic Area)
Chairs: Sovanlal Mukherjee (The LNM Institute of Information Technology, India), Joyeeta Singha (The LNMIIT, India)
ISTA-01.1 14:30 Face Recognition in Videos Using Gabor Filters
Swapnil Tathe (Bhivarabai Sawant College of Engineering & Research, Pune); Abhilasha Sandipan Narote (Smt. Kashibai Navale College of Engineering, University of Pune, India); Sandipann Pralhad Narote (Government Residence Women Polytechnic, Tasgaon, Sangli, India)

Advances in computer technology have made it possible to develop new video processing applications in the field of biometric recognition, including face detection and recognition integrated into surveillance systems, gesture analysis, etc. The first step in any face analysis system is near real-time detection of faces in sequential frames that contain faces along with complex background objects. In this paper a system is proposed for human face detection and recognition in videos, with efforts made to minimize the processing time of the detection and recognition stages. To reduce human intervention and increase overall efficiency, the system is segregated into three stages: motion detection, face detection, and recognition. Motion detection reduces the search area and the processing complexity of the system. Face detection is achieved in near real-time using Haar features, and recognition is done using Gabor feature matching.
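
The motion-detection stage that restricts the face-search region can be sketched with simple frame differencing. Frames here are flat lists of grayscale values; the Haar and Gabor stages are not shown, and the threshold is an assumed value.

```python
# Frame differencing: find the bounding box of pixels that changed
# between two frames, so later stages only search inside that box.

def motion_mask(prev_frame, frame, threshold=25):
    """Binary mask of pixels whose intensity changed by more than threshold."""
    return [abs(a - b) > threshold for a, b in zip(prev_frame, frame)]

def search_region(mask, width):
    """Bounding box (min_row, max_row, min_col, max_col) of moving pixels."""
    coords = [(i // width, i % width) for i, m in enumerate(mask) if m]
    if not coords:
        return None
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return (min(rows), max(rows), min(cols), max(cols))

# 4x4 frames: only one pixel changes significantly
prev = [10] * 16
curr = list(prev)
curr[5] = 200          # motion at row 1, col 1
box = search_region(motion_mask(prev, curr), width=4)
```

Restricting detection to this box is what reduces the per-frame processing cost that the abstract emphasizes.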

ISTA-01.2 14:45 MRI/CT Image Fusion Using Gabor Texture Features
Hema P Menon (Sreepathy Institute of Management and Technology, India); Narayanankutty Kotheneth K a (Amrita School of Engineering, India)

Image fusion has been extensively used in the field of medical imaging by medical practitioners for the analysis of images. The aim of image fusion is to combine information from different images into a single fused output without introducing artifacts. For images with rich textural properties, fusion is more effective if it preserves all the textures contained in the individual images. With this objective in mind, we propose the use of Gabor filters for texture analysis, since the filter parameters can be tuned according to the textures present in the corresponding images. Fusion is performed on the individual textural components of the two input images, and the fused texture images are then combined to obtain the final fused image. The fused residual image, obtained by combining the residues of the two images, can be added to this result to increase the information content. The approach was tested on MRI and CT images, considering both mono-modal and multi-modal cases, and the results are promising.
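
A minimal real-valued Gabor kernel, the texture analyzer named above, can be generated as below. The parameter values are illustrative; the paper tunes them per image, and this sketch is not the authors' implementation.

```python
import math

def gabor_kernel(size, wavelength, theta, sigma, gamma=0.5):
    """size x size real Gabor kernel: Gaussian envelope times a cosine wave."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates by the filter orientation theta
            xp = x * math.cos(theta) + y * math.sin(theta)
            yp = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-(xp ** 2 + gamma ** 2 * yp ** 2) / (2 * sigma ** 2))
            carrier = math.cos(2 * math.pi * xp / wavelength)
            row.append(envelope * carrier)
        kernel.append(row)
    return kernel

k = gabor_kernel(size=7, wavelength=4.0, theta=0.0, sigma=2.0)
center = k[3][3]   # at the origin: exp(0) * cos(0) = 1.0
```

Convolving an image with a bank of such kernels (varying `theta` and `wavelength`) yields the textural components on which fusion is performed.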

ISTA-01.3 15:00 Enhancement of Dental Digital X-Ray Images Based on the Image Quality
Hema P Menon (Sreepathy Institute of Management and Technology, India); Rajeshwari B (Amrita School of Engineering, Coimbatore, Amrita Vishwa Vidyapeetham, Amrita University, India)

Medical image enhancement has revolutionized the medical field by improving image quality and aiding doctors in their analysis. Among the various modalities available, digital X-rays have been extensively used in medical imaging, especially in dentistry, as they are reliable and affordable. The output scans are examined by practitioners to scrutinize and clarify subtle problems. A computer-automated technology for examining X-ray images would therefore be of great help to practitioners in their diagnosis, and enhancing the visual quality of the image is a prerequisite for such automation. Since image quality is a subjective measure, the choice of enhancement method depends on the image under consideration and the related application. This work aims at developing a system that automates the enhancement process, choosing the enhancement method and its parameters with the help of image statistics (such as mean, variance, and standard deviation). The proposed system also ranks the algorithms in order of visual quality, so that the best enhanced output image can be used for further processing. Such an approach gives practitioners the flexibility to choose the enhanced output of their choice.
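
The statistics-driven selection idea can be sketched as below: compute simple image statistics and pick an enhancement accordingly. The decision thresholds and the two candidate methods are illustrative assumptions, not the authors' actual decision rules or ranking system.

```python
# Pick an enhancement method from image statistics (illustrative sketch).

def image_stats(pixels):
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    return mean, var, var ** 0.5

def choose_enhancement(pixels):
    """Low contrast (small std) -> contrast stretch; dark image -> gamma."""
    mean, _, std = image_stats(pixels)
    if std < 30:
        return "contrast_stretch"
    if mean < 80:
        return "gamma_correction"
    return "none"

def contrast_stretch(pixels, lo=0, hi=255):
    """Linearly rescale intensities to span the full output range."""
    pmin, pmax = min(pixels), max(pixels)
    scale = (hi - lo) / (pmax - pmin) if pmax > pmin else 0
    return [round(lo + (p - pmin) * scale) for p in pixels]

flat = [100, 110, 120, 130]            # low-contrast example
method = choose_enhancement(flat)
stretched = contrast_stretch(flat)      # now spans the full 0..255 range
```

In the paper's system, candidate outputs would additionally be ranked by visual quality; here only the selection step is shown.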

ISTA-01.4 15:15 Classroom Teaching Assessment Based on Student Emotions
Sahla Ks (Amrita Vishwa Vidyapeetham, India); Senthilkumar Thangavel (Amrita School of Engineering, India)

Classroom teaching assessments are designed to give useful feedback on the teaching-learning process as it is happening. The best classroom assessments also serve as meaningful sources of information for teachers, helping them identify what they taught well and what they need to work on. In this paper, we propose a deep learning method for emotion analysis that focuses on students in a classroom and interprets their facial emotions. The methodology includes a preprocessing phase in which face detection is performed, LBP encoding and mapping of LBPs using deep convolutional neural networks, and finally emotion prediction.

ISTA-01.5 15:30 Development of KBS for CAD Modeling of a Two Wheeler IC Engine Connecting Rod: An Approach
Jayakiran Reddy Esanakula (Sri Padmavati Mahila Visvavidyalayam & Tirupati, India); Cnv Sridhar (Annamacharya Institute of Technology and Sciences, Rajampet, India); V Pandu Rangadu (JNTUA College of Engineering, India)

Conventional CAD modeling of a connecting rod is time-consuming because of its complex geometry. A small modification in the shape or size of an IC engine assembly causes a considerable chain reaction in the geometry of the connecting rod, requiring alteration of the CAD model because of various interrelated design issues. Consequently, the CAD model of the connecting rod must be altered to match the modifications of the engine or connecting rod. Advanced CAD modeling techniques such as parametric modeling offer solutions to these issues. This paper introduces a knowledge-based system for quick CAD modeling of a two-wheeler IC engine connecting rod using the commercially available CAD package SolidWorks and its API. An inference engine and a relevant GUI are developed within the CAD software to assist design engineers. The developed system is an engineering application that reuses design knowledge.

ISTA-01.6 15:45 Development of KBS for CAD Modeling of Industrial Battery Stack and Its Configuration: An Approach
Jayakiran Reddy Esanakula (Sri Padmavati Mahila Visvavidyalayam & Tirupati, India); Cnv Sridhar (Annamacharya Institute of Technology and Sciences, Rajampet, India); V Pandu Rangadu (JNTUA College of Engineering, India)

Conventional CAD modeling of industrial battery stacks is time-consuming because of the complex geometry and the non-availability of geometry standards. A small change in the electrical power backup required by the customer causes a considerable chain reaction in the geometry of the battery stack, requiring alteration of the CAD model because of various interrelated design issues. Advanced CAD modeling techniques such as parametric modeling offer solutions to these issues. This paper introduces a knowledge-based system for developing the CAD model of an industrial battery stack and its configuration while reducing CAD modeling time. An inference engine and a relevant GUI are developed within the CAD software to assist design engineers. The developed system is an engineering application that reuses design knowledge.

ISTA-01.7 16:00 A Color Image Segmentation Scheme for Extracting Foreground from Images with Unconstrained Lighting Conditions
Niyas S and Reshma P (Indian Institute of Information Technology and Management-Kerala, India); Sabu M Thampi (Kerala University of Digital Sciences, Innovation and Technology (KUDSIT), India)

Segmentation plays a fundamental role in most image processing operations, and in applications such as object recognition systems its efficiency must be assured. Most existing segmentation techniques fail to filter shadows and reflections from the image, and their computation time is too high for real-time applications. This paper proposes a novel method for efficient, unsupervised segmentation of foreground objects from a non-uniform image background. With this approach, false detections due to shadows, reflections from light sources, and other noise components can be easily avoided. The algorithm applies adaptive thresholding followed by a series of morphological operations on a low-resolution downsampled image, so the computational overhead can be minimized to a desired level. The segmentation mask thus obtained is then upsampled and applied to the full-resolution image, making the proposed technique well suited for batch segmentation of high-resolution images.
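
The downsample-threshold-upsample pipeline can be sketched as below. Morphological cleanup and the adaptive threshold estimation are omitted for brevity, and the fixed threshold is an assumed stand-in.

```python
# Threshold a downsampled image, then upsample the mask to full resolution.

def downsample(img, factor):
    """Average-pool a 2-D grayscale image (list of lists) by `factor`."""
    h, w = len(img), len(img[0])
    out = []
    for r in range(0, h, factor):
        row = []
        for c in range(0, w, factor):
            block = [img[r + i][c + j] for i in range(factor) for j in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

def threshold_mask(img, t):
    return [[1 if p > t else 0 for p in row] for row in img]

def upsample_mask(mask, factor):
    """Nearest-neighbour upsampling of the low-resolution mask."""
    out = []
    for row in mask:
        wide = [v for v in row for _ in range(factor)]
        out.extend([wide] * factor)
    return out

img = [[10, 10, 200, 200],
       [10, 10, 200, 200],
       [10, 10, 10, 10],
       [10, 10, 10, 10]]
small = downsample(img, 2)                       # 2x2 image of block averages
mask = upsample_mask(threshold_mask(small, 100), 2)
```

Because thresholding and morphology run on the small image, the per-pixel cost drops by roughly `factor**2`, which is the computational saving the abstract claims.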

ISTA-02: ISTA- Intelligent Tools and Techniques (Short Papers)

Room: LT-10 (Academic Area)
Chair: Prasheel V. Suryawanshi (MIT Academy of Engineering, Alandi (D), Pune, India)
ISTA-02.1 14:30 Inverse Prediction of Critical Parameters in Orthogonal Cutting Using Binary Genetic Algorithm
Ranjan Das (Indian Institute of Technology Ropar, India)

An inverse problem is solved for concurrently estimating the rake angle, the chip thickness ratio, and the required cutting width of an orthogonal cutting tool subjected to a prescribed force constraint. The force components, which can be obtained experimentally by mounting suitable dynamometers or force transducers on a machine tool, are calculated here by solving a forward problem. Due to the inherent complexity of computing gradients, a genetic algorithm-based evolutionary optimization method is used in the present study. The results of the inverse problem have been compared with those of the forward problem, and a good estimation of the unknowns is observed. The study is expected to be useful for deciding on the relevant cutting tool parameters and adjusting the cutting process so that the cutting tool works within its dynamic limits.
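
A minimal binary genetic algorithm of the kind used above is sketched below. The fitness function is a stand-in (maximize the number of ones); the paper's objective involves cutting-force equations not reproduced here.

```python
import random

random.seed(1)
N_BITS, POP, GENS = 16, 30, 60

def fitness(bits):
    return sum(bits)                      # toy "one-max" objective

def select(pop):
    """Binary tournament selection."""
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    """Single-point crossover."""
    cut = random.randrange(1, N_BITS)
    return p1[:cut] + p2[cut:]

def mutate(bits, rate=0.02):
    """Flip each bit independently with probability `rate`."""
    return [b ^ 1 if random.random() < rate else b for b in bits]

pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
for _ in range(GENS):
    pop = [mutate(crossover(select(pop), select(pop))) for _ in range(POP)]

best = max(pop, key=fitness)
```

In the inverse problem, the bit string would encode the rake angle, chip thickness ratio, and cutting width, and fitness would measure the mismatch with the prescribed force components.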

ISTA-02.2 14:42 Lattice Wave Digital Filter Based IIR System Identification with Reduced Coefficients
Akanksha Sondhi and Richa Barsainya (Netaji Subhas Institute of Technology, India); Tarun Rawat (Netaji Subhas Institute of Technology (NSIT), India)

The purpose of this paper is to identify unknown IIR systems using a reduced-order adaptive lattice wave digital filter (LWDF). The LWDF structure is utilized for the system identification problem because it models the system with a minimal number of coefficients, low sensitivity, and robustness. The modelling technique is based on minimizing the error cost function between the higher-order unknown system and the reduced-order identifying system. Two optimization algorithms, the genetic algorithm (GA) and the gravitational search algorithm (GSA), are utilized for parameter estimation. Through examples, it is shown that the LWDF offers various advantages in system identification, such as a minimum number of coefficients and low mean square error (MSE), variance, and standard deviation. The results demonstrate that the LWDF structure achieves better system identification performance than the adaptive canonic filter structure.

ISTA-02.3 14:54 Robust Control of Buck-Boost Converter in Energy Harvester: A Linear Disturbance Observer Approach
Aniket D Gundecha (Savitribai Phule Pune University & MIT Academy of Engineering, India); Vinaya Gohokar (Maharashtra Institute of Technology, India); Kaliprasad Mahapatro (MIT Academy of Engineering, India); Prasheel V. Suryawanshi (MIT Academy of Engineering, Alandi (D), Pune, India)

An ingenious control of a DC-DC buck-boost converter with uncertain dynamics is proposed in this paper. The converter operates in buck or boost mode based on the uncertain input, from either a photovoltaic cell (boost) or a piezoelectric generator (buck). A linear disturbance observer is designed to alleviate disturbances in the load resistance and input source. The control is synthesized using sliding mode control, and the stability of the system is assured.

ISTA-02.4 15:06 Multi Objective PSO Tuned Fractional Order PID Control of Robotic Manipulator
Himanshu Chhabra (MLVTEC Bhilwara, India); Vijay Mohan (Netaji Subhas Institute of Technology, India); Asha Rani (NSIT, University of Delhi, New Delhi, India); Vijander Singh (Netaji Subhas Institute of Technology, University of Delhi, India)

Designing an efficient control strategy for a robotic manipulator is a challenging task due to the inherent nonlinearity and high coupling present in the system. The aim of this paper is to design a precise tracking controller with minimum control effort for a robotic manipulator. For this purpose a fractional order PID (FOPID) controller is proposed, and multi-objective particle swarm optimization (MOPSO) is used to optimize the parameter values of the FOPID controller. An integer order PID controller is also implemented for comparison. Results show that the proposed controller's robustness in trajectory tracking and to parameter uncertainty is superior to that of the traditional PID controller.

ISTA-02.5 15:18 Book Recommender System Using Fuzzy Linguistic Quantifier and Opinion Mining
Shahab Saquib Sohail and Jamshed Siddiqui (Aligarh Muslim University, India); Rashid Ali (AMU Aligarh, India)

Recommender systems are used extensively to promote various services, products, and facilities of daily life. Owing to the success of this technology, people's reliance on the recommendations of others is increasing at a tremendous pace. One of the easiest ways to acquire the suggestions of like-minded, neighboring customers is to mine their opinions about products and services. In this paper, we present feature-based opinion extraction and analysis from customers' online reviews of books. Ordered Weighted Aggregation (OWA), a well-known fuzzy averaging operator, is used to quantify the scores of the features. Linguistic quantifiers are applied over the extracted features to ensure that the recommended books have maximum coverage of these features. The results of three linguistic quantifiers, 'at least half', 'most', and 'as many as possible', are compared using the evaluation metric precision@5. The results show that the quantifier 'as many as possible' outperforms the others on this metric. The proposed approach should help recommender system designers address users' expectations and their need to find relevant books more effectively.
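
OWA aggregation with the three quantifiers compared above can be sketched using Yager's standard RIM parameterizations. The paper's exact quantifier parameters and feature scores may differ; the values below are illustrative assumptions.

```python
# OWA aggregation driven by regular increasing monotone (RIM) quantifiers.

def quantifier(name):
    """RIM quantifier Q: [0,1] -> [0,1], linear ramp between a and b."""
    params = {"at least half": (0.0, 0.5),
              "most": (0.3, 0.8),
              "as many as possible": (0.5, 1.0)}
    a, b = params[name]
    def Q(r):
        if r <= a:
            return 0.0
        if r >= b:
            return 1.0
        return (r - a) / (b - a)
    return Q

def owa(scores, name):
    """OWA: sort scores descending, weight by w_i = Q(i/n) - Q((i-1)/n)."""
    Q = quantifier(name)
    n = len(scores)
    weights = [Q(i / n) - Q((i - 1) / n) for i in range(1, n + 1)]
    ordered = sorted(scores, reverse=True)
    return sum(w * s for w, s in zip(weights, ordered))

scores = [0.9, 0.4, 0.7, 0.2]   # hypothetical feature scores for one book
half = owa(scores, "at least half")          # emphasizes the top scores
many = owa(scores, "as many as possible")    # emphasizes the low scores
```

'As many as possible' puts weight on the weakest features, so a book only scores well if nearly all features are well covered, matching the coverage goal stated above.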

ISTA-02.6 15:30 Roadmap for Polarity Lexicon Learning and Resources: A Survey
Swati Sanagar (Amrita Vishwa Vidhyapeetham, India); Deepa Gupta (Amrita Vishwa Vidyapeetham, India)

Sentiment analysis opens the door to understanding opinions conveyed in text data, and the polarity lexicon is at the heart of sentiment analysis tasks. Polarity lexicon learning has been explored using multiple techniques over the years. This survey discusses polarity lexicons in two parts. The first part is a literature study that traces techniques of polarity lexicon creation from the earliest to the most recent. The second part reviews available open-source polarity lexicon resources. Open research problems and future directions are also identified. This survey should also be useful for individuals entering this area.

ISTA-02.7 15:42 Factors Affecting Infant Mortality Rate in India: An Analysis of Indian States
Suriyakala Vijayakumar (Amrita University, India); Deepika Manippady Gopalkrishna (Amrita University, Bangalore, India); Amalendu Jyotishi (Amrita University, India); Deepa Gupta (Amrita Vishwa Vidyapeetham, India)

While governments have made considerable efforts to reduce the infant mortality rate (IMR) in developing countries, the results are not as desired, and India is no exception. Identifying the factors that affect infant mortality rates would help in better targeting of programs, leading to enhanced efficiency. Earlier studies at the global level have shown the influence of socioeconomic factors on infant mortality rates, finding that variables such as fertility rate, national income, women in the labour force, expenditure on health care, and female literacy rates influence IMR. The current study, using data from all states and Union Territories of India for the years 2001 and 2011, tries to establish the relationship between the infant mortality rate and some of the above-mentioned factors along with a few healthcare infrastructure-related variables. Using regression analysis, we not only identify the influence of these variables on infant mortality but go a step further in assessing the performance of states and union territories in reducing IMR. Performance was measured using 'technical efficiency' analysis; we then compared performance and the growth rate of IMR to classify states as good performers or laggards. Our results suggest that most of the major states are on track in their performance on IMR. However, a few small states and union territories such as the Andaman and Nicobar Islands, Mizoram, Arunachal Pradesh, and Jammu & Kashmir need special attention and targeting to reduce IMR.

ISTA-02.8 15:54 The Use of Simulation in the Management of Converter Production Logistics Processes
Konstantin Aksyonov and Anna Antonova (Ural Federal University, Russia)

This paper considers the application of a simulation model to the management of converter production logistics processes. The simulation model has been developed to determine the optimal time interval for delivering melts to the converters. The goal of the optimization is to find a time interval that minimizes both the waiting time for service on a continuous casting machine and the downtime of the machine, since downtime influences resource consumption and the amount of harmful emissions into the atmosphere. The simulation model has been developed in the simulation module of the metallurgical enterprise information system, which supports multi-agent simulation. Agents in the developed model describe the slab-cutting algorithm used by technologists in metallurgical production. As a result of a series of experiments with the model, the best time interval between deliveries of melts to the converters has been found to be 20 minutes.

ISTA-02.9 16:06 Discrete Sliding Mode Control Using Uncertainty and Disturbance Estimator
Prasheel V. Suryawanshi (MIT Academy of Engineering, Alandi (D), Pune, India); Pramod Shendge (College of Engineering Pune, India); Shrivijay B. Phadke (College of Engineering, Pune, India)

This paper presents the design and validation of a delta-operator based discrete sliding mode control (DSMC) algorithm for uncertain systems. A unifying sliding condition is used, and the control is designed for model-following. The control law is synthesized by estimating states and uncertainties using an uncertainty and disturbance estimator (UDE). The UDE used in combination with SMC makes it possible to use a smooth control without having to employ a smoothing approximation. A notable feature of the proposed control is that it affords control over the magnitude of quasi-sliding for a given sampling period. The stability of the overall system is proved using the Lyapunov criterion, and the efficacy of the control design is validated on a benchmark motion control problem.

ISTA-02.10 16:18 An Overview of Feature Based Opinion Mining
Rita Kamble (University of Pune, India); Avinash Golande (University of Pune & Rajashri Shahu College of Engineering, India); Sandhya Waghere (University of Pune, India)

Before the advent of Web 2.0, people could only view information; now they can also publish information on the Web in the form of comments and reviews. This user-generated content has forced organizations to pay attention to analyzing it for better visualization of public opinion. Opinion mining, or sentiment analysis, is an autonomous text analysis and summarization system for reviews available on the Web. It aims to identify and distinguish the emotions and expressions within reviews, classify them as positive or negative, and summarize them in a form that users can quickly understand. Feature-based opinion mining performs fine-grained analysis by recognizing the individual features of an object on which a user has expressed an opinion. This paper surveys the various methods proposed in the area of feature-based opinion mining and also discusses the limitations of existing work and future directions.

ISTA-02.11 16:30 A Survey of Brain MRI Image Segmentation Methods and the Issues Involved
Reshma Hiralal (Amrita Vishwa Vidyapeetham, Amrita University, India); Hema P Menon (Sreepathy Institute of Management and Technology, India)

This paper presents a survey of existing methods for segmentation of brain MRI images. Segmentation of brain MRI images is widely used as a preprocessing step in projects involving analysis and automation in the field of medical image processing. MRI image segmentation is a challenging task because of the similarity between different tissue structures in the brain image; moreover, the number of homogeneous regions present in an image varies with the image slice and orientation. The selection of an appropriate segmentation method therefore depends on the image characteristics. This study has been carried out with the aim of enabling the selection of a segmentation method for brain MRI images, and the survey is categorized by the techniques used in segmentation.

Thursday, September 22

Thursday, September 22 14:30 - 17:30 (Asia/Kolkata)

ISTA-03: ISTA- Intelligent Distributed Computing (Regular Papers)

Room: LT-7(Academic Area)
Chairs: Matthew Adigun (YES, South Africa), Purnendu Karmakar (The LNM Institute of Information Technology, India)
ISTA-03.1 14:30 A New Discrete Imperialist Competitive Algorithm for QoS-aware Service Composition in Cloud Computing
Fateh Seghir (Intelligent Systems Laboratory, Faculty of Technology, Sétif 1 University, Algeria); Abdellah Khababa (University of Ferhat Abbas Setif, France); Jaafar Gaber and Abderrahim Chariete (UTBM, France); Pascal Lorenz (University of Haute Alsace, France)

In this paper, an effective Discrete Imperialist Competitive Algorithm (DICA) is proposed to solve the QoS-aware cloud service composition problem, which is known to be an NP-hard combinatorial problem. To improve the global exploration ability of DICA, a new discrete assimilation policy process is proposed, inspired by the solution search equation of the Artificial Bee Colony (ABC) algorithm; unlike the assimilation strategy of the original ICA, colonies move toward their imperialists by integrating information from other colonies during the move. To enhance the local exploitation of DICA and accelerate its convergence, the proposed assimilation process is also applied among imperialists. The performance of DICA is evaluated by comparison with other recent algorithms, and the obtained results show its effectiveness.

ISTA-03.2 14:45 Cluster Based Approach to Cache Oblivious Average Filter Using RMI
Manmeet Kaur (Guru Nanak Dev Engineering College, Ludhiana, India); Akshay Girdhar (Guru Nanak Dev Engineering College, Ludhiana, India); Sachin Bagga (Guru Nanak Dev Engineering College, Ludhiana, India)

Parallel execution harnesses the power of multiple systems simultaneously and is thus an efficient approach for handling complex problems and producing results in less execution time. This paper presents the implementation of a cache-oblivious algorithm for de-noising corrupted images using a parallel processing approach. In the present era there is a need to work with large images, and sequential execution results in long execution times and ultimately degraded performance. This paper implements the algorithm on distributed objects in a cluster using RMI and exploits multithreading to extend distributed parallel processing.

ISTA-03.3 15:00 Enhanced User Authentication Model in Cloud Computing Security
Kimaya Ambekar (K. J. SIMSR, India); Kamatchi R (Amity University, Mumbai, India)

The rate of technological advancement across the globe has increased rapidly in the last decade, with substantial progress in areas such as information technology and communication technology as well as in their applications, such as virtualization and utility computing. These advancements have led to the conceptualization of cloud computing, which delivers a variety of services on a pay-per-use model. Increased security breaches are the main hindrance to wider use of cloud computing in the business sector. Various security measures are available to provide a personalized security framework based on business needs. This paper proposes a completely new security model using a VPN to provide secure authentication to users. The first section discusses the various characteristics of cloud computing and its extended support for the business model; the final section proposes an advanced security model using a VPN and analyses its impact on the cloud computing system.

ISTA-03.4 15:15 Smart Feeding in Farming Through IoT in Silos
Himanshu Agrawal (Indian Institute of Technology Jodhpur, India); Javier Prieto (University of Salamanca & AIR Institute, Spain); Juan M. Corchado (BISITE Research Group University of Salamanca & Air Institute, Spain); Carlos Ramos (Polytechnic Institute of Porto, Portugal)

Smart farming practices are of utmost importance for any economy to foster its growth and development, tackle problems like hunger and food insecurity, and ensure the well-being of its citizens. However, such practices usually require large investments that are not affordable for SMEs. Such is the case of expensive weighing machines for silos, whereas the range of possibilities of the Internet of Things (IoT) could drastically reduce these costs while connecting the data to intelligent cloud services, such as smart feeding systems. The paper presents a novel IoT device and methodology to monitor the quantity and quality of grain in a silo by estimating the volume of grain at different time instants, along with the temperature and humidity in the silo. A smart feeding system, implemented via a virtual organization of agents, processes the data and regulates the grain provided to the animals. Experimental on-field measurements at a rabbit farm show the suitability of the proposed system for reducing waste as well as animal diseases and mortality.

ISTA-04: ISTA - Applications using Intelligent Techniques (Short Papers)

Room: LT-8(Academic Area)
Chair: Narasimha Bolloju (LNMIIT, India)
ISTA-04.1 14:30 A Multimodel Approach for Schizophrenia Diagnosis Using fMRI and sMRI Dataset
Chandra Prakash (NIT Delhi, India); Achin Varshney, Namita Mittal and Pushpendra Singh (MNIT Jaipur, India)

Schizophrenia is an acute psychotic disorder, reflected in unusual social conduct. The exact cause of schizophrenia is still unknown, and at present there is no established clinical diagnostic test for it. Studies suggest that imbalances in brain chemistry, brain cells, environment, and genetics contribute to the disease, and diagnosis relies on external observation of behavioral symptoms. Healthcare specialists use functional magnetic resonance imaging (fMRI) to identify schizophrenia patients by comparing their brain activation patterns with those of normal subjects. This paper presents a novel approach to a cognitive state classifier for schizophrenia: a multivariate fusion model combining Functional Network Connectivity (FNC) and Source Based Morphometry (SBM) is used as a feature reduction technique on multiple data types (fMRI and sMRI datasets).

ISTA-04.2 14:42 Investigation of Effect of Butanol Addition on Cyclic Variability in a Diesel Engine Using Wavelets
Rakesh Kumar Maurya and Mohit Raj Saxena (Indian Institute of Technology Ropar, India)

This study focuses on the experimental investigation of cyclic variations of the maximum cylinder pressure (Pmax) in a stationary diesel engine using the continuous wavelet transform. Experiments were performed on a stationary diesel engine at a constant speed (1500 rpm) for low, medium, and high engine load conditions with neat diesel and butanol/diesel blends (10%, 20%, and 30% butanol by volume). In-cylinder pressure data were recorded for 2000 consecutive engine operating cycles to investigate cyclic variability. The results indicate that variations in Pmax are highest at the low load condition and decrease with an increase in engine load. The global wavelet spectrum (GWS) power decreases with an increase in engine operating load, indicating a decrease in cyclic variability with load. The results also reveal that lower cyclic variations are obtained with butanol/diesel blends than with neat diesel.

ISTA-04.3 14:54 Intelligent Energy Conservation: Indoor Temperature Forecasting with Extreme Learning Machine
Sachin Kumar (University of Delhi, India); Saibal K. Pal (DRDO, India); Ram Pal Singh (DDUC, Delhi University, India)

Energy efficiency is becoming more important every day because of the problems created by global warming and rising environmental temperatures. The buildings around us, which house our offices and residential activities, consume a great amount of energy, and much of this energy is wasted without being noticed due to long-term negligence. Most buildings currently use heating, ventilation and air conditioning (HVAC) systems, which are responsible for a large share of energy consumption. Home automation techniques are being used to reduce the waste of resources, especially energy, in its various forms such as heat, electricity, water, and sunlight. Forecasting the future demand for energy can help maintain and reduce the energy costs of buildings. In this paper, we use data from the Small Medium Large system (SMLsystem), a house built at the University CEU Cardinal Herrera (CEU-UCH) for the Solar Decathlon 2013 competition. With these data, we predict and forecast future indoor temperature conditions for the development of an energy conservation system. We develop a model based on the Extreme Learning Machine (ELM) to forecast the indoor temperature from a set of attributes, which can help determine the energy needs of buildings and their rooms and further aid the efficient utilization and conservation of energy.
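
A bare-bones sketch of the ELM idea used above: hidden-layer weights are random and fixed, and only the output weights are solved in closed form (here via regularized normal equations and Gaussian elimination). The data are a toy sine curve, not the SMLsystem dataset, and all sizes are illustrative.

```python
import math, random

random.seed(2)
HIDDEN = 20

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small system A x = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def hidden(x, W, b):
    """Sigmoid hidden-layer outputs for a scalar input x."""
    return [1.0 / (1.0 + math.exp(-(w * x + bi))) for w, bi in zip(W, b)]

def train_elm(xs, ys, lam=1e-4):
    # Random, fixed input weights and biases (the defining trait of an ELM)
    W = [random.uniform(-5, 5) for _ in range(HIDDEN)]
    b = [random.uniform(-5, 5) for _ in range(HIDDEN)]
    H = [hidden(x, W, b) for x in xs]
    # Output weights: beta = (H^T H + lam I)^-1 H^T y
    HtH = [[sum(H[k][i] * H[k][j] for k in range(len(H))) + (lam if i == j else 0)
            for j in range(HIDDEN)] for i in range(HIDDEN)]
    Hty = [sum(H[k][i] * ys[k] for k in range(len(H))) for i in range(HIDDEN)]
    beta = solve(HtH, Hty)
    return lambda x: sum(bb * h for bb, h in zip(beta, hidden(x, W, b)))

# Toy "indoor temperature" curve: learn y = sin(x) on [0, 3]
xs = [i * 0.1 for i in range(31)]
ys = [math.sin(x) for x in xs]
model = train_elm(xs, ys)
err = max(abs(model(x) - math.sin(x)) for x in xs)
```

Because only the linear output layer is trained, fitting is a single linear solve, which is what makes ELM attractive for fast forecasting tasks like the one described.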

ISTA-04.4 15:06 Development of Real Time Helmet Based Authentication with Smart Dashboard for Two Wheelers
Ashish Kumar Pardeshi (Centre for Development of Advance Computing, India); Hitesh Pahuja (Centre for Development of Advance Computing); Balwinder Singh (Centre for Development of Advanced Computing Mohali, India)

Nowadays, wearing a helmet is mandatory in many countries for safety purposes: in an accident, a helmet may save a rider's life, yet most people still avoid wearing one. To encourage riders to wear helmets and to provide smart features to two-wheelers, this work proposes a system that authorizes engine ignition only when the rider wears a helmet, and provides a smart dashboard on the two-wheeler that communicates with the helmet when it is worn. Communication between the helmet and the smart dashboard is established over Bluetooth, providing a secure and reliable link between the two. A network of limit switches with a suitable mechanical assembly installed inside the helmet detects when the helmet is worn. The smart dashboard has a graphical touch screen, enabling a Graphical User Interface (GUI) for features such as maintaining the bike's service records, automatic headlight control through an LDR (Light Dependent Resistor), and One Time Password (OTP) input for two-wheeler access in the absence of the helmet. The smart dashboard is also equipped with GPS and GSM to locate the vehicle's position through a missed call in case of loss or theft. During an emergency, the rider can also reach the contacts list in the dashboard via a specific button on the GUI.

ISTA-04.5 15:18 A Simplified Exposition of Sparsity Inducing Penalty Functions for Denoising
Shivkaran Singh Sokhey, Sachin Kumar S and Soman K P (Amrita Vishwa Vidyapeetham, India)

This paper attempts to provide a pedagogical view of the approach to denoising using non-convex regularization developed by Ankit Parekh et al. The present paper proposes a simplified signal denoising approach that explicitly uses the sub-band matrices of the decimated wavelet transform matrix. The objective function involves convex and non-convex terms, in which the convexity of the overall function is controlled by the parameterized non-convex term. The solution to this convex optimization problem is obtained by employing the Majorization-Minimization iterative algorithm. For experimentation, we utilized different wavelet filters such as Daubechies, Coiflets and reverse biorthogonal.
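As background, the sketch below shows the classical convex special case of such sub-band denoising — soft-thresholding the high-pass coefficients of a one-level orthonormal Haar split — which the paper's parameterized non-convex penalty and MM iterations generalize. The signal, noise level, and threshold here are illustrative assumptions:

```python
import numpy as np

def soft_threshold(v, lam):
    """Proximal operator of lam*||.||_1 — the shrinkage step that MM schemes generalize."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def haar_denoise(y, lam):
    """One-level orthonormal Haar split; shrink only the detail (high-pass) sub-band."""
    approx = (y[0::2] + y[1::2]) / np.sqrt(2)   # low-pass sub-band
    detail = (y[0::2] - y[1::2]) / np.sqrt(2)   # high-pass sub-band
    detail = soft_threshold(detail, lam)        # kill small, noise-dominated coefficients
    out = np.empty_like(y)
    out[0::2] = (approx + detail) / np.sqrt(2)  # exact inverse transform
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

rng = np.random.default_rng(1)
clean = np.repeat([0.0, 1.0, 0.0, 2.0], 64)     # piecewise-constant test signal
noisy = clean + 0.3 * rng.normal(size=clean.size)
denoised = haar_denoise(noisy, lam=0.5)
```

The non-convex penalties discussed in the paper replace the soft-threshold rule with a less biased shrinkage while keeping the overall cost convex.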

ISTA-04.6 15:30 Design of a Multi-Priority Triage Chair for Crowded Remote Healthcare Centers in Sub-Saharan Africa
Santhi Kumaran (University of Rwanda-CST, Rwanda); Jimmy Nsenga (Self-employed & Jimmy NSENGA, Belgium)

Remote healthcare centers located in sub-Saharan Africa still face a shortage of healthcare practitioners, yielding long queues of patients waiting for several hours. This situation increases the risk that critical patients are consulted too late, motivating the importance of priority-based patient scheduling systems. This paper presents the design of a Multi-Priority Triage Chair (MPTC) to be installed at the entrance of a healthcare center's waiting room: each new patient first sits in the MPTC, which measures his/her vital signs and registers other priority parameters such as arrival time and the travel time or distance between the center and the patient's home. The proposed MPTC then dynamically updates the consultation schedule in order to statistically minimize (i) the number of critical patients not treated within a pre-defined time after their arrival, (ii) the number of patients waiting for more than a pre-defined period, and (iii) the number of patients living relatively far away who get their consultation postponed to another day.
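A minimal sketch of the kind of priority scheduling the MPTC performs, using a heap keyed on a weighted score combining vital-sign severity, waiting time, and travel distance. The weights, score formula, and patient data below are invented for illustration; the paper's actual objective is the statistical minimization described above:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Patient:
    priority: float                       # lower value = consulted sooner
    name: str = field(compare=False)

def triage_score(vitals_severity, waited_min, travel_km):
    """Illustrative weighted urgency; all weights are assumptions, not the paper's."""
    return -(10.0 * vitals_severity + 0.01 * waited_min + 0.01 * travel_km)

queue = []
heapq.heappush(queue, Patient(triage_score(0.9, 10, 2), "critical, just arrived"))
heapq.heappush(queue, Patient(triage_score(0.2, 240, 3), "stable, waiting 4 h"))
heapq.heappush(queue, Patient(triage_score(0.3, 30, 80), "stable, lives 80 km away"))

order = [heapq.heappop(queue).name for _ in range(len(queue))]
```

Re-pushing patients with recomputed scores as their waiting time grows gives the dynamic re-planning behaviour the MPTC targets.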

ISTA-04.7 15:42 Inter-Emotion Conversion Using Dynamic Time Warping and Prosody Imposition
Susmitha Vekkot (Amrita School of Engineering, India); Shikha Tripathi (PES University, India)

The objective of this work is to explore the importance of the parameters contributing to the synthesis of expression in vocal communication. The algorithm discussed in this paper uses a combination of Dynamic Time Warping (DTW) and prosody manipulation to convert emotions into one another, and compares this with neutral-to-emotion conversion using objective and subjective performance indices. Existing explicit control methods are based on prosody modification using neutral speech as a starting point and have not explored the possibility of conversion between inter-related emotions. Also, most previous work relies entirely on perception tests for evaluating speech quality after synthesis. In this paper, the objective comparison in terms of error percentage is verified against forced-choice perception test results. Both indicate that inter-emotion conversion yields speech of better quality, as also shown by the synthesis results and spectrograms.
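For reference, the DTW building block the algorithm relies on is the classic dynamic-programming alignment; a minimal sketch, applied here to generic 1-D sequences rather than actual prosody contours:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(n*m) dynamic-programming DTW with the unit step pattern."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])            # local distance
            D[i, j] = cost + min(D[i - 1, j],          # insertion
                                 D[i, j - 1],          # deletion
                                 D[i - 1, j - 1])      # match
    return D[n, m]

# a time-stretched copy of a contour aligns far better than a sign-flipped one
x = np.sin(np.linspace(0, 2 * np.pi, 40))
y = np.sin(np.linspace(0, 2 * np.pi, 60))              # same shape, longer duration
d_same = dtw_distance(x, y)
d_flip = dtw_distance(x, -y)
```

This tolerance to duration differences is what makes DTW suitable for aligning source and target emotional utterances before prosody imposition.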

ISTA-04.8 15:54 A Personalized Social Network Based Cross Domain Recommender System
Sharu Vinayak and Richa Sharma (Chandigarh University, India); Rahul Singh (Thapar, India)

In the last few years, recommender systems have become one of the most popular research fields. Although various new algorithms have been introduced over time to improve recommendations, some areas in this field still need attention. Cross-domain recommendation and the combination of recommender systems with social networks are two research challenges that need to be explored further. In this paper we propose a novel idea for making recommendations in one domain using information from another domain, extracted from a popular social networking site.

ISTA-04.9 16:06 Fuzzy Based Autonomous Parallel Parking Challenges in Real Time Scenario
Naitik Nakrani (Einfochips, India); Maulin Joshi (Gujarat Technological University & Sarvajanik College of Engineering and Technology, India)

Fuzzy-based automation in the automobile industry has attracted many researchers in recent years for its ability to adopt human-like expertise. In this paper, the combination of fuzzy-based navigation and the parallel parking problem is discussed. The main objective of this paper is to highlight and explore the challenges present in parking systems. Different forward and reverse parking situations are considered in parallel parking. Simulations are also provided to support and validate the identified challenges.

ISTA-04.10 16:18 Application of a Hybrid Relation Extraction Framework for Intelligent Natural Language Processing
Lavika Goel (BITS Pilani, India); Rashi Khandelwal (Birla Institute of Technology and Science (BITS) Pilani, India); Eloy Retamino and Suraj Nair (TUM CREATE, Singapore); Alois Knoll (Technical University Munich Garching, Germany)

When an intelligent system needs to carry out a task, it needs to understand the instructions given by the user. But natural language instructions are unstructured and cannot be resolved by a machine without processing. Hence Natural Language Processing (NLP) needs to be performed by extracting relations between the words in the input sentences. As a result, the input is structured in the form of relations, which are then stored in the system's knowledge base. In this domain, two main kinds of extraction techniques have been developed and exploited: rule-based and machine-learning-based. These approaches have been used separately for text classification, data mining, etc. However, progress still needs to be made in the field of information extraction from human instructions. The work presented here combines both approaches into a hybrid algorithm and applies it to the domain of human-robot interaction. The approach first uses rules and patterns to extract candidate relations. It then uses a machine learning classifier, a Support Vector Machine (SVM), to learn and identify the correct relations. The algorithm is validated against a standard text corpus taken from the RoCKIn transcriptions, and the accuracy achieved is shown to be around 91%.

ISTA-04.11 16:30 An Innovative Solution for Effective Enhancement of Total Technical Life (TTL) of an Aircraft
Balachandran A (PRIST University, India); Suresh PR (NSS College of Engineering, India); Shriram K Vasudevan and Akshay Balachandran (Amrita University, India)

Every country in the world spends a great deal of money each year on strengthening its air force. From the budgets framed, it is clear that about 60 to 70 percent of the total money is spent on technology upgrades and maintenance. In civilian aircraft maintenance and upgrades, the cost is no smaller than in military aviation. We have taken this as a challenge and have come up with a solution for enhancing TTL (Total Technical Life) and reducing maintenance cost. We have built an embedded-system-based portable solution for measuring the usage of the aircraft, so that maintenance can be scheduled based on actual usage and utilization, greatly reducing cost. The stress and strain measurements have led us to calculate the cyclic stress that the aircraft undergoes, on the basis of which maintenance can be carried out. The complete implementation has been done, and the results are analyzed along with the challenges that were resolved.

ISTA-04.12 16:42 ANFIS Based Speed Controller for a Direct Torque Controlled Induction Motor Drive
Hadhiq Khan (National Institute of Technology, Srinagar, India); Shoeb Hussain and Mohammad Abid Bazaz (National Institute of Technology Srinagar, India)

This paper presents a neuro-fuzzy adaptive controller for speed control of a three-phase direct torque controlled induction motor drive. The Direct Torque Control (DTC) scheme is one of the most advanced methods for controlling the flux and electromagnetic torque of machines. Control of electromagnetic torque/speed in these drives for high-performance applications requires a highly robust and adaptive controller. The Adaptive Neuro-Fuzzy Inference System (ANFIS) is a hybrid between Artificial Neural Networks (ANN) and Fuzzy Logic Control (FLC) that enhances the performance of direct torque controlled drives and overcomes the difficulties in the physical implementation of high-performance drives.

A MATLAB/SIMULINK implementation of a 15 hp, 50 Hz, 4-pole squirrel cage induction motor controlled with the DTC scheme is presented in this paper. The PI controller used for speed control in conventional DTC drives is replaced by the ANFIS-based controller. Simulation results show that the use of ANFIS decreases the response time along with a reduction in torque ripple.

ISTA-04.13 16:54 Sensorless Control of PMSM Drive with Neural Network Observer Using a Modified SVPWM Strategy
Shoeb Hussain and Mohammad Abid Bazaz (National Institute of Technology Srinagar, India)

In this paper, a sensorless controlled PMSM drive is presented with a neural network designed for speed and position estimation of the motor. A multi-level inverter (MLI) is operated using a modified space vector modulation (SVPWM) strategy for the generation of 5-level and 7-level voltages. The sensorless control estimates speed and position from calculations based on measured current and voltage. The difficulty with sensorless state estimation arises from motor parameter variation and from distortions in current and voltage. The neural network observer deals with the issue of motor parameter variation, while the MLI improves estimation by reducing current distortion, which is further improved by the proposed SVPWM. The proposed scheme uses fewer switching states, thus reducing power loss compared to the conventional scheme. Simulation is carried out in MATLAB on a PMSM drive in order to test the performance of the drive.

ISTA-04.14 17:06 Genetic Algorithm Based Suggestion Approach for State Wise Crop Production in India
Saakshi Gusain (Manipal University); Kunal Kansal (Manipal University, India); Tribikram Pradhan (Manipal University & MIT, India)

Agriculture is considered the backbone of the Indian economy. Despite the tremendous increase in industrialization and advancement in technology since independence, a majority of India's population depends on agriculture. Agriculture in India is not uniform throughout, since India has a wide range of weather conditions varying across a vast geographic scale and varied topography. In this paper, rule-based classification is applied to classify the states based on the amount of rainfall, and the classification results are validated using the Random Forest algorithm. We have also found the major crops that can be grown in a state using a proposed algorithm that analyses soil, temperature and rainfall data. Due to time constraints and non-optimal usage of land resources, cultivating all the suggested crops is not possible. Therefore, a Genetic Algorithm is applied to give the best possible suggestion for crop cultivation across the various states of India. Our proposed method will thus help maximize the overall agricultural production of India.
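A toy sketch of how a genetic algorithm can pick a crop subset under a land constraint — bitstring individuals, truncation selection, one-point crossover, and bit-flip mutation. The crop names, profit values, and land figures are invented; the paper's actual data and fitness function differ:

```python
import random
random.seed(7)

# hypothetical per-crop (profit, land_needed); total land available = 10 units
crops = {"rice": (6, 4), "wheat": (5, 3), "cotton": (8, 6), "pulses": (3, 2), "maize": (4, 3)}
names = list(crops)
LAND = 10

def fitness(bits):
    """Total profit of the selected crops; infeasible plans score zero."""
    profit = sum(crops[n][0] for n, b in zip(names, bits) if b)
    land = sum(crops[n][1] for n, b in zip(names, bits) if b)
    return profit if land <= LAND else 0

def evolve(pop_size=30, gens=60):
    pop = [[random.randint(0, 1) for _ in names] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(names))     # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(len(names))          # bit-flip mutation
            child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

With these invented figures the optimum plan (rice + wheat + maize, profit 15 on exactly 10 land units) is easily found; real state-wise data simply changes the fitness table.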

ISTA-04.15 17:18 Precision Capacitance Readout Electronics for Micro Sensors Using Programmable System on Chip
Abilash Anandan (SSN College of Engineering, India); S Radha (SSN College of Engineering & Anna University, India)

Several techniques are used for measuring capacitance over various ranges, but producing an accurate, simple and low-error system remains difficult. In this paper, a simple capacitance measurement system with low stray capacitance and offset error is designed and implemented on the Programmable System on Chip (PSoC 3) development kit CY8C3866-AXI040. The principle used is the linear charging and discharging of the capacitor, which generates an oscillation whose frequency is inversely proportional to the capacitance value. The board is interfaced with a laboratory workstation running LabVIEW via serial communication (UART) for online data acquisition and monitoring. The embedded readout system developed measures capacitance values ranging from nF to pF, and the results are discussed. The system measures the capacitance of a fabricated capacitive microsensor in its static mode, which is in the pF range, and can also measure the values of commercially available capacitors.
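Since the oscillation frequency is inversely proportional to the capacitance, the readout reduces to calibrating one constant from a reference capacitor; a minimal sketch with invented example numbers (the real constant depends on the charging resistor and comparator thresholds on the board):

```python
def calibrate(f_ref_hz, c_ref_farad):
    """Frequency is inversely proportional to C: f = 1 / (k * C), so k = 1 / (f_ref * C_ref)."""
    return 1.0 / (f_ref_hz * c_ref_farad)

def capacitance(f_hz, k):
    """Invert the relation to recover the capacitance from a measured frequency."""
    return 1.0 / (k * f_hz)

# hypothetical numbers: a 100 pF reference oscillates at 10 kHz on this board
k = calibrate(10e3, 100e-12)
c_unknown = capacitance(25e3, k)     # a faster oscillation implies a smaller capacitor
```

A single reference measurement therefore absorbs the stray capacitance and threshold effects into `k`, which is the usual way such relaxation-oscillator readouts are linearized.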

Friday, September 23

Friday, September 23 14:30 - 18:30 (Asia/Kolkata)

ISTA-05: ISTA- Intelligent Tools and Techniques (Regular Papers)

Room: LT-11(Mechatronics Dept.)
Chair: Vikas Bajpai (The LNM IIT, India)
ISTA-05.1 14:30 Neuro-Fuzzy Approach for Dynamic Content Generation
Amol Bhagat, Monali Tingane and Mir Ali (P Ram Meghe College of Engineering and Management, Badnera, India); Priti Khodke (Prof, India)

E-learning, across all environments, whether commercial, educational or personal, is most useful when the learning experience is delivered both promptly and contextually. This paper presents a neuro-fuzzy-based approach for dynamic content generation. The fast expansion of technology quickly renders recently produced information obsolete. This circumstance leads teachers, instructors and academicians to seek ways to use information suitably and efficiently. To accomplish this objective, there should be tools that a broad variety of educators can use quickly and efficiently to generate and distribute information. In this paper, a web-based dynamic content generation system for educators is presented, developed using a neuro-fuzzy approach. By considering a learner's performance, the system provides content matched to the knowledge level of that particular learner, helping individual learners improve their knowledge. Educators can quickly assemble, package and redistribute web-based instructional content, easily import prepackaged content, and conduct their courses online. Students learn in an accessible, adaptive, social learning environment.

ISTA-05.2 14:45 An Area Efficient Built-In Redundancy Analysis for Embedded Memory with Selectable 1-D Redundancy
Srirama Murthy Gurugubelli (VFSTR University & SoCtronics Technologies Pvt. Ltd., India); Darvinder Singh (Industry, India); Sadulla Shaik (VFSTR University, India)

In this paper, a novel redundancy mechanism for dual-port embedded SRAM is presented. This work concerns a 1-D (one-dimensional) bit-oriented redundancy algorithm for increasing reliability and yield during the manufacture of memory integrated circuit chips. It focuses on the efficient use of the limited space available for fuse-activated redundant circuitry on such chips and, more particularly, on replacing multiple faulty memory locations using one independent redundancy element per sub-array while reducing the number of defect-signaling fuses. In this way we double the yield and repair rate with a 0.5% area penalty.

ISTA-05.3 15:00 Soft Computing Technique Based Online Identification and Control of Dynamical Systems
Rajesh Kumar (Netaji Subhas Institute of Technology, New Delhi, University Of Delhi, India); Smriti Srivastava and Jai Ram Prasad Gupta (Netaji Subhas Institute of Technology, India)

This paper proposes a scheme for online identification and indirect adaptive control of dynamical systems based on an intelligent radial basis function network (RBFN). The need for intelligent control techniques arises because conventional control methods like PID fail to perform when there is nonlinearity in the system or when the system is affected by parameter variations and disturbance signals. To show the effectiveness of the proposed scheme, the mathematical models of the dynamical systems considered in this paper were assumed to be unknown. Most real-world systems are highly complex and precise mathematical descriptions of them are not available, which makes their control more difficult; these factors laid the foundation for control schemes based on intelligent tools. One such scheme, based on the RBFN, is presented in this paper. A key part of the scheme is the selection of inputs for the controller: in the proposed scheme, the inputs to the controller were taken to be past values of the plant's and the controller's outputs along with the externally applied input. A separate RBFN identification model was also set up to operate in parallel with the controller and plant. A simulation study was performed on two dynamical systems, and the results show that the proposed scheme provided satisfactory online control and identification under the effects of both parameter variations and disturbance signals.
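A minimal sketch of the RBFN identification idea — Gaussian basis activations with output weights fitted by least squares — on an invented static nonlinearity. The paper's scheme is adaptive and online; this batch fit only illustrates the network structure:

```python
import numpy as np

rng = np.random.default_rng(2)

def rbf_features(X, centers, width):
    """Gaussian radial basis activations of each sample at each centre."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

# identify an "unknown" static nonlinearity from noisy input/output data
X = rng.uniform(-2, 2, size=(300, 1))
y = np.sin(2 * X[:, 0]) + 0.05 * rng.normal(size=300)   # invented plant map + noise

centers = np.linspace(-2, 2, 15)[:, None]               # fixed grid of RBF centres
Phi = rbf_features(X, centers, width=0.4)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)             # output weights by least squares
y_hat = Phi @ w
```

In the online setting the same linear-in-the-weights structure is what allows recursive (e.g. gradient or recursive-least-squares) weight updates as new plant data arrives.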

ISTA-05.4 15:15 A Comprehensive Review on Software Reliability Growth Model Utilizing Soft Computing Approaches
Shailee Lohmor (New Delhi Institute Of Management, India)

Software Reliability Engineering is an area that grew out of the reliability disciplines of electrical, structural, and hardware engineering. Reliability models are the most prevalent tools of software reliability engineering for estimating, predicting, measuring, and assessing the reliability of software. Software reliability is the probability of failure-free operation of a software system for a specified period in a specified environment. To obtain solutions to problems accurately, quickly and affordably, a large number of soft computing approaches have been established; however, it is extremely difficult to determine which among them is the best one that can be exploited everywhere. In this paper, we present a wide survey of existing soft computing methodologies and then systematically examine the work done by various researchers in the area of software reliability, comparing soft computing approaches with respect to their capabilities for software reliability modeling.

ISTA-05.5 15:30 The Use of Biometrics to Prevent Identity Theft
Syed Rizvi, Cory Reger and Aaron Zuchelli (Pennsylvania State University, USA)

This paper investigates identity theft and how to effectively use biometric technology in order to prevent it. Over the past few years, identity theft has become one of the most serious financial threats to corporations in the United States. Identity theft occurs when a thief steals sensitive information in order to engage in large financial transactions. When a thief successfully steals an identity, he or she may take out large loans and make purchases in the victim's name. Identity theft allows thieves to pose as the victim, so that all of the thief's actions are attributed to the victim. Thieves can damage corporations by compromising corporate credit cards as well as filing documents to change the legal address of the victim's company. Throughout the past decade, corporations have been using variations of biometric technology to prevent identity theft. This paper presents a new scheme for corporate identity theft prevention using biometric technology. Specifically, we develop a biometric-based authentication system consisting of encryption and decryption processes. To show the practicality of our proposed scheme, an attacker-centric threat model is created.

ISTA-05.6 15:45 Diagnosis of Liver Disease Using Correlation Distance Metric Based K-Nearest Neighbor Approach
Aman Singh (Lovely Professional University, India); Babita Pandey (LPU, India)

Mining meaningful information from huge medical datasets is a key aspect of automated disease diagnosis. In recent years, liver disease has emerged as one of the most commonly occurring diseases worldwide. In this study, a correlation distance metric and nearest-rule based k-nearest neighbor approach is presented as an effective prediction model for liver disease. The intelligent classification algorithms employed on the liver patient dataset are linear discriminant analysis (LDA), diagonal linear discriminant analysis (DLDA), quadratic discriminant analysis (QDA), diagonal quadratic discriminant analysis (DQDA), least squares support vector machine (LSSVM) and k-nearest neighbor (KNN) based approaches. The k-fold cross validation method is used to validate the performance of the mentioned classifiers. It is observed that the KNN-based approaches are superior to all other classifiers in terms of attained accuracy, sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) rates. Furthermore, KNN with the correlation distance metric and nearest-rule based machine learning approach emerged as the best predictive model with the highest diagnostic accuracy. In particular, the proposed model attained remarkable sensitivity by reducing the false negative rate.
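The core of such a predictor — k-nearest neighbours under the correlation distance (one minus Pearson correlation) with a majority vote — can be sketched as follows. The toy feature vectors and labels are invented, not liver-patient data:

```python
import numpy as np

def correlation_distance(u, v):
    """1 - Pearson correlation between two feature vectors."""
    u = u - u.mean()
    v = v - v.mean()
    return 1.0 - (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

def knn_predict(X_train, y_train, x, k=3):
    d = np.array([correlation_distance(x, row) for row in X_train])
    nearest = np.argsort(d)[:k]              # indices of the k closest training samples
    votes = np.bincount(y_train[nearest])
    return votes.argmax()                    # majority vote among the k neighbours

# toy "patients": class 1 profiles rise across the features, class 0 profiles fall
X = np.array([[1, 2, 3, 4], [2, 4, 6, 8], [4, 3, 2, 1], [8, 6, 4, 2]], float)
y = np.array([1, 1, 0, 0])
label = knn_predict(X, y, np.array([1.0, 3.0, 5.0, 7.0]))
```

Because the correlation distance compares the shape of a feature profile rather than its magnitude, it can group patients whose attribute patterns co-vary even when absolute values differ.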

ISTA-05.7 16:00 Implementation of Adaptive Framework and WS Ontology for Improving QoS in Recommendation of WS
S Subbulakshmi (Department of CS, Amrita School of Engineering, Amritapuri, Amrita University, Kollam, Kerala, India); Ramar K (Department of CSE, Einstein College of Engineering, Tirunelveli, India); Renjitha R and Sreedevi T u (Department of CS, Amrita School of Engineering, Amritapuri, Amrita University, Kollam, Kerala, India)

With more and more users accessing the internet for information retrieval, researchers are increasingly focused on creating web service (WS) recommendation systems that minimize the complexity of the selection process and optimize the quality of recommendation. This paper implements a framework for the recommendation of personalized WS coupled with quality optimization, using the quality features available in a WS ontology. It helps users acquire the best recommendation by consuming contextual information and the quality of the WS. The adaptive framework performs (i) the retrieval of context information, (ii) the calculation of similarity between user preferences and WS features, and between the preferred WS and other WS specifications, and (iii) the collaboration of web service ratings provided by the current user and other users. Finally, WS quality features are considered for computing the Quality of Service. The recommendation results reveal the selection of highly reliable web services, as credibility is used for QoS prediction.

ISTA-06: ISTA-Networks/ Distributed Systems (Short Papers)

Room: LT-12 (Mechatronics Dept)
Chair: Sudhanshu S. Gonge (Symbiosis Institute of Technology, Lavale & Symbiosis International University, Pune, India)
ISTA-06.1 14:30 Secure and Efficient User Authentication Using Modified Otway Rees Protocol in Distributed Networks
Krishna Prakasha (Manipal Academy of Higher Education, India); Balachandra Muniyal (Manipal Academy of Higher Education, Manipal, India); Vasundhara Acharya and Akshaya Kulal (Manipal University, India)

An authentication protocol is used for authenticating two communicating entities in order to build a secure communication channel between them for exchanging messages. This authentication is built using an exchange of keys. The existing authentication protocol has flaws in mutual authentication, which our proposal attempts to overcome. Here we combine the Otway-Rees protocol with CHAP (Challenge Handshake Authentication Protocol), which authenticates the communicating entities using a shared key. We also compare symmetric algorithms and choose the best algorithm for encryption.

ISTA-06.2 14:42 Efficient Television Ratings System with Commercial Feedback Applications
Aswin Ts (247 AI India, India); Kartik Mittal and Shriram K Vasudevan (Amrita University, India)

Everyone has noticed inconsistencies in TV ratings, with many popular programmes going off the air due to lack of sponsors, or certain shows getting undue patronage. In the absence of a uniform and transparent system, current rating systems have several flaws that leave them open to manipulation, denying the world the benefits that a transparent ratings system has to offer, such as efficient collection of viewership data and a strong sponsor base to patronize well-received programmes. The purpose of our invention is to relay to TV stations precise data on which shows and programmes are watched and by whom, without any chance of manipulation, while safeguarding private information and biometric data. The proposed smart TV system works as follows: the biometric details stored in the remote device recognize the user and store age and gender details, but not name or location details, for the sake of the user's anonymity. This ensures that neither the user's personal data nor individual viewership data is made available for public use. The proposed system is aimed at eliminating the shortcomings of currently used rating systems, which are bogged down by issues of transparency, insufficient data, and statistical approximation. Data pertaining to the user and the viewed statistics (shows, time viewed) is transmitted to the cable station for purposes of feedback, i.e., showing vital statistics of the individual and the programme watched. This system will be one of the most reliable sources of feedback compared to the existing systems.

ISTA-06.3 14:54 Enhancing Group Search Optimization with Node Similarities for Detecting Communities
Nidhi Arora (University of Delhi, India); Hema Banati (University of Delhi & DYAL SINGH COLLEGE, India)

Recent research in nature-based optimization algorithms is directed towards analysing domain-specific enhancements for improving the optimality of results. This paper proposes an Enhanced Group Search Optimization (E-GSO) algorithm, a variant of the nature-based Group Search Optimization (GSO) algorithm, to detect communities in complex networks with better modularity and convergence. E-GSO enhances GSO by merging node similarities into its basic optimization process to fix the co-occurrence of highly similar nodes. This avoids random variations on fixed node positions, enabling faster convergence to communities with higher modularity values. The communities are thus evolved in an unsupervised manner using an optimized search space. Experimental results on real and synthetic network datasets support the efficacy of the proposed E-GSO algorithm.
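The quantity such community-detection methods optimize, Newman modularity, can be computed directly from the adjacency matrix; a minimal sketch on a toy two-community graph (the graph is invented for illustration):

```python
import numpy as np

def modularity(A, communities):
    """Newman modularity: Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j)."""
    k = A.sum(axis=1)                  # node degrees
    two_m = k.sum()                    # 2 * number of edges
    c = np.asarray(communities)
    same = c[:, None] == c[None, :]    # delta(c_i, c_j): same-community indicator
    return float(((A - np.outer(k, k) / two_m) * same).sum() / two_m)

# two triangles joined by a single bridge edge
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

q_good = modularity(A, [0, 0, 0, 1, 1, 1])   # the natural triangle split
q_bad = modularity(A, [0, 1, 0, 1, 0, 1])    # an arbitrary split
```

An optimizer such as GSO searches over community assignments to maximize this Q, so a higher value on the natural split is exactly the behaviour being exploited.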

ISTA-06.4 15:06 A Literature Survey on Malware and Online Advertisement Hidden Hazards
Priya Jyotiyana (Rajasthan Technical University, India)

Malware is malignant code that propagates over the connected systems in a network. Malvertising is a malicious activity that can distribute malware in different forms through advertising. Advertising is a key source of revenue for various Internet organizations, and large advertisement networks, for example Google, Yahoo and Microsoft, invest a great deal of effort to mitigate malicious advertising in their advertising networks. This paper discusses various types of detection techniques, procedures, and analysis techniques for detecting the malware threat. Malware detection methods are used to detect or identify malicious activities so that malware cannot harm the user's system. The study also covers malicious advertising: it examines the strategies used by adware and spyware in their endeavours to remain resident on the system, and analyses the kinds of information being extracted from the user's system.

ISTA-06.5 15:18 Mining High Utility Itemset Using Graphics Processor
Maya Joshi (Gujarat Technological University, India); Dharmesh Bhalodiya (Gujarat Technical University & B H Gardi VidyaPith, India)

In data mining, association rule mining is one of the most influential tasks. Its many analyses and algorithms provide knowledge that helps investors and marketing managers analyse and predict their market and manage their records, but these procedures are not enough to produce more useful results. Traditional high utility itemset mining algorithms occupy a great deal of space, memory and time for the generation of the candidate list. We present a novel algorithm for high utility itemset mining using a parallelization approach on transaction datasets; a sales manager can use the resulting utility itemsets for historical analysis of data, stock planning and decision making. Our new approach extends the FHM algorithm by attaching a pruning method in HUIM. The implementation is tuned to achieve high performance on a heterogeneous platform consisting of a shared-memory multiprocessor and a many-core NVIDIA Graphics Processing Unit (GPU) coprocessor. An empirical study compares the existing FHM algorithm with the novel algorithm on NVIDIA Kepler GPUs and finds significant improvements in computing time compared to FHM.

ISTA-06.6 15:30 Performance Analysis and Implementation of Array Multiplier Using Various Full Adders
Kunjan Devendra Shinde (PESITM Shivamogga, India); Asha Ananthapadmanabha (PESITM, Shivamogga, India)

Multipliers are significant arithmetic units used in various VLSI and DSP applications. Besides being crucially necessary, multipliers are also a main source of power dissipation; hence, priority must be given to lessening power dissipation in order to satisfy the overall power budget of various digital circuits and systems. For multipliers designed using adders, multiplier performance is directly influenced by the adder cells employed; the power dissipation problem can therefore be addressed by exploring and using better adder designs. In this paper, various full adder designs are analysed in terms of delay, power consumption and area, since the adder block is the prime concern for the array multiplier, in order to propose an efficient multiplier architecture. The design and implementation of the full adder cells and the multiplier is performed in the CADENCE design suite with GPDK 180 nm technology. CMOS, GDI and optimized full adder designs are employed to implement the array multiplier.

ISTA-06.7 15:42 Performance Tuning Approach for Cloud Environment
Rajeev Tiwari (University of Petroleum & Energy Studies, India); Gunjan Lal (University of Petroleum and Energy Studies); Tanya Goel and Varun Tanwar (University of Petroleum and Energy Studies, India)

In a cloud environment, the workload that can be maintained using virtualization is limited by the available hardware resources. The Virtual Machine allocation policy plays a crucial role in the cloud computing lifecycle, and the different techniques used for allocation and scheduling can significantly impact the performance and operation of the cloud environment. In this paper, the existing allocation techniques are analysed and implemented using CloudSim, and we further propose a new technique for performance tuning of the cloud environment.
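
As a minimal sketch of the kind of allocation policies compared in such studies (plain Python, not CloudSim API objects; the host capacities and VM demand are hypothetical MIPS figures):

```python
# Two classic VM-allocation policies: first-fit places a VM on the
# first host with enough free capacity, while best-fit picks the host
# that would be left with the least spare capacity (tighter packing).

def first_fit(hosts, vm):
    """hosts: list of free capacities; returns chosen host index."""
    for i, free in enumerate(hosts):
        if free >= vm:
            hosts[i] -= vm
            return i
    return None  # allocation failed

def best_fit(hosts, vm):
    candidates = [(free - vm, i) for i, free in enumerate(hosts) if free >= vm]
    if not candidates:
        return None
    _, i = min(candidates)   # smallest leftover capacity
    hosts[i] -= vm
    return i

hosts = [1000, 500, 750]          # free MIPS per host
print(best_fit(hosts, 400))       # host 1: leaves only 100 MIPS spare
```

Best-fit tends to pack VMs onto fewer hosts, which matters for consolidation; first-fit is cheaper to evaluate per request.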

ISTA-06.8 15:54 Design & Analysis of Clustering Based Intrusion Detection Schemes for E-Governance
Rajan Gupta and Sunil K Muttoo (University of Delhi, India); Saibal K. Pal (DRDO, India)

Attacks on networks and information systems are increasing, and public information systems such as those used in E-Governance are hit especially hard. There is therefore a need either to design an altogether different intrusion detection system or to improve existing schemes with better optimization techniques and an easier experimental setup. The current study discusses the design of an intrusion detection scheme based on traditional clustering algorithms, K-Means and Fuzzy C-Means, combined with a meta-heuristic scheme, Particle Swarm Optimization. The experimental setup includes a comparative analysis of these schemes based on a different metric, Classification Ratio, alongside the traditional Detection Rate. The experiments are conducted on the Kyoto data set used by many researchers in the past; however, the features extracted from this data are selected based on their relevance to E-Governance systems. The results show a higher classification ratio for fuzzy clustering in conjunction with the meta-heuristic scheme. The development and simulations are carried out using MATLAB.
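
The two evaluation metrics mentioned above can be sketched as follows (the labels are hypothetical, and "classification ratio" is taken here simply as the fraction of correctly classified records; the paper's exact definition may differ):

```python
# Detection rate: fraction of actual attack records flagged as attacks.
# Classification ratio (assumed definition): fraction of all records
# assigned to the correct class/cluster.

def detection_rate(actual, predicted):
    attacks = [i for i, a in enumerate(actual) if a == "attack"]
    detected = sum(1 for i in attacks if predicted[i] == "attack")
    return detected / len(attacks)

def classification_ratio(actual, predicted):
    correct = sum(1 for a, p in zip(actual, predicted) if a == p)
    return correct / len(actual)

actual    = ["normal", "attack", "attack", "normal", "attack"]
predicted = ["normal", "attack", "normal", "normal", "attack"]
print(detection_rate(actual, predicted),
      classification_ratio(actual, predicted))
```

The two metrics can disagree: a scheme that flags everything as an attack maximizes detection rate while degrading the classification ratio.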

ISTA-06.9 16:06 Internet of Vehicles for Intelligent Transportation System
Kundan Munjal (Lovely Professional University, Phagwara, Punjab, India); Shilpa Verma (PEC University of Technology, India)

Wireless communication between vehicles marks a new era of communication leading to intelligent transportation systems. The Internet of Things (IoT) is a paradigm that combines the storage of sensor data with computation over that data to derive useful information in real-world applications. IoT originates from Radio Frequency Identification (RFID), where networked readers collect sensor data across a region, from which a user can derive important information. In this paper we discuss how vehicles equipped with sensors and actuators can absorb large amounts of information from the environment and provide it to assist secure navigation, pollution control and efficient traffic management. We also discuss the challenges, the architecture and a cloud-based implementation of the Internet of Vehicles.

ISTA-06.10 16:18 Handoff Schemes in Vehicular Ad-Hoc Network: A Comparative Study
Prasanna Roy, Sadip Midya and Koushik Majumder (West Bengal University of Technology, India)

The number of vehicles on the road is increasing at a rapid pace, and passengers expect access to various services while travelling. There is thus a need to enhance the existing Intelligent Transport System (ITS). In a VANET, handoff is required because a vehicle is mobile and moves from the network region covered by one access point to another. During handoff, the connection established between the vehicle and the network should remain intact to maintain seamless connectivity, and packet loss and handover delay should be reduced. This paper provides a comparative study of existing schemes for handoff management in VANETs. In addition, we carry out a qualitative analysis of these schemes to explore new areas of research and the scope for enhancements that would provide better quality of service to users.

ISTA-06.11 16:30 Energy Efficient Deflate (EEDeflate) Compression for Energy Conservation in Wireless Sensor Network
Pramod Ganjewar (Sathyabama University, Chennai, Tamil Nadu, India & MIT Academy of Engineering, Alandi (D.), Pune, India); Barani S. (Sathyabama University, Chennai, India); Sanjeev Wagh (Government College of Engineering, Karad, India)

A WSN comprises sensor nodes distributed spatially to accumulate measurements from the environment and transmit them over radio communication. A node uses energy for all of its functions (sensing, processing and transmission), but energy utilization for transmission dominates. Data compression can be used to reduce the data before transmission in a WSN. The proposed compression algorithm, Energy Efficient Deflate (EEDeflate), works together with fuzzy logic to prolong the lifetime of a wireless sensor network. EEDeflate saves approximately 7% to 10% of energy compared with transmitting uncompressed data. It also achieves a compression ratio approximately 22% better than Huffman coding and approximately 8% better than the Deflate compression algorithm. This improvement in compression ratio helps extend the network lifetime.
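
The underlying trade-off, spending CPU energy on compression to save radio energy, can be sketched with the standard Deflate implementation in `zlib` (the per-byte energy figures are hypothetical placeholders, not measurements from the paper):

```python
import zlib

# Rough model of compress-before-transmit in a WSN node: radio
# transmission is far more expensive per byte than compression, so
# shrinking the payload saves net energy.

TX_NJ_PER_BYTE = 720       # assumed radio transmit cost (nJ/byte)
CPU_NJ_PER_BYTE = 5        # assumed compression cost (nJ/byte)

def energy_saving(payload: bytes) -> float:
    """Fraction of transmit energy saved by compressing first."""
    compressed = zlib.compress(payload, 9)     # Deflate, max effort
    raw_cost = len(payload) * TX_NJ_PER_BYTE
    new_cost = (len(compressed) * TX_NJ_PER_BYTE
                + len(payload) * CPU_NJ_PER_BYTE)
    return 1 - new_cost / raw_cost

# Periodic sensor readings are highly repetitive and compress well.
readings = b"23.5,23.5,23.6,23.5,23.4," * 100
print(round(energy_saving(readings), 3))
```

For incompressible payloads `energy_saving` can go negative, which is why adaptive schemes decide per packet whether to compress.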

Saturday, September 24

Saturday, September 24 10:45 - 13:30 (Asia/Kolkata)

ISTA-07: ISTA-Applications using Intelligent Techniques (Regular Papers)

Room: LT-8(Academic Area)
Chairs: Subrat Dash (The LNM Institute of Information Technology (LNMIIT), India), Mohan Krishen Kadalbajoo (IIT Kanpur, India)
ISTA-07.1 10:45 Improving the performance of Wavelet based Machine Fault Diagnosis System using Locality Constrained Linear Coding
Vinay Krishna, Piruthvi Chendur P, Abhilash P P and Reuben Thomas Abraham (Amrita Vishwa Vidyapeetham, Amrita University, India); Gopinath Rajendiran (CSIR-CSIO, Chennai, India); Santhosh C Kumar (Amrita Vishwa Vidyapeetham, India)

The popular machine learning algorithm Support Vector Machine (SVM) is widely used in the field of machine health monitoring. In this paper, we experiment with SVM kernels to diagnose inter-turn short circuit faults in a 3 kVA synchronous generator. We extract wavelet features from the current signals captured from the synchronous generator. The experiments show that the performance of the baseline system is unsatisfactory because of the inherently non-linear characteristics of the features. We improve the performance of the SVM classifier by mapping the features onto a higher-dimensional linear space where a computationally efficient linear kernel can be used. In this work, Locality-constrained Linear Coding (LLC) performs this mapping. Experiments and results reveal that LLC improves the performance of the best baseline for the R, Y and B phases by 25.87%, 21.47% and 21.79% respectively.

ISTA-07.2 11:00 Automatic Agriculture Spraying Robot with Smart Decision Making
Sonal Sharma (Savitribai Phule Pune University, India); Rushikesh Prakash Borse (E&TC Engineering, MIT Academy of Engineering, Alandi, Pune & Indian Institute of Technology, India)

The responsibility of controlling and managing plant growth from early stage to mature harvest involves monitoring and identifying plant diseases, controlled irrigation and controlled use of fertilizers and pesticides. The proposed work explores wireless sensor technology for remote real-time monitoring of vital farm parameters such as humidity, environmental temperature and soil moisture content. We also employ image processing for vision-based automatic disease detection on plant leaves. This paper therefore describes the design and construction of an autonomous mobile robot featuring plant disease detection, growth monitoring and a spraying mechanism for pesticide, fertilizer and water, for use in agriculture or a plant nursery. To realize this work we provide a compact, portable and well-founded platform that can survey farmland automatically, identify disease, examine plant growth and spray pesticide, fertilizer and water accordingly. This approach helps farmers make the right decisions by providing real-time information about the plant and its environment using fundamental principles of the Internet, sensor technology and image processing.

ISTA-07.3 11:15 Intelligent System for Wayfinding Through Unknown Complex Indoor Environment
Sobika S and Rajathilagam B (Amrita School of Engineering, Coimbatore, Amrita Vishwa Vidyapeetham, Amrita University, India)

For a complex indoor environment, wayfinding means knowing the environment and navigating within it, which is only possible if the person knows the place well. A person entering an unknown environment may need assistance with wayfinding. GPS technology, which works very well and is popular for outdoor navigation, cannot be relied on for indoor wayfinding because its signal strength is weak indoors. This paper proposes a system that assists navigation through an unknown environment with the aid of Wi-Fi and visual landmarks, including corridors, staircases and others. The work aims to create a custom route map for an unfamiliar complex indoor setting through visual perception and graphic information.

ISTA-07.4 11:30 Anaphora Resolution in Hindi: A Hybrid Approach
Ashima Kukkar (Jaypee University of Information Technology); Rajni Mohana (Jaypee University of Information and Technology, Waknaghat, India); Sukhnandan Kaur (Jaypee University of Information Technology, India)

Nowadays machines can invert large matrices with speed and grace, but they still fail to master the basics of our written and spoken languages. It is difficult to address your computer as though you were addressing another person, so various methods exist to resolve the problem of reference; this process is called anaphora resolution. Anaphora resolution is a very challenging job in Hindi because the style of writing changes with the expression. In this paper we use a hybrid approach, a combination of rule-based and learning-based methods, to resolve gender and number agreement, co-reference and animistic knowledge in the Hindi domain. The results are computed using globally accepted evaluation metrics such as MUC, B3, CEAF and F-score on three different data sets, and the accuracy of the system is evaluated using kappa statistics.

ISTA-07.5 11:45 Hybrid Associative Classification Model for Mild Steel Defect Analysis
Veena Jokhakar (VNSGU & M. Sc. (I. T.), India); Somabhai Patel (SCET College, Gujarat Technological University, Surat, India)

In steel coil manufacturing, the quality of the coil is affected by many process parameters, and finding the particular attribute that causes a problem becomes a challenging task. Among the many possible defects, one of the major ones is the coiling temperature deviation defect, which causes the steel's metallurgical properties to diverge in the final product. This paper presents a new hybrid model, HACDC (Hybrid Associative Classification with Distance Correlation), to analyze the causes of coiling temperature deviation. Through the hybrid combination of association rules, distance correlation and ensemble techniques, we achieve an accuracy of 95%. To our knowledge, this is the first work to apply the random forest algorithm to the analysis of steel coil defects.

ISTA-07.6 12:00 Implementing and Deploying Magnetic Material Testing as an Online Laboratory
Rakhi Radhamani (Amrita School of Biotechnology, Amrita Vishwa Vidyapeetham, Amrita University, India); Dhanush Kumar (Development Engineer, India); Krishnashree Achuthan (Amrita Center for Cybersecurity Systems and Networks & Amrita University, India); Bipin Nair (Amrita Vishwa Vidyapeetham ( Amrita University), India); Shyam Diwakar (Amrita Vishwa Vidyapeetham, India)

The hysteresis loop tracing (HLT) experiment is an undergraduate experiment for physics and engineering students that demonstrates the magnetic properties of ferrite materials. In this paper, we explore a new approach to setting up triggered testing of magnetic hysteresis via a remotely controlled loop tracer. To aid student learners, our experimental design focused on factors such as the analytical expression of the mathematical model and the modeling of reversible changes, which are crucial for learning hysteresis. The goal was to study the phenomenon of magnetic hysteresis and to calculate the retentivity, coercivity and saturation magnetization of a material using a hybrid model comprising simulation and a remotely controlled hysteresis loop tracer. The remotely controlled equipment allowed the applied magnetic field (H) to be recorded from an internet-enabled computer. To analyze learning experiences with online laboratories, we evaluated the usage of the online experiment among engineering students (N=200) through organized hands-on workshops and direct feedback collection. We found that students adapted to using simulations and remotely controlled lab equipment, augmenting laboratory skills, equipment accessibility and blended learning experiences.
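
The loop quantities named above follow directly from sampled B-H data: retentivity is the flux density B remaining at H = 0, and coercivity is the magnitude of H needed to drive B back to zero. A minimal sketch with made-up sample points:

```python
# Extract retentivity and coercivity from one descending branch of a
# sampled hysteresis loop by locating zero crossings with linear
# interpolation. The H/B sample values below are hypothetical.

def zero_crossing(xs, ys):
    """Linearly interpolate the y value where x crosses zero."""
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 == 0:
            return y0
        if x0 * x1 < 0:  # sign change between consecutive samples
            return y0 + (y1 - y0) * (0 - x0) / (x1 - x0)
    raise ValueError("no zero crossing in data")

H = [200, 100, 0, -50, -100, -200]    # applied field (A/m)
B = [1.2, 1.0, 0.6, 0.3, 0.0, -0.8]   # flux density (T)

retentivity = zero_crossing(H, B)       # B where H = 0
coercivity = abs(zero_crossing(B, H))   # |H| where B = 0
print(retentivity, coercivity)          # 0.6 100
```

Saturation magnetization would be read off the flat extremes of the full loop in the same way.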

ISTA-07.7 12:15 Bio-inspired Model Classification of Squamous Cell Carcinoma in Cervical Cancer Using SVM
Anousouyadevi M, Ravi Subban, Vaishnavi Jothimani and Punitha Stephen (Pondicherry University, Pondicherry, India)

Cervical cancer is a deadly cancer that occurs in women of all age groups without any pre-symptoms. It can be detected early by manual screening with the Pap smear and LBC tests, which suffer from high false positive rates and high cost. To overcome these disadvantages, an automatic computerized system is used to enhance the efficiency and sensitivity of cervical cancer detection. There are three types of tissue in the cervix region of the uterus: columnar epithelium (CE), squamous epithelium (SE) and the aceto-white (AW) region. When immersed in 5% acetic acid, the AW region of abnormal cervical cells turns white. In this paper, a bio-inspired model is used for the automatic detection of cervical cancer, in which a support vector machine classifies squamous cell carcinoma, producing more accurate results by reducing false positive rates.

ISTA-07.8 12:30 Flexible Extensible Middleware Framework for Remote Triggered Wireless Sensor Network Lab
Ramesh Guntha (Amrita Center for Wireless Networks and Applications, Amrita Vishwa Vidyapeetham University, India); Sangeeth Kumar (Amrita Vishwa Vidyapeetham, AMRITA University, India); Maneesha Ramesh (Amrita University, India)

Wireless sensor networks (WSNs) are actively researched and taught in graduate and undergraduate technical disciplines, as they are applied in a variety of fields such as landslide detection and smart home monitoring. It is essential for students to learn WSN concepts through experimentation, but designing, programming, setting up and maintaining these experiments requires a lot of time, hardware resources and technical expertise. A shared remote-triggered (RT) WSN test-bed that can be accessed, controlled and monitored over the internet helps many students learn these concepts quickly without any setup at their end. Our RT WSN test-bed consists of 11 experiments, each designed to explore a particular WSN concept. Each experiment comprises a WSN setup, a web interface for user input and results, and a middleware layer that communicates between the web and the WSN [1, 2, 3]. In this paper we present the architecture of our flexible and extensible middleware framework: a single code base that supports all the experiments and can be extended to any other WSN experiment simply through configuration. We also illustrate the success of our system through an analysis of various usage statistics.

ISTA-08/VisionNet-03: ISTA-08/VisionNet-03 -Image Processing and Artificial Vision (Short Papers)

Room: LT-10 (Academic Area)
Chairs: Joyeeta Singha (The LNMIIT, India), Shrikant Mapari (SICSR, Symbiosis International Deemed University (SIU), India)
ISTA-08/VisionNet-03.1 10:45 Comparative Analysis of Segmentation Algorithms Using Threshold and K-Mean Clustering
Swati S. Savkare (Savitribai Phule Pune University, Pune & JSPM, Narhe Technical Campus, India); Abhilasha Sandipan Narote (Smt. Kashibai Navale College of Engineering, University of Pune, India); Sandipann Pralhad Narote (University of Pune, India)

Worldwide, many parasitic diseases infect human beings and cause deaths due to misdiagnosis. These parasites infect red blood cells (RBCs) in the blood stream, and diagnosis is carried out by observing thick and thin blood smears under a microscope. In this paper, segmentation of blood cells from microscopic blood images using K-Means clustering is compared with Otsu's thresholding. Segmentation is an important step in identifying parasitic diseases, and the number of blood cells is an essential count. Preprocessing, a fusion of background removal and contrast stretching, is carried out for noise reduction and enhancement of the blood cells, and the preprocessed image is then segmented. Overlapping cells are separated by the watershed transform. Segmentation using K-Means clustering proves more suitable for microscopic blood images.
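
One half of the comparison above, Otsu's method, admits a compact sketch: it picks the gray level that maximizes the between-class variance of the histogram (the pixel values below are a hypothetical bimodal image, not data from the paper):

```python
# Minimal Otsu threshold on a grayscale histogram. Pixels <= the
# returned level form one class (e.g. background), the rest the other.

def otsu_threshold(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))

    best_t, best_var = 0, -1.0
    w0 = sum0 = 0
    for t in range(levels):
        w0 += hist[t]                  # class-0 (background) weight
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        w1 = total - w0
        mu0, mu1 = sum0 / w0, (total_sum - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Two clearly separated intensity populations (cells vs background):
pixels = [30] * 50 + [35] * 40 + [200] * 30 + [210] * 20
print(otsu_threshold(pixels))  # 35
```

K-Means with k=2 on the same intensities finds a similar split, but it generalizes more naturally to color features, which is one reason it suits stained blood images.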

ISTA-08/VisionNet-03.2 10:57 Automatic Diagnosis of Breast Cancer Using Thermographic Color Analysis and SVM Classifier
Asmita Wakankar (Sathyabama University, Chennai, India); G R Suresh (St Peters Institute of Higher Education and Research, India); Preetha Ramiah (Anna University, India)

Breast cancer is the most commonly found cancer in women, and studies show that early detection can bring down the mortality rate. Infrared breast thermography uses temperature changes in the breast to arrive at a diagnosis. Due to increased cell activity, the tumor and its surrounding areas have a higher temperature and emit more infrared radiation, which is captured by a thermal camera and rendered as a pseudo-colored image; each colour of the thermogram corresponds to a specific temperature range. Breast thermogram interpretation is primarily based on visual, subjective colour analysis and asymmetry analysis. This study presents an analysis of breast thermograms based on segmentation of the region of interest, extracted as the hot region, followed by colour analysis. The area and contours of the hottest regions in the breast images are used to indicate abnormalities, and these features are then given to an ANN classifier for automated analysis. The results are compared with doctors' diagnoses to confirm that infrared thermography is a reliable diagnostic tool for breast cancer identification.

ISTA-08/VisionNet-03.3 11:09 Convolutional Neural Networks Based Method for Improving Facial Expression Recognition
Tarik A Rashid (Software and Informatics Engineering, Salahaddin University-Erbil, Kurdistan)

The field of facial behavior recognition via computer procedures is thought to be promising for forming emotional interactions between humans and computers. Numerous emotion recognition methods have previously been proposed based on a single scheme, using one dataset, or using the data set as collected to evaluate system outputs without extra preprocessing steps such as data balancing, which is needed to enhance generalization and increase the accuracy of the system. In this paper, a technique for recognizing facial expressions using different imbalanced facial expression data sets is presented. The data is preprocessed and then balanced using the Synthetic Minority Over-sampling Technique; next, facial feature point extraction is performed. Finally, the selected feature sets are fed into classification models. Three models, Decision Tree (DT), Multi-Layer Perceptron (MLP) and Convolutional Neural Network (CNN), are used as emotion classifiers to tackle fundamental issues, specifically the various types of facial databases and the various classes in each database. The Convolutional Neural Network is found to produce the best recognition accuracy.
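
The balancing step named above, SMOTE, can be sketched in a few lines: each synthetic minority sample is an interpolation between a minority sample and one of its nearest minority neighbours (the 2-D feature points and parameters below are hypothetical):

```python
import random

# Minimal SMOTE sketch: oversample the minority class by generating
# points along line segments between a sample and one of its k nearest
# minority-class neighbours.

def smote(minority, n_new, k=2, seed=0):
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest minority neighbours of x (Euclidean, excluding x)
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
        )[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # position along the segment [x, nb]
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nb)))
    return synthetic

minority = [(1.0, 1.0), (1.2, 0.9), (0.8, 1.1)]
new_points = smote(minority, n_new=5)
print(len(new_points))  # 5 synthetic minority samples
```

Because the new points lie between real samples, the classifier sees a denser minority region rather than duplicated examples.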

ISTA-08/VisionNet-03.4 11:21 Composition of DCT-SVD Image Watermarking and Advanced Encryption Standard Technique for Still Image
Sudhanshu S. Gonge (Symbiosis Institute of Technology, Lavale & Symbiosis International University, Pune, India); Ashok Anandrao Ghatol (Director Genba Sopanrao Moze College of Engineering Pune, India)

Nowadays, multimedia technology is developing rapidly. It provides a platform for multiple media such as images, audio, video and text, and continuous improvement in technology has attracted a large number of users and made the presentation of information through media user friendly. Internet technology helps users transmit data in various media formats; in this research, the data considered is a digital image. However, digital images need security and copyright protection. There are many security algorithms, such as Blowfish, Data Encryption Standard, Advanced Encryption Standard, RSA, RC5, CAST-128, Triple DES and IDEA. Copyright protection can be provided through digital image watermarking techniques, which are broadly classified into two categories: transform domain and spatial domain techniques. This work discusses a transform domain technique combining the discrete cosine transform with singular value decomposition for digital image watermarking, together with the 256-bit key advanced encryption algorithm for securing the digital image against various attacks such as Gaussian noise, cropping, salt-and-pepper noise, median filtering and JPEG compression.

ISTA-08/VisionNet-03.5 11:33 Scene Understanding in Images
Athira S and Manjusha R (Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Amrita University, India); Latha Parameswaran (Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Amrita University)

Scene understanding targets the automatic identification of the thoughts, opinions, emotions and sentiment of a scene, with polarity. Its sole aim is to build a system that infers and understands an image or a video just as humans do. In this paper, we propose two algorithms for scene understanding in images, based on Eigenfaces and Bezier curves. The work focuses on groups of people and thus targets the sentiment of the group. The proposed algorithm consists of three phases: in the first, face detection is performed; in the second, the sentiment of each person in the image is identified; and in the third, these are combined to identify the overall sentiment. Experimental results show that the Bezier curve approach outperforms the Eigenfaces approach in recognizing the sentiments of multiple faces.

ISTA-08/VisionNet-03.6 11:45 Performance Analysis of Human Detection and Tracking System in Changing Illumination
M M Sardeshmukh (Sinhgad Academy of Engineering Pune Maharashtra); Mahesh Kolte (Pune University, India); Vaishali Joshi (Sinhgad Academy of Engineering Pune Maharashtra, India)

Detection and tracking of humans in video is useful in many applications such as video surveillance, content retrieval and patient monitoring. It is the first step in many complex computer vision tasks, such as human activity recognition, behavior understanding and emotion recognition. Changing illumination and backgrounds are the main challenges in object/human detection and tracking. We propose and compare the performance of two algorithms in this paper: one continuously updates the background to make it adaptive to illumination changes, and the other uses depth information along with RGB. We observe that using depth information makes the algorithm faster and more robust against varying illumination and changing backgrounds. This can help computer vision researchers select a suitable object detection method.
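
The first approach described above, a continuously updated background, can be sketched with a running-average model (frames are flat lists of gray values here; the learning rate and threshold are hypothetical):

```python
# Running-average background subtraction: the background estimate
# drifts slowly toward each new frame, so gradual illumination change
# is absorbed, while large per-pixel differences are flagged as
# foreground.

ALPHA = 0.05      # background learning rate
THRESH = 30       # |frame - background| above this => foreground

def update_background(background, frame):
    return [(1 - ALPHA) * b + ALPHA * f
            for b, f in zip(background, frame)]

def foreground_mask(background, frame):
    return [abs(f - b) > THRESH for b, f in zip(background, frame)]

background = [100.0, 100.0, 100.0, 100.0]
frame = [102.0, 101.0, 220.0, 99.0]   # one pixel occupied by a person

mask = foreground_mask(background, frame)
background = update_background(background, frame)
print(mask)  # [False, False, True, False]
```

A depth channel helps precisely where this model fails: a person whose clothing matches the background color still differs sharply in depth.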

ISTA-08/VisionNet-03.7 11:57 Feature Extraction in Dental Radiographs in Human Extracted and Permanent Dentition
Kanika Lakhani (Amity University, Noida)

Feature extraction in dental radiographs involves the identification of major defect areas. When analyzing complex radiograph images, one of the major problems stems from the types of defects present, and analysis with a large number of defects generally requires a large amount of memory and computational power. Feature extraction applied to the radiographs, once the edge detection process is accomplished, derives combinations of the defects that avoid these problems while still describing the problem areas with sufficient accuracy. The process has been implemented over a set of 20 images of extracted human dentition to identify similar features and establish the presence of defects in the dentition.

ISTA-08/VisionNet-03.8 12:09 Camouflaged Target Detection and Tracking Using Thermal Infrared and Visible Spectrum Imaging
Supriya Mangale (Cummins College of Engineering for Women, Pune, India); Madhuri Bhushan Khambete (Cummins College of engineering, India)

In this paper, we describe a new robust thermo-visible moving object detection system for difficult scenarios such as camouflage, glass, snow, and similar object and background colors or temperatures. Background subtraction is performed by forming a mean background frame and applying global thresholding; then, using connected component analysis and a fusion rule, moving objects are detected and tracked blob-wise, even in adverse situations.

ISTA-08/VisionNet-03.9 12:21 Multilayered Presentation Architecture in Intelligent eLearning Systems
Uma G and Ramkumar N (Amrita Vishwa Vidyapeetam, India); Venkat Rangan and Balaji Hariharan (Amrita University, India)

eLearning systems have become essential in making lectures from experts accessible to masses spread across distant geographic locations. The digital representations of the different components of the instructor's environment, such as the presentation screen and the instructor's video, are often spatially disconnected at the remote location. This results in the loss of spatially related gestural cues, such as the instructor pointing at the screen. This paper discusses a method of creating a spatially connected, close-to-natural view of the instructor's environment by overlaying the instructor's video, defined by his contour, on the presentation screen content. The technique involves a calibration step to match the camera perspectives of the different video components. We present a real-time and robust calibration technique that allows automatic recalibration in a dynamic classroom scenario involving changes in camera position and pan, tilt and zoom parameters.

ISTA-08/VisionNet-03.10 12:33 Semi-Supervised FCM and SVM in Co-Training Framework for the Classification of Hyperspectral Images
Prem Shankar Singh Aydav (DTE U.P); Sonajharia Minz (Jawaharlal Nehru University, India)

Collecting labeled samples is hard, time-consuming and costly for the remote sensing community, and hyperspectral image classification faces various problems due to the availability of only a few labeled samples. In recent years, semi-supervised classification methods have been used in many ways to address this problem. In this article, semi-supervised fuzzy c-means (FCM) and a support vector machine (SVM) are used in a co-training framework for hyperspectral image classification. The proposed technique takes the spectral bands as the first view and extracted spatial features as the second view for the co-training process. Experiments performed on a hyperspectral image data set show that the proposed technique is more effective than the traditional co-training technique.

ISTA-08/VisionNet-03.11 12:45 Segmentation of Thermal Images Using Thresholding-Based Methods for Detection of Malignant Tumours
Shazia Shaikh (Doctor BAM University, India); Hanumant Gite (Dr Babasaheb Ambedkar Marathwada University, Aurangabad, India); Ramesh Manza (Dr Babasaheb Ambedkar Marathwada University, Aurangabad, India); Karbhari Kale (Babasaheb Ambedkar Marathwada University, India); Nazneen Akhter (Maulana Azad College of Arts, Science and Commerce, Aurangabad, Maharashtra, India)

Segmentation methods are useful for dividing an image into regions or segments that are meaningful and serve some purpose. This study applies thresholding-based segmentation methods to thermal images of skin cancer and analyzes the results to identify the algorithms that perform best at detecting malignancy and extracting the ROI (Region Of Interest). Algorithms based on global thresholding were applied to the skin cancer thermal images for a comparative study. While the MinError(I) method gave the most desirable results, Huang, Intermodes, IsoData, Li, Mean, Moments, Otsu, Percentile and Shanbhag were also effective and consistent. Other methods, such as MaxEntropy, Minimum, RenyiEntropy, Triangle and Yen, could not yield the expected segmentation results and hence were not considered useful. The investigations provide useful insights into the segmentation approaches most suitable for thermal images and the consequent ROI extraction of malignant lesions.

ISTA-08/VisionNet-03.12 12:57 Recognition of Handwritten Benzene Structure with Support Vector Machine and Logistic Regression a Comparative Study
Shrikant Mapari (SICSR, Symbiosis International Deemed University (SIU), India); Ajaykumar Dani (G H Raisoni Institute of Engineering and Technology (GHRIET) Pune (MH), India)

A chemical reaction is represented on paper by a chemical expression, which can contain chemical structures, symbols and chemical bonds. If handwritten chemical structures, symbols and bonds can be automatically recognized from the image of a Handwritten Chemical Expression (HCE), then the HCE itself can be recognized automatically. In this paper we propose an approach to automatically recognize the benzene structure, the most widely used compound in aromatic chemical reactions, from the image of an HCE. We developed two classifiers for this task: the first based on a Support Vector Machine (SVM) and the second on logistic regression. A comparative study of the two classification techniques is also presented. The comparison shows that both classifiers achieve an accuracy of more than 97%, and the result analysis shows that the SVM-based technique performs better than the one using logistic regression.

ISTA-08/VisionNet-03.13 13:09 Image and Pixel Based Scheme for Bleeding Detection in Wireless Capsule Endoscopy Images
Vani V (VTU, India); Mahendra Prasanth (SJBIT, India)

Bleeding detection techniques widely used in digital image analysis fall into three main categories: image based, pixel based and patch based. For computer-aided diagnosis of bleeding in Wireless Capsule Endoscopy (WCE), the most efficient choice among these remains an open problem. In this work, different types of gastrointestinal bleeding detected through WCE are discussed: angiodysplasia, vascular ectasia and vascular lesions. Effective image processing techniques for bleeding detection in WCE employing both image-based and pixel-based approaches are presented. Quantitative analysis of accuracy, sensitivity and specificity shows that YIQ and HSV are suitable color models, while the LAB color model yields low sensitivity. Statistics-based measurements achieve higher accuracy and specificity with better computational speed-up than other models. Classification using K-Nearest Neighbors is deployed to verify the performance, and the results are compared and evaluated through the confusion matrix.
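
The color-model comparison above rests on simple channel transforms; for instance, the standard NTSC RGB-to-YIQ conversion (the observation about red pixels is illustrative, not a result from the paper):

```python
# RGB -> YIQ with the standard NTSC coefficients. Bleeding regions,
# being strongly red, tend to stand out in the I chrominance plane.

def rgb_to_yiq(r, g, b):
    """r, g, b in [0, 1] -> (Y, I, Q)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance
    i = 0.596 * r - 0.274 * g - 0.322 * b   # orange-blue chrominance
    q = 0.211 * r - 0.523 * g + 0.312 * b   # purple-green chrominance
    return y, i, q

# Pure red (typical of fresh blood) has a strongly positive I value:
print(tuple(round(c, 3) for c in rgb_to_yiq(1.0, 0.0, 0.0)))
# (0.299, 0.596, 0.211)
```

Thresholding such chrominance planes is the basis of the pixel-based schemes evaluated here.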

ISTA-08/VisionNet-03.14 13:21 Leaf Recognition Algorithm for Retrieving Medicinal Information
Venkataraman D, Siddharth Narasimhan, Shankar Natarajan, Varun Sidharth S and Hari Prasath D (Amrita University, India)

India has a long history of using plants as a source of medicines, a science termed Ayurveda. Sadly, in the race to keep up with modern medical science and technology, the country has lost track of this field. Even researchers and medical practitioners who know that allopathic medicines are made using certain plant extracts are often oblivious to the medicinal properties of plants. This paper strives to address that problem by developing a leaf recognition application for retrieving the medicinal properties of plants, helping potential users make better use of them. Leaves are recognized by extracting features from their images, and classification is done with the help of a decision tree. The dataset consists of 300 images of different types of leaves. The resulting application is easy to use and fast in execution. The primary stakeholders are researchers, medical practitioners and people with a keen interest in botany, and we believe the application can become a valuable part of their daily work.
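The decision-tree classification followed by a medicinal-property lookup can be caricatured as below. The two shape features, the thresholds and the species/use entries are all illustrative assumptions; a real tree would be learned from the 300-image dataset.

```python
def classify_leaf(aspect_ratio, vein_density):
    """Toy two-level decision tree over two hypothetical leaf features.
    Thresholds are illustrative, not learned from the paper's dataset."""
    if aspect_ratio > 3.0:      # long, narrow leaf blade
        return "neem"
    if vein_density > 0.5:      # densely veined, rounder leaf
        return "tulsi"
    return "mint"

# Illustrative medicinal-property lookup keyed by predicted species.
MEDICINAL_USES = {
    "neem": "antiseptic",
    "tulsi": "cough relief",
    "mint": "digestive aid",
}

species = classify_leaf(aspect_ratio=3.5, vein_density=0.2)
use = MEDICINAL_USES[species]
```

The point is the two-stage design: image features drive the tree, and the predicted species keys a database of medicinal properties shown to the user.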

ISTA-08/VisionNet-03.15 13:33 Heuristic Approach for Face Recognition Using Artificial Bee Colony Optimization
Astha Gupta and Lavika Goel (BITS Pilani, India)

The Artificial Bee Colony (ABC) algorithm is inspired by the intelligent behavior of bees optimizing their search for food sources. It is a recently developed Swarm Intelligence (SI) algorithm that outperforms many established and widely used techniques such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). ABC has been applied in diverse areas to improve performance, and many hybrids of ABC have evolved over the years to overcome its weaknesses and better suit particular applications. In this paper ABC is applied to face recognition, a field that remains largely unexplored in the context of the ABC algorithm. The paper describes the challenges and the methodology used to adapt ABC to face recognition. Features are first extracted by applying a Gabor filter; Principal Component Analysis (PCA) is then applied to reduce their dimensionality. A modified version of ABC is finally used on the feature vectors to search for the best match to the test image in the given database.
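A bare-bones version of the standard ABC loop (employed, onlooker and scout phases) is sketched below, exercised on a simple quadratic surrogate objective. In the paper's setting the objective would instead score how closely a candidate's Gabor+PCA feature vector matches the test image; that substitution, and all parameter values here, are assumptions.

```python
import numpy as np

def abc_minimize(f, dim, lo, hi, n_food=10, limit=25, iters=200, seed=1):
    """Minimal Artificial Bee Colony minimiser over box bounds [lo, hi]."""
    rng = np.random.default_rng(seed)
    foods = rng.uniform(lo, hi, (n_food, dim))        # food sources
    cost = np.array([f(x) for x in foods])
    trials = np.zeros(n_food, dtype=int)              # stagnation counters

    def neighbour(i):
        k = rng.integers(n_food - 1)
        if k >= i:
            k += 1                                    # random partner != i
        j = rng.integers(dim)                         # perturb one dimension
        cand = foods[i].copy()
        cand[j] += rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
        return np.clip(cand, lo, hi)

    def try_move(i):
        cand = neighbour(i)
        c = f(cand)
        if c < cost[i]:                               # greedy replacement
            foods[i], cost[i], trials[i] = cand, c, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):                       # employed bees
            try_move(i)
        qual = 1.0 / (1.0 + cost)                     # onlookers prefer
        probs = qual / qual.sum()                     # good sources
        for _ in range(n_food):
            try_move(int(rng.choice(n_food, p=probs)))
        for i in range(n_food):                       # scouts abandon
            if trials[i] > limit:                     # stale sources
                foods[i] = rng.uniform(lo, hi, dim)
                cost[i] = f(foods[i])
                trials[i] = 0

    best = cost.argmin()
    return foods[best], cost[best]

# Sanity check: minimise a 2-D sphere function in [-5, 5]^2.
x_best, val = abc_minimize(lambda v: float(np.sum(v ** 2)),
                           dim=2, lo=-5.0, hi=5.0)
```

The `limit`/scout mechanism is what distinguishes ABC from plain local search: sources that stop improving are abandoned and re-seeded at random.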

ISTA-08/VisionNet-03.16 13:45 ILTDS: Intelligent Lung Tumor Detection System on CT Images
Kamil Dimililer (Near East University, Cyprus); Yoney Kirsal Ever (Faculty of Engineering, Near East University, Nicosia, Mersin 10, Turkey); Buse Ugur (Faculty of Engineering, Near East University, Nicosia, North Cyprus, Turkey)

Cancer detection, and research on early detection solutions, plays a life-sustaining role in human health. Computed Tomography (CT) images are widely used in radiotherapy planning because they provide the electronic densities of the tissues of interest, which are mandatory for planning. Accurate target delineation requires good spatial resolution and soft/hard tissue contrast, and for this purpose CT techniques are preferred over X-ray and magnetic resonance imaging. Image processing techniques have become popular in the use of CT images, and artificial neural networks offer a quite different approach to problem solving, known as the sixth generation of computing. This study proposes two phases. In the first phase, image pre-processing techniques including image erosion, median filtering, thresholding and feature extraction are applied to the CT images in detail. In the second phase, an intelligent image processing system using back-propagation neural networks is applied to detect lung tumors.
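Two of the first-phase steps, median filtering and thresholding, can be sketched in pure numpy on a synthetic "CT slice". The toy image, the 3x3 window size and the threshold value are assumptions for illustration; the paper's pipeline also includes erosion and feature extraction before the neural-network phase.

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter with edge padding (pure numpy, small images)."""
    p = np.pad(img, 1, mode="edge")
    stack = [p[r:r + img.shape[0], c:c + img.shape[1]]
             for r in range(3) for c in range(3)]   # the 9 shifted views
    return np.median(np.stack(stack), axis=0)

def threshold(img, t):
    """Binary mask of candidate (bright) regions."""
    return (img > t).astype(np.uint8)

# Synthetic 7x7 slice: a bright blob plus one isolated salt-noise pixel.
ct = np.zeros((7, 7))
ct[1:4, 1:4] = 0.9        # tumour-like blob
ct[5, 5] = 1.0            # noise pixel
smoothed = median_filter3(ct)
mask = threshold(smoothed, 0.5)
```

The median filter suppresses the isolated noise pixel while the blob interior survives, so the thresholded mask keeps only the candidate region, which would then feed the feature extraction and back-propagation classifier.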

ISTA-08/VisionNet-03.17 13:57 Blink Analysis Using Eye Gaze Tracker
Amudha J (Amrita Vishwa Vidyapeetham, India); Roja Reddy and Supraja Reddy (Amrita School of Engineering, India)

Blinking is the involuntary action of opening and closing the eye. In our work we have performed blink analysis on different persons performing various tasks. We have relied on gaze coordinate d