Program for International Conference on Advances in Computing, Communications and Informatics (ICACCI-2014)

Wednesday, September 24

Wednesday, September 24, 09:00 - 10:30 (Asia/Calcutta)

R1: Conference Registration

Room: Block E, Ground Floor (Reception)

Wednesday, September 24, 09:30 - 10:30 (Asia/Calcutta)

SRS-2014: Student Research Symposium (SRS'14) Poster

Room: Lawn Area Block E

Wednesday, September 24, 10:30 - 13:45 (Asia/Calcutta)

Inauguration: Opening Ceremonies and Inaugural Session

Room: Auditorium Block D Ground Floor

Wednesday, September 24, 12:15 - 13:30 (Asia/Calcutta)

K0: Inaugural Keynote - Innovative Cloud and BigData Computing with Aneka Platform

Dr. Rajkumar Buyya, University of Melbourne, Australia
Room: Auditorium Block D Ground Floor

Wednesday, September 24, 13:45 - 14:30 (Asia/Calcutta)

L1: Lunch Break

Room: Lawn Area Block E

Wednesday, September 24, 14:30 - 15:30 (Asia/Calcutta)

K2: Keynote - Hybrid Resource Allocation Techniques for Cloud Computing Systems

Dr. Jemal H. Abawajy, Director, Parallel and Distributed Computing Lab, Deakin University, Australia
Room: Auditorium Block D Ground Floor

Wednesday, September 24, 16:00 - 19:00 (Asia/Calcutta)

S1-A: Internet and Web Computing

Room: 110 Block E First Floor
Chair: Abdellatif Obaid (University of Quebec, Canada)
S1-A.1 Evaluating Quality Score of New Advertisements
Sohil Jain (Thapar University, India); Deepak Garg (Bennett University, Greater Noida, India)
Online or web advertisement is the prime source of income for search engines. Revenue generated through web advertisement depends on the number of times users click on ads. To increase revenue, the search engine selects the best ad from a pool of ads, so ads with a good quality score have a higher chance of being selected. It is very difficult to calculate the quality of new ads, as they have no historical information about their performance; hence, evaluation of their quality score is crucial. In this paper, we propose a straightforward yet efficient method, in terms of computation and space requirements, to evaluate the quality score of a new ad. We also empirically compute the values of the dominant parameters that a quality web page should possess.
S1-A.2 Classification of Facebook News Feeds and Sentiment Analysis
Shankar Setty, Rajendra Jadi, Sabya Shaikh and Chandan Mattikalli (B. V. Bhoomaraddi College of Engineering & Technology, India); Uma Mudenagudi (B. V Bhoomaraddi College of Engineering and Technology, Hubli, India)
As recently seen in Google's Gmail, messages in the inbox are classified into primary, social and promotions, which makes it easy for users to pick out the messages they are looking for from the bulk. Similarly, a user's wall on Facebook is usually flooded with a huge amount of data, which makes it annoying for users to spot the important news feeds among the rest. We therefore focus on classification of Facebook news feeds. In this paper, we attempt to classify a user's news feeds into various categories using classifiers, to provide a better representation of the data on the user's wall. News feeds collected from Facebook are dynamically classified into classes such as friends' posts and liked-pages posts. Friends' posts are further categorized into life-event posts and entertainment posts. Posts or updates from pages the user has liked are grouped as liked-pages posts. Posts from friends are tagged as friends' posts; those regarding events occurring in their lives are tagged as life-event posts, and the rest as entertainment posts. This helps users find "important news feeds" among "live news feeds". Sentiments are important as they depict the opinions and expressions of the user, so detecting the sentiments of users from the life-event posts also becomes an essential task. We also propose a system for automatic detection of sentiments from the life-event posts, categorizing them into excited, happy and bad-feeling posts. This paper applies classification methods from the literature to our dataset, with the objective of evaluating methods for automatic news-feed classification and sentiment analysis, which in future can give a Facebook page a well-organized and more appealing look.
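A minimal supervised sketch of this kind of feed classification, using scikit-learn; the example posts, labels and classifier choice are illustrative assumptions, not the paper's setup:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny invented training set; real labels come from collected news feeds.
posts = ["Got engaged last night!", "Check out our new album tour dates",
         "We are expecting a baby", "LOL this meme is hilarious",
         "20% off all shoes this weekend"]
labels = ["life_event", "entertainment", "life_event", "entertainment",
          "liked_page"]

# TF-IDF features plus a naive Bayes classifier, trained end to end.
clf = make_pipeline(TfidfVectorizer(), MultinomialNB()).fit(posts, labels)
print(clf.predict(["We just bought our first house",
                   "New trailer drops tomorrow"]))
```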
S1-A.3 6LoWPAN Based Service Discovery and RESTful Web Accessibility for Internet of Things
Jamal Mohammad, Palli Sowjanya and Varka Bhadram (Centre for Development of Advanced Computing, India); Santosh Koshy (Center for Development of Advanced Computing, India)
6LoWPAN, one of the enabling technologies of the Internet of Things vision, brings sensor networks into the world of the Internet through an effective implementation of IPv6 on constrained devices. Even so, in order to avail the numerous services provided by sensor networks ubiquitously and with ease, there is a need to be cognizant of the available services, the identity of the nodes providing the desired services, and the means of access. In this paper, we present a 6LoWPAN-based architecture of sensor/actuator nodes that employ the Simple Service Location Protocol (SSLP) and respond to all service-related queries from user devices. The system implements the Constrained Application Protocol (CoAP), a RESTful architecture, to make service access similar to HTTP transactions. The proposed architecture also integrates protocols defined specifically for constrained LoWPANs, such as the Neighbor Discovery (ND) protocol and the Routing Protocol for Low-power and Lossy Networks (RPL), for interactions between traditional IP networks and the 6LoWPAN network.
S1-A.4 Extended Clique Percolation Method to Detect Overlapping Community Structure
Sumana Maity (NIT Rourkela, India); Santanu Kumar Rath (National Institute of Technology (NIT), Rourkela, India)
Community detection in social networks is a prominent issue in the study of network systems, as it helps in understanding the structure of a network. A member of a social network can be part of more than one group or community; since members can overlap between groups, overlapping community detection must be considered in order to identify the overlapping nodes. The clique percolation method is a community detection algorithm widely used for detecting overlapping communities. Its drawback is that it does not cover the complete network, so some nodes may not be part of any community irrespective of their connectivity. In this paper a novel approach is introduced that extends the clique percolation method so that every connected node becomes part of at least one community. The main strategy is to find initial communities using the clique percolation method and then expand those communities by adding the left-out nodes not included in any community; left-out nodes are assigned to initial communities based on their belonging coefficient. Real-world networks are used to evaluate the proposed algorithm. The quality of the detected community structure is measured by the modularity measure, which shows that the proposed method detects communities of better quality than the clique percolation method.
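A compact sketch of the expansion step described above, assuming networkx's k-clique (clique percolation) communities as the starting point and a belonging coefficient defined as the fraction of a node's neighbours inside a community; both are illustrative choices, not necessarily the authors' exact formulation:

```python
import networkx as nx
from networkx.algorithms.community import k_clique_communities

def extended_cpm(G, k=3):
    """Clique percolation, then attach each left-out node to the community
    where its belonging coefficient (fraction of its neighbours already in
    the community -- an illustrative definition) is highest."""
    communities = [set(c) for c in k_clique_communities(G, k)]
    changed = True
    while changed:                       # repeat until no node can be attached
        changed = False
        covered = set().union(*communities) if communities else set()
        for v in set(G) - covered:
            nbrs = set(G[v])
            def belonging(C):
                return len(nbrs & C) / len(nbrs) if nbrs else 0.0
            best = max(communities, key=belonging, default=None)
            if best is not None and belonging(best) > 0:
                best.add(v)
                changed = True
    return communities

G = nx.karate_club_graph()
print([sorted(c) for c in extended_cpm(G, k=4)])
```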
S1-A.5 Structural Model for the Adoption of Online Advertising on Social Network
Thanh D. Nguyen (Banking University of Ho Chi Minh City, Vietnam); Thi H. Cao (Saigon Technology University, Vietnam); Nghia D. Tran (HCMC University of Technology, Vietnam)
Social networks are expanding strongly all over the globe and have become an indispensable part of the online world, so social network advertising is a potential market for business. Hence, research on adoption models for online advertising on social networks is essential. This study proposes a structural model of the adoption of online advertising on social networks to overcome the limitations of previous studies. The concepts in the model were analyzed using linear structural modeling. The results illustrate relationships between entertainment, irritation, credibility, social interaction, attitude toward online advertising, and the adoption of online advertising on social networks.

S1-B: The New Internet Symposium (NIS-2014)

Room: 110 Block E First Floor
Chair: Abdellatif Obaid (University of Quebec, Canada)
Routing Multiple Services in Optical Transport Network Environment Using Mixed Line Rates
Maninder Singh (Simon Fraser University, India); Maninder Lal Singh (Guru Nanak Dev University, Amritsar, India); Sudarshan Iyengar (IIT Ropar, India)
ITU-T has laid down broad principles for the convergence of packet and optical platforms through a series of Optical Transport Network (OTN) protocols. Convergence of multiple services onto a common transport layer has emerged as an attractive solution to bring down capital expenditure and operational costs. In this paper, an Integer Linear Programming (ILP) model based upon the OTN protocol is presented to provision an end-to-end converged optical transport network carrying Synchronous Digital Hierarchy (SDH), 10 Gigabit Ethernet (10GE), 40 Gigabit Ethernet (40GE) and 100 Gigabit Ethernet (100GE) traffic at the same time. The objective function is defined in terms of the cost of the links on which the traffic is routed, and these links support mixed line rates. The algorithm has also been studied to evaluate the effect of data grooming and inverse multiplexing at a node. The port size of the packet optical transport platform used at each node is approximated in terms of the optimum amount of each type of traffic moving through the node. To reduce running times, the ILP model has also been evaluated with a greedy routine, which we call Highest Rate First (HRF) ordering, through which the ILP took considerably less time to converge.
Pattern Matching Algorithms for Intrusion Detection and Prevention System: A Comparative Analysis
Vibha Gupta, Maninder Singh and Vinod Bhalla (Thapar University, India)
Intrusion Detection and Prevention Systems (IDPSs) are used to detect malicious activities of intruders and to prevent them. These systems use signatures of known attacks for detection; signatures are identified through a pattern matching algorithm, which is the heart of an IDPS. Due to technological advancements, network speed is increasing day by day, so the pattern matching algorithm used in an IDPS should be fast enough to match the network speed. The choice of pattern matching algorithm is therefore critical to the performance of IDSs and IPSs. Several pattern matching algorithms exist in the literature, but which one gives the best performance for an IDPS is not known at hand. In this work, four commonly used single-keyword pattern matching algorithms, namely Brute-force, Rabin-Karp, Boyer-Moore and Knuth-Morris-Pratt, have been selected for analysis. Their performance is analyzed in terms of run time by varying the number of patterns and the size of the network capture (pcap) file.
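As an illustration of the rolling-hash idea behind one of the four compared algorithms, a minimal Rabin-Karp matcher; the base and modulus are arbitrary choices, not taken from the paper:

```python
def rabin_karp(text: str, pattern: str, base=256, mod=10**9 + 7):
    """Return start indices of pattern in text using a rolling polynomial hash."""
    n, m = len(text), len(pattern)
    if m == 0 or m > n:
        return []
    high = pow(base, m - 1, mod)                 # weight of the leading character
    ph = th = 0
    for i in range(m):                           # initial window hashes
        ph = (ph * base + ord(pattern[i])) % mod
        th = (th * base + ord(text[i])) % mod
    hits = []
    for i in range(n - m + 1):
        if ph == th and text[i:i + m] == pattern:    # verify on hash match
            hits.append(i)
        if i < n - m:                            # roll the window by one character
            th = ((th - ord(text[i]) * high) * base + ord(text[i + m])) % mod
    return hits

print(rabin_karp("abracadabra", "abra"))         # -> [0, 7]
```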

S10: Third International Symposium on Pattern Recognition and Image Processing (PRIP-2014)

Room: 007-B Block E Ground Floor
Chair: Manjunath Aradhya (Sri Jayachamarajendra College of Engineering, India)
Automated Image Destriping for Improving Data Quality of OCM - Oceanic Geophysical Products
Suresh Kumar, Manchikanti (National Remote Sensing Centre & ISRO, India); Senthil Kumar A (National Remote Sensing Centre, India); Dadhwal Vinay Kumar (National Remote Sensing Centre, India)
The Ocean Colour Monitor (OCM) in the Oceansat-2 mission is a push-broom sensor with a spatial resolution of 360 m, containing eight spectral channels operating in the visible to near-IR spectral range with a swath of 1420 km across scan. Residual detector-to-detector variations appear as along-track striping in derived products, even though rigorous effort is put into minimizing them during radiometric normalization. Oceanic geophysical products (chlorophyll-a, total sediment matter, yellow substance, etc.) suffer from severe vertical striping, by virtue of the very low signal return from ocean water and its constituents, thus obscuring the visual interpretation of these products. An algorithm is developed to destripe the original products, based on Lyon's algorithm with some modifications: a) reduction of the window-size averaging, b) accounting for edge pixels outside the window-size region, and c) mean-value normalization to match the radiometry of pre- and post-correction. This automated algorithm works on scene-based image analysis to estimate a compensation factor for each pixel. The proposed algorithm has been found to bring down the striping significantly between the original and the corrected geophysical product. This paper outlines the modified Lyon's destriping algorithm and a quantitative analysis of its effectiveness on the final products.
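The column-normalisation core of such a destriping scheme can be sketched as follows; this is a generic illustration of the three listed modifications (windowed averaging, edge handling, mean-value normalisation), not the authors' exact algorithm:

```python
import numpy as np

def destripe_columns(img, win=31, eps=1e-6):
    """Column-gain destriping: smooth the per-column means across `win`
    columns to obtain a stripe-free reference, scale each column toward it,
    then renormalise so the overall mean radiometry is preserved."""
    col_mean = img.mean(axis=0)                        # per-detector level
    pad = np.pad(col_mean, win // 2, mode="reflect")   # simple edge handling
    ref = np.convolve(pad, np.ones(win) / win, mode="valid")
    out = img * (ref / (col_mean + eps))[None, :]
    return out * (img.mean() / (out.mean() + eps))     # mean-value normalisation

img = np.random.rand(200, 100) * 50.0
img[:, ::8] *= 1.3                                     # synthetic detector striping
print(img.mean(axis=0).std(), destripe_columns(img).mean(axis=0).std())
```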
Devanagari Text Extraction From Natural Scene Images
Hrishav Raj (NIT Patna, India); Rajib Ghosh (National Institute of Technology Patna India, India)
In scene images, information in the form of text provides vital clues for most applications based on image processing, including assisted navigation, content-based image retrieval, automatic geo-coding and scene understanding. But against a multicolored complex background, locating the text is a daunting task because of non-uniform illumination, the complexity of the backdrop, and differences in the size, font and line orientation of the text. In this paper we propose a novel approach for Devanagari text extraction from natural scene images. A text-to-speech engine or Optical Character Reader can be used to recognize the extracted text. The basis of our scheme is the analysis of connected components (CCs) to extract Devanagari text from scene images captured by a camera. The presence of the header line is unique to this script, and our scheme uses mathematical morphological operations to extract these header lines. The binarization of scene images was also studied, and the effectiveness of an adaptive thresholding approach was observed. The algorithm was tested on Devanagari text contained in a collection of 100 scene images.
Off-line Handwritten Character Recognition Using Hidden Markov Model
Gayathri P and Sonal Ayyappan (SCMS School of Engineering & Technology)
In this paper, we present a Hidden Markov Model (HMM) based Optical Character Recognition (OCR) system for Malayalam characters. OCR is the ability of a computer to receive and recognize character input from different sources; the goal is to classify optical patterns in an image into the corresponding characters. Recognition of handwritten Malayalam vowels is proposed here. Images of the characters written by eighteen subjects are used for the experiment. Training and recognition are performed using the Hidden Markov Model Toolkit. The recognition process involves several steps: image acquisition, dataset preparation, pre-processing, feature extraction, training and recognition. An average accuracy of about 81.38% has been obtained.
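The train-one-HMM-per-class, classify-by-likelihood pattern can be sketched with the hmmlearn package in place of HTK; the features, state count and class setup below are toy assumptions, not the paper's preprocessing:

```python
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)

def train_class_hmm(sequences, n_states=4):
    """One Gaussian HMM per character class; `sequences` is a list of
    (T_i, D) feature arrays extracted from handwriting samples."""
    X = np.vstack(sequences)
    lengths = [len(s) for s in sequences]
    return hmm.GaussianHMM(n_components=n_states, n_iter=50).fit(X, lengths)

# Toy 2-D feature sequences standing in for two vowel classes.
cls_a = [rng.normal(0, 1, (20, 2)) for _ in range(10)]
cls_b = [rng.normal(3, 1, (20, 2)) for _ in range(10)]
models = {"a": train_class_hmm(cls_a), "b": train_class_hmm(cls_b)}

# Classification: pick the class whose model gives the highest log-likelihood.
test = rng.normal(3, 1, (20, 2))
print(max(models, key=lambda c: models[c].score(test)))   # -> 'b'
```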
HEp-2 Cell Images Classification Based on Statistical Texture Analysis and Fuzzy Logic
Nur Farahim Jamil (Universiti Teknologi PETRONAS, Malaysia); Ibrahima Faye (Universiti Teknologi PETRONAS); Zazilah May (Universiti Teknologi Petronas, Malaysia)
Autoimmune diseases occur when an inappropriate immune response takes place and produces autoantibodies to fight against human antigens. To detect autoimmune disease, a test called indirect immunofluorescence (IIF) is carried out to identify antinuclear autoantibodies (ANA) in HEp-2 cells. The current method of analyzing the results is inconsistent, as it is limited by subjective factors such as the experience and skill of the medical experts; thus, there is a need for an automated recognition system to reduce the variability and increase the reliability of the test results. This paper proposes a pattern recognition algorithm consisting of statistical methods to extract seven textural features from HEp-2 cell images, followed by classification of the staining patterns using fuzzy logic. The method is applied to the dataset of the ICPR 2012 contest. The textural features extracted are based on first-order statistics and second-order statistics computed from grey-level co-occurrence matrices (GLCM). The extracted features are then used as input parameters to classify five staining patterns using fuzzy logic. A working classification algorithm is developed and gives a mean accuracy of 84% on 125 test images.
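A sketch of the feature-extraction stage using scikit-image; the exact seven features and the fuzzy rule base of the paper are not reproduced, and the distances/angles below are illustrative parameters:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # 'greycomatrix' in older scikit-image

def texture_features(img):
    """First-order statistics plus GLCM (second-order) texture descriptors."""
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return {
        "mean": img.mean(),
        "std": img.std(),                               # first-order statistics
        "contrast": graycoprops(glcm, "contrast").mean(),
        "homogeneity": graycoprops(glcm, "homogeneity").mean(),
        "energy": graycoprops(glcm, "energy").mean(),
        "correlation": graycoprops(glcm, "correlation").mean(),
    }

img = (np.random.rand(64, 64) * 255).astype(np.uint8)   # stand-in for a HEp-2 cell image
print(texture_features(img))
```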
Investigation of Effectiveness of Ensemble Features for Visual Lip Reading
Krishnachandran M (SCMS School of Engineering and Technology, India); Sonal Ayyappan (SCMS School of Engineering and Technology)
Features used for classification play an essential role in the performance of a system. In lip reading, features appear in large numbers, which has to be addressed by selecting a subset of features. The work covered in this paper validates the performance of individual visual features such as lip height, lip width, area of the lip region, and angles at the corners, and then combines them to create new subset features that improve the classification accuracy of certain weak features when combined with significant attributes. Each feature represents the classification characteristics of words at a different level. The area feature provided the highest individual accuracy of 75.70%, and the ensemble feature area-h4 produced the highest combined accuracy of 71.25%.
Sub-Band Exchange DWT Based Image Fusion Algorithm for Enhanced Security
Jaypal Jagdish Baviskar (VJTI, Mumbai & Veermata Jijabai Technological Institute, Mumbai, India); Afshan Mulla (Veermata Jijabai Technological Institute, India); Amol Baviskar (Mumbai University & Universal College of Engineering, Vasai, Maharashtra, India); Aditi Parthasarathy (Intelligent Communication Lab, India)
Transmission of critical data over a link in the presence of an attacker has increased the demand for secure communication. Particularly in the military field, the exchange of classified images has to be executed with extreme caution. Advanced digital technologies require encryption, authentication and key distribution techniques to facilitate reliable and secure communication. In this paper, a Discrete Wavelet Transform (DWT) sub-band exchange based color image fusion scheme for enhanced security is proposed. In this method, the horizontal and vertical sub-bands of two color images are replaced by each other's chrominance information, which permits the generation of uniquely fused gray-scale textured images. The method yields fused images that are extremely difficult for an attacker to intercept, since the probability of getting hold of all the information pertaining to one image is negligible. It also offers reduced bandwidth utilization and lower transmission time, as it converts color images to compressed textured gray-scale images. Hence, in addition to proposing a novel image fusion technique and compression scheme, this paper presents a detailed analysis of the algorithm.
Study of Subspace Mixture Models with Different Classifiers for Very Large Object Classification
K Mahantesh (SJBIT, India); Manjunath Aradhya (Sri Jayachamarajendra College of Engineering, India); Niranjan SK (Sri Jayachamarajendra College of Engineering (SJCE) - Mysore, India)
Since Gaussian Mixture Models (GMMs) capture complex densities of the data and have become one of the most significant methods for clustering in an unsupervised context, we study and explore the idea of mixture models for image categorization. In this regard, we first segment all image categories in a hybrid color space (HCbCr - LUV) to identify the color homogeneity between neighboring pixels, and then the k-means technique is applied to partition image pixels into coordinated clusters. Further, we use subspace methods such as Principal Component Analysis (PCA) and Fisher's Linear Discriminant (FLD) to partition the set of all segmented classes into several clusters and obtain a transformation matrix for each cluster. These clusters are viewed as mixtures of several Gaussian classes (latent variables), and the Expectation Maximization (EM) algorithm is applied to these Gaussian mixtures, giving the best maximum likelihood estimators and thereby obtaining highly discriminative features in a reduced feature space. For subsequent classification, we use diverse Distance Measures (DM) and a Probabilistic Neural Network (PNN). The results obtained show that the proposed model exhibits a highly discriminative image representation that improves classification rates over the state-of-the-art on standard benchmark datasets such as Caltech-101 and Caltech-256.
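The subspace-then-EM pipeline can be illustrated with scikit-learn; this is a generic PCA-plus-GMM sketch on random stand-in data, not the paper's HCbCr-LUV segmentation or FLD variant:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

# Toy stand-in for segmented image descriptors: 500 samples, 100-D features.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 100))

pca = PCA(n_components=20).fit(X)            # subspace projection
Z = pca.transform(X)

gmm = GaussianMixture(n_components=5, covariance_type="diag",
                      max_iter=200, random_state=0).fit(Z)   # EM on the subspace

# Posterior responsibilities can serve as discriminative features for a
# downstream distance-based or PNN classifier.
features = gmm.predict_proba(Z)
print(features.shape)                        # (500, 5)
```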
High Performance and Flexible Imaging Sub-system
Mihir N Mody (Texas Instruments, India); Hetul Sanghvi (Texas Instruments Inc, India); Niraj Nandan (Texas Instruments, USA); Shashank Dabral (Texas Instruments Inc., USA); Rajasekhar Allu (Texas Instruments, USA); Dharmendra Soni (Texas Instruments, India); Sunil Sah (Rambus & Rambus, India); Gayathri Seshadri and Prashant Karandikar (Texas Instruments, India)
An Imaging Sub-system (ISS) enables capturing photographs or live video from raw image sensors. It consists of a set of sensor interfaces and a cascaded set of algorithms to improve image/video quality. This paper illustrates a typical imaging sub-system architecture consisting of a sensor front end, an Image Signal Processor (ISP) and an Image Co-processor (sIMCOP), and describes the ISS developed by Texas Instruments (TI) for the OMAP 5432 processor. The given solution is flexible in interfacing with various kinds of image sensors and provides hooks to tune visual quality for specific customers as well as end applications. It is also flexible in providing options to enable customized data flows based on actual algorithm needs. The overall solution runs at a high throughput of 1 pixel/clock cycle to enable full-HD video at high visual quality.

S11-A: Second International Workshop on Advances in VLSI Circuit Design and CAD Tools (AVCDCT-2014)

Room: 009 Block E Ground Floor
Chairs: Saurabh Gautam (Cadence Design Systems, India), Sergey G. Mosin (Kazan Federal University, Russia)
Design of Enhanced Arithmetic Logical Unit for Hardware Genetic Processor
Haramardeep Singh (Lovely Professional University, India)
Genetic algorithms are based on natural evolution. A genetic algorithm is a probabilistic search algorithm that iteratively transforms a set (called a population) of mathematical objects (typically fixed-length binary character strings), each with an associated fitness value, into a new population of offspring objects using the Darwinian principle of natural selection. Research on genetic algorithms mainly concentrates on software implementations, which lag in terms of speed. A genetic algorithm processor consists of several sub-modules, such as an ALU unit, a memory unit and a control unit; among these, the ALU module is responsible for introducing the genetic variation. In this paper, an enhanced ALU unit consisting of mutation operators (flip, uniform and virus mutation) and crossover operators (one-point, multi-point and uniform crossover) was designed in VHDL and implemented on a Spartan-3E FPGA.
Capacitor Less DRAM Cell Design for High Performance Embedded System
Prateek Asthana (National Institute of Technology Hamirpur, India); Sangeeta Mangesh (JSS Academy, India)
In this paper, a comparison of average power consumption and timing parameters, i.e. read access time, write access time and retention time, of the 3T1D DRAM cell is carried out. The analyses are performed at the 32nm scale. This DRAM cell is used in high performance embedded systems. A technique is used to improve the average power consumption and read access time of the 3T1D DRAM, to make it more comparable to the 6T SRAM, and a circuit implementing this improvement is analyzed. The circuits are analyzed in Tanner EDA: designed in S-Edit and simulated in T-Spice.
Low Power Multiplier Using Dynamic Voltage and Frequency Scaling (DVFS)
Deepak Garg (NIT Kurukshetra, India); Rajender Sharma (Associate Prof, India)
In recent years, the main concern of VLSI engineers has been power reduction techniques. In this paper, Dynamic Voltage and Frequency Scaling (DVFS) is used for reducing power, using a Virtex 5 FPGA kit along with the XPower Estimator tool. In the proposed DVFS technique, multiplication and addition are performed at different frequencies. A practical analysis has been carried out using a sequential multiplier: its power is simulated without DVFS and then with DVFS. The analysis shows that power is reduced drastically; the simulation results show that 54.53% of total power and 25% of dynamic power are saved using this approach compared to the non-DVFS approach.
Logical Effort Based Power-Delay-Product Optimization
Sachin Maheshwari (University of Westminster, United Kingdom (Great Britain)); Jimit Patel and Sumit Kumar Nirmalkar (Birla Institute of Technology & Science, Pilani, India); Anu Gupta (BITS Pilani, India)
In circuit design and evaluation, power and delay play a major role in deciding circuit performance. Power and delay are approximately inversely related: if one parameter is decreased, the other increases. The proposed method uses the power-delay product (PDP) as the performance metric to optimize delay and power together. A mathematical model has been developed to minimize the PDP by sizing the gates in a chain of inverters and in a NAND-NOR-INV combinational circuit. The results are validated using TSMC 0.18µm, 1.8V CMOS technology in Mentor Graphics' ELDO SPICE. A significant reduction in power, 11% and 27% for 3 and 5 stages respectively, has been observed with a small increment in delay of 2% and 11%. Similarly, for the NAND-NOR-INV circuit, a reduction of approximately 14% in power with a minute change in delay has been observed.
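For intuition, a tiny worked example of the logical-effort delay model (per-stage delay d = gh + p, with g = p = 1 for an inverter) combined with a simple capacitance-based energy proxy; both weightings are illustrative assumptions, not the paper's mathematical model:

```python
# Logical-effort sketch for an n-stage inverter chain driving a load
# H = C_out / C_in. Delay is in units of tau; energy is a normalised
# sum of stage input capacitances.

def chain_metrics(H, n):
    h = H ** (1.0 / n)                    # equal stage effort minimises delay
    delay = n * (h + 1.0)                 # sum of g*h + p over n stages
    energy = sum(h ** i for i in range(1, n + 1))   # per-stage capacitance
    return delay, energy, delay * energy

H = 64.0
for n in range(1, 8):
    d, e, pdp = chain_metrics(H, n)
    print(f"stages={n}  delay={d:6.2f}  energy={e:7.2f}  PDP={pdp:8.1f}")
# Sweeping n locates the PDP-optimal staging; with a heavier energy
# weighting the optimum shifts to fewer stages than the pure-delay optimum.
```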
Analysis of Multi-bit Flip Flop Low Power Methodology to Reduce Area and Power in Physical Synthesis and Clock Tree Synthesis in 90nm CMOS Technology
Saurabh Gautam (Cadence Design Systems, Noida)
Power and area have become burning issues in modern VLSI design. Power has become a bottleneck in digital circuits in ultra-deep-submicron design, and clock power is one of the major power sources; as design sizes shrink, area also becomes a major concern. The multi-bit flip-flop (MBFF) is one methodology for reducing power and area. By using the MBFF methodology in the physical design flow, we can also reduce global congestion, as wire length reduces significantly. For a given design, we can reduce its power, area and wire length by merging flip-flops without affecting the functionality of the design, and achieve better quality of results. In this paper, we review multi-bit flip-flop concepts and library syntax for MBFFs, and present an analytical study of a design's results with and without MBFFs at physical synthesis and clock tree synthesis on 90nm technology. Keywords: MBFF (multi-bit flip-flop), low power, area.

S11-B: Computer Architecture and VLSI

Room: 009 Block E Ground Floor
Chair: Sergey G. Mosin (Kazan Federal University, Russia)
Low Power Divider Using Vedic Mathematics
Dalal Rutwik Kishor and Kanchana VS Bhaaskaran (VIT University Chennai, India)
The divider is an inevitable and basic hardware module employed in advanced, high-speed digital signal processing (DSP) units of high precision. It is widely used in radar technology, communication, industrial control systems and linear predictive coding (LPC) algorithms in speech processing. This paper proposes a fast, low power and cost effective divider architecture using the ancient Indian Vedic division algorithm. The merits of the proposed architecture are demonstrated by comparing gate count, power consumption and delay against conventional divider architectures. Validation of the proposed architecture shows a 52.93% reduction in power dissipation compared with a conventional divider using repeated subtraction. The designs were implemented using industry-standard Cadence software with a 45nm technology library, and the design has been validated on an FPGA Spartan-3E kit. The validation results show an appreciable reduction in circuit latency and in Look-Up Table (LUT) utilization for the proposed Vedic divider compared to the conventional divider.
Design and Analysis of Program Counter Using Finite State Machine and Incrementer Based Logic
Divya M, Ritesh Belgudri and Kanchana VS Bhaaskaran (VIT University Chennai, India)
The paper presents the full-custom design of a program counter for low power and high performance. Two approaches have been employed: 1) finite state machine (FSM) based logic and 2) incrementer-based logic. Both designs have been implemented and the two design methodologies compared. The FSM-based design uses flip-flops and multiplexers, while the incrementer-based design employs an incrementer circuit and registers. The average power consumed by the FSM-based program counter is 64.72% less than that of the incrementer-based design at 1 GHz operating frequency. The delay incurred by the incrementer-based design is 34.92% less than that of the FSM-based approach, however at the cost of increased area. The designs have been implemented using industry-standard Cadence EDA tools and simulated using 90nm technology files.
Thermal Modeling of Homogeneous Embedded Multi-Core Processors
Axel Sikora and Daniel Jaeckle (University of Applied Sciences Offenburg, Germany)
Temperature regulation is an important component of modern high performance single-core and multi-core processors. High operating frequencies and architectures with an increasing number of monolithically integrated transistors result in high power dissipation and, since processor chips convert the consumed electrical energy into thermal energy, in high operating temperatures. High operating temperatures can have drastic consequences for chip reliability, processor performance, and leakage currents. External components like fans or heat spreaders can help to reduce the processor temperature, at the cost of additional expense and reduced reliability. Software-based algorithms for dynamic temperature management are therefore an attractive alternative, well known as Dynamic Thermal Management (DTM). However, existing approaches to DTM do not take into account the requirements of real-time embedded computing, which is the objective of the given project. The first and basic steps in the DTM closed-loop control are the profiling and thermal modeling of the system, which are reported in this paper for a Freescale i.MX6Q quad-core microprocessor. An analytical model is developed and verified by an extensive set of measurement runs.
A Hybrid Cache Replacement Policy for Heterogeneous Multi-Cores
K. Mani Anandkumar (Anna University, Chennai, India); Akash S, DivyaLakshmi Ganesh and Monica Snehapriya Christy (Easwari Engineering College, India)
Future generation computer architectures endeavor to achieve high performance without compromising energy efficiency. In a multiprocessor system, cache misses degrade performance, as the miss penalty scales by an exponential factor across a shared memory system when compared to general purpose processors. This instigates the need for an efficient cache replacement scheme to cater to the data needs of the underlying functional units in case of a cache miss. Minimizing cache misses improves resource utilization and reduces data movement across the cores, which in turn contributes to higher performance and lower power dissipation. Existing replacement policies have several issues when implemented in a heterogeneous multi-core system; the commonly used LRU replacement policy does not offer optimal performance for applications with high dependencies. Motivated by the limitations of the existing algorithms, we propose a hybrid cache replacement policy which combines the Least Recently Used (LRU) and Least Frequently Used (LFU) replacement policies. Each cache block has two weighing values, corresponding to the LRU and LFU policies, and a cumulative weight is calculated from these two values. Through simulations over a wide range of cache sizes and associativities, we show that the proposed approach increases the cache hit-to-miss ratio compared with LRU and other conventional cache replacement policies.
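A toy software model of the cumulative-weight idea described above; the mixing factor and the normalisations are illustrative assumptions, not the paper's tuned values:

```python
class HybridCache:
    """Toy cache combining LRU recency and LFU frequency. The victim is the
    block with the lowest cumulative weight w = a*recency + (1-a)*frequency."""
    def __init__(self, capacity, a=0.5):
        self.capacity, self.a = capacity, a
        self.time = 0
        self.last_use = {}    # block -> last access time (LRU component)
        self.freq = {}        # block -> access count     (LFU component)

    def access(self, block):
        self.time += 1
        if block not in self.last_use and len(self.last_use) >= self.capacity:
            victim = min(self.last_use, key=self._weight)   # lowest weight evicted
            del self.last_use[victim]; del self.freq[victim]
        self.last_use[block] = self.time
        self.freq[block] = self.freq.get(block, 0) + 1

    def _weight(self, b):
        recency = self.last_use[b] / self.time            # normalised to (0, 1]
        freq = self.freq[b] / max(self.freq.values())     # normalised to (0, 1]
        return self.a * recency + (1 - self.a) * freq

c = HybridCache(capacity=2)
for b in ["A", "B", "A", "C", "A"]:
    c.access(b)
print(sorted(c.last_use))   # ['A', 'C']: 'A' survives, being recent and frequent
```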
Design of a High Speed, Low Power Synchronously Clocked NOR-based JK Flip-Flop Using Modified GDI Technique in 45nm Technology
Krishnendu Dhar (Jadavpur University, India)
This paper puts forward the design of a new high speed, low power, synchronously clocked NOR-based JK flip-flop employing a modified Gate Diffusion Input (GDI) technique in 45nm technology. Compared with synchronously clocked NOR-based JK flip-flops built from conventional CMOS transistors, transmission gates and Complementary Pass-Transistor Logic (CPL), the proposed design shows a considerable reduction in delay, average power consumption (Pavg) and Power Delay Product (PDP). The delay is as low as 2.42 ns while Pavg is as low as 11.19 µW, giving a PDP as low as 2.71 x 10^-14 J for a 0.9 V power supply. Furthermore, there is a remarkable reduction in transistor count compared to conventional synchronously clocked NOR-based JK flip-flops comprising CMOS transistors, transmission gates and CPL, suggesting a minimization of area. The simulation of the proposed design has been carried out in Tanner SPICE and the layout has been designed in Microwind.
Workload Characterization for Shared Resource Management in Multi-core Systems
Sapna Prabhu (Fr. Conceicao Rodrigues College of Engineering, India); Rohin D. Daruwala (V. J. Technological Institute, India)
The multi-core industry is facing a number of challenges which need to be addressed to achieve optimal performance from the multiple cores. One of the dominant causes of performance degradation in multi-core processors is hardware resource sharing between cores, and the degradation is largely dependent on the nature of the applications. To minimize this degradation, application characteristics need to be analyzed, and based on this a solution can be devised. This paper presents a consolidated survey of existing classification schemes and proposes a new classification scheme which can be applied online. Using this scheme, a dynamic cache partitioning mechanism is also proposed. Experimentation has been conducted by constructing workloads from the SPEC CPU2006 benchmark suite and running them on SIMICS (a full system simulator).
Decoder and Pass Transistor Based Digitally Controlled Linear Delay Element
Prachi Sharma and Anil Gupta (NIT Kurukshetra, India)
With advancement and scaling in integrated circuit design, the accuracy of clock circuitry plays an increasingly important role. Timing precision is important to reduce synchronization problems in digital circuits, and the precise timing of pulses is extremely important for designing asynchronous circuits, for which the delay element is the basic building block. Many digitally controlled delay element circuits have been reported in the literature, but they provide a delay which is a non-linear function of the digital inputs; this non-linearity affects timing accuracy, because of which they cannot be used in real-world applications. This paper presents a novel decoder and pass-transistor based delay element. The proposed circuit is designed for 3 bits and provides a delay which is a linear function of the digital inputs. The design is implemented in Cadence using 180 nm CMOS technology.
Performance Analysis of Alternate Adder Cell Structures Using Clocked and Non-Clocked Logic Styles At 45nm Technology
Bhagyalaxmi T (Vardhaman College of Engineering, India); Sandiri Rajendar (Vardhaman College of Engineering & Vardhaman College of Engineering, Jawaharlal Nehru Technological University, India); Y Pandu Rangaiah (Center for Advanced Computing Research Lab, India)
In this paper, a performance analysis of alternative full adder cell structures using various clocked and non-clocked logic styles is presented for deep-submicron CMOS technology. The circuits considered for comparison are: non-clocked logic styles, namely static, CPL, DPL, SR-CPL, modified SR-CPL and modified DPL; and clocked logic styles, namely dynamic logic, FTL and CD logic. All these full-adder cell structures are compared based on power and delay analysis. The design of the adder cells is carried out in the Cadence Virtuoso Analog Design Environment at 45nm CMOS process technology and simulated using the Spectre simulator.
A Novel High Performance Low Power CMOS NOR Gate Using Voltage Scaling and MTCMOS Technique
Ankish Handa (Maharaja Surajmal Institute of Technology, India); Jitesh Chawla and Geetanjali Sharma (Indraprastha University, India)
CMOS logic is extensively used in VLSI circuits, but due to technology scaling, the threshold voltage of the transistors used in CMOS circuits decreases, which causes an increase in leakage power. Dynamic power consumption, which is proportional to the square of the supply voltage VDD, further adds to the overall power dissipation, resulting in low battery life for mobile devices. In this brief, a novel method to curtail both dynamic and leakage power is proposed. The method combines voltage scaling with the Multi-Threshold CMOS (MTCMOS) technique, which reduces dynamic and static power dissipation respectively without degrading the circuit's performance. The proposed technique reduces power dissipation by 30% to 90% compared to conventional CMOS and other existing techniques. A 2-input NOR gate is implemented using the proposed VS-MTCMOS technique in the sub-threshold region over different temperatures. The Tanner EDA tool is used to simulate the designed circuit.
Hardware-Efficient FPGA Implementation of Symbol & Carrier Synchronization for 16-QAM
Sapta Girish Babu Neelam (Bharat Electronics Limited & IIT Guwahati, India)
This paper describes a hardware-efficient implementation of non-data-aided symbol timing synchronization and carrier phase synchronization for 16-QAM on a Kintex-7 FPGA using a feedback structure. A Costas loop is used for carrier phase synchronization, and pre-filtering is done for symbol timing synchronization to extract the timing information. Computer simulations are used to assess receiver performance in the presence of AWGN during the transient and steady states, and the results match the practical results. The receiver performance is measured in fixed-frequency mode for a high data rate.
Efficient System Level Cache Architecture for Multimedia SoC
Prashant Karandikar and Mihir N Mody (Texas Instruments, India); Hetul Sanghvi (Texas Instruments Inc, India); Vasant Easwaran (Texas Instruments, USA); Prithvi Shankar Y A (Texas Instruments, India); Rahul Gulati (Qualcomm, India); Niraj Nandan (Texas Instruments, USA); Subrangshu Das (Canon, India)
A typical multimedia SoC consists of hardware components for image capture and processing, video compression and decompression, computer vision, graphics and display processing. Each of these components accesses and competes for the limited bandwidth available in the shared external DDR memory, and the traditional solution of using a cache is not suitable for multimedia traffic. In this paper, we propose a novel cache architecture which benefits multimedia traffic in terms of DDR bandwidth savings and latency reduction. The proposed cache architecture uses a qualifier-based splitter, multiple fully associative configurable-feature caches and an arbiter, and is evaluated using an architectural model. The paper also proposes a newer application of this cache architecture as an infinite circular buffer for data buffer sharing across hardware components. The simulation results show a 50% improvement in DDR bandwidth for video decoder traffic.
Low Power Vedic Multiplier Using Energy Recovery Logic
Hardik Sangani, Tanay Modi and Kanchana VS Bhaaskaran (VIT University Chennai, India)
The multiplier is one of the primary hardware blocks in modern digital signal processing (DSP) and communication systems. It is extensively used in DSP and image processing applications such as the Fast Fourier Transform (FFT), convolution, correlation and filtering, and in the ALUs of microprocessors. Therefore, high speed, low area and power efficient multiplier design remains a critical factor for the overall system. This paper presents a high performance and energy efficient implementation of a binary multiplier. The design is based on the ancient Indian Vedic multiplication process and low power energy recovery (a.k.a. adiabatic) logic. The generation of partial sums and products in a single step in the Vedic approach, together with the energy recovery capability of adiabatic logic, realizes high speed and low power operation. A 16x16 Vedic multiplier and a conventional array multiplier based on Differential Cascode Pre-resolve Adiabatic Logic (DCPAL) are proposed in the paper. Simulation results validate the design, which incurs 87.21% less power than the equivalent standard CMOS design.
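A digit-level software model of the Urdhva Tiryagbhyam (vertical and crosswise) pattern such multipliers implement, where each result column sums all of its cross products in one step before a single carry pass; this is a functional sketch, not the DCPAL circuit:

```python
def urdhva_multiply(a_digits, b_digits, base=10):
    """Urdhva Tiryagbhyam: column c of the result sums all digit products
    a[i]*b[j] with i + j = c, then carries ripple once. Digits are given
    least-significant first."""
    cols = [0] * (len(a_digits) + len(b_digits))
    for i, a in enumerate(a_digits):
        for j, b in enumerate(b_digits):
            cols[i + j] += a * b            # all cross products of one column
    carry, out = 0, []
    for c in cols:                          # single carry-propagation pass
        carry, digit = divmod(c + carry, base)
        out.append(digit)
    return out

# 46 x 73 = 3358; digits least-significant first.
print(urdhva_multiply([6, 4], [3, 7]))      # -> [8, 5, 3, 3]
```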

S12: International Symposium on Cloud Computing: Architecture, Applications, and Approaches (CCA-2014) / International Workshop on Cloud Security and Cryptography (CloudCrypto'14)

Room: 108-A Block E First Floor
Chairs: Ganesh Deka (MIR Labs, India), Noor Mahammad Sk (Indian Institute of Information Technology Design and Manufacturing (IIITDM) Kancheepuram, India)
Design and Implementation of a Forensic Framework for Cloud in OpenStack Cloud Platform
Saibharath S (BITS Pilani Hyderabad Campus, India); Geethakumari G (BITS-Pilani, Hyderabad Campus, India)
In this paper, a forensic framework is developed to perform cloud forensics in OpenStack for the Infrastructure-as-a-Service model using existing forensic tools. For the instances allotted to a user, snapshots of volatile random access memory and images of the hard disk (Cinder), from the specific path where it is mounted, have to be acquired for forensics. In addition to internal, external and floating IP addresses, for every task or modification a cloud end user performs through the cloud API or dashboard (in the OpenStack cloud platform), packets are transferred through the ISP before the changes are applied in the cloud setup, so network forensics is an integral part of cloud forensics. Our forensic framework obtains live snapshots, image evidence, packet captures and log evidence, and performs analysis on them. Simulation is carried out with the Digital Forensics Framework on block storage image files and live snapshots, Wireshark on raw network captures, and XML and Java for structuring log files. The cloud forensic process for image acquisition and analysis is defined by the steps used in the simulation. Two scenarios of integrity checking in object storage, simulated through JSch, are detailed, and a discussion on identifying various attacks from the obtained evidence is elaborated.
A Case for CDN-as-a-Service in the Cloud: A Mobile Cloud Networking Argument
Florian Dudouet (Zurich University of Applied Sciences, Switzerland); Piyush Harsh (Zurich University of Applied Sciences & ICCLab, Switzerland); Santiago Ruiz (Soft Telecom, Spain); Andre S. Gomes (University of Bern, Switzerland); Thomas Michael Bohnert (Zurich University of Applied Sciences, Switzerland)
Content Distribution Networks are mandatory components of modern web architectures, with plenty of vendors offering their services. Despite its maturity, new paradigms and architecture models are still being developed in this area. Cloud computing, on the other hand, is a more recent concept which has expanded extremely quickly, with new services regularly added to cloud management software suites such as OpenStack. The main contribution of this paper is the architecture and development of an open source CDN that can be provisioned in an on-demand, pay-as-you-go model, thereby enabling the CDN-as-a-Service paradigm. We describe our experience with the integration of the CDNaaS framework in a cloud environment as a service for enterprise users, and emphasize the flexibility and elasticity of such a model, with each CDN instance delivered on demand and associated with personalized caching policies as well as an optimized choice of Points of Presence based on the exact requirements of an enterprise customer. Our development is based on the framework developed in the Mobile Cloud Networking EU FP7 project, which offers its enterprise users a common framework to instantiate and control services. CDNaaS is one of the core support components in this project and is tasked with delivering different types of multimedia content to several thousand geographically distributed users. It integrates seamlessly into the MCN service life-cycle and as such enjoys all the benefits of a common design environment, allowing for improved interoperability with the rest of the services within the MCN ecosystem.
Study and Analysis of Various Task Scheduling Algorithms in the Cloud Computing Environment
Teena Mathew (Mahatma Gandhi University, India); Chandra Sekaran K (National Institute of Technology Karnataka, India); John Jose (Rajagiri School of Engineering & Technology Kochi & IIT Madras, India)
Cloud computing is a novel paradigm for large scale distributed computing and parallel processing, providing computing as a utility service on a pay-per-use basis. The performance and efficiency of cloud computing services always depend upon the performance of the user tasks submitted to the cloud system, and the scheduling of user tasks plays a key role in improving the performance of cloud services. Task scheduling is one of the main types of scheduling performed. This paper presents a detailed study of various task scheduling methods existing for the cloud environment, together with a brief analysis of the scheduling parameters considered in these methods.
A Robust Scheme on Proof of Data Retrievability in Cloud
Nitin Chauhan (Infosys Technologies Ltd, Hyderabad India, India); Ashutosh Saxena (Infosys Technologies Limited, India)
In the era of rampant adoption of information technology and social networking, data generation is rapidly outpacing data storage resources. Cloud computing has emerged as a cost effective, flexible and scalable alternative to address the issue of increasing resource requirements. Though the advantages offered by such services appear attractive, there are certain inherent challenges related to trust and data security; ensuring data integrity and retrievability in the cloud is one such challenge. The proof of retrievability (POR) concept tries to establish assurance for cloud customers regarding the correctness and completeness of their data. In this paper, we evaluate an existing POR scheme and propose a new crypto-based scheme which reduces the amount of computation and storage on the cloud customer side while establishing POR. In our scheme the client is not required to store any large set of data locally, except a secret key required for encryption. Compared to the previous scheme, we also avoid the necessity of encrypting the entire data at the client side, thereby saving client computational resources. The proposed scheme is relevant for large static data such as video files, audio files and social networking data.
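For orientation, a generic keyed spot-checking construction in the spirit of POR, where the client's only local state is a secret key and tags are stored alongside the data at the provider; this is a textbook-style sketch, not the authors' scheme:

```python
import hmac, hashlib, os, random

KEY = os.urandom(32)                      # the only state the client keeps
BLOCK = 4096

def tag(index, data):                     # keyed tag the server cannot forge
    return hmac.new(KEY, index.to_bytes(8, "big") + data, hashlib.sha256).digest()

# Outsourcing: the client uploads (block, tag) pairs and keeps nothing but KEY.
blocks = [os.urandom(BLOCK) for _ in range(100)]
server = [(b, tag(i, b)) for i, b in enumerate(blocks)]

# Audit: the client challenges random indices and re-derives the tags.
def audit(sample=10):
    for i in random.sample(range(len(server)), sample):
        data, t = server[i]               # returned by the storage provider
        if not hmac.compare_digest(t, tag(i, data)):
            return False                  # corruption or loss detected
    return True

print("retrievable:", audit())
```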
Biometrics Template Security on Cloud Computing
Heba Sabri (Sadat Academy for Management Sciences, Egypt); Kareem Kamal A. Ghany (Beni Suef University, Egypt); Hesham Hefny (Cairo University & Institute of Statistical Studies and Research, Egypt); Nashaat Elkhameesy (Sadat Academy for Management Sciences, Egypt)
Cloud computing is the concept of delivering services remotely over a network while minimizing the resources required of the user, with these resources provided via the Internet. Many critical problems have appeared with cloud computing, such as data privacy, security and reliability, of which security is the most important. In this paper, the proposed approach is to address data security concerns by using a bio-hash function for biometric template security, enhancing security performance in the cloud from the perspectives of different cloud customers. Experiments using the well-known benchmark CASIA Fingerprint V5 dataset show that the bio-hash function approach is more efficient in protecting the biometric template than the crypto-biometric authentication approach, and the error rate is reduced by 25%.
A Review of Adaptive Approaches to MapReduce Scheduling in Heterogeneous Environments
Nenavath Srinivas Naik and Atul Negi (University of Hyderabad, India); Sastry (IDRBT, India)
MapReduce is currently a significant model for distributed processing of large-scale data-intensive applications. The default MapReduce scheduler is limited by the assumptions that the nodes of the cluster are homogeneous and that tasks progress linearly; this model is used to decide on the speculative re-execution of straggler tasks. The assumption of homogeneity does not always hold in practice: MapReduce does not fundamentally consider the heterogeneity of nodes in computer clusters, and it is evident that total job execution time is extended by straggler tasks in heterogeneous environments. Adaptation to a heterogeneous environment depends on computation and communication, architectures, memory and power. In this paper, we first explain existing scheduling algorithms and their respective characteristics. We then review approaches such as LATE, SAMR and ESAMR, which are aimed specifically at making MapReduce performance adaptive in heterogeneous environments. Additionally, we introduce a novel approach to MapReduce scheduling in heterogeneous environments that is adaptive and thus learns from past execution performance.
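The straggler-selection idea at the heart of the LATE heuristic reviewed above can be sketched as follows; this is simplified, since the real scheduler also caps concurrent speculation and avoids launching copies on slow nodes:

```python
def pick_speculative_task(tasks, now):
    """LATE-style choice: speculatively re-execute the running task with the
    longest estimated time left, computed from its progress rate. Each task
    dict carries a 'start' time and a 'progress' fraction in [0, 1]."""
    def time_left(t):
        elapsed = now - t["start"]
        rate = t["progress"] / elapsed if elapsed > 0 else float("inf")
        return (1.0 - t["progress"]) / rate if rate > 0 else float("inf")
    running = [t for t in tasks if t["progress"] < 1.0]
    return max(running, key=time_left, default=None)

tasks = [
    {"id": "map-1", "start": 0.0, "progress": 0.9},
    {"id": "map-2", "start": 0.0, "progress": 0.3},   # straggler on a slow node
    {"id": "map-3", "start": 5.0, "progress": 0.5},
]
print(pick_speculative_task(tasks, now=10.0)["id"])   # -> map-2
```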

S2-A: Second International Symposium on Green Networks and Distributed Systems (GNDS-2014)

Room: 110 Block E First Floor
Chair: Christian Callegari (RaSS National Laboratory - CNIT & University of Pisa, Italy)
Cross-Layer Energy Model for Relay Assisted 802.15.4 Networks in a Non-Beacon-Enabled Mode
Sankalita Biswas (Burdwan, West Bengal, India); Aniruddha Chandra (National Institute of Technology, Durgapur, WB, India); Sanjay Dhar Roy (National Institute of Technology Durgapur, India)
Cross-layer models are becoming popular in various wireless networking domains due to their realistic predictions and the fundamental understanding they provide of the interaction between adjacent networking layers. A combined PHY/MAC-layer energy consumption model is considered here for short-range IEEE 802.15.4 networks in non-beacon-enabled mode, for dual-hop transmission under a Rayleigh fading channel. The cross-layer model is developed for both AF and DF relays, and a comparison of energy efficiency is made between these two relay types. In particular, we focus on how the new model differs from single-layer models (either PHY or MAC) in terms of energy efficiency.
An Approach for A Reduced Response Time and Energy Consumption in Mixed Task Set Using a Priority Exchange Server
Sarla Mehariya (Rajasthan Technical University, India); Ved Mitra (MNIT, Jaipur, India); Mahesh Chandra Govil (Malaviya National Institute of Technology, Jaipur INDIA & Malaviya National Institute of Technology, Jaipur INDIA, India); Dalpat Songara (Rajasthan Technical University, India)
Energy is an important factor affecting the performance of battery operated real-time and embedded systems, and various techniques have been employed to limit energy dissipation. Dynamic voltage and frequency scaling (DVFS) is one of the most popular techniques for energy conservation in such systems and is a well researched area. This paper presents an energy-conscious real-time scheduling algorithm, DVFSPES (DVFS with a Priority Exchange Server), for mixed task sets comprising periodic and aperiodic tasks. It uses an Earliest Deadline First (EDF) based Priority Exchange Server. The results of DVFSPES are compared on various performance metrics with EEDVFS, an energy efficient DVFS algorithm that uses an EDF-based Deferrable Server. Depending on the task set used, DVFSPES can provide up to 50% improvement in response time for aperiodic tasks without compromising the deadlines of the periodic tasks.

S2-B: Second International Workshop on Energy Efficient Wireless Communications and Networking (EEWCN 2014)

Room: 110 Block E First Floor
Chair: Christian Callegari (RaSS National Laboratory - CNIT & University of Pisa, Italy)
Energy Efficient Two Level Distributed Clustering Scheme to Prolong Stability Period of Wireless Sensor Network
Manpreet Kaur, Abhilasha Jain and Ashok Kumar Goel (Gaini Zail Singh Punjab Technical University Campus, Bathinda, India)
Energy efficiency, fault tolerance, scalability, connectivity and reliability are major challenges in wireless sensor networks, and many clustering algorithms have been proposed to handle them. Heterogeneous schemes are closer to real situations, and many routing protocols have been proposed for clustering schemes based on heterogeneity. In this paper, an Energy Efficient Two Level Distributed Clustering (EE-TLDC) scheme is proposed with two levels of cluster heads for three-level heterogeneous networks. The proposed technique reduces the transmission cost of cluster heads in order to reduce energy consumption in the network. Simulation shows that the proposed scheme prolongs the stability period and reduces energy consumption in the network.
An Effective Heuristic for Construction of ALL-to-ALL Minimum Power Broadcast Trees in Wireless Networks
Wilson Naik Bhukya (University of Hyderabad, India)
All-to-all broadcast refers to the process by which every node communicates with every other node in the network. The all-to-all broadcast problem seeks a broadcast tree scheme, using a minimum unique cast tree (MUCT), with minimum energy consumption; the minimum all-to-all power broadcast problem is NP-hard. This work proposes an energy efficient heuristic to find minimum-power shared all-to-all broadcast trees in wireless networks. Simulation results on numerous problem instances confirm that the proposed heuristic significantly outperforms existing heuristics in terms of all-to-all broadcast power minimization.
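As a baseline against which such heuristics are usually judged, a common MST-based construction: assign each node enough power to reach its farthest tree neighbour, exploiting the wireless multicast advantage. This is a generic illustration with an assumed path-loss exponent, not the proposed MUCT heuristic:

```python
import math, itertools
import networkx as nx

ALPHA = 2   # path-loss exponent: power to span distance d grows as d**ALPHA

nodes = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (3, 1), 4: (2, 3)}
G = nx.Graph()
for u, v in itertools.combinations(nodes, 2):
    G.add_edge(u, v, weight=math.dist(nodes[u], nodes[v]) ** ALPHA)

mst = nx.minimum_spanning_tree(G)
# Broadcast power of a node = cost of reaching its farthest MST neighbour
# (one transmission covers all nearer neighbours).
power = {u: max(mst[u][v]["weight"] for v in mst[u]) for u in mst}
print(power, "total:", round(sum(power.values()), 2))
```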
CHATSEP: Critical Heterogeneous Adaptive Threshold Sensitive Election Protocol for Wireless Sensor Networks
Rajesh Kumar Yadav (Delhi Technological University, India); Arpan Jain (Delhi Technological University, India)
Networking together hundreds to thousands of limited-energy, battery powered nodes yields a Wireless Sensor Network (WSN). The use and applicability of WSNs has increased in diverse areas like vehicular movement, weather monitoring, security and surveillance, and industrial applications. The limited-power nodes in a WSN sense the environment and send the desired information to a processing centre (base station), either directly or via an optimization mechanism. In this paper we propose CHATSEP, a clustering protocol for reactive networks with threshold-sensitive heterogeneous sensor nodes. It includes an adaptive characteristic which keeps the base station aware of the status of nodes that have been idle for a long time, helping the base station analyze the network information dynamically and efficiently. It also incorporates a critical threshold: any information of utmost importance in the network, which, when sensed, has to be sent to the base station with the highest priority. Our proposed protocol, with its adaptive nature and handling of critical information, is observed to perform better than conventional clustering protocols like LEACH, SEP and TSEP in terms of stability period, network lifetime and throughput for a temperature-sensing application.
pptx file
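As a rough illustration of the reactive, threshold-sensitive reporting rule sketched in the CHATSEP abstract above, the following Python fragment shows how a node might decide whether to stay silent, report, or escalate; the threshold values and function names are ours, not the paper's:

```python
# Minimal sketch of threshold-sensitive reactive reporting (illustrative
# values; CHATSEP's actual election and threshold rules are defined in the paper).

HARD_THRESHOLD = 70.0      # report when the sensed value first exceeds this
SOFT_THRESHOLD = 2.0       # ...and afterwards only when it changes by this much
CRITICAL_THRESHOLD = 95.0  # always report immediately, highest priority

def should_transmit(sensed, last_reported):
    """Decide whether a node transmits in the current round."""
    if sensed >= CRITICAL_THRESHOLD:
        return "critical"                      # send with highest priority
    if sensed >= HARD_THRESHOLD:
        if last_reported is None:              # first crossing of hard threshold
            return "report"
        if abs(sensed - last_reported) >= SOFT_THRESHOLD:
            return "report"                    # significant change since last report
    return None                                # stay silent, save energy

# Example: a node sensing a slowly rising temperature
last = None
for value in [65.0, 71.0, 72.5, 73.4, 96.0]:
    action = should_transmit(value, last)
    if action:
        last = value
    print(value, "->", action)
```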
Performance Analysis of LT Codes and BCH Codes in RF and FSO Wireless Sensor Networks
Anuj Nayak (North Carolina State University, USA)
Free Space Optical (FSO) Wireless Sensor Networks (WSNs) are immune to electromagnetic interference, license-free, and characterized by high bandwidth and ease of deployment, but they suffer a major drawback in turbulent atmospheres. We therefore consider a hybrid FSO/RF energy-aware WSN with reliable data communication using Error Control Coding (ECC) methods. LT codes are a class of rateless codes used for recovering data from transmitted symbols that are prone to erasure; they are normally applied for packet recovery in high packet loss scenarios. In this paper, we implement LT codes for Forward Error Correction (FEC). We consider BPSK subcarrier intensity-modulated FSO communication in which weak received symbols are discarded, with an optimal erasure zone, as a pre-processing stage in the LT decoder. We determine the optimal symbol-erasure zone and also impose an upper bound on the BPSK threshold. The performances of LT codes and BCH codes are analysed for FSO and RF channels under different atmospheric conditions and link ranges, using energy consumption per bit as the performance metric for comparison. The proposed method for LT codes was observed to be more energy-efficient than the BCH code in highly turbulent atmospheric conditions.
pptx file
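The erasure-zone pre-processing described above can be illustrated with a small sketch: received BPSK amplitudes that fall inside a zone around zero are declared erasures instead of being hard-decided, which is the kind of input an LT (rateless) decoder expects. The zone width below is illustrative; the paper derives the optimal value:

```python
import numpy as np

# Sketch of the erasure-zone pre-processing step: received BPSK symbols whose
# amplitude falls inside (-T, +T) are treated as erased rather than hard-decided,
# so the LT decoder sees erasures instead of likely bit errors.

def erase_weak_symbols(received, threshold):
    """Map received amplitudes to 0/1 bits, or None for erased symbols."""
    bits = []
    for r in received:
        if abs(r) < threshold:
            bits.append(None)          # too unreliable: declare an erasure
        else:
            bits.append(1 if r > 0 else 0)
    return bits

rng = np.random.default_rng(0)
tx = rng.integers(0, 2, 10)                    # transmitted bits
rx = (2 * tx - 1) + rng.normal(0, 0.8, 10)     # BPSK over an AWGN-like channel
print(erase_weak_symbols(rx, threshold=0.4))
```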

S2-C: QoS and Resource Management

Room: 110 Block E First Floor
Chair: Christian Callegari (RaSS National Laboratory - CNIT & University of Pisa, Italy)
Energy Efficient QoS Provisioning: A Multi-Colony Ant Algorithm Approach
Sunita Prasad (Centre for Development of Advanced Computing, India); Zaheeruddin (Jamia Millia Islamia (A Central University), India); Daya Krishan Lobiyal (Jawaharlal Nehru University, New Delhi, India)
With the advancement of technology, there is a growing desire to provide support for real-time multimedia services over wireless ad hoc networks. These applications demand strict Quality-of-Service (QoS) guarantees from the underlying network in terms of delay, delay jitter and packet loss. However, the limited battery life of nodes in a wireless network poses significant challenges in finding QoS-optimized multicast routes: the depletion of a node's energy resources may lead to a disjoint network that severely affects the QoS requirements of the application. In this paper, we investigate the problem of building a multicast tree that aims at energy conservation while meeting the QoS demands of the application. We formulate the issue as a multiobjective optimization problem and apply a multi-colony ant approach to find the Pareto front for the new problem. Experimental results are presented to demonstrate the effectiveness of the algorithm, and the results are compared with the Elitist Non-dominated Sorting Genetic Algorithm (NSGA-II).
Performance Analysis of a Short Flow Favoring TCP
Most flows in the Internet run over TCP, and most flows in the Internet are short. These TCP flows are predominantly disadvantaged in networks due to their short time to react to the environment along the flow path. There are arguments proposing that TCP favor short flows to counter this imbalance: it would improve user experience, reduce the energy consumption of wireless devices and bring economic benefits to service providers. We describe sender-side-only TCP modifications to favor short flows and validate them through a Linux implementation and evaluation in real networks. We contrast the flow metrics against Reno and Cubic flows. The modified TCP is found to increase the goodput of short flows by more than 30% in typical settings, with a corresponding decrease in flow completion time, without taking much bandwidth share from long flows. The conditional suitability of the algorithm for deployment in the Internet is endorsed.
ppt file
Evaluation of Queuing Algorithms on QoS Sensitive Applications in IPv6 Network
Junaid Latief Shah and Javed Parvez (University of Kashmir, India)
Quality of Service (QoS) is an important network performance parameter with significant impact on real-time applications like VoIP, interactive gaming and video streaming. Although IPv6 was designed to improve on the addressing, security and QoS of IPv4, services like video conferencing and VoIP, with their strong reliance on and sensitivity to delay and jitter, pose a daunting challenge in today's packet-based networks. These parameters must stay well below the tolerance level so that the service does not degrade. In this paper we discuss the various parameters and dimensions on which network QoS depends, as well as the mechanisms used by IPv4 and IPv6 to implement QoS. Finally, a performance analysis of different queuing algorithms like FIFO, WFQ and PQ is carried out to study their impact on real-time applications in an IPv6 environment. A simulation framework based on OPNET Modeler 14.5 is used to model, simulate and analyze network behavior.
zip file
EQC16: An Optimized Packet Classification Algorithm for Large Rule-Sets
Uday Trivedi (Samsung R&D Institute, Bangalore, India); Mohan Lal Jangir (Samsung Research India, Bangalore (INDIA), India)
Packet classification is a well-researched field. However, none of the existing algorithms works well for very large rule-sets of up to 128K rules. Further, with the advent of IPv6, the number of rule field bytes is going to increase from around 16 to 48, and with more field bytes both memory usage and classification speed are badly affected. EQC16 attempts to solve this particular problem. It borrows its design from the ABV (Aggregated Bit-Vector) algorithm and adds some effective optimizations: a 16-bit lookup to reduce memory accesses, min-max rule information to narrow the search scope, and the combination of two 8-bit fields for fast search. It has very high classification speed, reasonable memory requirements and small preprocessing time for large rule-sets, and it supports real-time incremental updates. The EQC16 algorithm was evaluated and compared with the existing decomposition-based algorithms BV (Bit-Vector), ABV and RFC (Recursive Flow Classification). The results indicate that EQC16 outperforms both BV and ABV in terms of classification speed, and RFC in terms of preprocessing time and its incremental update feature.
pptx file
An Optimized RFC Algorithm with Incremental Update
Uday Trivedi (Samsung R&D Institute, Bangalore, India); Mohan Lal Jangir (Samsung Research India, Bangalore (INDIA), India)
RFC (Recursive Flow Classification) is one of the best packet classification algorithms. However, RFC has moderate to prohibitively high preprocessing time for rule-sets of more than 10K rules, and it does not provide incremental update. Due to these missing features, RFC is used only in limited scenarios. This paper attempts to add these essential features to RFC. Our algorithm uses various memory and processing optimizations to speed up the RFC preprocessing phase. We provide an algorithm to compute only those CBM (Class Bit Map) intersections for which corresponding value pairs are found in rules, and we optimize CBM intersection using the ABV algorithm and min-max rule information. We also propose an optimized algorithm to manage real-time incremental updates in RFC: it modifies only the required parts of the RFC tables and makes sure that the updated tables hold information in the correct order. Incremental update requires a moderate amount of extra memory. We tested our algorithm for preprocessing time and the incremental update feature; the results indicate a moderate improvement in preprocessing time with real-time incremental updates in our modified RFC.
pptx file
An Optimal Range Matching Algorithm for TCAM Software Simulation
Mohan Lal Jangir (Samsung Research India, Bangalore (INDIA), India); Uday Trivedi (Samsung R&D Institute, Bangalore, India)
This paper presents an algorithm for matching a search key against multiple entries with arbitrary ranges. These entries are referred to as the range fields of rules. Range matching is an important feature required in routers and gateways to implement policy-based routing or firewalls. It is usually provided by TCAM, which can be implemented either in hardware or software; this paper presents an algorithm to simulate software TCAM. The algorithm can match a 16-bit or larger search key against multiple range fields by decomposing the key and range field into 8-bit sub-fields. The paper explains an application of this algorithm using the Lucent Bit Vector to heavily optimize memory consumption. The algorithm expands a range field into at most 3 range fields for 16-bit range matching; similarly, the worst-case expansion for 24-bit and 32-bit range fields is 5 and 7 respectively.
pptx file
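The claimed worst-case expansion (at most 3 pieces for a 16-bit range) follows from splitting the range on 8-bit sub-field boundaries. A minimal sketch with our own variable names, assuming a head/middle/tail split of the kind described:

```python
# Sketch of the decomposition idea: a 16-bit range [lo, hi] is split on 8-bit
# sub-field boundaries into at most three sub-ranges, each of which has either
# a fixed high byte or a full (0x00-0xFF) low byte, so each piece can be
# matched with two 8-bit comparisons. Names are ours, not the paper's.

def split_range_16(lo, hi):
    hb_lo, hb_hi = lo >> 8, hi >> 8
    if hb_lo == hb_hi:                      # same high byte: one piece suffices
        return [(lo, hi)]
    pieces = []
    if lo & 0xFF:                           # head: partial low byte, fixed high byte
        pieces.append((lo, (hb_lo << 8) | 0xFF))
        hb_lo += 1
    if hi & 0xFF != 0xFF:                   # tail: partial low byte, fixed high byte
        tail = ((hb_hi << 8), hi)
        hb_hi -= 1
    else:
        tail = None
    if hb_lo <= hb_hi:                      # middle: full low byte over a run of high bytes
        pieces.append(((hb_lo << 8), (hb_hi << 8) | 0xFF))
    if tail:
        pieces.append(tail)
    return pieces                           # never more than 3 pieces

print([tuple(map(hex, p)) for p in split_range_16(0x12A0, 0x45B3)])
# [('0x12a0', '0x12ff'), ('0x1300', '0x44ff'), ('0x4500', '0x45b3')]
```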
Performance Analysis of a Machine-to-Machine Friendly MAC Algorithm in LTE-Advanced
Poonacha G (IIITB, India); Ameneh Pourmoghadas (IIIT-B, India)
One of the main challenges for Machine Type Communication (MTC) services in 3GPP LTE networks is to improve device power efficiency for Machine-to-Machine (M2M) communication. We consider MTC application scenarios in which a number of devices connect to the network concurrently. The methods considered in TR 37.868 involve Disjoint Allocation (DA) and Joint Allocation (JA). The DA scheme separates the available RACH (random access channel) preambles into two disjoint sets for Human-to-Human (H2H) and M2M calls; in the JA scheme the H2H calls also have access to the preamble set of M2M consumers. Performance analyses of these schemes assume that both H2H and M2M calls follow Poisson traffic models. In this paper we consider a joint allocation scheme in which M2M calls have access to H2H preambles, as we expect more M2M calls in the future. We derive and present analytical expressions for the throughput, collision probability, success probability and idle probability of the DA and JA methods, based on the assumption that the RA (random access) arrivals of M2M and H2H devices follow Beta and Poisson distributions respectively. The energy consumption of the two methods is compared. Results show that with the new JA method we can reduce the number of collisions and cut the power consumption of M2M devices by up to 4%.
pdf file
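A quick Monte-Carlo sketch of the underlying preamble-contention model (not the paper's analytical derivation): each active device picks one of N preambles per random-access slot, and a preamble succeeds only when chosen exactly once. Parameter values are illustrative:

```python
import random

# Monte-Carlo sketch of preamble contention: in each random-access slot every
# active device picks one of n_preambles uniformly; a preamble chosen by
# exactly one device succeeds, one chosen by several collides. Parameters are
# illustrative (54 contention preambles is a common LTE configuration).

def simulate_slot(n_devices, n_preambles, trials=20_000, seed=1):
    rng = random.Random(seed)
    success = collision = idle = 0
    for _ in range(trials):
        counts = [0] * n_preambles
        for _ in range(n_devices):
            counts[rng.randrange(n_preambles)] += 1
        success += sum(1 for c in counts if c == 1)
        collision += sum(1 for c in counts if c > 1)
        idle += sum(1 for c in counts if c == 0)
    total = trials * n_preambles
    return success / total, collision / total, idle / total

print(simulate_slot(n_devices=30, n_preambles=54))  # (P_success, P_collision, P_idle)
```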
Development of CCSDS Proximity-1 Protocol for ISRO's Extraterrestrial Missions
Akshay Sharma (University of Bristol, India); Unnikrishnan E (ISAC, India); Ravichandran V. (ISRO, India); Natarajan Valarmathi (ISRO Satellite Centre, India)
The Proximity-1 protocol has been key to successful communication link establishment in a number of extraterrestrial missions, and in recent years ISRO has flown missions to the Moon and Mars. Given the reliability of the Proximity-1 protocol over short distances, it was decided to develop the protocol for ISRO's future space missions wherever a short-haul communication link is required between rovers, landers and orbiters. In this paper we discuss the implementation of the Proximity-1 protocol with reference to the requirements of Indian spacecraft, along with the modifications and customizations made during implementation within the purview of the protocol standard. The requirements and feasibility of new features such as a proximity safe mode and a data interface unit are discussed, with possible scenarios that may employ them. The safe mode allows usage of all proximity features, but with greater protocol control and reduced complexity; the data interface unit allows Proximity-1 to function as an independent module and communicate with higher layers via high-speed interfaces. The paper details the responses to directives and the generation of notifications at the various sublayers of the protocol. The protocol was developed and tested on an FPGA platform; the test setup and testing strategies used to evaluate its performance at baseband level are also discussed.
ppt file

S3-A: Parallel and Distributed Algorithms

Room: 108-B Block E First Floor
Chair: Mallesham Dasari (Stony Brook University, USA)
Moldable Load Scheduling Using Demand Adjustable Policies
Sachin Bagga (GURU NANAK DEV ENGG. COLLEGE, LUDHIANA, India)
Workload distribution among processors is only one side of the task; the other is consistent management of processor availability under bulk job arrivals, which is an aspect of resource management. Parallel systems with a high probability of unbounded job arrivals and varying processor demands require considerable adjustment effort to map the processor space onto the job space. Each job has its own required characteristics, such as the number of processors, while the available resources have different characteristics, and processors with exactly the characteristics a job demands are usually not available. Such scenarios are adjusted to adopt moldable parallel characteristics. Rigid approaches are static demand-fit allocation schemes in which a job becomes an active task only once the scheduler satisfies its processor demand. The current research focuses on demand adjustment schemes, considering a synthetically generated workload and a processor availability map with discrete clock frequencies. Illustrations are produced on the basis of a simulation study of demand adjustment schemes, comprising static and dynamic approaches, with the aim of a consistent processor-availability fit (i.e., the processor offered space). The idea behind this experimental study is to analyze various scheduling algorithms, along with different performance parameters, for best managing the processor space.
pptx file
Accelerating the DNA Sequence Reconstruction Problem in Approximate Algorithm by CUDA
Yukun Zhong and JiaoBiao Lin (Road no 1 PengShan SiChuan, P.R. China); BaoQiu Wang and Che Nian (Sichuan University Jinjiang College, P.R. China); Chen Tao (Sichuan University Jinjiang College, P.R. China)
Traditionally, the shotgun approach to DNA sequencing is one of the main methods of bioinformatics; it breaks a long DNA sequence into small fragments. This paper introduces a new method to improve the efficiency of DNA sequence reconstruction after shotgun sequencing, by constructing a suffix array based on the CUDA programming model. The experimental results, obtained on an Intel(R) Core(TM) i3-3110K quad-core CPU and an NVIDIA GeForce 610M GPU (Kepler architecture), show that constructing the suffix array on the GPU is the more efficient approach: the study shows the method achieves more than a 20-times speedup over the serial CPU implementation.
pptx file
A Real-Time Stereo Rectification of High Definition Image Stream Using GPU
Pritam Prakash Shete, Dinesh Sarode and Surojit Bose (Bhabha Atomic Research Centre, India)
Stereo vision systems are essential for 3D object reconstruction, industrial automation, telerobotics and much more. The stereo rectification process aligns the left and right images such that their respective image rows are accurately aligned with each other. In this paper, we propose and implement real-time stereo rectification of a high-definition stereo image stream coming from IP cameras, for comfortable stereoscopic perception. We utilize the GStreamer multimedia framework for image acquisition and the GPU, via OpenGL, for real-time stereo rectification, making use of OpenGL concepts like the framebuffer object and the vertex buffer object for real-time image remapping. We divide the stereo rectification process into two phases: initially we compute the stereo rectification inverse lookup maps once using the OpenCV library, and subsequently we apply these inverse maps to each incoming image frame using OpenGL. We compare our results with an OpenCV realization using optimized AVX instructions and with the CUDA framework. Our OpenGL-based module performs stereo rectification of a full high-definition stereo image stream at about 70 frames per second.
ppt file
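The two-phase split (compute the inverse lookup maps once, then remap every frame) can be sketched with the OpenCV calls that realize the first phase; the paper applies the per-frame remapping on the GPU via OpenGL, for which cv2.remap below merely stands in. The calibration inputs (K1, D1, K2, D2, R, T) are assumed to come from a prior stereo calibration:

```python
import cv2

# Phase 1: compute the rectification lookup maps once (OpenCV).
# Phase 2: apply them to every incoming frame (done on the GPU in the paper;
# cv2.remap is the CPU stand-in here).

def build_maps(K1, D1, K2, D2, image_size, R, T):
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, image_size, R, T)
    left_maps = cv2.initUndistortRectifyMap(K1, D1, R1, P1, image_size, cv2.CV_32FC1)
    right_maps = cv2.initUndistortRectifyMap(K2, D2, R2, P2, image_size, cv2.CV_32FC1)
    return left_maps, right_maps

def rectify_frame(frame, maps):
    # Per-frame work reduces to a single remap with the precomputed maps.
    return cv2.remap(frame, maps[0], maps[1], cv2.INTER_LINEAR)
```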
GPU Accelerated Inexact Matching for Multiple Patterns in DNA Sequences
Priyank Rastogi (National Institute of Technology Karnataka, India); Ram Guddeti (National Institute of Technology Karnataka, Surathkal, India)
DNA sequencing technology generates millions of patterns on every run of the machine, and it is a challenge to match these patterns to the reference genome effectively at high execution speed. The main idea here is inexact matching of patterns with mismatches and gaps (insertions and deletions): a pattern is matched against the DNA sequence with some allowed number of errors, here 2, where an error can be a mismatch or a gap. Existing algorithms such as SOAP3 perform inexact matching on the GPU with mismatches only, not gaps. A General Purpose Graphical Processing Unit (GPGPU) is an effective solution in terms of cost and speed, providing a high degree of parallelism. This paper presents a parallel CUDA implementation of multiple-pattern inexact matching against a reference genome, based on the BWT. The algorithm incorporates a DFS (Depth First Search) strategy; for matching multiple patterns, each GPGPU thread is given a different pattern, so millions of patterns can be matched using only one CUDA kernel. Since GPU memory is limited, memory management is handled carefully, and synchronization of multiple threads prevents illegal access to shared memory. GPU results are compared with CPU execution: experimental results of the proposed methodology achieve an average speedup factor of seven over CPU execution.
pptx file
Towards a New Way of Reliable Routing: Multiple Paths Over ARCs
Foued Melakessou (University of Luxembourg, Luxembourg); Maria Rita Palattella (Luxembourg Institute of Science and Technology (LIST), Luxembourg); Thomas Engel (University of Luxemburg, Luxembourg)
The Available Routing Constructs (ARCs), recently proposed at the IETF, provide a promising model for achieving highly reliable routing in large-scale networks. Among its features, ARC offers multi-path routing by design. In the present work we introduce ARCs for the first time to the research community. We then show, by means of simulation results, how ARC outperforms classical multi-path routing algorithms by building disjoint multiple paths without the extra cost of new route computation.
pptx file

S3-B: International Workshop on Big Data Search and Mining (IWBSM-2014)

Room: 108-B Block E First Floor
Chairs: Mallesham Dasari (Stony Brook University, USA), Asif Ekbal (IIT Patna, India)
Secure Code Assignment to Alphabets Using Modified Ant Colony Optimization Along with Compression
Hitesh Hasija (SENIOR SOFTWARE ENGINEER, India); Rahul Katarya (Delhi Technological University, India)
Assigning ASCII values to plaintext alphabets when transmitting documents is very common, and the ASCII value of 'a' is less than that of 'z'. But what happens if a document contains more occurrences of 'z' than of 'a'? In that case, achieving security through encryption-decryption together with compression is crucial. This paper proposes a technique for assigning codes to plaintext alphabets based on the frequency of occurrence of those alphabets, incorporating ant colony optimization with a roulette wheel selection algorithm, so that security is achieved in code generation. The obtained codes are then applied within a particular encryption algorithm so that the generated ciphertext contains only alphabets whose code values are close to those of the plaintext alphabets, allowing compression to be achieved simultaneously, all within specified time constraints.
pptx file
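Roulette-wheel selection, the sampling step the abstract combines with ant colony optimization, can be sketched in a few lines; the candidate codes and pheromone weights below are illustrative only:

```python
import random

# Sketch of roulette-wheel selection as used inside ant colony optimization:
# each candidate is chosen with probability proportional to its
# pheromone-weighted desirability.

def roulette_select(candidates, weights, rng=random.Random(42)):
    total = sum(weights)
    pick = rng.uniform(0, total)
    acc = 0.0
    for cand, w in zip(candidates, weights):
        acc += w
        if pick <= acc:
            return cand
    return candidates[-1]    # guard against floating-point round-off

# e.g. choose a code for a frequent letter, biased toward short codes
codes = ["00", "01", "100", "101"]
pheromone = [2.5, 1.8, 0.6, 0.3]
print(roulette_select(codes, pheromone))
```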
Accelerating Low-Rank Matrix Completion on GPUs
Achal Shah (Indian Institute of Technology, Guwahati, India); Angshul Majumdar (Indraprastha Institute Of Information Technology-Delhi & University of British Columbia, India)
Latent factor models formulate collaborative filtering as a matrix factorization problem. However, matrix factorization is a bi-linear problem with no global convergence guarantees. In recent years, research has shown that the same problem can be recast as a low-rank matrix completion problem. The resulting algorithms, however, are sequential in nature and computationally expensive. In this work, we modify and parallelize a well known matrix completion algorithm so that it can be implemented on a GPU. The speed-up is significant and improves as the size of the dataset increases; there is no change in accuracy between the sequential and our proposed parallel implementation.
pdf file
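As a reference for the kind of sequential algorithm being parallelized, here is a minimal singular-value-thresholding (SVT) sketch of low-rank matrix completion; the paper parallelizes a well-known completion algorithm on the GPU, and this CPU version only illustrates the iteration structure:

```python
import numpy as np

# Minimal singular-value-thresholding sketch of low-rank matrix completion.
# M holds observed ratings, mask marks which entries are observed.

def svt_complete(M, mask, tau=5.0, step=1.2, iters=200):
    Y = np.zeros_like(M)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        s = np.maximum(s - tau, 0.0)            # soft-threshold singular values
        X = (U * s) @ Vt                        # current low-rank estimate
        Y += step * mask * (M - X)              # gradient step on observed entries
    return X

rng = np.random.default_rng(0)
truth = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 15))  # rank-3 matrix
mask = rng.random(truth.shape) < 0.5            # observe about half the entries
X = svt_complete(truth * mask, mask)
print(np.abs((X - truth) * ~mask).mean())       # error on the unobserved entries
```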
Enhancing Precision of Markov-based Recommenders Using Location Information
Ali Abbasi (Sharif University of Technology, Iran); Amin Javari (Sharif Uni Tech, Iran); Mahdi Jalili (RMIT University, Australia); Hamid R. Rabiee (Sharif University of Technology, Iran)
Recommender systems are a real example of human-computer interaction systems from which both the consumer/user and the seller/service-provider benefit. Different techniques have been published to improve the quality of these systems; one approach is using context information such as the location of users or items. Most location-aware recommender systems utilize users' locations to improve memory-based collaborative filtering techniques. Our proposed method, in contrast, is based on items' locations and utilizes a Markov-based approach which can easily be applied to implicit datasets. The main application of this technique is in datasets containing the location of items. Experimental results on a real dataset show that the performance of the proposed method is much better than that of classic CF methods.

S4-A: Systems and Software Engineering

Room: 105 Block E First Floor
Chairs: Sherly Elizabeth (IIITM-K, Technopark, Trivandrum, India), Ali G Hessami (Vega Systems & London City University, United Kingdom (Great Britain))
An Open Source Approach to Enhance Industry Preparedness of Students
Smrithi Rekha V (Amrita Vishwa Vidyapeetham, Coimbatore, India); Adinarayanan V (Amrita University, Coimbatore, India)
Transitioning from college to industry can be a challenging task if students are not exposed to the right content while in college. In this paper we present an Open Source based approach to Software Engineering in which students get the opportunity to work on live open-source projects, thereby gaining practical exposure to various Software Engineering principles. The objective of this approach is to enhance the industry preparedness of students and to address the challenges they may face as professionals. The approach is based on the results of our study of software engineers who had newly joined the industry. The study involved understanding the Software Engineering (SE) course content they were exposed to as students, whether the SE-related course(s) met the outcomes laid out by the Software Engineering Education Knowledge, whether the courses adequately prepared them to take on various industry responsibilities, and the challenges they faced in the transition.
pptx file
Cross Project Change Prediction Using Open Source Projects
Ruchika Malhotra and Ankita Bansal (Delhi Technological University, India)
Predicting the changes in the next release of software during the early phases of development is gaining wide importance: such prediction helps in allocating resources appropriately and thus reduces the costs associated with software maintenance. However, predicting changes using the historical data (data of past releases) of the software is not always possible, due to the unavailability of such data. It would therefore be highly advantageous if we could train the model using data from other projects rather than the same project. In this paper, we have performed cross-project predictions using 12 datasets obtained from three open source Apache projects: Abdera, POI and Rave. In the study, cross-project predictions include both inter-project (different projects) and inter-version (different versions of the same project) predictions. For cross-project predictions, we investigated whether the characteristics of the datasets are valuable for selecting the training set for a known testing set. We concluded that cross-project predictions give high accuracy and that the distributional characteristics of the datasets are extremely useful for selecting the appropriate training set. In addition, within cross-project predictions, we examined the accuracy of inter-version predictions.
pptx file
Refactoring Sequence Diagrams for Code Generation in UML Models
Chitra M T (University of Kerala & IIIT Kottayam, India); Sherly Elizabeth (IIITM-K, Technopark, Trivandrum, India)
The UML Sequence Diagram, along with Model Driven Architecture, helps in software development to model time-constrained behavior, enhancing the legibility of the structure and behavior of a system. The Object Constraint Language (OCL) helps convey additional constraints and invariants, but OCL is confined to an expression language: the lack of program logic and flow of control limits the ability of these models to generate code and to support proper verification. This paper concentrates on refactoring the XMI (XML Metadata Interchange) of the Sequence Diagram with OCL constraints to build a framework for automatic code generation. The proposed model is tested on a coal mill of a thermal power plant, a highly complex time-constrained system. The source code generated from the refactored XMI is able to generate the set of coal mill parameters that match the real plant data.
ppt file
Bootstrap Sequential Projection Multi Kernel Locality Sensitive Hashing
Harsham Mehta (Thapar University, India); Deepak Garg (Bennett University, Greater Noida, India)
In recommender systems, similarity search is a key part of making efficient recommendations, and similarity search has always been a tough task in high-dimensional spaces. Locality Sensitive Hashing (LSH) is well suited to retrieving data in high-dimensional (multimedia) settings: the idea is to reduce the high-dimensional data to low dimensions using distance functions and then store it using hash functions which ensure that distant data is placed farther apart. This technique has been extended to kernelized Locality Sensitive Hashing (KLSH). One limitation of regular LSH is that it requires an explicit vector representation of the data; this limitation is addressed by kernel functions, which are capable of capturing similarity between data points, making KLSH a breakthrough for content-based systems. The method takes a kernel function, a high-dimensional database as data input, and the number of hash functions to be built. The kernel functions used may give different degrees of result precision; hence we combine these kernels with a bootstrap approach to give optimal result precision. In this paper we present the related work on locality sensitive hashing and, at the end, propose algorithms for data preprocessing and query evaluation.
pptx file
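The basic LSH step the abstract builds on can be sketched with random hyperplanes (the plain, non-kernelized variant; KLSH and the bootstrap over multiple kernels are the paper's contribution and are not reproduced here):

```python
import numpy as np

# Random-hyperplane LSH: nearby vectors receive the same bit signature with
# high probability, so candidate neighbours share a hash bucket.

def lsh_signature(x, hyperplanes):
    return tuple((hyperplanes @ x) > 0)     # one sign bit per hyperplane

rng = np.random.default_rng(7)
planes = rng.standard_normal((16, 128))     # 16-bit codes for 128-d data
a = rng.standard_normal(128)
b = a + 0.05 * rng.standard_normal(128)     # near-duplicate of a
c = rng.standard_normal(128)                # unrelated vector
print(lsh_signature(a, planes) == lsh_signature(b, planes))  # likely True
print(lsh_signature(a, planes) == lsh_signature(c, planes))  # likely False
```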
Performance Analysis of Ensemble Learning for Predicting Defects in Open Source Software
Arvinder Kaur and Kamaldeep Kaur (Guru Gobind Singh Indraprastha University, India)
Machine learning techniques have been earnestly explored by many software engineering researchers. Although at present there are no consistent conclusions on which ones are better for software defect prediction, some recent studies suggest that combining multiple machine learners, that is, ensemble learning, may perform better. This study contributes to the software defect prediction literature by systematically analyzing the performance of three important homogeneous ensemble methods - Bagging, Boosting, and Rotation Forest - based on fifteen important base learners, exploiting the data of nine open source object-oriented systems obtained from the PROMISE repository. Results indicate that while Bagging and Boosting may result in AUC performance loss, the Rotation Forest ensemble improves AUC performance for twelve of the fifteen investigated base learners.
pptx file
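The flavor of the reported comparison can be reproduced with scikit-learn's bagging and boosting wrappers around a single base learner, using AUC as the metric; Rotation Forest is not available in scikit-learn, and the synthetic data below merely stands in for the PROMISE datasets:

```python
# Sketch of an ensemble-vs-base-learner AUC comparison on synthetic,
# class-imbalanced data (illustrative only; not the paper's experiment).
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, weights=[0.8], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base = DecisionTreeClassifier(max_depth=3, random_state=0)
for name, clf in [("base", base),
                  ("bagging", BaggingClassifier(base, random_state=0)),
                  ("boosting", AdaBoostClassifier(random_state=0))]:
    clf.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```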
Speculation of CMMI in Agile Development
Sahil Aggarwal (UPTU & G L Bajaj Institute of Technology and Management, India); Vikas Deep (Amity University, India); Robin Singh (Wipro, India)
The face of the software development industry changes frequently, and to remain competitive, organizations have to engage with new situations every day. Agile methodology is known to maintain the speed essential to staying competitive; alongside it, there is a need for improvement at each step, which can be covered by following CMMI. This paper shows how the essence of CMMI can be implemented in an Agile process, making development mature and the organization capable of handling difficulties. It covers every aspect of CMMI, from the measurement plan to documentation, and amalgamates it with the Agile process.
pptx file
An Ameliorated Methodology to Establish the Analogy between Business Process Perspective Views and UML Diagrams
Shivanand M Handigund, Shivaram A M and Arunakumari B N (Bangalore Institute of Technology, Bangalore (India), India)
Unified Modeling Language (UML) is a de facto standard design language developed by the three amigos. The standardization of its diagrams is attributed to the well-defined syntactics and semantics of the language; unfortunately, the pragmatics is vague, which makes the pragmatics of UML diagrams dependent on human skill. Moreover, there is a lack of granulation, so performance cannot be determined through any metrics. The vagueness in applying UML diagrams and the lack of performance determination have placed UML in a cul-de-sac. This paper attempts to resolve this labyrinth by eliminating the vagueness of UML applicability: known business process semiotics are reverse-engineered and mapped to the defined components of the semiotics of UML diagrams, and the vague pragmatics component of UML semiotics is then determined by solving mathematical equations. The inactive performance determination is made efficient through the mathematical interpretation of each activity along with its inputs and outputs. In the reverse engineering process, the syntactics and semantics are abstracted through logical positivism. Initially, whether a UML diagram fully or partially represents a perspective view is determined by comparing UML syntactics with the abstracted syntactics and semantics of business process perspective views (BPPVs). The syntactics and semantics of BPPVs are interleaved across different perspective views; we need the syntactics for comparison, and for establishing equivalence we consider the interleaved components. The same perception is taken for UML diagrams. The diagrams representing perspective views are determined by solving appropriate simultaneous equations; the perspective views are orthogonal to each other in the business process. There is a need to granulate the atomic unit of activity so that each activity is always complete, and in the design of any perspective view for a given information system only these unit activities are reorganized, so that at no point is a partial activity considered. This perception is treated with mathematical rigour to enhance its vivacity. Thus UML diagrams are made useful in the design of information system projects.
rar file
Modeling Data Races Using UML/MARTE Profile
Akshay KC, Ashalatha Nayak and Balachandra Muniyal (Manipal University, India)
Unified Modeling Language (UML) is a standard language for modeling in the domain of object-oriented software development. However, it lacks modeling constructs for real-time systems. The UML profile for Modeling and Analysis of Real Time Embedded Systems (MARTE) has recently been standardized by the Object Management Group (OMG) to provide the necessary constructs, supporting Model Driven Engineering (MDE) of real-time systems. The goal of this paper is to present the UML/MARTE profile for identifying a concurrency issue known as a data race. The proposed approach leads to a supporting tool for automated detection of data races in which the UML Sequence Diagram is used to specify the temporal ordering of messages.
pdf file

S4-B: International Workshop on Software Engineering for Web Application Development (SEWAD-2014)

Room: 105 Block E First Floor
Chairs: Sherly Elizabeth (IIITM-K, Technopark, Trivandrum, India), Ali G Hessami (Vega Systems & London City University, United Kingdom (Great Britain))
Optimization of the Issues in the Migration From Android Native to Hybrid Application: Case Study of Student's Portal Application
Heena Ahuja (GGSIPU, India); Rahul Johari (GGSIP University, India)
Migrating an Android native application to a hybrid application is a good choice, since hybrid applications are cross-platform in nature and offer many other attractive features. However, the migration requires a lot of preparation to analyze the complete process and deal with the serious glitches expected along the way, because any inappropriate step can drastically affect the success of an application. The main objective of our work is therefore to analyze and choose the right migration methodology, one that reduces the developer's effort, development time and cost, and helps resolve the issues faced, yielding better end results. The migration approach may vary depending on the requirements, architecture, size/complexity and purpose of the application, as well as other important factors such as the targeted devices and browsers; it is therefore wise to analyze and evaluate every move prior to performing the migration. Considering these aspects, our paper presents some concrete solutions which help in identifying the right methodology for an effective and successful migration. To validate our work, we took our own application, entitled Student's Portal, which was originally built as an Android application and has now been migrated to a hybrid application by adopting the relevant migration approach.
pptx file
Model Driven Fast Prototyping of RIAs: From Conceptual Models to Running Applications
Mario Luca Bernardi (2Research Centre on Software Technology (RCOST), University of Sannio, Italy); Giuseppe Di Lucca (Department of Engineering - RCOST, University of Sannio); Damiano Distante (Unitelma Sapienza University, Italy)
Fast prototyping is the quick and cost-effective development of a (minimum) viable version of a piece of software useful for some purpose (e.g., requirements verification or design validation), which can be discarded or refactored to become the version of the software to be delivered. In this paper we propose a model-driven approach for the fast prototyping of Rich Internet Applications (RIAs). Starting from the conceptual model of a RIA, intermediate models and the source code of a ready-to-deploy application prototype are automatically generated through a model-driven development process which exploits well-known model-driven engineering frameworks and technologies, including Eclipse EMF, GMF, and Xpand. Compared to traditional, non-model-driven prototyping approaches, our proposal drastically reduces the overall prototyping effort to just the effort required to define the conceptual model of the application, as the rest of the process is substantially automatic. The paper describes the overall RIA prototyping approach, the supporting tools and adopted technologies, along with the results of a case study carried out for validation and verification purposes.
pdf file
Responsive, Adaptive and User Personalized Rendering on Mobile Browsers
Sowmya Sunkara (Samsung R&D Institute India, India); Ravitheja Tetali (Indian School of Business, India); Joy Bose (Samsung R&D Institute India, Bangalore, India)
Web browsers in mobile devices typically render a given web page without taking into account the difference in the visual-input requirements for different users and scenarios. It is desirable to have the browsers adapt to the individual users' visual requirements. In this paper we propose a browser that makes dynamic adjustments to the way the web content is rendered based on the context of usage. The adjustments include font size, color, contrast and web page layout. The system makes these adjustments constantly by monitoring the user's usage patterns and interactions with the mobile device, and calculating and applying the changes via a feedback mechanism. We mention the method for making corrections to font size, color and contrast, and implement a system to automatically make font size adjustments using an OpenCV library for head tracking and making the required corrections on a web page. Once the parameters for a user have been calibrated and stored, the user can access the feature on multiple devices by transmitting the relevant data via a cloud service.
pdf file
A Hybrid Authentication System for Websites on Mobile Browsers
Utkarsh Dubey and Ankur Trisal (Samsung R&D Institute India, India); Joy Bose (Samsung R&D Institute India, Bangalore, India); Mani Brabhu and Nazeer Ahmed (Samsung R&D Institute India, India)
Current biometric recognition systems for website authentication are mostly web server based, needing server support and infrastructure, and sometimes dedicated external hardware, for online authentication. Not many web servers support this kind of infrastructure for authentication. On the other hand, pure device based authentication systems are of the 'all or none' type, serving to authenticate users for every action when using the device, or having authentication once and then keeping the device free to use. In this paper we propose a hybrid model, where fingerprint authentication is used in combination with the auto complete function on the browser for logging in to certain types of websites, or accessing only certain kind of information on the browser. In this method, the fingerprint module, inbuilt on certain mobile devices like Samsung Galaxy, is triggered automatically when certain pre-configured rules are met. Also, based on the identity of the person swiping their fingerprint, the device has the ability to switch between one of a number of preconfigured security modes. Such a model can enable enhanced authentication for logging in to websites on mobile devices. We present the results of tests on a browser enabled with this system to study average response times, accuracy and effect on browser performance.
pdf file
Software Development Life Cycle Model to Build Software Applications with Usability
Suburayan Velmourougan (Pondicherry University & STQC, DeitY, Ministry of comm. and IT, India); Dhavachelvan P (Pondicherry University, India); Baskaran Ramachandran (Anna University, India); Balakrishnan Ravikumar (STQCIT, DeITY, MCIT Government of India, India)
Software usability is one of the key quality attributes of a software application, improving the human interface for effective utilization and accurate use. Lack of usability in a software application leads to losses in terms of cost, reputation and trust, and lack of focus on usability during development increases the latent and patent flaws in the application. Usability cannot be added on at the end of the development process; it needs attention throughout the phases of the Software Development Life Cycle (SDLC). With that view, this paper presents a new Usability Software Development Life Cycle model (U-SDLC), introducing usability development tasks and activities to be followed during the SDLC. The paper provides a set of activities and best practices for all stakeholders involved in planning, architecting, coding, testing and maintaining software applications. It also presents a comparative study of existing SDLC models, concludes that the present models do not adequately focus on usability issues when building effectively usable software products, and shows with statistical results that the proposed new SDLC model is capable of building usable applications.
pptx file

S5-A: Multimedia, Video Systems and Human Computer Interaction

Room: 104 Block E First Floor
Chair: Abhishek Thakur (BITS Pilani, Hyderabad Campus, India)
SAPRS: Situation-Aware Proactive Recommender System with Explanations
Punam Bedi (University of Delhi, India); Sumit Agarwal (University of Delhi & University of Delhi, India); Samarth Sharma and Harshita Joshi (University of Delhi, India)
Proactive recommender systems are widely used intelligent applications which automatically push relevant recommendations to users based on their current tasks or interests, without an explicit request. Such systems help users receive information of interest in a timely manner. Improving users' acceptance of the recommendations these systems push is a challenging task: determining the right push context (situation) and finding relevant items for the target user are considered the two vital issues for achieving better user acceptance. Moreover, if, along with the pushed recommendations, the target user is also shown an explanation of why something is recommended, this transparency might help the user make a better decision and increase his faith in the pushed recommendations. We therefore present a Situation-Aware Proactive Recommender System (SAPRS) that pushes relevant and justifiable recommendations to the target user in the right context only, in order to achieve better user acceptance. SAPRS works in two phases: (i) a situation assessment phase and (ii) an item assessment phase. In the situation assessment phase, the proposed system analyzes the current situation, i.e. whether or not the current context calls for a recommendation to be pushed. In the item assessment phase, SAPRS generates relevant recommendations for the target user using a location-aware, reputation-based collaborative filtering algorithm; it also enhances the transparency of the pushed recommendations by means of explanations in this phase. A prototype of SAPRS is implemented using a multi-agent approach for restaurant recommendations, and its performance is evaluated using precision and recall metrics and feature-based comparisons.
pptx file
Content Based Classification of Traffic Videos Using Symbolic Features
Elham Dallalzadeh (Marvdasht Islamic Azad University, Iran); D. S. Guru (University of Mysore, India)
In this paper, we propose a symbolic approach for classifying traffic videos based on their content. We propose to represent a traffic video by interval-valued features. Unlike conventional methods, the interval-valued feature representation is able to preserve the variations existing among the extracted features of a traffic video. Based on the proposed symbolic representation, we present a method for classifying traffic videos that makes use of symbolic similarity and dissimilarity computation to classify traffic videos into light, medium, and heavy traffic congestion. Experimentation is carried out on a benchmark traffic video database, and the results reveal the ability of the proposed model to classify traffic videos based on their content.
A New Weighted Audio Mixing Algorithm for a Multipoint Processor in a VoIP Conferencing System
Sameer Sethi, Prabhjot Kaur and Swaran Ahuja (ITM University, India)
Audio conferencing is one of the main features provided by VoIP telecommunication systems. Along with factors such as background noise, low audio level, delay and packet loss, the audio mixing algorithm also contributes noise to the output of an audio conferencing system. The true mixing algorithm suffers from overflow/underflow, which adds noise in the form of clipping. Several researchers have proposed weighted audio mixing algorithms, some of which mitigate this problem and increase the voice quality of the mixer output, but at high background noise levels these algorithms fail to maintain voice quality and lead to lower mean opinion scores. In this paper we introduce a new weighted audio mixing algorithm together with voice enhancement algorithms such as noise reduction, automatic level control and voice activity detection. The new algorithm calculates the weighting factor from the root-mean-square values of the input streams of the conference participants, which helps it adaptively smooth the input streams and provide a scaled mixer output with better perceived speech quality. Perceptual Evaluation of Speech Quality (PESQ) and Perceived Audio Level measures are used to compare the results of the new algorithm with earlier work at different background noise levels. Our experimental results demonstrate better and more consistent speech quality from the new algorithm at all background noise levels.
ppt file
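A minimal sketch of RMS-weighted mixing, the core idea named above (the actual smoothing and weight-update rules of the proposed mixer are more elaborate):

```python
import numpy as np

# Each participant's frame is weighted by its root-mean-square level before
# summation, and the sum is normalized to avoid the clipping that plain
# ("true") summation suffers from.

def mix_frames(frames):
    """frames: list of equal-length float arrays in [-1, 1], one per speaker."""
    rms = np.array([np.sqrt(np.mean(f ** 2)) + 1e-12 for f in frames])
    weights = rms / rms.sum()                     # louder streams dominate the mix
    mixed = sum(w * f for w, f in zip(weights, frames))
    peak = np.max(np.abs(mixed))
    return mixed / peak if peak > 1.0 else mixed  # guard against overflow

t = np.linspace(0, 0.02, 160)                     # one 20 ms frame at 8 kHz
loud = 0.8 * np.sin(2 * np.pi * 300 * t)
quiet = 0.1 * np.sin(2 * np.pi * 600 * t)
print(np.max(np.abs(mix_frames([loud, quiet]))))
```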
Multicode CDMA/CI for Multimedia Services Over LEO Satellite Channel
Rajan Kapoor and Preetam Kumar (Indian Institute of Technology Patna, India)
Multicode CDMA is a promising scheme to support multirate services for multimedia communication. In this paper, a novel multicode CDMA scheme using Carrier Interferometry (CI) codes is proposed to support multirate services over the LEO satellite channel. It is observed that the use of CI codes not only helps mitigate multiple access interference but also reduces the Peak to Average Power Ratio (PAPR) compared to Walsh Hadamard (WH) codes: the percentage of symbols with PAPR greater than 9 dB is reduced from 80% to 10% by employing CI codes at a data rate sixteen times the basic data rate. Also, at an elevation angle of 30 degrees, a high-rate user observes as much as 8 dB SNR gain using CI codes. Additionally, orthogonal CI codes of arbitrary integer length can be generated.
An Interactive GUI Tool for Thyroid Uptake Studies Using Gamma Camera
Sai Vignesh T (Sri Sathya Sai Institute of Higher Learning, India); Siva Subramaniyan Viswanathan (Sri Sathya Sai Institute of Higher Medical Sciences, India); Kumar Rajamani (Robert Bosch Engineering and Business Solutions Limited, India); Siva Sankara Sai Sanagapati (Sri Sathya Sai Institute of Higher Learning, India)
A thyroid uptake study is a technique that requires injection of a gamma-emitting radioisotope/radiotracer into the patient's bloodstream. Thyroid imaging is done by means of a thyroid uptake imaging system; in the absence of such a sophisticated system, imaging can also be done using a gamma camera. By intravenously injecting 2 millicuries of Technetium-99m pertechnetate, serial thyroid images are acquired. The uptake study provides functional information and is useful for the diagnosis and treatment of hyperthyroidism. A thyroid uptake study done using a gamma camera has to be calibrated at each laboratory using the technique. In our hospital it has been standardized that a tracer uptake of greater than 2% is considered hyperthyroidism, between 0.5 and 2% is considered normal, and less than 0.5% is considered hypothyroidism. Thyroid uptake is calculated from the counts, which are simply the sum of all intensities in the selected region of the image. The gamma camera uses a LEAP (Low Energy All Purpose) collimator, which handles only photons emitted from radioisotopes with lower emission energies; hence Technetium-99m, with its emission energy of around 140 keV, is used. For a typical thyroid uptake probe, where Iodine-131 with its higher emission energy of 364 keV is preferred, the existing thyroid uptake software cannot be used. We therefore developed an interactive GUI (Graphical User Interface) tool for thyroid uptake studies using Fiji, which determines tracer uptake by manually drawing the ROI (Region of Interest) around the left and right thyroid lobes separately. The developed tool was tested on 30 real thyroid cases (26 female and 4 male), and the uptake values obtained were compared with those from the existing software tool.
pptx file
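The counts-based uptake computation and the calibration thresholds quoted above can be sketched as follows; the array and function names are illustrative, and the background/dose handling is a simplification of actual clinical practice:

```python
import numpy as np

# "Counts" are the sum of pixel intensities inside a region of interest; the
# uptake percentage compares background-corrected thyroid counts with the
# injected-dose counts. Thresholds follow the calibration quoted above.

def roi_counts(image, mask):
    return float(image[mask].sum())

def uptake_percent(thyroid_counts, background_counts, dose_counts):
    return 100.0 * (thyroid_counts - background_counts) / dose_counts

def classify(uptake):
    if uptake > 2.0:
        return "hyperthyroidism"
    if uptake >= 0.5:
        return "normal"
    return "hypothyroidism"

img = np.random.default_rng(3).integers(0, 50, (64, 64))  # stand-in scintigram
mask = np.zeros_like(img, dtype=bool)
mask[20:40, 10:30] = True                                  # hand-drawn ROI stand-in
print(classify(uptake_percent(roi_counts(img, mask), 2000.0, 500000.0)))
```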
Semi-automatic Generation of Accurate Ground Truth Data in Video Sequences
Gustavo Fernández Domínguez (AIT Austrian Institute of Technology, Austria)
Generation of ground truth data from video sequences is still an intriguing problem in the computer vision community: the massive amount of data and the effort needed to annotate it make the task challenging. In this paper we investigate the possibility of generating ground truth data in a semi-automatic way. Specifically, using the output of different algorithms, a new output based on robust statistics is generated. The proposed method uses results obtained from real data that is used for evaluation purposes, and the generated output is proposed as a basis for ground truth data, reducing the time needed to generate it. The main contribution of this paper is to show that such a methodology can be used to generate initial ground truth data that is accurate and reliable, in a way that is both semi-automatic and fast. Various results and analyses are presented to evaluate the performance of the proposed methodology. The results obtained suggest that generating ground truth data from the output of different algorithms is possible, alleviating the problem of annotating such data manually.
Implementation of Augmented Reality in Cricket for Ball Tracking and Automated Decision Making for No Ball
Nikhil Batra (MSIT, India); Amita Yadav (Inderprasth University, India); Harsh Gupta, Nakul Yadav and Anshika Gupta (MSIT, India)
Technology is an absolute necessity in the world of sport these days. It has been used to develop various techniques for learning new strategies and hence excelling in sports; its judicious use can improve player performance, prevalent coaching methods and match analysis, and prevent controversial umpiring. The primary objective of this research is to attain the following goals: 1) an automated multidimensional visual system that prevents wrong interpretations due to perspective errors; 2) simulation of pre- and post-match activity in a computerized graphics system for game and performance analysis of a certain team, player or playing conditions; 3) approximation of the ball's trajectory from multiple dimensions and comparison of the predicted path with the actual path. This paper presents the implementation of augmented reality in cricket using multi-valued automated decision making to detect no balls and wide balls. The trajectory approximation is also used to gather information on variations in the pitch and to train players on the spin and swing of the ball.
rar file
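Trajectory approximation of the kind described in goal 3 is commonly done by fitting a low-order polynomial to tracked positions and extrapolating; a minimal sketch on synthetic data (not the paper's multi-camera method):

```python
import numpy as np

# Fit a quadratic to the ball's tracked positions (ballistic motion is roughly
# quadratic in time) and extrapolate it forward, so the predicted path can be
# compared with later observations.

frames = np.arange(10)
ys = 2.0 + 1.5 * frames - 0.12 * frames ** 2      # "tracked" height per frame
ys += np.random.default_rng(0).normal(0, 0.05, ys.size)  # tracking noise

coeffs = np.polyfit(frames, ys, deg=2)            # least-squares quadratic fit
predict = np.poly1d(coeffs)
future = np.arange(10, 15)
print(predict(future))                            # extrapolated positions
```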
Fast Connect Procedure for Session Initiation Protocol Using Cached Credentials
Vineet Menon, Sunil Kulgod and Jagadeesh Bangalore (Bhabha Atomic Research Centre, India)
SIP initiates a call using a three-way handshake between the two User Agents (UAs), which ensures a connection with human users in between. Each time a SIP UA initiates a call, it has to perform capability negotiation, which involves resolving the caller, audio codec, video codec, etc. In this paper, we propose a novel scheme for reducing the size of INVITE messages by enabling caching at the UAs. We propose to avoid transmitting the SDP messages used for capability negotiation; instead we make each UA remember the capabilities of the other end and act upon them intelligently. We have made attempts to keep our improvement backward compatible and in conformance with SIP RFC 3261. The capabilities are stored in persistent storage with each UA and are updated periodically to stay consistent with in-call INVITE messages. The proposal effectively reduces the message size of the INVITE and the corresponding response (180/Ringing or 200/OK) by half. Under ideal conditions, we get two messages with half the original size, while if either end fails to conform to this procedure, SIP codec negotiation reverts to the original Session Description Protocol (SDP) exchange without any overhead, as described in [5].
pdf file
Processing of EEG Signals for Study of Coupling in Brain Regions for Eyes Open and Eyes Closed Conditions
Deboshree Bose (Amrita Vishwa Vidyapeetham, India); Shikha Tripathi (PES University & Bangalore South Campus, India); Ajeesh Tp (Amrita Vishwa Vidyapeetham, India)
This paper deals with processing EEG signals obtained from 16 spatially arranged electrodes to measure the coupling, or synchrony, between the frontal, parietal, occipital and temporal lobes of the cerebrum under eyes-open and eyes-closed conditions. Synchrony was measured using magnitude squared coherence, the Short Time Fourier Transform and wavelet-based coherence. We found a pattern in the time-frequency coherence as we moved from the nasion to the inion of the subject's head. The coherence pattern obtained from the wavelet approach was found to be far more capable of picking up peaks in coherence with respect to frequency than the regular Fourier-based coherence. We detected high synchrony between the frontal polar electrodes that is missing in the coherence plots between other electrode pairs. The study has potential applications in healthcare.
pptx file
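Magnitude squared coherence, the first synchrony measure mentioned, is directly available in SciPy; a minimal sketch on synthetic two-channel data with a shared 10 Hz (alpha-band) component:

```python
import numpy as np
from scipy.signal import coherence

# Two synthetic "EEG channels" sharing a 10 Hz component plus independent
# noise; their magnitude squared coherence should peak near 10 Hz.

fs = 256                                        # sampling rate in Hz
t = np.arange(0, 8, 1 / fs)
rng = np.random.default_rng(1)
alpha = np.sin(2 * np.pi * 10 * t)
ch1 = alpha + 0.5 * rng.standard_normal(t.size)
ch2 = alpha + 0.5 * rng.standard_normal(t.size)

f, Cxy = coherence(ch1, ch2, fs=fs, nperseg=512)
print(f[np.argmax(Cxy)], Cxy.max())             # peak coherence near 10 Hz
```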

S5-B: International Workshop on Multimedia Signal Processing Techniques and Applications (MSP-2014)

Room: 104 Block E First Floor
Chair: Abhishek Thakur (BITS Pilani, Hyderabad Campus, India)
Lossless Hyperspectral Image Compression Using Intraband and Interband Predictors
Mamatha A. S. and Vipula Singh (R N Shetty Institute of Technology, India)
On-board data compression is a critical task that has to be carried out with restricted computational resources in remote sensing applications. This paper proposes an improved algorithm for onboard lossless compression of hyperspectral images, which combines low encoding complexity and high performance and is based on hybrid prediction. In the proposed work, the decorrelation stage reinforces both intraband and interband prediction. Intraband prediction uses the median prediction model, since the median predictor is fast and efficient; interband prediction uses hybrid context prediction, the combination of a linear prediction (LP) and a context prediction. Finally, the residual image of the hybrid context prediction is coded with Huffman coding. An efficient hardware implementation of both predictors is achieved using FPGA-based acceleration, and a power analysis has been done to estimate power consumption. The performance of the proposed algorithm is compared with standard algorithms for hyperspectral images such as 3D-CALIC, M-CALIC, LUT, LAIS-LUT, LUT-NN, C-DPCM and JPEG-LS. Experimental results on AVIRIS data show that the proposed algorithm achieves a high compression ratio with low complexity and computational cost.
pptx file
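The "median prediction model" referenced above is, in JPEG-LS/LOCO-I, the median edge detector (MED) predictor; a minimal sketch of it applied to one band (assuming this is the predictor intended):

```python
import numpy as np

# MED predictor: the prediction for pixel x uses its left (a), upper (b) and
# upper-left (c) neighbours.

def med_predict(a, b, c):
    if c >= max(a, b):
        return min(a, b)          # likely edge above or to the left
    if c <= min(a, b):
        return max(a, b)
    return a + b - c              # smooth region: planar prediction

def residual_image(band):
    """Prediction residuals for one band; first row/column left unpredicted."""
    res = band.astype(np.int32).copy()
    for i in range(1, band.shape[0]):
        for j in range(1, band.shape[1]):
            res[i, j] = int(band[i, j]) - med_predict(
                int(band[i, j - 1]), int(band[i - 1, j]), int(band[i - 1, j - 1]))
    return res   # a residual image like this is what gets entropy-coded (Huffman)

band = np.random.default_rng(0).integers(0, 256, (8, 8), dtype=np.uint16)
print(residual_image(band))
```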
Quality Evaluation of HEVC Main Still Picture with Limited Coding Tree Depth and Intra Modes
Pratyush Kumar Ranjan (Samsung Electronics); Dileep Pacharla (Bangalore, Karnataka & Samsung R & D Institute, India); Biju Ravindran (Samsung R&D Bangalore & Samsung, India); Devendran Mani (SRI-Bangalore & SRI-Bangalore, India)
In this paper we analyze the impact of limiting the number of modes in the intra mode decision process, and of limiting the quad-tree structure of the coding unit (CU), prediction unit (PU) and transform unit (TU), on the quality of a High Efficiency Video Coding (HEVC) encoder supporting the Main Still Picture (MSP) profile. A simplified HEVC encoder, with a coding-tree configuration in which the largest CU is 16x16 pixels with no further partitioning, PU and TU of the same size, and four intra modes (DC, Planar, Vertical and Horizontal), is used as a baseline to benchmark more complex HEVC encoder configurations supporting more modes and deeper trees. The baseline encoder suffered an average quality loss of 0.86 dB compared with an HEVC encoder supporting all features. When the largest coding unit (LCU) size is fixed at 16x16 pixels and the modes are increased from 4 to 35, the average quality increase is 0.72 dB; similarly, when the modes are fixed at 35 and the LCU size is increased from 16 to 64, the average quality increase is 0.51 dB. These results are informative for deciding an optimal encoder configuration to achieve a target objective quality.
ppt file
Towards Redundancy Reduction in Storyboard Representation for Static Video Summarization
Hrishikesh Bhaumik (RCC Institute of Information Technology, Canal South Road, Beliaghata, Kolkata, India); Siddhartha Bhattacharyya and Surajit Dutta (RCC Institute of Information Technology, India); Susanta Chakraborty (Indian Institute of Engineering Science and Technology, India)
Static video summarization techniques aim to represent the salient content in a video by extracting a set of key-frames for presentation to the user. An efficient key-frame extraction process is thus vital for effective video summarization, browsing and indexing in content-based video retrieval systems. In this paper, a three-phased approach for key-frame extraction is proposed which aims to produce a static summary of a video. The first phase deals with detecting the best representative frame(s) for each shot in the video. The second and third phases comprise techniques for intra-shot and inter-shot redundancy reduction using SURF and GIST on the key-frames extracted in the first phase. At the end of each phase, a comparison between the system-generated summary and the user summary is performed. A comparative analysis between SURF and GIST for redundancy reduction is also presented. Experimental evaluation on a test set of videos from sports, movie songs, music albums and documentaries shows that the proposed method achieves high precision and recall values in all cases.
ppt file
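For a feel of the inter-shot redundancy-reduction step, the sketch below drops near-duplicate keyframes by local-feature matching with OpenCV. ORB is substituted for the paper's SURF (SURF sits in OpenCV's non-free module), and the match-ratio threshold is an illustrative assumption.

```python
# Hedged sketch of redundancy reduction by local-feature matching.
import cv2

def is_redundant(frame_a, frame_b, match_ratio=0.5):
    # frames are assumed to be 8-bit grayscale numpy arrays
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(frame_a, None)
    kp2, des2 = orb.detectAndCompute(frame_b, None)
    if des1 is None or des2 is None:
        return False
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    # Treat frames as near-duplicates when most keypoints match.
    return len(matches) / max(len(kp1), len(kp2)) > match_ratio

def deduplicate(keyframes):
    summary = []
    for f in keyframes:
        if not any(is_redundant(f, s) for s in summary):
            summary.append(f)
    return summary
```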

S6-A: International Workshop on Cyber-Physical Systems and Social Computing (CSSC-2014)go to top

Room: 006 Block E Ground Floor
Chair: Maxwell Christian (GLS University & Gujarat Technological University, India)
LSB Based Image Steganography Using X-Box Mapping
Ekta Dagar (Maharishi Dayanand University, India); Sunny Dagar (Manav Rachna College of Engineering, India)
With the rapid growth of the Internet and its applications, there is a need for a high level of security for data transferred between networks. Steganography is a technique of hiding data in a medium so that no one except the sender and receiver knows that any communication is taking place. This paper introduces an approach for Least Significant Bit (LSB) insertion based image steganography to enhance the level of security for data transfer over the internet. A 24-bit RGB image is chosen as the cover image, which hides the secret message inside the red, green and blue colour pixel values. An X-Box mapping is used, where several boxes each contain sixteen different values (X represents any integer from 0-9). The values stored in the X-Boxes are mapped to the LSBs of the cover image. This mapping provides a level of security to the secret message which makes it difficult for intruders to extract the hidden information. We use the Peak Signal-to-Noise Ratio to measure the quality of the images used; a larger PSNR value indicates lower distortion and hence better image quality.
ppt file
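The underlying LSB-insertion and PSNR machinery the paper builds on can be sketched in a few lines; the X-Box mapping layer itself is omitted here, and the cover image and message are synthetic stand-ins.

```python
# Minimal LSB-embedding sketch plus PSNR (the paper's X-Box mapping
# layer is not reproduced; this shows only plain LSB insertion).
import numpy as np

def embed_lsb(cover, bits):
    """Write one message bit into the LSB of each pixel value."""
    flat = cover.flatten().copy()
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits
    return flat.reshape(cover.shape)

def psnr(original, stego):
    mse = np.mean((original.astype(np.float64) - stego) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
msg = np.random.randint(0, 2, 500, dtype=np.uint8)
stego = embed_lsb(cover, msg)
print("PSNR (dB):", psnr(cover, stego))   # higher -> less distortion
```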
Towards Twitter Hashtag Recommendation Using Distributed Word Representations and a Deep Feed Forward Neural Network
Abhineshwar Tomar (Ghent University & iMinds, Belgium); Frederic Godin, Baptist Vandersmissen and Wesley De Neve (Ghent University, Belgium); Rik Van de Walle (Ghent University - iMinds, Belgium)
Hashtags are useful for categorizing and discovering content and conversations in online social networks. However, assigning hashtags requires additional user effort, hampering their widespread adoption. Therefore, in this paper, we introduce a novel approach for hashtag recommendation, targeting English language tweets on Twitter. First, we make use of a skip-gram model to learn distributed word representations (word2vec). Next, we make use of the distributed word representations learned to train a deep feed forward neural network. We test our deep neural network by recommending hashtags for tweets with user-assigned hashtags, using Mean Squared Error (MSE) as the objective function. We also test our deep neural network by recommending hashtags for tweets without user-assigned hashtags. Our experimental results show that the proposed approach recommends hashtags that are specific to the semantics of the tweets and that preserve the linguistic regularity of the tweets. In addition, our experimental results show that the proposed approach is capable of generating hashtags that have not been seen before.
pdf file
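A toy rendering of the pipeline described above, skip-gram word2vec features feeding a feed-forward network trained with an MSE objective, then nearest-neighbour hashtag lookup, might look like the following; the corpus, vector sizes and layer widths are all illustrative assumptions.

```python
# Hedged sketch: word2vec (skip-gram) features + MSE-trained regressor.
from gensim.models import Word2Vec
from sklearn.neural_network import MLPRegressor
import numpy as np

tweets = [["new", "phone", "camera", "#tech"],
          ["great", "match", "today", "#sports"],
          ["phone", "battery", "review", "#tech"]]

w2v = Word2Vec(tweets, vector_size=32, sg=1, min_count=1, seed=1)  # sg=1 -> skip-gram

def tweet_vec(words):
    vs = [w2v.wv[w] for w in words if not w.startswith("#")]
    return np.mean(vs, axis=0)

X = np.array([tweet_vec(t) for t in tweets])
y = np.array([w2v.wv[next(w for w in t if w.startswith("#"))] for t in tweets])

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(X, y)  # MSE objective, as in the paper

pred = net.predict(X[:1])
# Candidate hashtags: nearest vectors (filter to '#' tokens in practice).
print(w2v.wv.similar_by_vector(pred[0], topn=2))
```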
Linux Malware Detection Using non-Parametric Statistical Methods
Vinod P (SCMS School of Engineering and Technology, Ernakulam, India); Asmitha KA (SCMS School of Engineering and Technology, India)
Linux is the most renowned open source operating system. In recent years, the amount of malware targeting the Linux OS has increased, and traditional defence mechanisms seem to be futile. We propose a novel non-parametric statistical approach using machine learning techniques for identifying previously unknown malicious Executable and Linkable Format (ELF) files. System calls, extracted dynamically within a controlled environment, are employed as features. The proposed approach ranks and determines the prominent features using non-parametric statistical methods such as the Kruskal-Wallis ranking test (KW) and Deviation From Poisson (DFP). Three learning algorithms (J48, AdaBoost and Random Forest) are applied to generate prediction models from a minimal set of features extracted from the system call traces. The optimal feature vector resulted in an overall classification accuracy of 97.30% in identifying unknown malicious specimens.
pdf file
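The Kruskal-Wallis ranking step can be sketched directly with SciPy; the feature matrix below is synthetic (system-call frequencies are stand-ins), and the choice of keeping the top three features is arbitrary.

```python
# Hedged sketch: ranking system-call features with the Kruskal-Wallis
# H test; labels 0/1 stand for benign/malware samples.
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)
X = np.vstack([rng.poisson(3, (20, 5)), rng.poisson(6, (20, 5))]).astype(float)
y = np.array([0] * 20 + [1] * 20)

scores = []
for j in range(X.shape[1]):
    h, p = kruskal(X[y == 0, j], X[y == 1, j])
    scores.append((h, j))

# Keep the features whose distributions differ most across classes.
top = [j for h, j in sorted(scores, reverse=True)[:3]]
print("top-ranked feature indices:", top)
```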

S6-B: International Workshop on Internet of Smart objects: Computing, Communication and Management (CCMIoS'14)go to top

Room: 006 Block E Ground Floor
Chairs: Maxwell Christian (GLS University & Gujarat Technological University, India), Komathy Karuppanan (Hindustan University, India)
A Framework for Power Saving in IoT Networks
Mukesh Taneja (Cisco Systems, India)
An IoT / M2M system may support a large number of battery-operated devices in addition to some mains-operated devices. It is important to conserve the energy of these battery-operated constrained devices. An IoT / M2M Gateway used in this system is an intermediate node between IoT / M2M devices and an IoT / M2M Service Platform. It enables distributed analytics and helps to reduce the traffic load in the network. This gateway could be stationary or mobile. In an IoT / M2M system, it becomes important to conserve the energy of this Gateway as well. This paper proposes a framework to reduce the power consumption of M2M / IoT devices as well as Gateway nodes. We buffer data at the IoT Application, IoT Gateways and Devices to keep devices and Gateway nodes in sleep mode as long as possible. The duration for which this data is buffered is computed using factors such as QoS requirements, the predicted pattern of future IoT / M2M messages and congestion indicators from different network nodes. This also potentially allows intelligent aggregation of IoT messages at the Gateway node. We also enhance signaling mechanisms and present software building blocks for this framework. Mesh as well as cellular access technologies are considered here.
pdf file
A Learning Approach for Identification of Refrigerator Load From Aggregate Load Signal
Guruprasad Seshadri (Audience Communications Systems India Pvt. Ltd. & (A Knowles Company), India); Girish Chandra and P. Balamuralidhar (Tata Consultancy Services, India)
Estimation of appliance-specific power consumption from an aggregate power signal is an important and challenging problem, also known as electrical load disaggregation. This paper addresses the problem of identifying the refrigerator load, since refrigerators contribute significantly to power consumption in domestic scenarios. The key idea is to detect events corresponding to the refrigerator that are embedded in the aggregate power signal. Firstly, features based on the amplitude and duration of events are identified by observation of the refrigerator-specific power signal. Secondly, these features are extracted from the aggregate power signal. Thirdly, the extracted features are utilized in both supervised and unsupervised learning schemes to identify regions of refrigerator activity. The event detection performance demonstrates the potential of the relevant features in both supervised and unsupervised learning frameworks.
pdf file
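A minimal sketch of the event-detection idea, assuming the refrigerator's on/off transitions fall within a known amplitude band, is given below; the wattage bounds and the toy aggregate signal are illustrative, not values from the paper.

```python
# Hedged sketch: find step changes in the aggregate power signal whose
# amplitude matches a refrigerator-sized load.
import numpy as np

def detect_events(power, min_step=80.0, max_step=200.0):
    """Return indices where consecutive-sample power jumps fall within
    the assumed refrigerator on/off amplitude range (watts)."""
    diff = np.diff(power)
    return np.where((np.abs(diff) >= min_step) & (np.abs(diff) <= max_step))[0]

# Toy aggregate signal: a 120 W fridge cycling on top of a 300 W base load.
power = np.full(600, 300.0)
for start in (50, 250, 450):
    power[start:start + 100] += 120.0
print(detect_events(power))   # on/off edges near samples 50, 150, 250, ...
```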
Two Factor Remote Authentication in Healthcare
Tapalina Bhattasali (University of Calcutta, India); Khalid Saeed (AGH University of Technology, Poland)
In order to control access to health-related data stored in a public cloud, an efficient authentication mechanism needs to be considered. Biometric authentication is more reliable than traditional means of authentication because of its uniqueness and low intrusiveness. To ensure a high accuracy level, a two-factor authentication mechanism is proposed here. In this framework, biometric authentication is fused with a secret PIN. The first factor uses a simple and effective behavioural-biometric keystroke analysis model, whilst the second factor uses a secret PIN mechanism. In the first factor, keystroke analysis is performed: raw data are collected, processed data are stored, and trust scores are generated. The end-user receives a trust score at each factor of authentication, and the decision is taken based on the end-user's final trust score. Performance analysis of the proposed mechanism shows its efficiency in authenticating end-users.
pptx file
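A minimal sketch of the two-factor fusion, assuming a Euclidean keystroke-timing distance mapped to a score and fixed fusion weights, is shown below; the template, weights and threshold are illustrative assumptions rather than the paper's trained model.

```python
# Hedged sketch: keystroke-dynamics score fused with a PIN check.
import numpy as np

def keystroke_score(sample, template, scale=50.0):
    """Map the deviation of key-hold/flight times (ms) from the enrolled
    template into a (0, 1] trust score."""
    dist = np.linalg.norm(np.asarray(sample) - np.asarray(template))
    return float(np.exp(-dist / scale))

def authenticate(sample, template, pin, true_pin,
                 w_bio=0.6, w_pin=0.4, threshold=0.7):
    # Final trust score is a weighted fusion of both factors.
    trust = w_bio * keystroke_score(sample, template) \
          + w_pin * (1.0 if pin == true_pin else 0.0)
    return trust, trust >= threshold

template = [110, 95, 130, 100]          # enrolled key-hold times (ms)
print(authenticate([115, 92, 128, 104], template, "4821", "4821"))
```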
Multi-objective Functions in Particle Swarm Optimization for Intrusion Detection
Nimmy Cleetus (SCMS School of Engineering & Technology); Dhanya A (SCMS School of Engineering & Technology, India)
This paper studies particle swarm optimization with multi-objective functions. Swarm intelligence plays a vital role in intrusion detection. An intrusion detection system identifies normal as well as abnormal behaviour of a system. The weighted aggregation method is used to combine the multiple objective functions. We propose an intrusion detection mechanism based on particle swarm optimization, which has a strong global search capability and is used for dimensionality optimization. A random forest is used as the classifier for modelling the attack and legitimate sets. An accuracy of 91.71% at a detection time of 0.22 s is obtained.
ppt file
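One plausible form of the weighted-aggregation objective for PSO-driven feature selection is sketched below, combining classifier accuracy with a feature-subset-size penalty; the weights, classifier settings and synthetic data are illustrative assumptions.

```python
# Hedged sketch: weighted-aggregation fitness for a PSO particle.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def fitness(position, X, y, w1=0.9, w2=0.1):
    """Weighted aggregation: w1 * accuracy - w2 * relative subset size.
    position is a continuous PSO particle; entries > 0.5 select features."""
    mask = position > 0.5
    if not mask.any():
        return 0.0
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    acc = cross_val_score(clf, X[:, mask], y, cv=3).mean()
    return w1 * acc - w2 * mask.sum() / mask.size

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 10))
y = rng.integers(0, 2, 60)
print(fitness(rng.random(10), X, y))
```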

S6-C: Second International Symposium on Education Informatics (ISEI - 2014)go to top

Room: 006 Block E Ground Floor
Chair: Maxwell Christian (GLS University & Gujarat Technological University, India)
A Survey of Intelligent Language Tutoring Systems
Mostafa Al-Emran (Universiti Malaysia Pahang, Malaysia & Al Buraimi University College, Oman); Khaled F. Shaalan (The British University in Dubai & Cairo University, United Arab Emirates)
Intelligent Language Tutoring Systems (ILTSs) play a significant role in evaluating students' answers through interaction with them. ILTSs implement Natural Language Processing (NLP) techniques in order to allow free input of words and sentences. ILTSs have the capability of identifying input errors and providing immediate feedback along with the source of the errors. It has been observed that ILTSs have not been surveyed intensively, which motivated us to conduct this research. Some recent NLP trends, such as Latent Semantic Analysis and entailment, are demonstrated. Different ILTSs are discussed, with a dedicated section on the development of Arabic ILTSs; Arabic shares many of its characteristics with Semitic and morphologically rich languages. In our presentation we point out new trends that emerged while conducting the survey.
pptx file

S7-A: Second International Workshop on Mathematical Modelling and Scientific Computing (MMSC-2014)go to top

Room: 010 Block E Ground Floor
Chairs: Rama Krishna Bandi (Indian Institute of Technology Roorkee, India), Aditi Misra (Tata Consultancy Services Limited, India)
Portfolio Selection Using Maximum-entropy Gain Loss Spread Model: A GA Based Approach
Akhter Rather (University of Hyderabad, India); Sastry (IDRBT, India); Arun Agarwal (University of Hyderabad, India)
This paper presents a multi-objective portfolio selection model solved using genetic algorithms. In this approach an entropy measure has been added so that a well-diversified portfolio is generated. Based on a literature survey, it was observed that there is a need for a new portfolio selection model free from the limitations observed in existing models. Hence, emphasis has been put on proposing a new portfolio selection model with the aim of achieving high returns and efficient diversification. We propose a new portfolio selection model and name it the Maximum-entropy Gain Loss Spread model (ME-GLS). The proposed model overcomes the limitations identified in the existing models available in the literature, and we give a comparative analysis of our proposed method against relevant methods from the literature. The proposed model achieves higher returns and, at the same time, a higher degree of diversification, which implies that risk is also minimized.
pptx file, pdf file
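A minimal sketch of a GA fitness in the spirit of ME-GLS, rewarding expected return plus Shannon-entropy diversification, is given below; the gain-loss-spread term is simplified away and all numbers are illustrative.

```python
# Hedged sketch: entropy-augmented portfolio fitness for a GA chromosome.
import numpy as np

def fitness(weights, mean_returns, lam=0.5):
    w = np.abs(weights) / np.abs(weights).sum()   # normalize to a valid portfolio
    ret = w @ mean_returns                         # expected portfolio return
    entropy = -np.sum(w * np.log(w + 1e-12))       # higher entropy -> diversification
    return ret + lam * entropy

mean_returns = np.array([0.08, 0.12, 0.05, 0.10])
print(fitness(np.array([1.0, 2.0, 0.5, 1.5]), mean_returns))
```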
Design and Analysis Performance of Kidney Stone Detection From Ultrasound Image by Level Set Segmentation and ANN Classification
Kalannagari Viswanath (Pondicherry University & Pondicherry Engineering College (PEC), India); Gunasundari R (Pondicherry University & Pondicherry Engineering College, India)
Abnormalities of the kidney can be identified by ultrasound imaging. The kidney may have structural abnormalities such as swelling or changes in its position and appearance. Kidney abnormality may also arise due to the formation of stones, cysts, cancerous cells, congenital anomalies, blockage of urine, etc. For surgical operations it is very important to identify the exact and accurate location of a stone in the kidney. Ultrasound images are of low contrast and contain speckle noise, which makes the detection of kidney abnormalities a rather challenging task. Thus, preprocessing of the ultrasound images is carried out to remove speckle noise. In preprocessing, image restoration is done first to reduce speckle noise; the image is then passed through a Gabor filter for smoothing, and the resultant image is enhanced using histogram equalization. The preprocessed ultrasound image is segmented using level set segmentation, since it yields better results. Two terms are used in our level set segmentation: the first is a momentum term and the second is based on resilient propagation (Rprop). The kidney region extracted after segmentation is decomposed into Symlet, biorthogonal (bior3.7, bior3.9 & bior4.4) and Daubechies wavelet subbands to extract energy levels. These energy levels indicate the presence of a stone at a particular location, as they vary significantly from normal energy levels. A Multilayer Perceptron (MLP) ANN with Back Propagation (BP) is trained on these energy levels to identify the type of stone, with an accuracy of 98.8%.
pptx file
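The wavelet-energy feature step can be sketched with PyWavelets as below; the region is a synthetic stand-in for the segmented kidney ROI, and `sym4` is just one member of the families (Symlets, biorthogonal, Daubechies) the paper mentions.

```python
# Hedged sketch: level-1 2-D DWT subband energies as features.
import numpy as np
import pywt

def subband_energies(region, wavelet="sym4"):
    """Energy of each level-1 2-D DWT subband (approximation + details)."""
    LL, (LH, HL, HH) = pywt.dwt2(region.astype(float), wavelet)
    return {name: float(np.sum(band ** 2))
            for name, band in [("LL", LL), ("LH", LH), ("HL", HL), ("HH", HH)]}

region = np.random.rand(64, 64)          # stand-in for the extracted ROI
print(subband_energies(region))          # features fed to the MLP-BP ANN
```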
Estimating the Number of Prime Numbers Less Than a Given Positive Integer by a Novel Quadrature Method: A Study of Accuracy and Convergence
Mushtaque Ahamed (PESIT South Campus, India); Snehanshu Saha (PES Institute of Technology, Bangalore South Campus, India)
Numerical integration plays a very important role in the evaluation of definite improper integrals, as there are often no simple analytic results available for them. In this paper we explore four such quadrature formulas and their performance in evaluating the logarithmic integral, a definite improper integral that is one of the important integrals in Number Theory. We also compare their performance with some well-known quadrature formulas such as Simpson's rule, the Trapezoidal rule and Weddle's rule. AMS Subject Classification: Numerical integration, 65D32 Quadrature and cubature formulas.
pdf file
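For concreteness, the sketch below evaluates the offset logarithmic integral Li(x) (the integral of 1/ln t from 2 to x) by composite Simpson's rule and compares it with an exact sieve count of pi(x); the number of subintervals is an illustrative choice.

```python
# Hedged sketch: Li(x) by composite Simpson's rule vs. an exact pi(x).
import numpy as np

def li_simpson(x, n=1000):
    n += n % 2                          # Simpson's rule needs an even count
    t = np.linspace(2.0, x, n + 1)
    f = 1.0 / np.log(t)
    h = (x - 2.0) / n
    return h / 3 * (f[0] + f[-1] + 4 * f[1:-1:2].sum() + 2 * f[2:-1:2].sum())

def prime_count(x):
    sieve = np.ones(x + 1, dtype=bool)
    sieve[:2] = False
    for p in range(2, int(x ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = False
    return int(sieve.sum())

print("Li(10^4) ~ %.1f, pi(10^4) = %d" % (li_simpson(10_000), prime_count(10_000)))
```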
Codes Over $\mathbb{Z}_4+v\mathbb{Z}_4$
Rama Krishna Bandi and Maheshanand Bhaintwal (Indian Institute of Technology Roorkee, India)
In this paper we study linear codes over the ring $R=\mathbb{Z}_4 + v\mathbb{Z}_4$ , where $v^2=v$. Using a Gray map on $R$ we obtain the MacWilliams identities for both Lee and Gray weight enumerators for codes over $R$. We also briefly discuss self-dual codes over $R$. We give some construction methods of self-dual and self-orthogonal codes, and illustrate them with some examples.
pdf file
Perturbations on Prey Predator Equilibria: An in Silico Approach
Traditional models state that prey and predator populations depend on each other and follow the same cycle indefinitely until an external force is applied. In this manuscript we study the effect of epidemics on two- and three-species prey-predator systems, using time-delay models. We introduce a z-shaped function into the prey-predator mathematical model to capture the effect of epidemics. After the epidemic, both prey and predators showed a constant rate of change in population. The second predator and the prey species are not directly dependent, so the effect of a reduced number of prey reaches the second predator only gradually; it therefore does not show large variations and keeps a linear behaviour in the three-species model.
pptx file
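A minimal sketch, assuming a classical two-species Lotka-Volterra core with a z-shaped (smooth-step) epidemic suppression of prey growth, is given below; the exact z-function form and every parameter value are illustrative, not the manuscript's.

```python
# Hedged sketch: Lotka-Volterra dynamics with a z-shaped epidemic term.
import numpy as np
from scipy.integrate import odeint

def z_shape(t, onset=30.0, width=5.0):
    """Smoothly drops from 1 toward 0.4 around the epidemic onset."""
    return 0.4 + 0.6 / (1.0 + np.exp((t - onset) / width))

def system(state, t, a=1.0, b=0.5, c=0.5, d=1.0):
    prey, pred = state
    dprey = a * z_shape(t) * prey - b * prey * pred   # suppressed prey growth
    dpred = c * prey * pred - d * pred
    return [dprey, dpred]

t = np.linspace(0, 60, 600)
sol = odeint(system, [2.0, 1.0], t)
print("final populations (prey, predator):", sol[-1])
```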
Numerical Solution of Some Nonlinear Wave Equations Using Modified Cubic B-spline Differential Quadrature Method
Ramesh Chand Mittal (IIT Roorkee, India); Rachna Bhatia (Indian Institute of Technology Roorkee, India)
This paper presents a relatively new approach to solving the second-order, one-dimensional nonlinear wave equation. We use a differential quadrature method based on modified cubic B-spline basis functions for space discretization, which results in an amenable system of differential equations. The resulting system is solved using the SSP-RK43 scheme, which needs less storage space and causes less accumulation of numerical errors. The utility of the scheme is that it does not need any linearization or transformation to handle the nonlinear terms and hence reduces the computational effort. The accuracy of the approach has been confirmed with numerical experiments. $L_2$ and $L_\infty$ error norms are computed for each example, and it is shown that the results obtained are acceptable and in good agreement with earlier studies.
pdf file
Hysteresis in CaMKII Activity: A Stochastic Model
Arun Anirudhan and Ranjith G (Sree Chitra Thirunal Institute for Medical Sciences and Technology, India); Omkumar RV (Rajiv Gandhi Centre for Biotechnology, India)
Calcium/calmodulin dependent protein kinase II (CaMKII) plays a crucial role in the induction of Long Term Potentiation. Several mathematical models have been developed to predict the activity levels of CaMKII in the physiological condition and its molecular switch properties. This paper attempts to study the hysteresis property of CaMKII using a stochastic model. It was observed that hysteresis exists under physiological conditions, but the width of the hysteresis band depends on the concentration of protein phosphatase 1 (PP1). The hysteresis band was maximum at low concentrations of PP1 and reduced with increasing concentrations of PP1. It was also observed that the number of phosphorylated subunits saturates beyond a threshold value of calcium signal.
pptx file

S7-B: International Workshop on GPUs and Scientific Computing (GSC 2014)go to top

Room: 010 Block E Ground Floor
Chair: Aditi Misra (Tata Consultancy Services Limited, India)
Intraoperative Cardiac MRI Processing Using Threshold Logic Cells
Sherin Sugathan (Siemens Healthcare Pvt. Ltd., India); Alex Pappachen James (Nazarbayev University, Kazakhstan)
Intraoperative MRI is gaining importance in surgery as it enables surgeons to look at real-time MRI scans during the operation. Given the large spatial and temporal resolution of the MRI scans, the process of crunching medical data in real time can be very challenging and crucial. This paper presents a hardware-based approach for processing real-time MRI with an application to heart rate analysis.
zip file

Wednesday, September 24, 16:00 - 18:00 (Asia/Calcutta)

S8: Third International Workshop on Recent Advances in Medical Informatics (RAMI-2014)go to top

Room: 007-A Block E Ground Floor
Chairs: Manoj Kumar T K (IIITM-K, India), Aditi Misra (Tata Consultancy Services Limited, India)
Simulation of Physiological Response to Dynamic Contents
Fatima Isiaka (Sheffield Hallam University, United Kingdom (Great Britain)); Adamu Mailafiya Ibrahim (University of Leeds, United Kingdom (Great Britain))
In web usability, changes in the contents of a page result in spontaneous reactions in users, which may affect physiological processes occurring during direct interaction with a webpage. Our goal is to model the mechanical interactions of body and eye movement behaviour systems. We are particularly concerned with physiological reactions to stimuli, with the aim of building a proposed system that detects response peaks and their correlates to web widgets. This consists of four modules: (a) develop a set of integral equations that represent the underlying physiological components; (b) construct hypotheses suggesting the structural alterations that result from direct interaction with webpages; (c) set parameters for the equations from experimental data; and (d) validate the model on the original physiological measures. The model output shows conformity with the criteria required for verification and validity of the prototypical simulation algorithm.
pdf file
Analysis of Flow Characteristics Through an Artery with Time Dependent Overlapping Stenosis
Amit Bhatnagar (F E T Agra College Agra, India); R K Shrivastav (Agra College, Agra, India)
The present study deals with blood flow through an arterial segment with an overlapping stenosis that depends on time to a certain extent. The problem is examined through a combination of analytical and numerical techniques. The influence of time and slip velocity on wall shear stress, resistance to flow, axial velocity and volumetric flow rate is expressed graphically. The characteristic velocity for the protuberance is taken into consideration, under which the critical value of the Reynolds number where separation occurs has been obtained. Expressions for the dimensionless discharge variable and the dimensionless shear stress variable are found. It is observed that the wall shear stress increases with the time parameter but decreases with increasing slip velocity. The axial velocity of blood diminishes with the time parameter. This investigation of blood flow may be useful in the design and fabrication of artificial organs.
pptx file
Effects of Fingertip Orientation and Flash Location in Smartphone Photoplethysmography
Aishwarya Visvanathan and Rohan Banerjee (Tata Consultancy Services, India); Aditi Misra (Tata Consultancy Services Limited, India); Anirban Dutta Choudhury and Arpan Pal (Tata Consultancy Services, India)
Smartphone-based reflective Photoplethysmography (PPG) measures light reflected from blood capillaries, typically from the fingertip of a user. It has gained popularity as a means of unobtrusive, affordable physiological sensing. The orientation and relative distance between the smartphone flash and camera, the fingertip placement direction, etc. highly influence the captured PPG signal quality, and hence affect the estimation quality of physiological parameters. In this paper, the frames captured by the smartphone camera are divided into smaller blocks, and all the blocks are compared for heart rate estimation. Results indicate that blocks at a similar distance to the flash yield similar-quality PPG. The observation is validated against two popular off-the-shelf smartphones having different flash positions with respect to the camera.
pptx file
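The block-wise analysis can be sketched as below: split each red-channel frame into a grid, average each block over time, and read the heart rate off the dominant spectral peak. The grid size, frame rate and band limits are illustrative assumptions.

```python
# Hedged sketch: block-wise PPG signals and FFT-based heart rate.
import numpy as np

def block_means(frames, rows=4, cols=4):
    """frames: (T, H, W) red-channel stack -> (T, rows*cols) signals."""
    T, H, W = frames.shape
    bh, bw = H // rows, W // cols
    sig = [frames[:, r*bh:(r+1)*bh, c*bw:(c+1)*bw].mean(axis=(1, 2))
           for r in range(rows) for c in range(cols)]
    return np.stack(sig, axis=1)

def heart_rate_bpm(signal, fps=30.0):
    s = signal - signal.mean()
    spec = np.abs(np.fft.rfft(s))
    freqs = np.fft.rfftfreq(s.size, d=1.0 / fps)
    band = (freqs > 0.7) & (freqs < 3.5)        # plausible 42-210 bpm band
    return 60.0 * freqs[band][np.argmax(spec[band])]

# Synthetic 10 s clip with a 1.2 Hz (72 bpm) pulse component.
t = np.arange(0, 10, 1 / 30.0)
frames = 100 + 5 * np.sin(2 * np.pi * 1.2 * t)[:, None, None] * np.ones((1, 40, 40))
print(heart_rate_bpm(block_means(frames)[:, 0]))   # ~72 bpm
```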
Automatic Diagnosis of Astigmatism for Pentacam Sagittal Maps
Mandeep Singh (Thapar University, India); Sarah Ali Hasan (None, Iraq)
Astigmatism is a very common refractive error, and it often coexists with other refractive errors; two thirds of the population worldwide who have myopia have astigmatism as well. The diagnosis of these visual disorders has advanced vastly in the last 10 years and, like everything around us nowadays, medical treatment and diagnosis are becoming more computerized and automated day by day. In this paper we present a straightforward method for diagnosing astigmatism and classifying its types and degrees by applying morphology and shape descriptors to the latest topographic images.
pptx file
Automatic Organ Validation of B-mode Ultrasound Images for Transmission to Cloud
Ramkrishna Bharath (Indian Institute of Technology, India); P Rajalakshmi (Indian Institute of Technology Hyderabad, India)
Miniaturization of medical ultrasound scanning machines has made them usable in point-of-care applications. A lack of sonographers, and their unwillingness to work in rural areas, limits the benefits of ultrasound systems in rural healthcare. Diagnosis of patients through ultrasound is done by visualizing the ultrasound-scanned images of organs. Diagnosis through telemedicine involves transmitting ultrasound images from rural locations to the cloud, where a sonographer can remotely access the ultrasound data and generate the report, thus reducing the geographical separation of patients in healthcare. Due to the lack of adequate sonographers, ultrasound scanning in remote areas is operated by semi-skilled clinicians, and most of the images they generate are not useful for diagnosis. Transmitting all these images increases the data in the cloud, drains the battery of the portable ultrasound machine and increases latency in medication. This paper provides automatic B-mode ultrasound image validation based on the organ information present in the image, thus avoiding transmission of invalid images to the cloud. A linear-kernel SVM classifier, trained with first-order statistical features of images with and without organs, is used to classify the images as valid or invalid for diagnosis. The algorithm achieved a recognition efficiency of 94.2% in classifying the ultrasound images.
pptx file
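A toy version of the validation step, first-order statistical features feeding a linear-kernel SVM, might look like the following; the feature list and the synthetic "valid"/"invalid" images are stand-ins for the paper's training data.

```python
# Hedged sketch: first-order statistics + linear SVM frame validation.
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.svm import LinearSVC

def first_order_features(img):
    v = img.astype(float).ravel()
    return [v.mean(), v.std(), skew(v), kurtosis(v), np.median(v)]

rng = np.random.default_rng(0)
valid = [rng.normal(120, 40, (64, 64)) for _ in range(20)]    # organ-like texture
invalid = [rng.normal(20, 5, (64, 64)) for _ in range(20)]    # mostly dark frames
X = np.array([first_order_features(i) for i in valid + invalid])
y = np.array([1] * 20 + [0] * 20)

clf = LinearSVC().fit(X, y)
print("valid?", clf.predict([first_order_features(valid[0])])[0])
```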

Wednesday, September 24, 16:00 - 19:00 (Asia/Calcutta)

S9: International Workshop on Advances in Satellite Communications and Networking (SatComNet'14)go to top

Room: 006 Block E Ground Floor
Chair: Maxwell Christian (GLS University & Gujarat Technological University, India)
Double H Shaped Metamaterial Embedded Compact RMPA
Preet Kaur and Sanjay Aggarwal (YMCAUST, India); Asok De (NIT Patna, India)
This paper presents a compact double H shaped metamaterial-embedded rectangular microstrip patch antenna. Slots are cut in the rectangular microstrip patch antenna (RMPA) to decrease the resonant frequency, but this leads to impedance mismatch. To overcome this effect, a double H shaped metamaterial is embedded inside the slot. This technique not only provides good impedance matching and bandwidth but also provides a 34% reduction in size. The proposed antenna is simulated and optimized using HFSS software. The prototype antenna has been fabricated, and the measured results of the proposed antenna are found to be in good agreement with the simulated results.
pptx file
Analysis and Design Rectangular Patch with Half Circle Fractal Techniques
Sanjeev Yadav (Govt. Women Engineering College Ajmer, India); Pushpanjali Jain and Ruchika Choudhary (Govt. Engineering College Ajmer, India)
In this paper, a rectangular patch with half-circle fractal geometry is proposed. The proposed design operates at the dual frequency bands of 2.7-2.9 GHz and 7.8-8.5 GHz and can be used by the military for meteorological purposes and satellite communications. The proposed design is fabricated on RT/duroid 5880 with relative permittivity 2.2 and dimensions 42 × 42 × 32 mm^3. To get proper impedance matching of 50 Ω, a coaxial feed line is used. The proposed design has return losses of -18.31 dB and -23.31 dB in the frequency bands 2.7-2.9 GHz and 7.8-8.5 GHz. The radiation pattern of the proposed design in the E-plane and H-plane is acceptable. The proposed design is simulated using the High Frequency Structure Simulator.
pptx file
Interference Mitigation in Downlink Multi-Beam LEO Satellite Systems Using DS-CDMA/CI
Rajan Kapoor, Ramu Endluri and Preetam Kumar (Indian Institute of Technology Patna, India)
The interference from adjacent beams of a low earth orbiting (LEO) satellite degrades the performance of users in the area illuminated by the desired beam. This paper investigates the advantages of employing Carrier Interferometry (CI) codes in the downlink of a DS-CDMA based LEO satellite link to mitigate such interference. Qualitatively, the use of these codes in a downlink LEO satellite channel offers the following advantages: (1) improved BER performance in an interference-dominant satellite channel, resulting in improved capacity; (2) significant reduction in the bit error rate (BER) floor; (3) less variation in performance with changes in the number of active users; (4) uniform cross-correlation characteristics over the code space for uniform quality of service; and (5) orthogonal CI codes with arbitrary integer spreading length can be generated. Quantitatively, these codes provide an improvement in capacity of at least 50 percent at 15 dB and 20 degrees elevation, which further improves with increasing elevation angle. Moreover, the achievable BER floor is at least as low as that provided by a 10-degree elevation gain with traditional codes. To obtain realistic results, Loo's statistical model for the land mobile satellite (LMS) channel is assumed, and the channel parameters for this model were taken from linear regression fits of experimental measurements of a real-world LMS channel.
Sub-band Filtering in Compressive Domain
Chandra Prakash (Space Applications Centre & ISRO, India)
This paper proposes a novel and computationally efficient method of compressive-domain sub-band filtering. The technique utilizes the conjugate symmetry property of the DFT matrix to achieve sub-band filtering of a wideband multiband input signal sensed by the Modulated Wideband Converter (MWC) architecture. The proposed technique has the flexibility to filter single or multiple sub-bands of a wideband multiband input signal, which is sparse in the frequency domain, without additional computational cost. The filter bandwidth can also be controlled by increasing the number of channels in the sensing architecture. The simulation results shown in this paper include sub-band filtering of a multiband frequency-domain sparse input signal with and without noise. The BER performance of a BPSK modulated signal after filtering and demodulation using the proposed technique is also presented to analyse the impact of sub-band filtering.
pdf file
Ku-Band Low Noise Multistage Amplifier MIC Performance Comparable to MMIC
A novel common-source three-stage microwave integrated circuit (MIC) low noise amplifier is designed, analyzed, fabricated and tested at Ku-band. Each stage is operated at a 2V/15mA rating to achieve the optimum desired gain-noise performance, taking care of trade-offs. A gain of 30 dB with a minimum noise figure of 1.4 dB is achieved at 13 GHz. The input port is noise matched, with an input return loss of -13 dB. The power-matched output port provides an output return loss of -27 dB. It has -57 dB isolation to ensure unconditional stability. The circuit was also tested at a lower operating bias point of 1V/2.6mA, which keeps the DC power consumption as low as 7.8 mW. In this case, the measurements show 26 dB gain with a 2.14 dB noise figure, -9 dB input return loss, -18 dB output return loss and -60 dB isolation. The MIC results show the circuit's potential to achieve a response comparable with that of monolithic microwave integrated circuits (MMICs). The device used here is a high electron mobility transistor (HEMT). The circuit is fabricated on a 25 mil alumina substrate (εr=9.9, tanδ=0.0007 at 10 GHz) with a gold-plated Kovar carrier plate.

Wednesday, September 24, 16:00 - 18:00 (Asia/Calcutta)

T1: Tutorial 1 - NoSQL DatabasesDetailsgo to top

Mr. G C Deka, Ministry of Labour & Employment Govt. of India
Room: 003 Block E Ground Floor

Distributed data replication and partitioning are the two fundamentals for sustaining the enormous growth in data volume, velocity and value in the cloud. In a traditional database cluster, data must either be replicated across the members of the cluster or partitioned between them. Shipping data manually to distant cloud servers is a time-consuming, risky and expensive process, and hence the network is the best option for data transfer among distributed and diverse database systems. Relational databases are difficult to provision dynamically and efficiently on demand to meet cloud requirements. NoSQL databases are a new breed of databases designed to overcome the identified limitations and drawbacks of RDBMS. The goal of NoSQL is to provide scalability and availability, and to address the other limitations of RDBMS for cloud computing.

The common motivation of NoSQL design is to meet scalability and failover requirements. In most NoSQL database systems, data is partitioned and replicated across multiple nodes. Inherently, most of them use either Google's MapReduce, the Hadoop Distributed File System, or Hadoop MapReduce for data collection. Cassandra, HBase and MongoDB are among the most widely used and can be regarded as representatives of the NoSQL world. The CAP theorem states that a distributed database can be optimized for only 2 of its 3 priorities, namely Consistency (C), Availability (A) and Partition Tolerance (P), leading to the combinations CA, CP and AP. There are a number of NoSQL databases with different features and functionality. This tutorial discusses 10 popular NoSQL databases under 5 categories for CAP analysis.

Outline

Introduction to cloud computing (historical background, 10 slides)
NoSQL (5 categories of NoSQL, 10-15 slides)
Discussion of cloud databases (with a focus on the CAP theorem, 5-10 slides)
CAP analysis of 10 popular databases (10-20 slides)
Practical session on MongoDB
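As a warm-up for the practical session on MongoDB listed above, a minimal PyMongo round trip is sketched below; the connection string, database and collection names are assumptions for illustration.

```python
# Hedged sketch: insert and query a document in MongoDB via PyMongo.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")  # assumed local server
db = client["tutorial_db"]
papers = db["papers"]

papers.insert_one({"title": "NoSQL Databases", "track": "T1",
                   "topics": ["CAP", "MongoDB", "replication"]})
# Schemaless query: find documents tagged with a given topic.
for doc in papers.find({"topics": "CAP"}):
    print(doc["title"])
```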

T2: Tutorial 2 - Systems Safety, Security & SustainabilityDetailsgo to top

Prof. Ali Hessami, Vega Systems, UK
Room: 004 Block E Ground Floor

The incessant demand for better value, increased functionality and enhanced quality underlies the drive towards innovation and exploitation of emerging technologies. Whilst these bring a mixed bag of desirable properties in modern products, services and systems, they are often accompanied by complexity, uncertainty and risk. The performance of products, services, systems and undertakings is a measure of their utility, output and perceived or real emergent properties. The key facets to performance are technical, reliability/availability, commercial, safety, security/vulnerability, environmental/sustainability, quality & perceived value/utility.

Whilst the above dimensions are reasonably distinct and often inter-related, the key differentiation between the safety and security aspects is broadly as follows: safety is freedom from harm to people caused by unintentional or random/systematic events, whilst security is freedom from loss caused by deliberate acts perpetrated by people. In this spirit, security is principally characterized by intent and causation, as opposed to strictly being an output performance indicator reflecting degrees of loss or gain. Sustainability is a more complex attribute and encompasses societal, economic, environmental, resource and technological dimensions.

Other than the hard (technical, commercial) and soft (quality and value) performance criteria, the rest are mainly measured probabilistically, in terms of risk or reward, due to inherent uncertainties. The overall utility and success of any endeavor essentially amounts to striking the correct balance between these hard and soft performance attributes of the goal being pursued. The optimization of these factors poses a major challenge to duty holders and decision makers today, since it demands understanding and competence in social, behavioral, commercial and legal disciplines as well as technical engineering. In this spirit, systems assurance comprises the portfolio of methods, processes, resources and activities adopted to ensure that products, services and systems are designed and operated to deliver a required blend of desired performance measures whilst remaining free from undesirable emergent properties which pose a threat to the health, safety and welfare of people, commercial damage to businesses, and harm to the natural habitat.

The tutorial on systems oriented safety, security & sustainability would endeavor to cover the following facets of systemic assurance:

I. Systems Specification
II. Requirements Analysis/Specification and Target Setting
III. High Integrity Systems Design
IV. Systems Modeling and Simulation
V. Qualitative and Quantitative Systems Safety, Security & Sustainability Assessment
VI. Probabilistic Safety and Security Performance Forecasting
VII. Systems Risk and Reward Management
VIII. Demonstration of Compliance against Standards and Legal Requirements

Thursday, September 25

Thursday, September 25, 09:00 - 13:30 (Asia/Calcutta)

R2: Conference Registrationgo to top

Room: Block E, Ground Floor (Reception)

Thursday, September 25, 09:30 - 10:30 (Asia/Calcutta)

K3: Keynote - Are Cyber Physical Systems (CPS) ready for the reality?Detailsgo to top

Dr. Axel Sikora, University of Applied Sciences Offenburg, Germany
Room: Auditorium Block D Ground Floor

Cyber Physical Systems (CPS) have been around for several years now. Market forecasts predict their broad utilization in the Internet of Things (IoT), as they promise cost reduction or added value through increased services and quality and the availability of information. But are CPS already well prepared for large-scale implementation in real life?

Mission-critical points are scalability, energy efficiency, safety and security. However, cost issues in the development, commissioning and operation phases also play an important role. This talk gives an overview of the state of the art in CPS and shows some selected examples from the extensive list of R&D projects in the author's team over the last 15 years. It will also give hints for further R&D opportunities.

Thursday, September 25, 10:30 - 11:20 (Asia/Calcutta)

K4: Keynote - Security and Trust Convergence: Attributes, Relations and ProvenanceDetailsgo to top

Dr. Ravi Sandhu, University of Texas at San Antonio, USA
Room: Auditorium Block D Ground Floor

Security and trust are interdependent concepts which need to converge to address the cyber security needs of emerging systems. This talk will lay out a vision for this convergence. We argue that security and trust are inherently dependent on three foundational concepts: attributes, relations and provenance. Security researchers have dealt with these three concepts more or less independently. In the future, convergence of these three is required to achieve meaningful cyber security. The talk will speculate on some research and technology challenges and opportunities in this respect.

Thursday, September 25, 11:40 - 12:30 (Asia/Calcutta)

K5: Keynote - Enabling Technology for Quantum ComputingDetailsgo to top

Dr. Peter Mueller, IBM Zurich Research Laboratory, Switzerland
Room: Auditorium Block D Ground Floor

Thirty years ago, Richard Feynman thought up the idea of a 'Quantum Computer', which at that time was regarded as science fiction. But with advances in the science and technologies of computing, communications and informatics, the fiction is becoming reality. Quantum algorithms have been developed that have the potential to solve problems in the fields of number theory and model simulation. Building quantum computing systems, and the dedicated software for them, requires developments in many scientific areas. We will take a look at basic quantum computing hardware, at a possible quantum logic technology, and at how to use it. Questions of device architecture and programmability, scalability, reliability and error correction will be addressed and compared with the related topics in our current research on classical computation.

Thursday, September 25, 12:30 - 13:30 (Asia/Calcutta)

K6: Keynote - Hybrid Classifiers - Theory and PractiseDetailsgo to top

Dr. Michal Wozniak, Wroclaw University of Technology, Wroclaw, Poland
Room: Auditorium Block D Ground Floor

The main aim of this talk is to deliver compact knowledge on how hybridization can help improve the quality of computer classification systems. The talk is based on his recently published book, Hybrid Classifiers: Methods of Data, Knowledge, and Classifier Combination (Springer, 2014). To help the audience clearly grasp hybridization, the talk primarily focuses on introducing the different levels of hybridization and illuminating the problems faced when dealing with such projects. The data and knowledge incorporated in hybridization are considered first, followed by a still-growing area of classifier systems known as combined classifiers.

Thursday, September 25, 13:30 - 14:30 (Asia/Calcutta)

L2: Lunch Breakgo to top

Room: Lawn Area Block E

Thursday, September 25, 14:30 - 19:00 (Asia/Calcutta)

S13: Industry TrackDetailsgo to top

Keynote: Network Functions Virtualization Research - Emerging Directions
Dr. Dilip Krishnaswamy, IBM Research - India
Room: Auditorium Block D Ground Floor
Chair: Dilip Krishnaswamy (IBM Research, India)

Network Functions Virtualization is an emerging area of research that enables hardware appliances in networks to be replaced by software appliances in data centers. The talk will discuss various emerging research directions in this area such as policy-based resource management, data center resource optimization, viral content management, security in NFV systems, and open interfaces for information exchange in such systems.

Application of Software Failure Mode and Effect Analysis for On-board Software
Software is an integral part of spacecraft. The needs of complex missions, operator requirements and autonomy requirements have initiated comprehensive on-board software requirements. In order to analyze the design arising from these complex and evolving requirements, an analysis technique is worked out to identify faults at the lowest level, their propagation and their effect on the system. Software Failure Mode Effect Analysis (SFMEA) is a technique which can be used to detect single-point failures in the software, as well as the effect of hardware failures on software. System and mission engineers can utilize the outcome of SFMEA for improvement during the design phase and for error-free handling during the on-orbit phase, respectively. The authors of this paper have come up with an innovative approach to carrying out SFMEA. This paper demonstrates the approach and outcome, along with a case study applied to one of the modules of on-board software. A template has also been created to record the SFMEA findings.
pdf file
Efficient Implementation of Low Density Parity Check Codes for Satellite Ground Terminals
Narender Kumar (Space Applications Centre & Indian Space Research Organization, India); Chandra Prakash (Space Applications Centre & ISRO, India)
Low Density Parity Check (LDPC) codes have gained a lot of importance in the channel coding arena because they provide excellent performance close to the Shannon limit and can easily beat the best known turbo codes for large block lengths. This paper explains generic algorithms for hardware-efficient implementation of the encoder and decoder for LDPC codes adopted by the Digital Video Broadcast-Satellite-Second Generation (DVB-S2) standard. Low-complexity, high-throughput architectures are proposed for the LDPC encoder and a Sum-Product Algorithm based LDPC decoder. MATLAB and HDL simulation results are presented for the proposed encoder and decoder architectures. Satellite link test results of a scaled-down FPGA-based implementation with an IESS satellite modem are also presented.
pptx file
Frequency Reconfigurable Multi-band Inverted T-slot Antenna for Wireless Application
Ratnesh Kumari (Sardar Vallabhbhai National Institute of Technology, Surat, Gujarat, India); Mithilesh Kumar (Rajasthan Technical University, Kota, Rajasthan, India)
In the modern era of wireless communication, reconfigurable radios are becoming very popular due to their ability to operate over diverse frequency ranges with the same hardware. One of the important aspects of such radios is the antenna. This paper reports a new, compact reconfigurable antenna with a size of 22 x 16 x 1 mm3. The design is carried out using an FR-4 substrate of thickness 1 mm, with a dielectric constant of 4.05 and a loss tangent of 0.02. The proposed antenna uses the concept of a T-slot in the radiating patch, which separates the antenna into three parts, and an E-slot in the ground plane. The radiating patch is connected by two PIN diodes for reconfigurable operation. When diode D1 is OFF and D2 is ON, the antenna works at the 3.9 GHz, 8.9 GHz and 11.2 GHz frequencies. When diode D1 is in the ON state and D2 is in the OFF state, the antenna switches to the 4.1 GHz, 8.4 GHz and 11.3 GHz resonant frequencies. This antenna is useful for WiMAX, C-band, X-band and fixed satellite communication systems.
pptx file
Analytical Study of Implementation Issues of NTRU
Nitesh Emmadi (International Institute of Information Technology Hyderabad, India); Harika Narumanchi (Tata Consultancy Services, India); Praveen Gauravaram (Tata Consultancy Services, Australia)
NTRU is a lattice-based public-key cryptosystem that offers encryption and digital signature solutions. It was designed by Hoffstein, Pipher and Silverman. The NTRU cryptosystem was patented by NTRU Cryptosystems Inc. (later acquired by Security Innovation) and is available as the IEEE 1363.1 and X9.98 standards. NTRU is resistant to attacks based on quantum computing, to which the standard RSA and ECC public-key cryptosystems are vulnerable. In addition, NTRU has performance advantages over these cryptosystems. Considering this importance, it is highly recommended to adopt NTRU as part of a cipher suite alongside widely used cryptosystems for internet security protocols and applications. In this paper, we present our analytical study of the implementation of the NTRU encryption scheme, which serves as a guideline for developers. In particular, we show some non-trivial issues that should be addressed towards a secure and efficient NTRU implementation. While our implementation has not been targeted at any specific platform or application, such implementations would still benefit from our study.
pdf file
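NTRU's performance-critical primitive, cyclic convolution in Z_q[x]/(x^N - 1), is easy to state even though implementing it efficiently is where many such issues arise; the sketch below uses toy, insecure parameters chosen purely for illustration.

```python
# Hedged sketch: NTRU-style star-multiplication with toy parameters.
import numpy as np

def convolve_mod(a, b, q):
    """Coefficients of a*b reduced mod x^N - 1, then mod q."""
    N = len(a)
    c = np.zeros(N, dtype=np.int64)
    for i in range(N):
        for j in range(N):
            c[(i + j) % N] += a[i] * b[j]
    return c % q

N, q = 11, 32                               # toy values, not a secure parameter set
f = np.random.randint(-1, 2, N)             # ternary polynomial
g = np.random.randint(-1, 2, N)
print(convolve_mod(f, g, q))
```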
A Novel Approach of Triangular-Circular Fractal Antenna
Sanjeev Yadav (Govt. Women Engineering College Ajmer, India); Pushpanjali Jain and Ruchika Choudhary (Govt. Engineering College Ajmer, India)
In this paper, a microstrip antenna design combining triangular and circular shapes is proposed. The proposed antenna works in wireless applications and contributes to the field of ultra-wideband applications. The proposed design is fabricated on a low-cost FR-4 epoxy substrate with relative permittivity 4.4 and dimensions 17.89 × 21.45 × 1.6 mm^3. The proposed antenna operates over the frequency range 4.12 GHz-6.8 GHz. A return loss of -33.74 dB is achieved at the resonant frequency of 5.6 GHz, and VSWR < 2 over the entire operating frequency range. A microstrip feed line is used to feed the proposed antenna. The proposed design exhibits gain up to 2.2 dB over the frequency range. The radiation pattern of the proposed design is symmetric over the frequency range in the E-plane and omnidirectional in the H-plane. The simulation of the proposed antenna design is done using the High Frequency Structure Simulator.
pptx file
Comparative Analysis of Controllers Designed for Pure Integral Complex Delayed Process Model
Ruchika Jangwan (Graphic Era University, India); Pradeep Kumar Juneja (Graphic Era University, India); Mayank Chaturvedi (Graphic Era University, India); Sandeep Sunori (GEHU, India); Priyanka Singh (Graphic Era University, India)
In the present analysis, controllers based on different tuning techniques are designed for a pure integral process with time delay, using different approximations of the time delay. The controllers are compared on the approximated process for set-point tracking capability, which is determined on the basis of transient as well as steady-state analysis of the step response.
pptx file
Mammogram Mass Classification Based on Discrete Wavelet Transform Textural Features
Abdul Jaleel and Sibi Salim (TKM College of Engineering, Kollam, Kerala, India); Archana S (TKM College of Engineering, Kollam, Kerala & Kerala University, India)
This paper proposes an algorithm for the early detection of breast cancer. The work incorporates manual segmentation and textural analysis for mammogram mass classification. Discrete Wavelet Transform (DWT) features act as a powerful input to the classifiers. A total of 148 mammogram images were taken from the authentic mini-MIAS database and, under the supervision of classifiers, solid breast nodules were classified into benign and malignant. The classifiers used are K-Nearest Neighbor (K-NN), Support Vector Machine (SVM) and Radial Basis Function Neural Network (RBFNN). It is found that RBFNN with DWT features outperforms SVM and K-NN, with 94.6% accuracy. The proposed system has high potential for cancer detection from digital mammograms.
pdf file
Real-time Simulator of Collaborative Autonomous Vehicles
Farid Bounini (LIV – Université de Sherbrooke); Denis Gingras (Université de Sherbrooke, Canada); Vincent Lapointe (Opal-RT Technologies Inc, Canada); Dominique Gruyer (LIVIC-IFSTTAR, France)
Collaborative autonomous vehicles will appear in the near future and will deeply transform road transportation systems, addressing in part many issues such as safety, traffic efficiency, etc. Validation and testing of complex scenarios involving sets of autonomous collaborative vehicles is becoming an important challenge. Each vehicle in the set is autonomous and acts asynchronously, receiving and processing huge amounts of data in real time coming from the environment and other vehicles. Simulation of such scenarios in real time requires huge computing resources. This paper presents a simulation platform combining the real-time OPAL-RT technologies for processing and parallel computing with the Pro-SiVIC vehicular simulator from Civitec for realistic simulation of vehicle dynamics, the road/environment, and sensor behaviors. The two platforms are complementary, and combining them allows us to propose a real-time simulator of collaborative autonomous systems.
pptx file, pdf file
An Approach for Frequent Access Pattern Identification in Web Usage Mining
Murli Sharma and Anju Bala (Thapar University, India)
In today's internet world, nobody is untouched by the internet. In such a scenario, data mining becomes an essential part of computer science. Data mining is a sub-field which computationally processes collected data and helps the analyst propose ideas for the betterment of the company. User accesses are recorded in log files, and the web server logs provide important information. In web mining, the analysis of web logs is done to identify user search patterns. In the usual approaches to finding these patterns, a pattern tree is created and then the analysis is done; in the proposed algorithm, however, there is no need for tree creation, and the analysis is done based on the website architecture, which increases the efficiency of other pattern matching algorithms and needs only one database scan.
pptx file
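A minimal sketch of the single-scan idea, counting consecutive page pairs per session without building a pattern tree, is given below; the session structure and the minimum support are illustrative assumptions (and `itertools.pairwise` needs Python 3.10+).

```python
# Hedged sketch: frequent page-pair patterns in one pass over the log.
from collections import Counter
from itertools import pairwise

sessions = {
    "u1": ["/home", "/products", "/cart", "/checkout"],
    "u2": ["/home", "/products", "/cart"],
    "u3": ["/home", "/blog", "/products", "/cart"],
}

pair_counts = Counter()
for pages in sessions.values():            # single scan over the log data
    pair_counts.update(pairwise(pages))

min_support = 2
frequent = {p: c for p, c in pair_counts.items() if c >= min_support}
print(frequent)   # e.g. ('/products', '/cart') appears in every session
```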
cPCI Based Hardware-In-Loop Simulation System Development Under Real Time Operating System
Rajesh Karvande (Defence Research and Development Organisation, India)
Hardware-In-Loop simulation (HILS) is the only platform used to validate on-board software for aerospace applications along with the other flight subsystems. A HILS system should be capable of high-speed data acquisition across interfaces ranging from traditional protocols to the latest technologies. With a PCI/ISA-based computer system, integration was possible only with limited resources and at limited speed. The CompactPCI (cPCI) protocol, a superset of PCI, follows state-of-the-art technology, significantly easing the system design task, shortening the design cycle and time-to-market, and offering a more robust mechanical form factor than a desktop PC. This paper provides a technical overview of the development of a cPCI system for HILS and of real-time modeling and simulation software. The integration cycle in the HILS environment and the testing of the system with each configuration are also explained.
ppt file

S14: Second International Symposium on Control, Automation, Industrial Informatics and Smart Grid (ICAIS'14)go to top

Room: 108-A Block E First Floor
Chair: Sang Bong Kim (Pukyong National University, Korea)
Design Implementation of High Performance DC Motor Drive
Karan Mehta (Nirma University, India); Paril Jain (Nirma University); Akash Mecwan (Nirma University, India); Dilip Kumar Kothari (Nirma University of Science And Technology, India); Mihir Chauhan (Institute of Technology, Nirma University, India)
DC motors are quite common in a large number of applications, ranging from small embedded applications to large industrial machines. The efficiency of such machines depends to a large extent on the proper operation of these motors. Moreover, heavy-duty applications require motors demanding high currents, and high current also introduces additional noise issues. Here we put forward a high-performance motor drive circuit tested for heavy-duty applications demanding DC currents up to 200 A, where switching of the motors was done at 32 kHz to remove the switching noise. The use of a dynamic switching frequency is presented, which enables duty cycles up to 99.6%, something not possible with static switching frequencies in the absence of a charge pump. The drive circuit, with dimensions of 5 cm x 5 cm, has been tested under extreme load and noise conditions, and the several protection measures implemented in the circuit are described.
pdf file
Dynamic Modeling and Stabilization of Quadrotor Using PID Controller
Shahida Khatoon (Jamia Millia Islamia, India); Mohammad Shahid (Jamia Millia Islamia & Jamia Millia Islamia, India); Dr. Ibraheem and Himanshu Chaudhary (Jamia Millia Islamia, India)
Quadrotor UAVs are one of the emerging fields in autonomous robotics. Due to their simple design, low maintenance, and capability of hovering and Vertical Take-Off and Landing, they have inspired many researchers to work on their modeling and control. This paper presents the dynamic modeling and stabilization of the flight parameters of a quadrotor. The first part of the paper presents the dynamic modeling of the quadrotor, and the second part presents the control of the modeled quadrotor using a simple 2-degree-of-freedom PID controller. In the last section, the characteristics with and without the controller are compared.
ppt file
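A minimal sketch of the control idea, a discrete PID loop stabilizing altitude on a crude double-integrator model, is shown below; the gains, mass and timestep are illustrative, and the paper's full model also covers attitude dynamics.

```python
# Hedged sketch: discrete PID stabilizing quadrotor altitude.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def update(self, error):
        self.integral += error * self.dt
        deriv = (error - self.prev_err) / self.dt
        self.prev_err = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

dt, mass, g = 0.01, 1.2, 9.81
z, vz = 0.0, 0.0                      # altitude (m) and vertical speed (m/s)
pid = PID(kp=6.0, ki=1.5, kd=4.0, dt=dt)
for _ in range(2000):                 # 20 s of simulated flight
    thrust = mass * g + pid.update(1.0 - z)   # setpoint: hover at 1 m
    vz += (thrust / mass - g) * dt            # double-integrator plant
    z += vz * dt
print("altitude after 20 s: %.3f m" % z)      # settles near 1 m
```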
Fault Detection Algorithm for Automatic Guided Vehicle Based on Multiple Positioning Modules
Pandu Sandi Pratama (Pukyong National University, Korea); Yuhanes Dedy Setiawan (McGill University, Canada); Dae Hwan Kim (Pukyong National University, Korea); Y. Jung (Pknu, Korea); Hak Kyeong Kim and Sang Bong Kim (Pukyong National University, Korea); Sang Kwun Jeong (Han Sung Well Tech Co, Ltd., Korea); Jin Il Jeong (YAHOTEC CO., LTD, Korea)
This paper presents the implementation and experimental validation of a fault detection algorithm for the sensors and motors of an Automatic Guided Vehicle (AGV) system based on multiple positioning modules. Firstly, the system description and mathematical model of the differential-drive AGV system are presented. Then, the characteristics of each positioning module are explained. Next, fault detection based on multiple positioning modules is proposed. The fault detection method uses two or more positioning systems and compares them to detect unexpected deviations caused by drift or the differing characteristics of the positioning systems. For the fault detection algorithm, an Extended Kalman Filter (EKF) is used. The EKF calculates the measurement probability distribution of the AGV position for nonlinear models driven by Gaussian noise. Using the probability distribution of the innovation obtained from the EKF, it is possible to test whether the measured data fit the models. When faults such as sensor malfunction, wheel slip or motor failure occur, the models are no longer valid and the innovation is neither Gaussian nor white. The pairwise differences between the estimated positions obtained from the sensors are called residues, and fault isolation is obtained by examining the biggest residue. Finally, to demonstrate the capability of the proposed algorithm, it is implemented on a differential-drive AGV system which uses an encoder, a laser scanner and a laser navigation system to obtain position information. The experimental results show that the proposed algorithm successfully detects faults when they occur.
zip file
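As a rough illustration of the innovation test and residue-based isolation described above, a hedged Python sketch; the measurement model, innovation covariance S and threshold are assumptions, not the paper's implementation.

    import numpy as np

    def innovation_gate(z_measured, z_predicted, S, threshold=7.81):
        """Chi-square test on the EKF innovation (7.81 ~ 95%, 3 DOF)."""
        nu = z_measured - z_predicted            # innovation
        d2 = float(nu @ np.linalg.inv(S) @ nu)   # squared Mahalanobis distance
        return d2 > threshold                    # True -> possible fault

    def largest_residue(estimates):
        """Isolate the module whose pairwise position residues are largest."""
        n = len(estimates)
        residue = np.zeros(n)
        for i in range(n):
            for j in range(n):
                if i != j:
                    residue[i] += np.linalg.norm(estimates[i] - estimates[j])
        return int(np.argmax(residue))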
Object Following Control of Six-legged Robot Using Kinect Camera
Amruta Vinod Gulalkari, Giang Hoang, Pandu Sandi Pratama, Hak Kyeong Kim and Sang Bong Kim (Pukyong National University, Korea); Bong Huan Jun (Korea Research Institute of Ships and Ocean Engineering, Korea)
This paper proposes a vision-based object following system for a six-legged robot using a Kinect camera. To accomplish this, the following steps are taken. First, for image processing, a Kinect camera is installed on the six-legged robot; the moving object of interest is detected by a color-based object detection method, and the local coordinates of the detected object provide its position. Second, the backstepping method using a Lyapunov function is adopted to design a controller for the six-legged robot to achieve object following. Finally, simulation and experimental results are presented to show the effectiveness of the proposed control method.
zip file
Active Real-time Tension Control for Coil Winding Machine of BLDC Motors
Van Tu Duong, Phuc Thinh Doan, Jung Hu Min, Hak Kyeong Kim and Sang Bong Kim (Pukyong National University, Korea); Jae Hoon Jeong (YAHOTEC CO. LTD, Korea); Sea June Oh (Korea Maritime University, Korea)
This paper proposes a new active tension system for coil winding machines used in BLDC motor manufacturing. A comparison of winding conditions between normal coils, such as round-shape and rectangular-shape coils, and BLDC coils is presented. To cope with the harsh winding conditions of BLDC coil winding, a new wire accumulator is proposed that stores or releases wire when the wire is stretched or sagging. The wire accumulator consists of a pneumatic cylinder driven by a servo valve and a spring. The system model shows that this tension system is a MISO system. A traditional PID controller is adopted to drive the actual tension to the given reference tension. Simulation results are presented to evaluate the effectiveness of the wire accumulator and the performance of the proposed controller.
rar file
Path Replanning and Controller Design for Trajectory Tracking of Automated Guided Vehicles
Yuhanes Dedy Setiawan (McGill University, Canada); Pandu Sandi Pratama, Jin Wook Kim and Dae Hwan Kim (Pukyong National University, Korea); Y. Jung (Pknu, Korea); Sup Hong, Tae Kyeong Yeo and Suk Min Yoon (Korea Research Institute of Ships and Ocean Engineering, Korea); Sang Bong Kim (Pukyong National University, Korea)
This paper proposes path replanning and controller design for trajectory tracking of Automated Guided Vehicles (AGVs). These algorithms are essential since AGVs must work in factory environments containing various kinds of objects: stationary, moving, known and unknown. To accomplish this, system modeling, path replanning and controller design are carried out, and simulations and experiments are conducted for verification. The results show that the AGV can replan a path around an unknown obstacle while tracking its trajectory with very small errors.
rar file
Implementing an Integrated Security Management Framework to Ensure a Secure Smart Grid
Nampuraja Enose (Principal Consultant & Infosys Technologies Limited, India)
A careful study of the transformation of the evolving modern grid can rightly be described by a single term: 'convergence'. Traditionally, utilities have implemented Operations Technology (OT) independently of Information Technology (IT). The major opportunity in today's smart grid, however, lies not in the implementation of any one technology or application, but in the seamless convergence of IT systems, which provide an overlay communications and information network, and OT systems, which manage the grid's electric energy from the point of generation to consumption. While this transformation promises immense operational benefits to utilities, it also brings significant security concerns by increasing enterprise-class security risk. Unlike IT systems, which are constantly updated with service packs, new releases and bug fixes, OT devices have very limited security capabilities; when brought into the IT environment, they increase vulnerability and open up new points from which to attack the grid. The challenge for utilities, therefore, is to implement new approaches and tools for building a secure smart grid network that is reliable and resilient. This paper explains the complexities of a converging smart grid network and introduces an 'integrated security management framework'. The integrated approach offers critical infrastructure-grade security to multiple technologies, communication devices, grid systems, sensors and data, and continuously monitors the network, establishing an enterprise-wide integrated security management system. This comprehensive security architecture improves the interconnection of diverse systems and establishes both physical security and cyber-security, integrated into all operational aspects of the grid. The framework should therefore be an integral part of IT/OT convergence, enhancing overall security and establishing a more strategic, enterprise-wide view of security risk and security management.
pdf file
Design and Characterization of a Wideband p-HEMT Low Noise Amplifier
Arpit Kumar (IIT Roorkee, India); Nagendra Prasad Pathak (Indian Institute of Technology, Roorkee, India)
This paper reports a pseudomorphic HEMT wideband low noise amplifier (LNA) for WLAN, vehicle communication and point-to-point communication applications. The LNA has been designed using a single ATF36163 transistor, and a wideband bias network has been designed and verified over the desired frequency range. The fabricated prototype of the proposed LNA has a gain of 2.5 dB with a noise figure (NF) of 1.3 dB over the frequency range of 5-6 GHz, giving the amplifier a bandwidth of 1 GHz.
pptx file
Development and Controller Design of Wheeled-Type Pipe Inspection Robot
Jung Hu Min (Pukyong National University, Korea); Yuhanes Dedy Setiawan (McGill University, Canada); Pandu Sandi Pratama, Hak Kyeong Kim and Sang Bong Kim (Pukyong National University, Korea)
A wheeled-type pipe inspection robot designed to work in pipes of 300 to 500 mm diameter with multiple elbows is introduced in this paper. The robot consists of two modules, an active module and a passive module. Each module has a three-wheel configuration with a mechanism to expand the wheels according to changes in pipe diameter. A PID controller is designed so that the robot follows desired linear and angular velocity references. Simulations and experiments are conducted to verify the performance of the controlled robot, and the results demonstrate that the robot works well with the designed controller, following the reference velocities very closely.
zip file

S15-A: Robotics, Machine Vision, Control Systems and Applications-Igo to top

Room: 110 Block E First Floor
Chairs: Sabu M Thampi (Indian Institute of Information Technology and Management - Kerala, India), Jagannath Nirmal (Mumbai University, India)
PID & LQR Control for a Quad Rotor: Modeling and Simulation
Dhiraj Gupta (Jamia Millia Islamia University, India); Shahida Khatoon (Jamia Millia Islamia, India); Lalit Das (Indian Institute of Technology, Delhi, India)
This paper presents a comparison between controllers for the dynamic model of a quadrotor platform. The controllers considered in this work are a conventional PID controller and a classic LQR controller. The PID controller is chosen for the quadrotor model because of its versatility and easy implementation, while also providing a good response for the attitude dynamics of the model. The LQR controller is a good comparative controller due to its strong performance and robustness. The study concludes that both controllers provide satisfactory feedback for quadrotor stabilization.
pptx file
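For readers comparing the two controllers, a hedged sketch of computing an LQR gain in Python via the continuous algebraic Riccati equation; the double-integrator A, B and the weights Q, R below are illustrative, not the paper's quadrotor model.

    import numpy as np
    from scipy.linalg import solve_continuous_are

    A = np.array([[0.0, 1.0], [0.0, 0.0]])   # toy single-axis attitude model
    B = np.array([[0.0], [1.0]])
    Q = np.diag([10.0, 1.0])                 # state weighting
    R = np.array([[0.1]])                    # control effort weighting

    P = solve_continuous_are(A, B, Q, R)     # solves A'P + PA - PB R^-1 B'P + Q = 0
    K = np.linalg.inv(R) @ B.T @ P           # optimal state feedback u = -K x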
Invariant Extended Kalman Filter-based State Estimation for MAV in GPS-denied Environments
Dachuan Li, Qing Li, Nong Cheng, Sheng Yang and Jingyan Song (Tsinghua University, P.R. China); Liangwen Tang (Flight Automatic Control Research Institute, P.R. China)
This paper presents an RGB-D aided inertial navigation system that uses an RGB-D sensor and a low-cost inertial measurement unit (IMU) to provide state estimates for micro aerial vehicles (MAV) in GPS-denied indoor environments. The state estimation approach is based on invariant observer theory, developed for systems possessing symmetries. We review invariant observer theory and design an invariant observer (Invariant Extended Kalman Filter, IEKF) based on an analysis of system symmetry for the RGB-D aided inertial navigation model evolving on a Lie group. In addition, a robust RGB-D based motion estimation approach is developed to provide relative pose estimates using feature correspondences captured by the RGB-D sensor. The RGB-D estimates are fused with inertial measurements through the IEKF-based observer, which yields simplified error dynamics and simplifies the calculation of the gain matrices. The resulting framework is implemented and validated on a MAV, and experimental results from actual indoor flight tests demonstrate the effectiveness of the approach.
Fuzzy Based Sliding Mode Control for Vector Controlled Induction Motor
Shoeb Hussain (Institute of Technology, Kashmir University, India); Mohammad Abid Bazaz (National Institute of Technology Srinagar, India)
A vector-controlled induction motor drive with a fuzzy-based sliding mode controller (SMC) is presented in this paper for improving the dynamic performance of the drive. The fuzzy-based SMC employed for speed control compensates for the chattering effect otherwise present with SMC. A MATLAB simulation of the scheme for a 5 HP, 460 V (50 Hz) induction motor is presented to analyse the performance of the fuzzy-based SMC.
ppt file
Comparative Study of Dc-Dc Converters in Solar Energy Systems
Ahana Malhotra (JRE Group of Institutions, India); Prerna Gaur (Netaji Subhas Institute of Technology, Delhi University, India)
The objective of this paper is to design, simulate and compare the performance of different DC-DC converters in solar energy systems. The system is designed for single-phase asynchronous motors used in home appliances such as washing machines and refrigerators, and is simulated in MATLAB 8.1 using Simulink and SimPowerSystems. The Perturb and Observe algorithm is used as the MPPT algorithm to achieve maximum power transfer from the solar array. The PV voltage is fed to the DC-DC converter, whose output drives a single-phase inverter.
pptx file
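A minimal sketch of the Perturb and Observe loop mentioned above, assuming a measure_pv(v_ref) placeholder that returns the PV voltage and current at a commanded operating point; the step size and iteration count are illustrative.

    def perturb_and_observe(measure_pv, v_ref=17.0, step=0.1, iterations=1000):
        v, i = measure_pv(v_ref)
        p_prev = v * i
        direction = 1.0
        for _ in range(iterations):
            v_ref += direction * step      # perturb the operating voltage
            v, i = measure_pv(v_ref)
            p = v * i
            if p < p_prev:                 # power dropped: reverse direction
                direction = -direction
            p_prev = p
        return v_ref                       # settles near the maximum power point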
Discrete Multi-Tone Loading Algorithms for Underwater Acoustic Communication
Saraswathi K (R V College of Engineering, India); Ravishankar Sankaranarayanan (RV College of Engineering, India)
The underwater acoustic (UWA) channel is characterized as a severe multipath propagation channel due to signal reflections from the surface and bottom of the sea, and it is affected by a variety of ambient noise profiles unique to the underwater environment. Further, the motion of water introduces Doppler effects. Multicarrier modulation schemes can be adapted to meet a given Quality of Service (QoS) in such environments. In this paper, the underwater acoustic channel is studied using the Thorp and Ainslie-McColm absorption models, and their limitations are analyzed. Signal-to-noise ratio (SNR) profiles at various distances employing Discrete Multitone modulation are obtained, considering different types of ambient noise sources such as shipping, wind, thermal, bubble and turbulence noise. To meet the QoS, different rate-adaptive and margin-adaptive tone loading algorithms are simulated and compared based on system margin, utilizing these SNR profiles.
pdf file
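As a hedged illustration of rate-adaptive tone loading, a greedy (Hughes-Hartogs-style) sketch in Python: each iteration grants one extra bit to the tone that needs the least incremental power. The SNR gap and power budget are assumptions; the paper's specific algorithms are not reproduced here.

    import numpy as np

    def greedy_bit_loading(snr, total_power, gap=3.981):  # gap ~ 6 dB
        bits = np.zeros(len(snr), dtype=int)
        power = np.zeros(len(snr))
        while True:
            # incremental power to add one more bit on each tone
            inc = gap * (2.0 ** (bits + 1) - 2.0 ** bits) / snr
            k = int(np.argmin(inc))
            if power.sum() + inc[k] > total_power:
                break
            power[k] += inc[k]
            bits[k] += 1
        return bits, power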

S15-B: Robotics, Machine Vision, Control Systems and Applications-IIgo to top

Room: 110 Block E First Floor
Chairs: Sabu M Thampi (Indian Institute of Information Technology and Management - Kerala, India), Jagannath Nirmal (Mumbai University, India)
Analysis and Design of 2nd Order Sigma-Delta Modulator for Audio Applications
Ganesh Raj (CEERI Pilani, India); Abhijit Karmakar and S. C. Bose (CEERI, Pilani, India)
This paper describes a top-down approach to designing a Sigma-Delta modulator for analog-to-digital conversion. The primary focus is on designing an ADC for audio applications; thus, the conversion is done over a 22.5 kHz bandwidth with 16-bit resolution (CD quality). A behavioural model of the modulator is created first to check the signal ranges at internal nodes. Next, circuit non-idealities, such as finite Op-Amp gain, finite Op-Amp latency and comparator offset, are introduced to derive the tolerable limits and the specifications of the sub-blocks. The individual blocks are then carefully designed in 0.35 μm CMOS technology and integrated. SPICE-based simulation is carried out on the entire circuit and the results are validated against MATLAB.
ppt file
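A hedged behavioural sketch of a second-order sigma-delta loop of the kind checked in the first design step; the integrator coefficients, input amplitude and sampling rate are illustrative, not the paper's design values.

    import numpy as np

    def sigma_delta_2nd(x):
        v1 = v2 = q = 0.0
        y = np.zeros_like(x)
        for n, u in enumerate(x):
            v1 += 0.5 * (u - q)            # first integrator with feedback
            v2 += 0.5 * (v1 - q)           # second integrator with feedback
            q = 1.0 if v2 >= 0 else -1.0   # 1-bit quantizer
            y[n] = q
        return y

    fs, f = 5.76e6, 1e3                    # oversampled audio-band test tone
    t = np.arange(16384) / fs
    bitstream = sigma_delta_2nd(0.5 * np.sin(2 * np.pi * f * t))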
Design of PI Controller: A Multiobjective Optimization Approach
Lalitesh Kumar (Dr A. P. J. Abdul Kalam Technical University & Ajay KumarGarg Engineering College, India); Prawendra Kumar (Mewar University, India); Subhojit Ghosh (National Institute of Technology, Raipur, India)
This paper addresses the design of a PI controller as a multiobjective problem. For a given plant, a PI controller must satisfy a given set of specifications, and for an acceptable design its parameters must be adjusted so that the whole closed-loop system meets those specifications. A multiobjective optimization approach is implemented here to adjust the parameters of the PI controller; since the design is multiobjective, an acceptable design requires some trade-off among the given specifications. Because robustness and performance cannot both be guaranteed at the same time, this paper addresses the conflict between the robustness and performance of a PI controller by framing it as a multiobjective optimization problem. Solving the multiobjective problem generates a Pareto front from which one can make the compromise between performance and robustness. The approach is applicable to a large class of systems.
pptx file
Bidirectional DC/DC Converter for Hybrid Electric Vehicle
Atul Kumar (University of Delhi & NSIT, DWARKA, India); Prerna Gaur (Netaji Subhas Institute of Technology, Delhi University, India)
Hybrid electric vehicle (HEV) technology is an effective and efficient alternative to conventional vehicles: it provides fuel efficiency, reduces harmful emissions and enhances performance. The technology has gained enormous attention because of depleting conventional resources and concerns over carbon emissions. This paper proposes a bidirectional buck-boost converter with interleaved control, which minimizes input current and output voltage ripples. This reduces the size of the passive components, yields higher efficiency and makes the whole system more reliable.
pptx file
Modeling and Simulation Study of Speed Control of a Photovoltaic Assisted Hybrid Electric Vehicle
Probeer Sahw (JRE Group of Institutions, India); Prerna Gaur (Netaji Subhas Institute of Technology, Delhi University, India)
In this paper, speed control of a Hybrid Electric Vehicle (HEV) operated in parallel configuration is presented. The HEV is driven by an Internal Combustion Engine (ICE) and a Permanent Magnet Synchronous Motor (PMSM) drive, with the PMSM powered by a battery and a Photovoltaic (PV) module. The performance of the vehicle in conventional (ICE-only) and hybrid modes is compared, and the performance of the PV module and its use in assisting the battery to drive the PMSM are also investigated.
pptx file
PVC-MWCNT Based 3-dB Optical Coupler Design
Vishal Parsotambhai Sorathiya (Marwadi Education Foundation Groups of Institute, India); Amit Kumar (Gujarat Technical University, India)
Carbon nanotube is a very promising material with good electrical and optical properties. Here we present three 2×2 coupler designs based on PVC-MWCNT material. The couplers are simulated in an FDTD environment, and all are designed within a maximum footprint of 3 μm × 6 μm. The operating wavelength range of the couplers is 0.7-1.0 μm, with refractive indices of 2.7 and 1.47 for the core and cladding respectively. The simulation results show the variation in excess loss and splitting ratio over the given wavelength range. We obtain a minimum excess loss of 0.82 dB for the directional coupler and a minimum splitting-ratio variation of 32.76% for the CGC 50° design; overall, moderate excess loss and splitting ratio are obtained with the CGC 60° design.
pdf file
Effect of Implementing Different PID Algorithms on Controllers Designed for SOPDT Process
Mayank Chaturvedi (Graphic Era University, India); Pradeep Kumar Juneja (Graphic era University, India); Prateeksha Chauhaan (Graphic Era University, India)
The most important objective in control system analysis is to design an optimal controller for the known process model. In the present analysis, PID controllers based on different tuning methods are designed for a selected Second Order plus Dead Time (SOPDT) model. Different PID controller algorithms, namely parallel, parallel with derivative filter, series, series with derivative filter and cascade forms, are implemented. The steady-state and dynamic characteristics of the closed-loop responses of the designed controllers are compared and analyzed.
pptx file
Controller Design and Its Performance Analysis for a Delayed Process Model
Prateeksha Chauhaan (Graphic Era University, India); Pradeep Kumar Juneja (Graphic era University, India); Mayank Chaturvedi (Graphic Era University, India)
The dynamics of most industrial processes exhibit delay and are commonly approximated by a first order plus dead time (FOPDT) process model. In the present investigation, a second order plus dead time process model is converted into an FOPDT model using two model order reduction techniques, namely Skogestad's method and Taylor series approximation, and the open-loop responses are compared. PID controllers for both FOPDT process models are designed and compared using the Ziegler-Nichols, Chien-Hrones-Reswick and Wang-Juang-Chan tuning techniques.
pptx file
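For reference, the standard form of Skogestad's half rule for the SOPDT-to-FOPDT reduction discussed above (shown as a generic identity; the authors' exact variant may differ). With tau_1 >= tau_2, half of the smaller time constant is added to the dead time and the other half to the dominant time constant:

    \frac{K e^{-\theta s}}{(\tau_1 s + 1)(\tau_2 s + 1)}
    \;\approx\;
    \frac{K e^{-(\theta + \tau_2/2)\,s}}{(\tau_1 + \tau_2/2)\,s + 1}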
A New Approach of Path Planning for Mobile Robots
Jitin Kumar Goyal (G. L. Bajaj Institute of Technology & Management, Greater Noida, India); K S Nagla (Dr BR Ambedkar National Institute of Technology, India)
Path planning is a fundamental task in mobile robot navigation, where the accuracy of the path depends on environmental mapping and localization. Several approaches are already used for accurate path planning, such as Dijkstra's algorithm, visibility graphs, cell decomposition, and the A* and modified A* algorithms. The A* method does not support accurate path planning if the size of the robot is larger than the size of a cell; in such situations it is difficult to move the robot through a narrow door or passage. This paper presents a new path planning technique in which the virtual size of each obstacle present in the environment is increased to (2n+1) times the size of the cell. Experimental analysis of the proposed method shows an improvement in path planning that reduces the chance of collisions. The paper is organized as follows: the first section presents a detailed literature review of path planning strategies; the second part deals with the problem statement and proposed methodology; the last section shows simulation results for indoor environmental path planning.
ppt file
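A minimal sketch of the virtual obstacle inflation step on an occupancy grid, assuming 1 marks an obstacle cell; growing each obstacle by n cells per side yields the (2n+1)-cell footprint described in the abstract, after which a point-robot planner such as A* can be run on the inflated grid.

    import numpy as np

    def inflate(grid, n):
        """Return a copy of grid with every obstacle grown by n cells."""
        rows, cols = grid.shape
        out = grid.copy()
        for r, c in np.argwhere(grid == 1):
            r0, r1 = max(0, r - n), min(rows, r + n + 1)
            c0, c1 = max(0, c - n), min(cols, c + n + 1)
            out[r0:r1, c0:c1] = 1   # (2n+1) x (2n+1) block per obstacle cell
        return out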
High Accuracy Depth Filtering for Kinect Using Edge Guided Inpainting
Saumik Bhattacharya (IIT Kanpur, India); Sumana Gupta (IIT, Kanpur - INDIA, India); Venkatesh K Subramanian (IIT Kanpur, India)
Kinect is an easy and convenient means of calculating the depth of a scene in real time, and it is used widely in many applications for its ease of installation and handling. Many of these applications need a high-accuracy depth map of the scene for rendering. Unfortunately, the depth map provided by Kinect suffers from various degradations due to occlusion, shadowing, scattering, etc., the two major ones being edge distortion and shadowing. Edge distortion appears due to the intrinsic properties of Kinect and perceptually degrades any depth-based operation. The problem of edge distortion removal has not received as much attention as the hole-filling problem, though it is considerably important at the post-processing stage of an RGB scene. We propose a novel method that removes this distortion by exploiting the edge information already present in the RGB image, in order to construct a high-accuracy depth map of the scene.
rar file

S16: International Workshop on Recent Advances in Adaptive Systems and Signal Processing (RAASP-2014)go to top

Room: 204 Block E Second Floor
Chair: Mahesh Chandra (BIT, Mesra, Ranchi, India)
Comparison of Peak to Average Power Reduction Techniques in OFDM
Poonam Kundu and Prabhjot Kaur (ITM University, India)
OFDM (Orthogonal Frequency Division Multiplexing) is generally preferred for high-data-rate transmission in digital communication. A high peak-to-average power ratio (PAPR) is the major limitation of OFDM systems. In this paper, we survey different PAPR reduction techniques and present a comparison of them based on theoretical and simulated results, reflecting the state of the art in this area.
pptx file
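To make the PAPR metric concrete, a short Python sketch computing the PAPR of a random QPSK OFDM symbol and applying simple amplitude clipping, one of the reduction techniques such comparisons typically include; the subcarrier count and clipping level are illustrative.

    import numpy as np

    N = 256                                              # subcarriers
    X = np.random.choice([1+1j, 1-1j, -1+1j, -1-1j], N)  # QPSK symbols
    x = np.fft.ifft(X) * np.sqrt(N)                      # time-domain symbol

    papr_db = 10 * np.log10(np.max(np.abs(x)**2) / np.mean(np.abs(x)**2))

    A = 1.4 * np.sqrt(np.mean(np.abs(x)**2))             # clipping threshold
    x_clipped = np.where(np.abs(x) > A, A * x / np.abs(x), x)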
Efficient Suppression of Remnant Non-linear Echo Caused Due to Harmonic Distortions
Asutosh Kar (BITS Pilani, Hyderabad, India); Pankaj Goel (Birla Institute of Technology, Mesra, Ranchi, India); Mahesh Chandra (BIT, Mesra, Ranchi, India)
Voice communication has become an important and integral part of daily life, and the speed and quality of communication are crucial. Speech quality is often degraded by end-user devices; this problem can be tackled effectively when efficient devices and algorithms are used to cancel the echo introduced into the channel. With the advent of sophisticated technology, low-cost loudspeakers and power amplifiers have appeared that introduce non-linearity into the communication channel, of a kind that cannot be cancelled by conventional linear acoustic echo cancellation techniques. In this paper we devise a solution for cancelling this non-linear echo to restore speech quality in a communication channel. The proposed algorithm deals efficiently with non-linear acoustic echo cancellation, having moderately low computational complexity and fast convergence. The echo cancellation process starts with a linear stage in which an adaptive algorithm updates the filter weights; the HDNRES then comes into action and suppresses the echo remaining in the channel after the linear AEC.
pptx file
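A hedged sketch of the linear AEC stage that precedes the residual suppression described above: an NLMS adaptive filter estimating the echo path from far-end signal x given microphone signal d. The filter length and step size are assumptions; the paper's nonlinear suppressor (HDNRES) would operate on the error signal e.

    import numpy as np

    def nlms_aec(x, d, L=128, mu=0.5, eps=1e-6):
        w = np.zeros(L)                            # echo-path estimate
        e = np.zeros(len(x))
        for n in range(L, len(x)):
            xv = x[n - L:n][::-1]                  # last L far-end samples
            e[n] = d[n] - w @ xv                   # residual after linear AEC
            w += mu * e[n] * xv / (xv @ xv + eps)  # normalized LMS update
        return e, w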
Performance Evaluation of Front End Speech Enhancement Techniques
Anirban Bhowmick (Birla Institute of Technology Ranchi, India); Mahesh Chandra (BIT, Mesra, Ranchi, India); Astik Biswas (Department of Electrical Engineering & NIT RKL, India); Prasanna Kumar Sahu (N. I. T. Rourkela, India)
In this paper, the performance of speech enhancement algorithms is evaluated and compared in terms of improvement in Signal-to-Noise Ratio (SNR) and speech quality. Clean and noisy versions of one Hindi speech sentence are used for the experiments; the noisy versions are obtained by mixing F16 noise, operations room noise and machine gun noise with the clean speech signal at different SNR levels. The study demonstrates the utility of speech enhancement algorithms for different noises based on their characteristics.
ppt file
Soft Computing Technique for Cost Reduction in Cellular Network
Smita Parija (NIT Rourkela, India); S Singh (KIIT University, India); Prathima Addanki (NIT Rourkela, India); Prasanna Kumar Sahu (N. I. T. Rourkela, India)
Location management is a fundamental and complex problem in cellular networks, concerning how to track a subscriber on the move. A cost is incurred for each subscriber moving within a service area, consisting essentially of location update cost and paging cost. The main objective of this work is to reduce this total cost using evolutionary techniques. This paper applies a binary genetic algorithm to the location management problem, partitioning the given cellular network into location areas so as to minimize the location management cost. The Binary Genetic Algorithm (BGA) is a meta-heuristic, population-based optimization approach that has proven to be powerful and widely used, yet simple, with reduced complexity. With this algorithm, optimal location areas are obtained corresponding to the minimized cost. Simulation results and optimal location area planning for different networks are demonstrated and discussed; the GA is shown to be effective with a small number of iterations.
ppt file
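A hedged sketch of a binary GA loop of the kind described; cost() stands in for the location-management cost (update plus paging) of a candidate cell-to-location-area assignment, and the population size, crossover and mutation rates are illustrative.

    import random

    def binary_ga(n_bits, cost, pop=40, gens=200, pc=0.8, pm=0.01):
        population = [[random.randint(0, 1) for _ in range(n_bits)]
                      for _ in range(pop)]
        for _ in range(gens):
            ranked = sorted(population, key=cost)
            nxt = ranked[:2]                                # elitism
            while len(nxt) < pop:
                a, b = random.sample(ranked[:pop // 2], 2)  # pick from fitter half
                cut = random.randrange(1, n_bits)
                child = a[:cut] + b[cut:] if random.random() < pc else a[:]
                child = [g ^ (random.random() < pm) for g in child]  # bit-flip mutation
                nxt.append(child)
            population = nxt
        return min(population, key=cost)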
An Automatic Flower Classification Approach Using Machine Learning Algorithms
Hossam M. Zawbaa (Beni-Suef University, Egypt & Babes-Bolyai University, Romania); Mona Abbas (Central Lab. for Agricultural Expert System, Agricultural Research Center, Egypt); Sameh Basha (SRGE & University of Cairo, Egypt); Maryam Hazman (Central Lab. for Agricultural Expert System, Agricultural Research Center, Egypt); Aboul Ella Otifey Hassanien (University of Cairo, Egypt)
This paper develops an effective flower classification approach using machine learning algorithms. Eight flower categories were analyzed in order to extract their features, using the Scale Invariant Feature Transform (SIFT) and Segmentation-based Fractal Texture Analysis (SFTA) algorithms. The proposed approach consists of three phases: segmentation, feature extraction and classification. In the segmentation phase, the flower region is segmented to remove the complex background from the image dataset; flower image features are then extracted. Support Vector Machine (SVM) and Random Forests (RF) algorithms were applied to classify the different kinds of flowers. An experiment carried out on a dataset of 215 flower images shows that SVM provides better accuracy than RF when SIFT is used for feature extraction, while RF achieves its better accuracy with SFTA. Moreover, the system is capable of automatically recognizing the flower name with a high degree of accuracy.
pptx file
Comparative Analysis of Various Communication Systems for Intelligent Sensing of Spectrum
Mrinal Sharma (Lovely Professional University, India); Rajan Gupta (University of Delhi, India)
With increasing data communication these days, congestion is growing and transmission quality is degrading significantly. The availability of spectrum is becoming a major issue for private organizations managing huge numbers of users, so there is an urgent need for smart spectrum allocation through intelligent spectrum sensing; cognitive radios are designed for exactly this. But which communication system (TDCS or WDCS) is better suited for intelligent sensing, and which transform yields better results, remains to be resolved. This work-in-progress study presents a comparative analysis of the systems with various transforms. It reveals that WDCS based on wavelet transforms is better suited for intelligent spectrum sensing and can be utilized in the design of cognitive radios.
ppt file
Family of Adaptive Algorithms Based on Second Order Volterra Filters for Non Linear Acoustic Echo Cancellation: A Technical Survey
Trideba Padhi (Sambalpur University Institute of Information Technology); Asutosh Kar (BITS Pilani, Hyderabad, India); Mahesh Chandra (BIT Mesra, India)
Acoustic echo arises when an audio signal is radiated in a real environment, resulting in the signal plus its attenuated, time-delayed images; the time delay determines how prominent the echo is over a communication channel. Recent years have seen extensive research on the removal of nonlinear echoes caused by portable communication systems and low-cost audio equipment. Volterra filters employing adaptive algorithms have traditionally been an important tool for nonlinear acoustic echo cancellation. This article presents a technical survey of the adaptive algorithms used in the design of a nonlinear equalizer modeled using a second-order Volterra series expansion. The initial sections define the problem of nonlinear echo cancellation and give a mathematical analysis of the Volterra series expansion; subsequent sections discuss a truncated version of the series, the second-order Volterra filter. Finally, a detailed analysis is carried out to determine the adaptive algorithm best suited to the problem of nonlinear acoustic echo cancellation.
ppt file
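For concreteness, a direct (unoptimized) Python sketch of the truncated second-order Volterra filter output surveyed above, with linear kernel h1 and quadratic kernel h2 over a memory of M samples; in an adaptive echo canceller the kernels would be updated by one of the surveyed algorithms.

    import numpy as np

    def volterra2(x, h1, h2):
        """y(n) = sum_k h1(k) x(n-k) + sum_{k1,k2} h2(k1,k2) x(n-k1) x(n-k2)."""
        M = len(h1)
        y = np.zeros(len(x))
        for n in range(len(x)):
            xv = np.array([x[n - k] if n - k >= 0 else 0.0 for k in range(M)])
            y[n] = h1 @ xv + xv @ h2 @ xv   # linear + quadratic terms
        return y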
A Distributed Control Law for Optimum Sensor Placement for Source Localization
Hema Achanta (University of Iowa, USA); Soura Dasgupta (The University of Iowa, USA); Weiyu Xu, Raghuraman Mudumbai and Erwei Bai (University of Iowa, USA)
We formulate a nonlinear distributed control law that guides the motion of a group of sensors to achieve a configuration that permits them to optimally localize a hazardous source from which they must keep a prescribed distance. Earlier work shows that such a configuration places the sensors in an equispaced manner on a prescribed circle. The nonlinear control law we propose assumes that each sensor resides and moves on the prescribed circle, accessing only the states of its two immediate clockwise and counterclockwise neighbors. We prove theoretically, and verify through simulations, that the law allows the sensors to achieve the desired configuration while avoiding collisions.

S17: International Symposium on Computer Vision and the Internet (VisionNet'14)/International Workshop on Advances in Computer Graphics and Visualization (ACGV 2014)go to top

Room: 104 Block E First Floor
Chair: Vikrant Bhateja (Shri Ramswaroop Memorial Group of Professional Colleges, Lucknow (UP), India)
Automatic Generation of Second Image From an Image for Stereovision
Deepu R (Maharaja Institute of Technology Mysore, India); Honnaraju B (Maharaja Institute of Technology, India); Murali S (Maharaja Institute of Technology Mysore, India)
Stereovision is the process of finding depth from two or more images of a scene taken from different positions; it is the basic phenomenon in the construction of 3D from multiple images. Stereo vision is useful in many applications such as robotics, tracking objects in 3D space and constructing a 3D model of a scene, and vision-based remote control systems exploit it to control machines in a touch-free environment. Most cameras, however, are monocular. In our work, we have developed a generic model wherein the second image is automatically generated from a single image and can be used for 3D reconstruction. The relationship between depth and disparity for any focal length has been established. An exhaustive literature study indicates that no existing method generates the second image from the first for stereovision. Experiments carried out to validate the proposed model yield high accuracy, and the model has been tested with the Middlebury stereo dataset. With this model, existing monocular cameras are sufficient to build a 3D view of any scene from a single 2D image.
pptx file
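For context, the standard rectified-stereo relationship between depth and disparity that the abstract's depth-disparity analysis builds on (the paper's generic any-focal-length model is not reproduced here):

    Z = \frac{f\,B}{d}

where Z is depth, f the focal length in pixels, B the baseline between the two viewpoints and d the disparity, so depth is inversely proportional to disparity.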
Syntactic-Based Region Algorithm for Volumetric Segmentation
Dumitru-Dan Burdescu (University of Craiova, Romania)
The problem of partitioning images into homogeneous regions or semantic entities is basic to identifying relevant objects. Visual segmentation is related to semantic concepts because certain parts of a scene are pre-attentively distinctive and have greater significance than others. Unfortunately, while there is a huge number of papers on segmentation methods for planar images, most of them graph-based, there are very few on volumetric segmentation methods. The major concept used in graph-based volumetric segmentation is homogeneity of regions, so the edge weights are based on color distance. A number of approaches to segmentation are based on finding compact regions in some feature space; a recent technique using feature-space regions first transforms the data by smoothing it in a way that preserves boundaries between regions. In this paper we extend our previous work on planar images by adding a new step to the volumetric segmentation algorithm that allows us to determine regions more accurately. The key to the whole volumetric segmentation algorithm is the honeycomb cell. The pre-processing module is used mainly to blur the initial RGB spatial image in order to reduce image noise. The volumetric segmentation module then creates virtual cells of prisms with a tree-hexagonal structure defined on the set of voxels of the input spatial image, and a spatial triangular grid graph having tree-hexagons as cells of vertices.
pdf file
Comparative Analysis of Indian Wheat Seed Classification
Radhika Ronge (Sinhgad Academy of Engineering, Kondhwa, Pune, India); M M Sardeshmukh (Sinhgad Academy of Engineering Pune, India)
In this study, a 2-layer ANN (artificial neural network), a linear classifier, and k-NN (k-nearest neighbor), a non-linear classifier, were applied to identify and classify images of four Indian wheat seed species into four classes on the basis of their varieties. 120 images (40 per place: 10 images of each of the four classes) were taken from three different places under the same illumination conditions. The images were cropped to 320×240 resolution and converted to gray scale. We extracted 131 texture features of the wheat species using various textural algorithms, including LBP (local binary pattern), LSP (local similarity pattern), LSN (local similarity numbers), GLCM (gray level co-occurrence matrix) and GLRM (gray level run length matrix) matrices of the gray image. The feature group giving the highest classification accuracy was determined: it showed a maximum average accuracy of 100% for inter-class classification and 66.68% for intra-class classification with the linear classifier (ANN), versus 85% average accuracy for inter-class and 39% for intra-class classification with the non-linear classifier (k-NN). The results thus show that the linear classifier outperforms the non-linear one, as the features are linear in nature.
pptx file
Efficient Method for Moving Object Detection in Cluttered Background Using Gaussian Mixture Model
Dileep Kumar Yadav (Jawaharlal Nehru University New Delhi, India)
Foreground object detection is a fundamental step in automated video surveillance systems and many computer vision applications. Moving foreground objects are mostly detected by background subtraction techniques, and for dynamic backgrounds the Gaussian Mixture Model (GMM) performs best for object detection. In this work, a GMM-based Basic Background Subtraction (BBS) model is used for background modeling, improved by connected component analysis and blob labeling with a threshold. Morphological operators with a suitable structuring element are used to improve the foreground information. The experimental study shows that the proposed work performs better, in terms of error, than the considered state-of-the-art methods.
ppt file
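A hedged sketch of a comparable pipeline using OpenCV's built-in GMM background subtractor followed by morphological clean-up and blob labeling; the file name and parameters are placeholders, and the paper's own BBS model is not reproduced.

    import cv2

    cap = cv2.VideoCapture("surveillance.avi")   # hypothetical input video
    mog = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        fg = mog.apply(frame)                    # per-pixel GMM decision
        fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)   # remove speckle
        n_blobs, labels = cv2.connectedComponents(fg)       # blob labeling
    cap.release()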
Vision Based Hand Gesture Recognition Using Eccentric Approach for Human Computer Interaction
Vishal Bhame (University of Pune & Pune Institute of Computer Technology, India); R Sreemathy (Pune Institute of Computer Technology, University of Pune, India); Hrushikesh S Dhumal (Hyper-Ions & Motion vista, India)
There has been growing interest in the development of new approaches and technologies for bridging the human-computer barrier, and hand gesture recognition is considered an interaction technique with the potential to communicate with machines. Human computer interaction (HCI) has never been an easy task, and many approaches are available for building such systems. Hand gesture recognition (HGR) using a wearable data glove is one solution, but it suffers from high computational time and a poor interface. Pattern matching using vision-based techniques provides a stronger interface for HCI systems, but it requires complex algorithms with long computation times, which limits its use in real-time HCI applications. In this paper, we present an eccentric approach for hand gesture recognition that is simple, fast and user-independent, and can be used to develop real-time HCI applications. Based on the proposed algorithm, we built a system for Indian Sign Language recognition that converts Indian Sign numbers into text. The system first captures an image of a single-handed gesture of a speech- or hearing-impaired person using a simple webcam, and the proposed algorithm then classifies the gesture into its appropriate class. It uses simple logical conditions for gesture classification, which makes it suitable for real-time HCI applications.
ppt file
Active Principal Components of Image Histogram Sets for Affine and Non-Affine Aspects
Watit Benjapolakul (Chulalongkorn University, Thailand); Bongkarn Homnan (Dhurakij Pundit University, Research Service Center & Chulalongkorn University, Thailand)
Assembling the object components in an image can give their correspondences. This paper inspects a cylinder 3D model in the homogeneous coordinate system, conforming to the Lipschitz and Hölder conditions, and derives its fundamentals of eccentricity and eccentricity angle. Based on cylindrical and spherical coordinates, inspected perimeters in affine and non-affine projective views can be projected and analyzed, and the active principal components of the image histogram sets of object components can be retrieved. In addition, the true depth of the cylinder body of the rotated/rotating object with respect to the inspector can be determined.
Performance Analysis of Possibilistic Fuzzy Clustering and Support Vector Machine in Cotton Crop Classification
Madhuri Kawarkhe (MGM's Jawaharlal Nehru Engineering College, India); Vijaya Musande (Babasaheb Ambedkar Marathwada University, India)
Cotton crop classification is a significant task in crop management. The literature has exploited unsupervised fuzzy-based classification and various vegetation indices for cotton crop classification; however, fuzzy-based classification suffers from inliers and outliers in the image, making it unreliable for investigating the performance of vegetation indices for cotton crop classification. To overcome this drawback, this paper introduces possibilistic fuzzy c-means (PFCM) clustering for labeling the learning data and exploits a support vector machine (SVM), which enables supervised learning, for cotton crop classification. Five vegetation indices, namely the Simple Ratio (SR), Normalized Difference Vegetation Index (NDVI), Soil Adjusted Vegetation Index (SAVI), Triangular Vegetation Index (TVI) and Transformed Normalized Difference Vegetation Index (TNDVI), are considered for investigation. LISS-III multispectral images from IRS-P6 sensors, acquired over the Aurangabad region, India, are subjected to experimental study. Three image sets are investigated and the proposed classifier is compared with an existing classifier, outperforming it on all image sets. Comparison in terms of vegetation indices demonstrates that SR outperforms the other indices, achieving accuracy values of 88.72%, 88.71% and 89.15% for image sets 1, 2 and 3 respectively.
ppt file
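For reference, the two best-performing indices reported above can be computed directly from the red and near-infrared bands (float arrays; eps avoids division by zero):

    import numpy as np

    def vegetation_indices(red, nir, eps=1e-6):
        sr = nir / (red + eps)                  # Simple Ratio
        ndvi = (nir - red) / (nir + red + eps)  # NDVI, in [-1, 1]
        return sr, ndvi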
Pre-processing Image Database for Efficient Content Based Image Retrieval
Kommineni Jenni (Universiti Teknologi Malaysia, Malaysia); Satria Mandala (Universitas Telkom, Indonesia)
Content Based Image Retrieval (CBIR) has been an active area of research for more than a decade, yet the selection of features to represent an image in the database is still an unresolved issue. Existing solutions focus only on relevance feedback techniques to improve the count of similar images retrieved for a query from the raw image database; these approaches are inefficient and inaccurate. We propose a new, efficient technique to solve these problems by pre-processing the image database using k-means clustering and a genetic algorithm. The technique utilizes several image features, such as color, edge density, boolean edge density and histogram information, as the input for retrieval. Several performance metrics, such as the confusion matrix, precision graph and F-measures, are used to measure the accuracy of the proposed technique. The experimental results show clustering purity above 90 percent in more than half of the clusters.
pptx file
A Novel Technique of Iris Identification for Biometric Systems
Vishwanath G Garagad (B. V. Bhoomaraddi College of Engineering and Technology & Visvesvaraya Technological University, India); Nalini C Iyer (B.V.Bhoomaraddi college of Engg and Technology, India)
Biometric human identification based on the iris of an individual is well suited to providing authentication for any system that demands high security. This paper examines a novel technique for implementing iris identification in biometric systems [2] that is invariant to distance and tilt variations. The methodology uses relative normalization to compensate for these variations and a radial trace for feature extraction; a unique binary signature code is generated for every iris.
pptx file
New Dynamic Pattern Search Based Fast Motion Estimation Algorithm
Shaifali Madan Arora (Guru Gobind Singh Inderpratha University, Dwarka, New Delhi & Maharaja Surajmal Institute of Technology, India); Navin Rajpal (GGSIP University, India); Ravindra Kumar Purwar (GGS Indraprastha University, India)
In the development of fast block-based motion estimation (BME) algorithms, the focus is always on reducing the computational burden while keeping quality as good as that of the Full Search algorithm. Fast fixed-pattern BME algorithms such as TSS and DS have been proposed in the literature, but these over- or under-search for slow or fast motion video sequences. This problem is eradicated by using the coherence of neighboring blocks to predict the motion of the current block. A new dynamic pattern search algorithm for fast BME is proposed in this manuscript; it uses the coherence of the temporal right neighboring block along with the spatial and temporal left neighboring blocks, and dynamically adapts its search pattern for motion vector estimation of the candidate block. Experimental results show that the proposed algorithm improves PSNR by 0.2116 dB, 0.5043 dB and 2.0160 dB with only 1.027, 1.058 and 1.43 times as many search points as the ARPS, DPS and MDPS algorithms respectively. The proposed algorithm also achieves a better bit compression ratio, by factors of 1.0047, 1.0014, 1.0058 and 1.0354, and a better structural similarity index, by factors of 1.0107, 1.0010, 1.0112 and 1.0376, compared to the DS, ARPS, DPS and MDPS algorithms respectively.
pptx file
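At the core of any such BME scheme is a block-distortion measure evaluated at candidate displacements; here is a hedged sketch of the usual SAD cost (bounds checking omitted), where the paper's dynamic pattern logic would decide which (dx, dy) candidates to test.

    import numpy as np

    def sad(cur, ref, bx, by, dx, dy, B=16):
        blk = cur[by:by + B, bx:bx + B].astype(np.int32)
        cand = ref[by + dy:by + dy + B, bx + dx:bx + dx + B].astype(np.int32)
        return np.abs(blk - cand).sum()

    def best_vector(cur, ref, bx, by, candidates):
        """Return the candidate displacement (dx, dy) with minimum SAD."""
        return min(candidates, key=lambda d: sad(cur, ref, bx, by, *d))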
Medical Image Fusion Using Combination of PCA and Wavelet Analysis
Abhinav Krishn, Vikrant Bhateja, Himanshi Patel and Akanksha Sahu (Shri Ramswaroop Memorial Group of Professional Colleges, Lucknow (UP), India)
Medical image fusion facilitates the retrieval of complementary information from medical images for diagnostic purposes. This paper presents a combination of Principal Component Analysis (PCA) and wavelet analysis as an improved fusion approach for MRI and CT scans. The proposed approach involves image decomposition using the 2D Discrete Wavelet Transform (DWT) in order to preserve both spectral and spatial information, followed by application of PCA as a fusion rule to improve spatial resolution. The optimal variant of the Daubechies family is also selected during the simulations. Entropy (E), Standard Deviation (SD) and Fusion Factor (FF) are used as fusion metrics for performance evaluation. Simulation results demonstrate an improvement in the visual quality of the fused image, supported by higher values of the fusion metrics, which further justifies the effectiveness of the proposed approach in comparison to other approaches.
ppt file
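A hedged sketch of the DWT-plus-PCA fusion idea in Python with PyWavelets: decompose both co-registered images, fuse the approximation bands with PCA-derived weights, fuse details by absolute maximum, and reconstruct. The wavelet choice and detail rule are assumptions, not the paper's exact scheme.

    import numpy as np
    import pywt

    def pca_weights(a, b):
        cov = np.cov(np.vstack([a.ravel(), b.ravel()]))
        w = np.abs(np.linalg.eigh(cov)[1][:, -1])   # principal eigenvector
        return w[0] / w.sum(), w[1] / w.sum()

    def fuse(img1, img2, wavelet="db2"):            # Daubechies variant assumed
        cA1, det1 = pywt.dwt2(img1, wavelet)
        cA2, det2 = pywt.dwt2(img2, wavelet)
        w1, w2 = pca_weights(cA1, cA2)
        cA = w1 * cA1 + w2 * cA2                    # PCA-weighted approximation
        det = tuple(np.where(np.abs(d1) >= np.abs(d2), d1, d2)
                    for d1, d2 in zip(det1, det2))  # abs-max detail fusion
        return pywt.idwt2((cA, det), wavelet)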

S18: International Workshop on Image Analysis and Image Enhancement (IAIE-2014)Detailsgo to top

Keynote: Classification Of Moving Objects In Surveillance Videos Using Deep Neural Network
Dr. Elizabeth Sherly, IIITM-K, India
Room: 108-B Block E First Floor
Chair: Ajinkya S. Deshmukh (Uurmi System Pvt. Ltd., India)

The use of video is becoming prevalent in many surveillance applications such as the detection of pedestrians, identification of anomalous behaviour in crowded areas, terrorist activities, smuggling, armed robbery, illegal intrusions, traffic monitoring and more. The talk concentrates on developing an intelligent video surveillance system capable of analyzing and interpreting video data by identifying and classifying objects and reducing hours of video to the most significant segments containing salient events. A Mixture of Gaussians is used for preprocessing, as it models the multimodal distribution of the background; the foreground pixels are then segmented into regions using the connected components algorithm, and bounding rectangles are identified around the regions of interest. A deep neural network based object classifier is then applied to classify each region of interest as a vehicle, the type of vehicle, or a person.

Blind Estimation of Motion Blur Kernel Parameters Using Cepstral Domain and Hough Transform
Mayana Shah (CKPCET College of Engineering, India); Upena D. Dalal (Sardar Vallabhbhai National Institue of Technology, Surat, India)
Motion blur results when the scene is not static and the image being recorded changes during recording due to long exposure or motion; because of motion blur, the projected image is smeared over the sensor according to the motion. The motion blur PSF is characterized by two parameters, blur direction and blur length, and faithful restoration of a motion-blurred image requires correct estimation of both. In this paper we present a Hough transform based motion direction estimation under spatially variant conditions. The blur direction is identified by applying the Hough transform to the fourth bit plane of the modified cepstrum to detect the orientation of the line in the log magnitude spectrum of the blurred image. Experiments performed on simulated motion-blurred images show that, compared to the existing Hough transform method, it successfully estimates the PSF parameters even without any preprocessing steps. The blur length is found by rotating the spectrum of the blurred image in the estimated direction, collapsing the 2-D cepstrum into a 1-D cepstrum, and finally taking the inverse Fourier transform and finding the first negative value. These parameters are then used to restore the images.
pptx file
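For orientation, a minimal sketch of the cepstral front end such a method starts from; the bit-plane selection and Hough stage described above are not reproduced here.

    import numpy as np

    def log_spectrum(img):
        """Log magnitude spectrum; motion blur shows as dark parallel ridges."""
        return np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(img))))

    def cepstrum(img):
        """2-D image cepstrum: inverse FFT of the log magnitude spectrum."""
        return np.real(np.fft.ifft2(np.log1p(np.abs(np.fft.fft2(img)))))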
Low Cost Distance Estimation System Using Low Resolution Single Camera and High Radius Convex Mirrors
Narayan Murmu (National Institute of Technology, India)
This paper proposes a single-camera stereo vision system to determine the disparity map and the distance of an object from the camera. The proposed system uses two convex mirrors with a sufficiently long radius of curvature, which capture a pair of images of the same scene from two different viewpoints, just like a conventional two-camera stereo vision system. The use of a single camera makes the calibration and rectification process easier, and the identical intensity response of the stereo images improves the accuracy of the disparity map for computing object distance. A graph-cut based algorithm is used to compute the disparity and the object distance from the camera system, and the experimental results justify these claims.
pptx file
Efficient Content-based Dynamic Search Algorithm for Motion Estimation From Videos
Mallesham Dasari (Stony Brook University, USA); Himanshu Sindhwal and Naresh Vattikuti (Uurmi Systems Pvt. Ltd, India)
The block matching algorithm (BMA) used for motion estimation (ME) during video coding in the H.264 standard takes nearly 90% of the total encoding time. The proposed work analyzes the video content to dynamically choose an efficient search pattern. Based on the variance of motion in the video, the algorithm shows efficient results by reducing the number of candidate macroblocks that need to be searched in the reference frames using different types of search patterns, and it also recommends an optimized motion vector search range. The algorithm shows a 90% improvement in computation time over the full search algorithm and a significant improvement over other fast block matching algorithms, without compromising the bitrate or quality of the video.
pdf file
Road Extraction From Airborne LiDAR Data Using SBF and CD-TIN
Rohini Narwade (Babasaheb Ambedkar Marathwada University & MGM's Jawaharlal Nehru Engineering College, India); Vijaya Musande (Babasaheb Ambedkar Marathwada University, India)
This paper proposes a method for automated road extraction from airborne Light Detection and Ranging (LiDAR) data. The method combines Segmentation Based Filtering (SBF) with Triangular Irregular Network (TIN) based segmentation to extract road points, in two major steps. First, SBF is applied to the LiDAR data for initial segmentation of road regions: a region growing algorithm is applied after detecting outliers, and ground reference points are identified using inpainting with interpolation. Second, the roads extracted by SBF are refined with Constrained Delaunay TIN based segmentation, and the road contour is extracted from the road point image. For experimental validation, the proposed and existing methods are tested against the ISPRS reference dataset. The experimental results show that the proposed method achieves a completeness of 83.40%, a correctness of 83.02% and an accuracy of 83.16%.
pptx file
Natural Vs. Manmade Scene Classification Using Statistics of Straight Lines
Classification of scenes along semantic categories has received tremendous attention from researchers working in the field of computer vision. The content and context information obtained from scenes at various levels of granularity has been used to solve the scene classification problem. We propose a simple approach for classifying scenes along the broad semantic line of natural versus manmade (or artificial) scenes. Our approach is based on the observation that, at a primitive level of visual processing, the presence of a large number of straight line segments is highly discriminative in deciding whether a scene is natural or manmade. We extract and encode information about the straight line segments as a descriptor and use it to classify the scene as natural or manmade. We then compare our descriptor with common descriptors such as the HSV (Hue, Saturation and Value) histogram and Edge Orientation Histograms (EOH).
pptx file
Detection of Falsification Using Infrared Imaging: Time and Frequency Domain Analysis
Yaniv Azar (New York University Polytechnic School of Engineering & NYU WIRELESS, USA); Matthew Campisi (New York University Polytechnic School of Engineering, USA)
Throughout the years many people have tried to master the art of lie detection, yet no method has achieved more than 83% accuracy while being non-invasive, mobile and cost-effective. This article attempts to show that by measuring temperature changes in the nose area, one can determine whether someone is lying better than the current polygraph can. Eleven subjects were chosen to participate, and data were collected from them in three sets of measurements using a FLIR ThermoVision A40 infrared camera. This paper presents results from two methods of detection, a time domain analysis and a frequency domain analysis, which achieved accuracies of 69% and 84% in detecting lies, respectively. The two methods were then compared to the current polygraph, which has an accuracy of 83%. It was concluded that it is possible to detect lies using an infrared camera, and that doing so is actually better than the current polygraph since it has higher accuracy, is non-invasive, and is cost-effective.
rar file
Bilateral Despeckling Filter in Homogeneity Domain for Breast Ultrasound Images
Vikrant Bhateja (Shri Ramswaroop Memorial Group of Professional Colleges, Lucknow (UP), India); Mukul Misra (Shree Ram Swaroop Memorial University, Lucknow, India); Shabana Urooj (Gautam Buddha University, India); Aime' Lay-Ekuakille (University of Salento, Italy)
Breast sonograms are more effective at differentiating cysts from solid tumours if they can be post-processed to minimize speckle content without blurring edges. The approach presented in this paper applies bilateral filtering in the homogeneity domain so that the despeckling process does not compromise the texture and features of masses. The proposed approach decomposes the input image into homogeneous and non-homogeneous regions, which are then selectively processed by the bilateral filter: the domain filtering component is made dominant on homogeneous pixels, providing smoothing, while the range filter dominates on non-homogeneous pixels, preserving edges. Simulations carried out on breast ultrasound images show satisfactory speckle filtering, supported by improved values of the performance parameters (PSNR, SSIM and SSI).
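A hedged sketch of selective bilateral despeckling in the spirit of the abstract: a local-variance test marks homogeneous pixels, which get a smoothing-dominant bilateral pass, while the rest get an edge-preserving (range-dominant) pass. Thresholds, window sizes and sigmas are illustrative, and the file name is a placeholder.

    import cv2
    import numpy as np

    img = cv2.imread("breast_us.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
    f = img.astype(np.float32)
    mean = cv2.blur(f, (7, 7))
    var = cv2.blur(f * f, (7, 7)) - mean * mean
    homog = var < 100.0                           # homogeneity mask

    smooth = cv2.bilateralFilter(img, 9, 75, 15)  # domain-dominant smoothing
    edges = cv2.bilateralFilter(img, 9, 25, 5)    # range-dominant, keeps edges
    out = np.where(homog, smooth, edges).astype(np.uint8)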

S19: Artificial Intelligence and Machine Learning-Igo to top

Room: 205 Block E Second Floor
Chairs: Joy Bose (Samsung R&D Institute India, Bangalore, India), Abhishek Gupta (JECRC, India)
Question Classification Using Syntactic and Rule Based Approach
Payal Biswas (Jawaharlal Nehru University New Delhi, India); Aditi Sharan (Jawaharlal Nehru University, India); Rakesh Kumar (Jawaharlal Nehru University New Delhi, India)
Question classification is a crucial component of a Question Answering system. In this paper we propose a compact and effective method for question classification. Rather than using the two-layered taxonomy of 6 coarse-grained and 50 fine-grained categories developed by Li and Roth (2002), we classify questions into three broad categories. We also study the syntactic structure of questions and suggest syntactic patterns and expected answer types for each category of question, and using these question patterns we propose an algorithm for classifying a question into its category. For the experiments we used the Li and Roth dataset of 2000 questions. The experimental output shows that even with a small set of question categories we can classify questions with more satisfactory results. In brief, the contributions of this paper are: (i) achieving state-of-the-art question classification using a smaller number of question categories; (ii) presenting a question classification algorithm that classifies a question into the proposed categories, which aids in embedding the appropriate answer extraction algorithm discussed in our previous work [23]; and (iii) suggesting more generic syntactic patterns for Wh-questions.
pptx file
Differential Evolution Based Multiobjective Optimization for Biomedical Entity Extraction
Utpal Sikdar and Asif Ekbal (IIT Patna, India); Sriparna Saha (IIT Patna & Department of CSE, India)
In this paper, we propose a two-stage approach based on multi-objective differential evolution (DE) for biomedical entity extraction. The first step addresses automatic feature selection for entity extraction with a machine learning algorithm, namely the Conditional Random Field (CRF). The solutions of the final best population provide a diverse set of classifiers. In the second phase we combine these classifiers with a novel ensemble technique. We evaluate the proposed algorithm on named entity extraction in biomedical text. The proposed two-phase approach yields final recall, precision and F-measure values of 73.50%, 77.02% and 75.22%, respectively.
pptx file
A Comparison of Multi-layer Perceptron and Radial Basis Function Neural Network in the Voice Conversion Framework
Ankita Chadha (K J Somaiya College of Engineering & University of Mumbai, India); Jagannath Nirmal (Mumbai University, India); Mukesh Zaveri (Sardar Vallabhbhai National Institute of Technology, Surat, India)
A voice conversion system modifies the speaker-specific features of a source speaker so that the output sounds like the target speaker's speech. Speaker-specific features are reflected in speech at different levels, such as the shape of the vocal tract, the shape of the glottal excitation and long-term prosodic parameters. In this work, Line Spectral Frequencies (LSF) are used to represent the shape of the vocal tract, and the Linear Predictive (LP) residual represents the shape of the glottal excitation of a particular speaker. A Multi-Layer Perceptron (MLP) and a Radial Basis Function (RBF) neural network are explored to formulate the nonlinear mapping for modifying the LSFs. The baseline residual selection method is used to modify the LP residual of one speaker to that of another. A relative comparison between MLP and RBF is carried out using various objective and subjective measures for inter-gender and intra-gender voice conversion. The results reveal that an optimized RBF performs slightly better than baseline MLP based voice conversion.
rar file
Learning to Rank Experts Using Combination of Multiple Features of Expertise
V Kavitha (Anna University, India); Manju Gopalan (Anna University, India); Geetha T. v. (Anna University, India)
In the academic domain, technical conferences are conducted to share research ideas and propose new research methodologies. The number of conferences conducted in different academic domains and the number of research participants are increasing rapidly, so conference chairs face difficulty in assigning panels of reviewers for various research topics. A ranked list of experts in a specific topic would assist conference chairs in finding a panel of reviewers; an expert finding system provides a solution to this problem. The task of an expert finding system is to produce a list of people sorted by their level of expertise in a specific research topic. This paper combines multiple features of research expertise to rank experts on a topic; the ranked list can be used to update the topic relevance score of a researcher for a specific research area. In order to rank the experts, we use novel time-weighted citation graph based features, modified Latent Dirichlet Allocation based textual features and profile based features to represent the expertise of a researcher. Rank aggregation is done based on the multiple features. LambdaRank, a semi-supervised learning-to-rank algorithm, is used to learn the ranking function, and the ranked list of experts for a research topic is suggested using the learned function. Experiments over a data set of academic publications in the area of Computer Science show that combining the new features of expertise provides a better ranked list of experts than using individual features.
pptx file
Multi-objective Clustering of Tissue Samples for Cancer Diagnosis
Sudipta Acharya (Indian Institute of Technology, Patna); Yamini Thadisina (Indian Institute of Technology, Patna, India); Sriparna Saha (IIT Patna & Department of CSE, India)
In the field of pattern recognition, studying the gene expression profiles of different tissue samples over different experimental conditions has become feasible with the arrival of micro-array based technology. In cancer research, classification of tissue samples is necessary for cancer diagnosis, which can be done with the help of micro-array technology. In this article we present a multi-objective optimization (MOO) based clustering technique utilizing AMOSA (Archived Multi-Objective Simulated Annealing) as the underlying optimization strategy for classifying tissue samples from cancer data sets. As objective functions, three cluster validity indices, namely the XB, PBM and FCM indices, are optimized simultaneously to form more accurate clusters of tissue samples. The presented clustering technique is evaluated on two open-source benchmark cancer data sets, the Brain tumor data set and the Adult Malignancy data set. In order to evaluate the quality of the produced clusters, two cluster quality measures, viz. the Adjusted Rand Index (ARI) and Classification Accuracy (%CoA), are calculated for each data set. Comparative results of the presented clustering algorithm against 10 state-of-the-art single-objective and multi-objective clustering algorithms are shown for the two benchmark data sets.
pptx file
A Low Cost Data Acquisition System From Digital Display Instruments Employing Image Processing Technique
Soumyadip Ghosh (Jadavpur University, India); Suprosanna Shit (Indian Institute of Science, Bangalore, India)
The use of digital instruments in industries and laboratories is rapidly increasing, as they are simple to calibrate and have relatively high precision. In this paper, an automatic data acquisition system is proposed that applies an OCR technique to digital multi-meters and other similar digital display devices. The input image is taken from a digital multi-meter with an LCD seven-segment display using a webcam. The image is then processed to extract numeric digits, which are recognized using a feed-forward neural network. The recognized values can then be exported to a spreadsheet for graph plotting and further analysis. A distinct advantage of this method is that it can automatically detect the decimal point as well as the negative sign. The setup can be used in real-time systems employing a wide variety of digital display instruments, with high accuracy.
ppt file
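
As a rough illustration of the recognition stage described in the entry above, the sketch below trains a small feed-forward network to map seven-segment activation patterns to digits. The library (scikit-learn) and the binary segment encoding are assumptions for illustration; the paper does not specify its implementation.

    # Toy sketch: seven-segment digit recognition with a feed-forward network.
    # Segment encoding and library choice are illustrative assumptions.
    from sklearn.neural_network import MLPClassifier

    # Segments ordered (a, b, c, d, e, f, g); 1 = segment lit.
    SEGMENTS = {
        0: (1, 1, 1, 1, 1, 1, 0), 1: (0, 1, 1, 0, 0, 0, 0),
        2: (1, 1, 0, 1, 1, 0, 1), 3: (1, 1, 1, 1, 0, 0, 1),
        4: (0, 1, 1, 0, 0, 1, 1), 5: (1, 0, 1, 1, 0, 1, 1),
        6: (1, 0, 1, 1, 1, 1, 1), 7: (1, 1, 1, 0, 0, 0, 0),
        8: (1, 1, 1, 1, 1, 1, 1), 9: (1, 1, 1, 1, 0, 1, 1),
    }
    X, y = list(SEGMENTS.values()), list(SEGMENTS.keys())

    # lbfgs converges reliably on a tiny training set like this one.
    net = MLPClassifier(hidden_layer_sizes=(16,), solver="lbfgs",
                        max_iter=2000, random_state=0).fit(X, y)
    print(net.predict([(1, 1, 0, 1, 1, 0, 1)]))  # expected: [2]
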
Multiclass SVM-based Language-Independent Emotion Recognition Using Selective Speech Features
Amol Kokane (National Institute of Technology Karnataka, India); Ram Guddeti (National Institute of Technology Karnataka, Surathkal, India)
In this paper, we focus on recognizing six basic emotions, viz. anger, disgust, fear, happiness, neutral and sadness, using selective features of speech signals in different languages such as German and Telugu. The feature set includes thirteen Mel-Frequency Cepstral Coefficients (MFCC) and four other speech features: energy, short-term energy, spectral roll-off and zero-crossing rate (ZCR). The Surrey Audio-Visual Expressed Emotion (SAVEE) database is used to train the multiclass Support Vector Machine (SVM) classifier, and the German corpus EMO-DB (Berlin Database of Emotional Speech) and the Telugu corpus IITKGP: SESC are used for emotion recognition. The results are analyzed for each speech emotion separately; the obtained accuracies are 98.3071% and 95.8166% for the EMO-DB and IITKGP: SESC databases, respectively.
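
A minimal sketch of a feature-extraction and classification pipeline of the kind described above, assuming librosa for audio features and scikit-learn for the multiclass SVM; file names and labels are placeholders, and only a subset of the paper's feature set is shown.

    # Illustrative pipeline: 13 MFCCs plus a few prosodic features into a
    # multiclass SVM. Library choices and file names are assumptions.
    import numpy as np
    import librosa
    from sklearn.svm import SVC

    def extract_features(path):
        y, sr = librosa.load(path, sr=None)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
        energy = float(np.sum(y ** 2))
        zcr = float(librosa.feature.zero_crossing_rate(y).mean())
        rolloff = float(librosa.feature.spectral_rolloff(y=y, sr=sr).mean())
        return np.concatenate([mfcc, [energy, zcr, rolloff]])

    # Placeholder paths/labels; a real run would use a corpus such as SAVEE.
    train_files = ["anger_01.wav", "happy_01.wav"]
    train_labels = ["anger", "happiness"]

    X = np.vstack([extract_features(f) for f in train_files])
    clf = SVC(kernel="rbf", decision_function_shape="ovr").fit(X, train_labels)
    print(clf.predict([extract_features("test_utterance.wav")]))
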
A Supervised Approach to Distinguish Between Keywords and Stopwords Using Probability Distribution Functions
Aditi Sharan and Sifatullah Siddiqi (Jawaharlal Nehru University, India)
This paper presents a novel probability based approach for distinguishing between keywords and stopwords in a text corpus. This has many applications, including automatic construction of stopword lists. The first objective of this paper is to investigate the role of probability distributions in distinguishing between keywords and stopwords. The second objective is to compare the performance of the probability distributions of various weighting measures for identifying keywords and stopwords. The main characteristics of our method are that it is corpus-based, supervised and computationally very efficient. Being corpus-based, the method is independent of the language used. We have tested the approach on a domain-specific corpus in Hindi, where it has great significance since a standard stopword list is not available for Hindi (as for many Indian languages). The results are encouraging: we achieve 74% accuracy. As this is a preliminary attempt, there is considerable scope for improvement.
pptx file
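
One way to make the intuition above concrete is to measure how evenly a term's occurrences spread across documents: stopwords tend toward a near-uniform distribution (normalised entropy close to 1), keywords toward a peaked one. The sketch below illustrates this idea only; it is not the authors' exact weighting measure, and the 0.9 threshold is an arbitrary assumption.

    # Illustrative discriminator: normalised entropy of a term's distribution
    # over documents. Values near 1 suggest stopword-like behaviour.
    import math

    def normalised_entropy(term, docs):
        counts = [doc.count(term) for doc in docs]
        total = sum(counts)
        if total == 0 or len(docs) < 2:
            return 0.0
        probs = [c / total for c in counts if c > 0]
        h = -sum(p * math.log(p) for p in probs)
        return h / math.log(len(docs))  # normalise to [0, 1]

    corpus_texts = [
        "the cat sat on the mat",
        "the dog chased the cat",
        "quantum entanglement defies the classical intuition",
    ]
    docs = [text.split() for text in corpus_texts]
    for term in ("the", "quantum"):
        s = normalised_entropy(term, docs)
        print(term, round(s, 2), "stopword-like" if s > 0.9 else "keyword-like")
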
Gene-Expression Data Semi-Supervised Clustering in Multi-Objective Optimization Framework
Abhay Kumar Alok (IIT Patna, India); Sriparna Saha (IIT Patna & Department of CSE, India); Asif Ekbal (IIT Patna, India)
Studying the patterns hidden in gene expression data helps in understanding the functionality of genes. However, due to the large number of genes and the complicated biological networks involved, it is hard to study the resulting large volume of data, which often contains millions of measurements. In general, clustering techniques are used to determine natural structures and capture interesting patterns from the given data as a first step in studying gene expression data. In this paper, the problem of gene expression data clustering is formulated as a semi-supervised classification problem, and a multiobjective optimization (MOO) framework is used to solve it. Five objective functions are simultaneously optimized by a newly developed simulated annealing based optimization technique, AMOSA. The first four objective functions quantify unsupervised properties of the clusters, such as total symmetry, compactness and separability, and the last captures the supervised information. In order to generate the supervised information, the Fuzzy C-means algorithm is executed on the data sets, and labels are extracted based on the highest membership values of data points with respect to the different clusters. In each case, only a randomly selected 10% of the data points' class labels act as supervised information for the semi-supervised clustering. The effectiveness of the proposed semi-supervised clustering technique is shown on three open-access benchmark gene expression data sets, and results are compared with existing techniques for gene expression data clustering.
ppt file
A Novel Non-Destructive Grading Method for Mango (Mangifera Indica L.) Using Fuzzy Expert System
Rashmi Pandey, Nikunj Gamit and Sapan Naik (Uka Tarsadia University, India)
Mango (Mangifera Indica L.) sorting is a much-desired capability in automatic mango grading systems. Traditionally, naked-eye observation is used to assess the quality of mangoes, hence the need to automate the grading process. Image processing and machine learning provide an alternative for automated, non-destructive and cost-effective grading. The proposed methodology is divided into two parts: the first selects healthy mangoes and classifies them into ripe and unripe categories; the second grades mangoes by size. The image database is used to analyze the performance of the CIELab colour space and to find colour ranges for different regions of the mango. The CIELab colour model with the dominant density range method is used for colour feature extraction, which readily discriminates colour and classifies healthy and diseased mangoes; the same method classifies healthy mangoes as ripe or unripe. The rest of the work is devoted to size measurement using a fuzzy expert system for grading: the size feature is calculated from ellipse properties and fed to the fuzzy expert system to assign grades. The integrated system achieves 97.47% average accuracy.
pdf file

Thursday, September 25, 14:30 - 18:30 (Asia/Calcutta)

S20-A: Bioinformatics and Bio-Computinggo to top

Room: 105 Block E First Floor
Chair: Saurabh Goyal (JSS-ATE, Noida, India)
A Steady State Genetic Algorithm for Multiple Sequence Alignment
Sabari Pramanik (Vidyasagar University, India)
Multiple sequence alignment is one of the important research topics in bioinformatics. The objective is to maximize the similarity among sequences by adding and shuffling gaps in the sequences. We present a genetic algorithm based approach to solve the problem efficiently, using a steady-state genetic algorithm with a new form of chromosome representation. PAM 350 is used as the scoring matrix for calculating the SOP score, which serves as the fitness score in the genetic algorithm. The results are tested on the BAliBASE benchmark dataset and show that the solution offers better results.
pptx file
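
For reference, the sum-of-pairs (SOP) fitness mentioned above scores each alignment column by summing substitution scores over all pairs of sequences. The sketch below uses a stand-in scoring function and a simplified gap rule; the paper itself uses the PAM 350 matrix.

    # Minimal SOP fitness for a multiple alignment. score() stands in for a
    # PAM 350 lookup; the gap handling is a simplifying assumption.
    from itertools import combinations

    def score(a, b, match=2, mismatch=-1, gap=-2):
        if a == "-" or b == "-":
            return 0 if a == b else gap
        return match if a == b else mismatch

    def sop(alignment):
        total = 0
        for column in zip(*alignment):            # walk the alignment by column
            for a, b in combinations(column, 2):  # every pair of sequences
                total += score(a, b)
        return total

    print(sop(["AC-GT", "ACAGT", "A--GT"]))  # higher = fitter alignment
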
An Ab Initio-ART Approach for Protein Folding Problem
Geetika Silakari (Rajiv Gandhi Technical University, M. P., India)
Protein structure has always been under significant exploration, as it is vitally responsible for basic functionality. Understanding the formation of these structures has come to be called 'the protein folding problem'; solving it is essentially concerned with attaining the native state. Broadly, prediction methods are categorized as template-based (homology modeling, threading/fold recognition) and template-free (ab initio), which are discussed later in detail. In this paper we follow the ab initio methodology to develop an ART neural network (ARTNN) based approach for clustering the stable folds out of the search space. The approach, implemented through either of two models (the SF-ART or FC-ART model), could help enhance the performance of obtaining the three-dimensional native state by providing a more productive pathway, using crude ab initio parameters and emphasizing quality with reduced execution time. It can thus be considered a productive ab initio clustering approach.
pptx file
Maximal Pattern Matching with Flexible Wildcard Gaps and One-Off Constraint
Anu Dahiya (Thapar University, India); Deepak Garg (Bennett University, Greater Noida, India)
Pattern matching is a fundamental operation in extracting knowledge from large amounts of biosequence data, and finding patterns helps in analyzing the properties of a sequence. This paper focuses on the problem of maximal pattern matching with flexible wildcard gaps and length constraints under the one-off condition: finding the maximum number of occurrences of a pattern P, with user-specified wildcard gaps between every two consecutive letters of P, in a biological sequence S, under the one-off condition and a constraint on the overall length of a matching occurrence. Obtaining the optimal solution for this problem is difficult. We propose a heuristic algorithm, MOGO, based on the Nettree data structure to solve it. Theoretical analysis and experimental results demonstrate that MOGO performs better than existing algorithms in most cases when tested on real-world biological sequences.
pptx file
Progressive Alignment Using Shortest Common Supersequence
Ankush Garg and Deepak Garg (Thapar University, India)
Multiple sequence alignment (MSA) is an NP-hard problem. The complexity of finding the optimal alignment is O(L^N), where L is the length of the longest sequence and N is the number of sequences; hence the optimal solution is practically unattainable for most datasets. Progressive alignment solves MSA with very economical complexity but does not provide optimal solutions, as there is a trade-off between accuracy and complexity. The guide tree that steers the alignment of sequences is conventionally generated from alignment scores in progressive alignment. In this paper, the Shortest Common Supersequence (SCS) is instead used to generate the guide tree, and the output alignments are checked against BAliBASE benchmarks for accuracy. According to SP and TC scores, progressive alignment using the SCS-generated guide tree is better than using the guide tree generated from alignment scores. The original ClustalW 2.1 is modified to use SCS, and the modified ClustalW 2.1 gives better results than the original tool.
pdf file
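
For two sequences, the shortest common supersequence that drives the guide tree above follows from the classic identity |SCS(a, b)| = |a| + |b| - |LCS(a, b)|. A sketch of the pairwise case is below; the multi-sequence SCS the paper builds on is NP-hard and is handled heuristically.

    # Pairwise shortest-common-supersequence length via the LCS identity.
    def lcs_len(a, b):
        dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        for i, x in enumerate(a, 1):
            for j, y in enumerate(b, 1):
                dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
        return dp[-1][-1]

    def scs_len(a, b):
        return len(a) + len(b) - lcs_len(a, b)

    # A shorter SCS suggests more similar sequences, which can order the tree.
    print(scs_len("ACGGT", "ACGCT"))  # -> 6
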
Enhanced Heuristic Approach for Travelling Tournament Problem Based on Extended Species Abundance Models of Biogeography
Daya Gupta (Delhi Technological University, India); Lavika Goel and Ashish Chopra (Delhi College of Enginnering, India)
This paper presents a heuristic approach of enhanced simulated annealing based on extended species abundance models of biogeography to obtain optimal solutions for the Travelling Tournament Problem (TTP). We upgrade the migration step of Biogeography-Based Optimization (BBO) using probabilistic measures and hybridize it with simulated annealing to solve the TTP while avoiding local minima. Our proposed hybrid approach converges to an optimal solution for the TTP. Non-deterministic factors have a negative impact on the TTP solution; we treat these factors as noise, whose physical significance in our algorithm is any parameter that can affect the fitness of a habitat. We also calculate the overall TTP cost for various extended species abundance models of BBO (linear and non-linear) to achieve the desired results, and compare the performance of our approach with other methodologies such as ACO and PSO.
pptx file
An Electrooculogram Based Real Time System for Measurement and Analysis of Visual Stimuli for Detecting Strabismus and Nystagmus
Dyuthi Varier (VIT Chennai, India); Venkatasubramanian Krishnamoorthy (Vellore Institute of Technology Chennai Campus, India)
Strabismus, a misalignment between the eyes, is a prevalent condition in the medical field. It is found in infants and in patients affected by cerebral palsy, and may also occur due to overuse of drugs. Nystagmus is a condition that causes involuntary, rapid movement of one or both eyes; the eye(s) may move from side to side, up and down, or in a circular motion, and individuals with this condition commonly tilt their heads to compensate for their difficulty seeing. Individuals with these disorders have difficulty climbing stairs, reading or driving; the brain fails to take images from one of the eyes, and the unused eye eventually turns blind. In the present eye examination regime, these disorders are assessed manually by a doctor or examiner, using the cover-uncover test, where the affected eye is covered intermittently to observe the change in alignment, and the caloric reflex test, in which warm or cold water or air is introduced into one ear. Results of such manual tests may vary from patient to patient depending on the examiner. To standardize the testing process, a novel real-time electrooculography (EOG) based system is proposed, in which biosensors acquire EOG signals to identify and track differences in the alignment of the two eyes. This can ensure fast and accurate detection of strabismus- or nystagmus-affected eyes.
pptx file

S20-B: International Symposium on Bio-Inspired Computing (BioCom'14)go to top

Room: 105 Block E First Floor
Chair: Saurabh Goyal (JSS-ATE, Noida, India)
Use of Soft Computing Techniques in Medical Decision Making: A Survey
Ajay Bhatia (Punjab Technical University, India); Vijay Mago (Troy University, USA); Rajeshwar Singh (Punjab Technical University)
Health care practitioners need to diagnose diseases and make decisions about treatments; this has been one of their most challenging tasks. During the last two decades, researchers from computer science, mathematics and the medical sciences have been developing intelligent tools for supporting medical decision making, and various soft computing based systems have been successfully developed and used by healthcare professionals. In this paper, we briefly introduce the strengths of soft computing and demonstrate the possibilities of applying these techniques to diagnosis and decision making. This review also identifies the most proficient techniques used in the medical domain.
Differential Evolution for Solving Multi Area Economic Dispatch
Veera Venkata Sudhakar Angatha (SR Engineering College, India); Chandram Karri (BITS Pilani & KK BIRLA GOA CAMPUS, India); Dr. A. Jayalaxmi (Jntu Hyderabad, India)
pptx file
pdf file
Service Optimization in Cloud Using Family Gene Technology
Alaka Ananth (The National Institute of Engineering, Mysore, India); Chandra Sekaran K (National Institute of Technology Karnataka, India)
Cloud computing is an emerging technology in the current scenario. It has emerged as a solution for providing resources to consumers in the form of software, infrastructure or platform as a service, and cloud storage services enable users to synchronize their files across devices and back them up online. The main aim of this paper is service optimization. Scheduling of services is an NP-hard problem, so exhaustive approaches are not suitable. This paper presents a genetic algorithm based approach for optimizing services using family gene technology, which classifies individuals into different families based on gene parameters and evaluates the fitness function for each individual within its family. Optimization is achieved by mapping service requests to appropriate service instances that satisfy the request, and then applying the family-gene-based genetic algorithm to the mapped service requests.
pdf file

S21: Cognitive Radios and White Space Networkinggo to top

Room: 116 Block E First Floor
Chair: Vivek A Bohara (Indraprastha Institute of Information Technology, Delhi (IIIT-Delhi), India)
Outage Performance of SU Under Spectrum Sharing with Imperfect CSI and Primary User Interference
Binod Prasad (NIT DURGAPUR, India); Sanjay Dhar Roy (National Institute of Technology Durgapur, India); Sumit Kundu (National Institute of Technology (NIT), Durgapur, India)
Cognitive radio refers to smart technology that aims to improve spectrum utilization and alleviate spectrum shortage by allowing a secondary user (SU, unlicensed) to share spectrum allocated to a primary user (PU). In the spectrum sharing approach, secondary users are allowed to access the spectrum used by the primary user provided the interference produced at the PU receiver by SU transmission remains below a predefined interference threshold. We consider that the channel state information (CSI) available at the SU transmitter for the interfering link, i.e. the link from SU-Tx to PU-Rx, is imperfect. The SU estimates its transmit power based on the CSI of the interfering link if the PU is present, while it transmits with maximum power Pm in the absence of the PU. In this paper we assume the CSI available at the SU-Tx, based on MMSE estimation, is imperfect. We derive a closed-form expression for the outage probability of the SU over a Rayleigh faded channel with imperfect CSI of the link between the SU transmitter and PU receiver, for a given outage constraint of the PU. We also study the impact of PU interference, channel estimation error, tolerable interference threshold, and the acceptable limit of PU outage on SU performance. A MATLAB based simulation has been carried out to support the analytical results.
zip file
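
The closed-form outage analysis above can be cross-checked with a short Monte Carlo run. The sketch below estimates SU outage under a peak interference constraint with Rayleigh fading on all links; every parameter value is an illustrative assumption, not the paper's configuration.

    # Monte Carlo estimate of secondary-user outage under a peak interference
    # constraint, Rayleigh fading on all links (exponential power gains).
    import numpy as np

    rng = np.random.default_rng(0)
    N = 200_000
    Q = 1.0         # tolerable interference threshold at the PU receiver
    N0 = 0.1        # noise power
    gamma_th = 1.0  # SU SINR target
    P_pu = 1.0      # PU transmit power (interferes at the SU receiver)

    g_sp = rng.exponential(1.0, N)  # SU-Tx -> PU-Rx power gain
    g_ss = rng.exponential(1.0, N)  # SU-Tx -> SU-Rx
    g_ps = rng.exponential(1.0, N)  # PU-Tx -> SU-Rx

    P_su = Q / g_sp                          # power control meets the threshold
    sinr = P_su * g_ss / (N0 + P_pu * g_ps)  # SU SINR with PU interference
    print("outage probability:", np.mean(sinr < gamma_th))
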
Cooperative Discriminant Analysis Based Spectrum Sensing Using Optimum Fusion Rule
Bini Mathew (Mahatma Gandhi University, India); Ebin Manuel (University of Kerala, India)
Cognitive radio is a potential technique for future wireless communications to mitigate the spectrum scarcity issue. One of the most important challenges of a cognitive radio system is to identify the presence of primary (licensed) users over a wide range of frequency spectrum at a particular time and specific geographic location. In this paper, a spectrum sensing technique for cognitive radios based on discriminant analysis, called the spectrum discriminator, is studied. It is a blind detection technique in which no prior knowledge about the measured signal is needed. This spectrum sensing technique is extended to a cooperative environment using an adaptive cooperative spectrum sensing scheme based on the optimal data fusion rule. In the proposed method, secondary users cooperate efficiently to achieve superior detection accuracy with minimum cooperation overhead. The algorithm is able to detect the presence or absence of signals in any kind of spectrum; hence, the method becomes a strong basis for a high-quality operating mode of cognitive radios. Simulation results show that the proposed cooperative spectrum sensing scheme outperforms conventional methods even under low SNR conditions.
pdf file
A Stable Route Selection Algorithm for Cognitive Radio Networks
Nitul Dutta (MEF Group of Institutions, Rajkot, India); Hiren Deva Sarma (Sikkim Manipal Institute of Technology, India); Ashish Srivastava (Marwadi Education Foundation’s Group of Institutions, India); Jyoti Srivastava (IIIT Allahabad, India)
In this paper, a route selection mechanism for cognitive radio networks (CRN) is proposed in which the route construction process selects only those channels that have the maximum probability of being stable. By a stable channel, we mean a channel that will not be claimed by primary users (PUs) frequently. A probabilistic approach for finding steady channels is adopted, considering an initial state probability based on previous channel availability history. A set of algorithms is proposed to implement the route selection method in CRN and simulated in ns-2. Routing overhead, packet loss rate and route sustainability for the proposed protocol show improvement over Cognitive AODV (CAODV).
pptx file
ANRC Hybrid Test Bed Implementation and an End-to-End Performance Characterization of Dynamic Spectrum Access
Ramachandra Budihal (Wipro Technologies & Indian Institute of Science, India); Surendran R and Mahendravarman N (Wipro Technologies, India); Jamadagni (Indian Institute of Science, India)
An abundance of spectrum access and sensing algorithms are available in the dynamic spectrum access (DSA) and cognitive radio (CR) literature. Often, however, the functionality and performance of such algorithms are validated against theoretical calculations using only simulations. Both the theoretical calculations and simulations come with their attendant sets of assumptions. For instance, designers of dynamic spectrum access algorithms often take spectrum sensing and rendezvous mechanisms between transmitter-receiver pairs for granted. Test bed designers, on the other hand, either customize so much of their design that it becomes difficult to replicate using commercial off the shelf (COTS) components or restrict themselves to simulation, emulation / hardware-in-loop (HIL), or pure hardware but not all three. Implementation studies on test beds sophisticated enough to combine the three aforementioned aspects, but at the same time can also be put together using COTS hardware and software packages are rare. In this paper we describe i) the implementation of a hybrid test bed using a previously proposed hardware agnostic system architecture ii) the implementation of DSA on this test bed, and iii) the realistic hardware and software-constrained performance of DSA. Snapshot energy detector (ED) and Cumulative Summation (CUSUM), a sequential change detection algorithm, are available for spectrum sensing and a two-way handshake mechanism in a dedicated control channel facilitates transmitter-receiver rendezvous.
ppt file
A Bayesian Approach Using M-QAM Modulated Primary Signals for Maximizing Spectrum Utilization in Cognitive Radio
Deepak Sahu (ABV-Indian Institute of Information Technology and Management, India); Aditya Trivedi (ABV-Indian Institute of Information Technology and Management Gwalior, India)
Recently, cognitive radio (CR) has gained a lot of attention for its spectrum sensing feature, because it provides better spectrum utilization in wireless communication. Spectrum sensing plays a major role in CR. In this paper, a new spectrum sensing technique is proposed in which an optimal Bayesian detector is used to detect the presence of an M-ary quadrature amplitude modulated primary signal. The proposed approach acts as an optimal detector when the primary user is idle most of the time. The analytical expression for the detection statistic of the proposed technique over an additive white Gaussian noise channel is derived, and the detection and false alarm probabilities are calculated and compared with an M-ary phase shift keying modulated primary signal scheme. The behaviour of the false alarm probability is also analysed for different prior probability cases. Simulation results show that the proposed approach achieves higher spectrum utilization and also maximizes the throughput of the secondary user in the CR network.
pdf file
Cognitive Load Measurement - A Methodology to Compare Low Cost Commercial EEG Devices
Rajat Das and Debatri Chatterjee (TCS Innovation Lab, India); Diptesh Das (Tata Consultancy Services Limited, India); Arijit Sinharay (Tata Consultancy Services Ltd., India); Aniruddha Sinha (Tata Consultancy Services, India)
Use of EEG signals for measuring cognitive load is a widely practiced area falling under brain-computer interfacing (BCI) technology. However, this technology typically uses medical grade EEG devices that are expensive and not user-friendly for regular use. The recent launch of low cost wireless EEG headsets from different companies opens up the possibility of commercializing BCI and has thus drawn the attention of the research community worldwide. While there are numerous BCI studies using medical grade devices, only a limited number of papers report on these low cost devices, and reports evaluating the relative performance of commercially available EEG devices on a specific BCI experiment are scarce. This paper attempts to fill this gap and presents a comparative study between two widely used low cost wireless EEG devices, namely Emotiv and Neurosky, for application to cognitive load detection.
pptx file
A Dynamic Opportunistic Spectrum Access MAC Protocol for Cognitive Radio Networks
Smit B Tripathi (Gujarat Technological University & G. H. Patel College of Engineering & Technology, India); Mehul B Shah (Gujarat Technological University, India)
Cognitive radios can be regarded as intelligent wireless devices that can sense the medium and effectively utilize vacant or underutilized spectrum. Cognitive radio enables secondary (unlicensed) users (SUs) to opportunistically access spectrum unused by primary (licensed) users (PUs). There are two basic objectives of a cognitive radio medium access control (CR MAC): interference control and avoidance for PUs, and collision avoidance among SUs. We propose a MAC protocol for a single licensed channel in the primary network that opportunistically utilizes spectrum unused by the PUs. Time in the network is divided into slots of equal length, and packet transmission is carried out in slots. Each slot is divided into two phases, a contention phase and a data transmission phase. The first phase selects an SU based on 802.11 DCF with the RTS/CTS mechanism; in the second phase, the selected SU is allowed to transmit data for all the time remaining after the contention phase. In case of collision, a dynamic backoff scheme is applied. We compare the results of the single-channel OSA MAC with a dynamic MAC scheme; the results show that the throughput of our scheme is better than the conventional OSA scheme.
ppt file
Throughput Analysis in Cognitive Radio Networks
Pankaj Verma and Brahmjit Singh (National Institute of Technology Kurukshetra, India)
Spectrum is the most valuable resource in wireless communication, and there is a shortage of spectrum to accommodate the growing demand from wireless users. Recent studies have shown that allocated spectrum is underutilized. Cognitive radio networks have gained significant interest because this technology can alleviate the problem of spectrum underutilization. In this paper, we study cognitive radio networks in terms of throughput for different settings of system parameters. We also investigate throughput in terms of the probability of undetectable primary user transmission (PUPT), and observe that as throughput increases, PUPT also increases.
pptx file

S22-A: Cloud, Cluster, Grid and P2P Computing-Igo to top

Room: 110 Block E First Floor
Chair: Porika Sammulal (JNTUH University, India)
An Efficient Task Scheduling Algorithm for Heterogeneous Multi-Cloud Environment
Sanjaya Kumar Panda (Veer Surendra Sai University of Technology, Burla & Indian School of Mines, Dhanbad, India); Prasanta Kumar Jana (Indian Institute of Technology(ISM) Dhanbad, India)
Cloud computing has been adopted as one of the growing technologies in the business and research communities. However, due to significant client demand, there is a need to overflow some workloads to other data centers, as no data center has unlimited resources. Workload sharing provides even more flexible and cheaper resources for completing the applications submitted to the data centers. However, scheduling workloads in a multi-cloud environment is challenging, as the data centers have resources that are heterogeneous in nature. In this paper, we propose a task scheduling algorithm for a heterogeneous multi-cloud environment. The algorithm is based on two popular heuristics, namely Min-Min and Max-Min. We perform extensive experiments on benchmark and synthetic data sets and compare the results with two existing multi-cloud scheduling heuristics. The results show that the proposed algorithm outperforms both heuristics in terms of makespan and average cloud utilization.
pdf file
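
For orientation, the Min-Min heuristic named above repeatedly assigns the task whose earliest possible completion time is smallest to the machine that achieves it (Max-Min instead picks the task with the largest such minimum). A sketch over a made-up expected-time-to-compute (ETC) matrix follows.

    # Min-Min over an ETC matrix: etc[t][m] = expected run time of task t on
    # machine m. Matrix values are made up for illustration.
    def min_min(etc):
        ready = [0.0] * len(etc[0])   # current finish time per machine
        unassigned = set(range(len(etc)))
        schedule = {}
        while unassigned:
            # earliest completion time of each remaining task on its best machine
            best = {t: min((ready[m] + etc[t][m], m) for m in range(len(ready)))
                    for t in unassigned}
            t, (finish, m) = min(best.items(), key=lambda kv: kv[1][0])
            schedule[t] = m
            ready[m] = finish
            unassigned.remove(t)
        return schedule, max(ready)   # task-to-machine map and makespan

    etc = [[4, 6], [3, 8], [5, 2]]    # 3 tasks, 2 machines (illustrative)
    print(min_min(etc))
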
Priority Based Resource Allocation and Demand Based Pricing Model in Peer-to-Peer Clouds
Dilip S M Kumar (University Visvesvaraya College of Engineering (UVCE) & Bangalore University, India); Naidila Sadashiv (Bangalore University, India); Rampura S Goudar (Redknee, India)
Management of resources in a large-scale distributed cloud environment is a major challenge due to the nature of the cloud. On-demand resource provisioning allows requests to be made on the fly. In order to provide QoS in accordance with the SLA in such a distributed environment, an effective resource handling scheme and pricing models that benefit both the provider and cloud users are required. This paper provides priority based resource allocation to tasks, giving higher preference to tasks that request a large amount of CPU. The tasks are classified into high, medium and low priority sets using the k-means algorithm. We also propose a dynamic pricing model wherein the price is calculated based on the current demand for a resource and its availability: during high resource contention across the network, resources are priced higher than when there is a surplus. In such scenarios, resources for prioritized tasks are discovered from peer clouds through a content addressable network. Simulation under different contention periods is carried out based on our priority based allocation. The results show that our algorithm provides a better resource utilization ratio and throughput ratio when compared with non-prioritized tasks.
pdf file
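
A rough sketch of the priority-classification step described above, assuming scikit-learn's KMeans over requested-CPU values. The single feature and the three clusters follow the abstract; the data values and the centroid-to-priority mapping are assumptions.

    # Classify tasks into low/medium/high priority by requested CPU with
    # k-means (k = 3). Requested-CPU values are illustrative.
    import numpy as np
    from sklearn.cluster import KMeans

    cpu_requests = np.array([[0.5], [0.7], [2.0], [2.2], [7.5], [8.0]])
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(cpu_requests)

    # Rank clusters by centroid so the largest CPU demand maps to 'high'.
    order = np.argsort(km.cluster_centers_.ravel())
    names = {int(order[0]): "low", int(order[1]): "medium", int(order[2]): "high"}
    print([names[int(c)] for c in km.labels_])
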
Secure VM Backup and Vulnerability Removal in Infrastructure Clouds
Prabhjeet Kaur (Central University of Rajasthan); Gaurav Somani (Central University of Rajasthan, India)
The multi-tenant nature of the cloud and the provision of resources in terms of virtual machines (VMs) establish an immense need to rethink the applicability of earlier backup and recovery methods on virtual machine based infrastructure cloud platforms. In this work, we propose a complete backup and recovery framework, VM-SAVER, which provides flexible and scalable backup and recovery models for virtualized physical servers inside a multi-tenant cloud. VM-SAVER incorporates the basic state saving methods of various hypervisors and proposes various approaches and scenarios in which backup and recovery of virtualized servers is ensured. In addition, our framework has a novel feature of vulnerability detection and removal for running VMs and past backups. The framework is flexible, reconfigurable and works with a range of hypervisors, making it suitable for present-day infrastructure clouds.
pdf file
Enhanced Cloud Computing Security and Integrity Verification via Novel Encryption Techniques
Ranjit Kaur and Raminder Pal Singh (Lovely Professional University, Phagwara, India)
Cloud computing is a revolutionary movement in the IT industry that provides storage, computing power, network and software as an abstraction and as a service, on demand over the internet, enabling clients to access these services remotely from anywhere, at any time, via any terminal equipment. Since the cloud has shifted data storage from personal computers to huge data centers, data security has become one of the major concerns for cloud developers. In this paper a security model, implemented in Cloud Analyst, is proposed to tighten the level of cloud storage security; it provides security based on different encryption algorithms together with an integrity verification scheme. We begin with a storage section selection phase divided into three sections: Private, Public and Hybrid. Various encryption techniques are implemented in all three sections based on the security factors of authentication, confidentiality, security, privacy, non-repudiation and integrity. A unique token generation mechanism implemented in the Private section helps ensure the authenticity of the user, the Hybrid section provides an on-demand two-tier security architecture, and the Public section provides faster encryption and decryption of data. Overall, data in all three sections is wrapped in two folds of encryption and integrity verification. A user who wants to access data is required to enter a login and password before being granted access to the encrypted data stored in the Private, Public or Hybrid section, making it difficult for a hacker to gain access to the authorized environment.
pptx file
Statistical-based Filtering System Against DDOS Attacks in Cloud Computing
Pourya Shamsolmoali and Masoumeh Zareapoor (Jamia Hamdard University, India)
A Distributed Denial of Service (DDoS) attack can cause huge damage to resources and deny access to genuine users. Existing defence systems cannot easily be applied in cloud computing due to their relatively low efficiency and large storage requirements. In this work we present a statistical technique to detect and filter DDoS attacks. The proposed model requires little storage and is capable of fast detection. The results show that our model can mitigate most TCP attacks. Detection accuracy and time consumption were the metrics used to evaluate performance; the simulation results show that our algorithms achieve high detection accuracy (97%) with few false alarms.
pptx file
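
As a simplified illustration of statistics-based filtering in the spirit of the model above, the sketch below raises an alarm when a traffic window's packet rate exceeds a learned baseline by more than K standard deviations. The paper's actual statistic is not spelled out here, so this stand-in is an assumption.

    # Toy statistical DDoS filter: alarm when a window's packet rate exceeds
    # baseline mean + K * sigma. Counts and K are illustrative assumptions.
    import statistics

    baseline = [102, 98, 110, 95, 105, 99, 101]  # packets/sec, normal windows
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    K = 3.0

    def is_attack(window_rate):
        return window_rate > mu + K * sigma

    for rate in (104, 380):
        print(rate, "ATTACK" if is_attack(rate) else "ok")
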
Modified MapReduce Framework for Enhancing Performance of Graph Based Algorithms by Fast Convergence in Distributed Environment
Hitesh Singhal (National Institute of Technology Karnataka, India); Ram Guddeti (National Institute of Technology Karnataka, Surathkal, India)
The amount of data produced in the current world is huge and, more importantly, increasing exponentially. Traditional data storage and processing techniques are ineffective in handling such data [10]. Many real-life applications require iterative computations; in particular, most machine learning and data mining algorithms iterate over large datasets such as web link structures and social network graphs. MapReduce is a software framework for easily writing applications that process large amounts of data (multi-terabyte) in parallel on large clusters (thousands of nodes) of commodity hardware. However, because of MapReduce's batch-oriented processing, its benefits are hard to realize for iterative computations. Our proposed work focuses on optimizing three factors that improve the performance of iterative algorithms in a MapReduce environment: the execution of tasks, the unnecessary creation of new tasks in each iteration, and excessive shuffling of data in each iteration. Our preliminary experiments show promising results over the basic MapReduce framework, and a comparative study with existing MapReduce-based solutions such as HaLoop shows better performance with respect to algorithm run time and the amount of data traffic over the Hadoop cluster.
pdf file
A Dynamic Workload Management Model for Saving Electricity Costs in Cloud Data Centers
Narander Kumar (Babasaheb Bhimrao Ambedkar University, Lucknow, India); Shalini Agarwal (Sri Ramswaroop Memorial University, India)
Geographically distributed data centers of a cloud provider incur heavy electricity costs due to high prices as well as inefficient workload management among the data centers. To bring down operational costs, dynamic power management, in which a server has different power modes, is used as the basic approach; however, the air flow pattern of the cooling systems is not taken into consideration in existing works. This paper finds that the power consumption, and hence the electricity costs, of the active servers in a data center is influenced by server utilization as well as the output temperature of the cooling unit, and proposes two algorithms, the Electricity Cost Saving Workload Management Algorithm (ECSWMA) and the Electricity Price Aware Workload Management Algorithm (EPAWMA), that jointly manage the workload of all the data centers run by a cloud provider in a cost-effective manner. Experiments show that the proposed algorithms lower the accumulated electricity costs of the active servers to a large extent.
pptx file
Software-Defined Cloud Computing: Architectural Elements and Open Challenges
Rajkumar Buyya and Rodrigo N. Calheiros (The University of Melbourne, Australia); Young Yoon (Samsung Electronics, Korea)
The variety of existing cloud services creates a challenge for service providers to enforce reasonable Service Level Agreements (SLAs) stating the Quality of Service (QoS) and the penalties in case QoS is not achieved. To avoid such penalties while the infrastructure operates with minimum energy and resource wastage, constant monitoring and adaptation of the infrastructure is needed. We refer to Software-Defined Cloud Computing, or simply Software-Defined Clouds (SDC), as an approach for automating the process of optimal cloud configuration by extending the virtualization concept to all resources in a data center. An SDC enables easy reconfiguration and adaptation of physical resources in a cloud infrastructure, to better accommodate the demand on QoS, through software that can describe and manage the various aspects comprising the cloud environment. In this paper, we present an architecture for SDCs on data centers with emphasis on mobile cloud applications. We present an evaluation showcasing the potential of SDC in two use cases (QoS-aware bandwidth allocation and bandwidth-aware, energy-efficient VM placement) and discuss the research challenges and opportunities in this emerging area.

S22-B: Cloud, Cluster, Grid and P2P Computing-IIgo to top

Room: 110 Block E First Floor
Chair: Porika Sammulal (JNTUH University, India)
A Broker Based Approach for Cloud Provider Selection
Raghavendra Achar (NITK, India); Santhi Thilagam P. (National Institute of Technology Karnataka & Surathkal, India)
The rapid growth of internet technology has led many IaaS providers to arise across the globe to meet the needs of small IT companies, and many IT companies have started using the resources of IaaS providers due to their elastic, pay-as-you-go nature. The increasing number of cloud providers makes it difficult for a requester to select a suitable provider based on requirements. In this paper we present a broker based architecture for selecting a suitable cloud provider from multiple providers: the broker measures the quality of each cloud provider and prioritizes them based on the needs of the requester. Experiments conducted using the CloudSim simulator show that the proposed architecture selects a suitable cloud provider.
ppt file
Access Control Aware Search on the Cloud Computing
Abdellah Kaci and Thouraya Bouabana-Tebibel (Ecole Nationale Supérieure d'Informatique, Algeria); Zakia Challal (USTHB, Algeria)
The current trend towards outsourcing data to the cloud requires rigorous techniques to deal with data confidentiality. Encryption is one of the most secure techniques in use; however, it remains weak against some indiscretions and is incomplete as a confidentiality measure. New flaws appear, leading to disclosure of confidential information despite secure and rigorous data encryption, particularly when searching over data even though the data are encrypted. In this article, we propose to enhance the level of confidentiality that may already be guaranteed for outsourced data. We are particularly interested in access control on the result of searches over encrypted data; the security property behind this is known as the ACAS (Access Control Aware Search) principle. We present an approach that integrates the access control mechanism into data encryption in order to comply with ACAS. The techniques used are based on Searchable Encryption and Attribute Based Encryption methods. The proposed model underwent experiments to evaluate its performance as a function of data size and the complexity of the access control policy.
pptx file
Two Layered Protection for Sensitive Data in Cloud
Kamlesh Kumar Hingwe (National Institute of Technology, Tiruchirappalli, India); S. Mary Saira Bhanu (National Institute of Technology-Tiruchirappalli, India)
Security and privacy are the biggest obstacles for Database as a Service (DBaaS) in cloud computing. In DBaaS, cloud service providers offer services for storing customers' data. As the data are managed by an untrusted server, the service is not fully trustworthy. Data at a third-party data center can be secured by encrypting the database, but querying an encrypted database is not easy: results can be obtained either by decrypting the database for every query, or by encrypting the query itself and executing the encrypted query over the encrypted database. Another problem with most database encryption algorithms is that they do not support range queries. The proposed framework performs database encryption and query encryption, and also supports range queries over encrypted databases; it focuses on securing the database and storing sensitive information without any leaks. Double-layered encryption is used for sensitive data and single-layer encryption for non-sensitive data. Order Preserving Encryption (OPE) is used for the single layer: OPE maintains order in the encrypted database, so range queries can be performed over it using encrypted queries. Since OPE has the drawback of revealing information, a double-layered encryption using Format Preserving Encryption (FPE) followed by OPE is proposed for sensitive data. A symmetric key is used for both OPE and FPE, but the key is divided into two parts for the double encryption.
zip file
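
To see why order preservation enables range queries over ciphertext, consider the toy (and deliberately insecure) monotone mapping below: since E(x) < E(y) whenever x < y, a server can evaluate range predicates on ciphertexts alone. The paper's OPE and FPE layers are real cryptographic schemes; this sketch is only a structural illustration.

    # Toy order-preserving "encryption": a keyed, strictly increasing mapping.
    # NOT secure -- it only shows why range queries survive encryption.
    import random

    def make_ope(domain_size, key):
        rng = random.Random(key)
        table, acc = [], 0
        for _ in range(domain_size):
            acc += rng.randint(1, 100)  # positive gaps keep the map monotone
            table.append(acc)
        return lambda x: table[x]

    E = make_ope(domain_size=1000, key=42)
    ciphertexts = sorted(E(v) for v in (120, 450, 300, 777))

    lo, hi = E(200), E(500)                 # encrypt the query bounds only
    hits = [c for c in ciphertexts if lo <= c <= hi]
    print(len(hits))                        # -> 2 (plaintexts 300 and 450)
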
Cloud Broker: Working in Federated Structures A Hybrid Cloud Live Performance Analysis
Prashant Khanna and Sonal Jain (JK Lakshmipat University, India); BV Babu (Galgotias University & Greater Noida, India)
This research analyzes the functioning of an Autonomic Cloud Broker (ACB) in a real-life scenario utilizing a federated cloud infrastructure. The paper analyzes two important use cases that arise when a cloud broker provisions services from multiple cloud providers to simultaneously demanding cloud users. Varying real-time load conditions are generated in a live use case on a private cloud, with distributed cloud brokers working in a federated manner and cloud bursting handled through a common interface visible to the Amazon Web Services (AWS) cloud infrastructure. The analysis engine is built using NewRelic, and results on the performance of the broker are derived in real time. The paper highlights real-world issues plaguing the cloud brokerage framework and indicates ways to mitigate them using a federated cloud infrastructure combining a public and a private cloud. The research asserts that it is possible to create autonomic cloud brokers, albeit in a tightly integrated and finely tuned cloud environment.
pptx file
Extended Level Real Time Scheduling Framework Using a Generalized Non-Real Time Platform
Purnima Singhal, Amit Kumar and Upendra Ghintala (National Institute of Technology Agartala, India); Kunal Chakma (National Institute of Technology - Agartala, India)
The feasible execution of real-time tasks on a platform depends largely on the scheduling policies used. Different task domains also have different resource requirements, and allocating resources optimally according to task requirements is a core challenge for the real-time systems community. This paper discusses an approach that leverages a generalized non-real-time system to develop an integrated schedule for real-time tasks by extending control over resource allocation to the user level. A major advantage of this framework is that it does not involve any modification of the underlying kernel code or the use of patches. In our framework, two levels of scheduling are maintained: at the base level, the task schedule is decided by the kernel internally, while at the extended level the user determines a separate schedule; both schedules are maintained inside the kernel. Each schedule is preceded by a schedulability test that governs the feasibility of executing the incoming task set.
pptx file

S23: Hybrid Intelligent Models and Applicationsgo to top

Room: 007-A Block E Ground Floor
Chair: Manjunath Aradhya (Sri Jayachamarajendra College of Engineering, India)
Optimizing Architectural Properties of Artificial Neural Network Using Proposed Artificial Bee Colony Algorithm
Hiteshkumar Nimbark (JJTU, India); Rinkal Sukahdia (GTU, India)
The design of an Artificial Neural Network (ANN) is a challenging task, as it depends on human experience. Techniques such as the back-propagation algorithm and nature-inspired meta-heuristics are among the most widely used for optimizing feed-forward neural network training. The Artificial Bee Colony (ABC) algorithm is a nature-inspired meta-heuristic based on the behavior of intelligent honey bees searching for food sources for their colony. To improve the performance of the ABC algorithm, a novel ABC approach is introduced using an opposition-chaos initialization method with a well-balanced characteristic between exploitation and exploration. Finally, the proposed ABC is used to optimize an ANN's architectural properties, such as the synaptic weights and the transfer function of each neuron, maximizing accuracy and minimizing error. Analysis performed over standard classification datasets demonstrates the efficiency of the proposed method.
pptx file
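
A compact sketch of the basic ABC search loop that the proposal above builds on, here minimizing a toy sphere function. The opposition-chaos initialization and the ANN-specific encoding are the paper's contributions and are not reproduced; all parameters below are illustrative.

    # Basic Artificial Bee Colony minimizing sum(x^2). The employed and
    # onlooker phases are merged for brevity; parameters are assumptions.
    import random

    def f(x):  # toy objective (sphere function)
        return sum(v * v for v in x)

    DIM, FOODS, LIMIT, ITERS = 5, 10, 20, 200
    lo, hi = -5.0, 5.0
    foods = [[random.uniform(lo, hi) for _ in range(DIM)] for _ in range(FOODS)]
    trials = [0] * FOODS

    def neighbour(i):
        k, j = random.randrange(FOODS), random.randrange(DIM)
        cand = foods[i][:]
        cand[j] += random.uniform(-1, 1) * (foods[i][j] - foods[k][j])
        cand[j] = max(lo, min(hi, cand[j]))
        return cand

    for _ in range(ITERS):
        for i in range(FOODS):
            cand = neighbour(i)
            if f(cand) < f(foods[i]):
                foods[i], trials[i] = cand, 0
            else:
                trials[i] += 1
            if trials[i] > LIMIT:  # scout phase: abandon stale food sources
                foods[i] = [random.uniform(lo, hi) for _ in range(DIM)]
                trials[i] = 0

    print(min(map(f, foods)))
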
Hybrid Genetic Particle Swarm Tuned Sliding Mode Controller for Chaotic Finance System
Indhu Nair and Anasraj Robert (Government Engineering College Thrissur, India)
This work designs an optimal sliding mode controller for the chaotic finance system. In this controller design, backstepping and sliding mode control techniques are combined to make the chaotic finance system globally, asymptotically stable at the equilibrium point. Furthermore, the sliding surface parameters are optimized using a hybrid Genetic Particle Swarm Optimization (GPSO) to improve the reaching phase characteristics of the sliding mode controller. Numerical simulation results demonstrate the effectiveness of the proposed scheme in tuning the parameters of the sliding mode controller. A comparative study with other techniques shows the efficacy of the hybrid GPSO-tuned sliding mode controller in improving the reaching phase characteristics and the settling time required for the chaotic finance system to reach a stable equilibrium point.
pptx file
Improving Change Proneness Prediction in UML Based Design Models Using ABC Algorithm
Deepa Godara and Rakesh Kumar Singh (Uttarakhand Technical University, India); Rakesh Singh (Kumaon Engineering College, India)
In software engineering, the incessant changes made to software every day have assumed such proportions that it is essential to take immediate steps to manage them. Several methods were introduced in the past to tackle this issue by predicting changes in software, but they have not achieved highly satisfactory prediction accuracy. To address this problem with an eye on good prediction accuracy, a new method is introduced in this paper. The two phases of our proposed work are: (1) feature identification and (2) classification of classes for change-proneness prediction. In the feature identification phase, the features obtained from an input application are time, trace events, behavioral dependency, frequency and popularity, which help predict change proneness in our work. These five features are found in three ways: features obtained directly from the application, such as time and trace events; features obtained from UML diagrams, such as behavioral dependency; and features obtained from optimal frequent itemset mining and ABC, such as frequency and popularity. In the classification phase, these features are given as input to the ID3 decision tree algorithm, which classifies classes according to whether or not they are change prone. If a class is classified as change prone, the value of its change proneness is also computed.
pptx file
A New Light Weight Encryption Approach to Secure the Contents of Image
Jalesh Kumar and S Nirmala (J. N. N. College of Engineering, India)
In this paper, a light weight encryption approach is proposed to secure the contents of a digital image at different levels. The proposed approach is based on the moves of the knight piece on a chess board and a genetic algorithm. The procedure comprises four stages. In the first stage, the initial positions for the knight moves are chosen. The number of moves required for each knight is chosen in the second stage. Two knight move points are selected randomly in the third stage. Finally, a crossover operation is applied on the selected moves to degrade the visual quality of the image. The performance of the proposed method is measured in terms of correlation coefficient, entropy and Peak Signal to Noise Ratio. The analysis carried out reveals that the proposed algorithm successfully secures the information in an image at different levels.
ppt file
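A minimal sketch of the knight-move idea from the abstract: random knight walks over the pixel grid select the positions whose values are then swapped by a crossover step. The walk length, the swap-based crossover, and all names are our assumptions for illustration.

```python
import random

KNIGHT_MOVES = [(1, 2), (2, 1), (2, -1), (1, -2),
                (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def knight_path(start, n_moves, size, rng):
    """Random walk of a chess knight over a size x size pixel grid;
    the visited pixels mark positions to be scrambled."""
    r, c = start
    path = [(r, c)]
    for _ in range(n_moves):
        moves = [(r + dr, c + dc) for dr, dc in KNIGHT_MOVES
                 if 0 <= r + dr < size and 0 <= c + dc < size]
        r, c = rng.choice(moves)
        path.append((r, c))
    return path

rng = random.Random(42)
p1 = knight_path((0, 0), 10, 8, rng)       # two randomly chosen knight walks
p2 = knight_path((7, 7), 10, 8, rng)
# Crossover stage: pair up the two paths and swap the pixel values at the
# paired positions to degrade the visual quality of the image
print(list(zip(p1, p2))[:4])
```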
Sensitivity Analysis of Wind Power Generating System
Shahida Khatoon, Dr. Ibraheem and Reshma Ehtesham (Jamia Millia Islamia, India)
Assessment of parameter variation is necessary for high performance of system control. The variation in parameter values makes the model inaccurate, thus the controller performance. The parameters assessment of wind turbine generators becomes utmost important due to the increasing penetration of wind turbine generators into the power system. The parameter values and assumptions of any model are subject to change and error. Sensitivity analysis investigates these potential changes and errors and their impacts on conclusions to be drawn from the model. High performance operation of permanent magnet machines relies heavily upon machine characteristics or parameters. Thus the performance of the control is heavily tied to this determination of machine parameters, and any errors therein can degrade performance.
pptx file
Implementation of Fractional Fuzzy PID Controllers for Control of Fractional-Order Systems
Pragya Varshney (Netaji Subhas Institute of Technology, India); Sujit Kumar Gupta (NIT Kurukshetra, India)
In this paper, a novel fractional fuzzy proportional-integral-derivative (FFPID) controller is proposed. The proposed controller combines fractional-order PID (FOPID) control with fuzzy logic control (FLC) to obtain a robust and effective controller. All five parameters are first tuned using a Genetic Algorithm; fine tuning of the integer-order parameters is then done using the fuzzy logic controller. Combining these two techniques results in a new controller called the fractional fuzzy PID controller. The effectiveness and robustness of the new methodology are illustrated by applying the proposed controller to two fractional-order systems: a fractional-order system and a fractional-order plant with dead time. It is observed that the proposed controller provides better control than PID and FOPID controllers.
ppt file
Cauchy Criterion for the Henstock-Kurzweil Integrability of Fuzzy Number-Valued Functions
Tutut Herawan (Universiti Malaysia Pahang & Universitas Ahmad Dahlan, Malaysia); Haruna Chiroma (Federal College of Education (Technical), Gombe, Malaysia); Zailani Abdullah (Universiti Malaysia Kelantan, Malaysia); Eka Novita Sari (AMCS Research Center, Indonesia); Rozaida Ghazali (Universiti Tun Hussein Onn Malaysia, Malaysia); Nazri Mohd Nawi (Universiti Tun Hussein Onn Malaysia & Faculty of Computer Science and Information Technology, Malaysia)
In this paper, the Cauchy criterion, giving necessary and sufficient conditions for the Henstock integrability of fuzzy number-valued functions defined on a compact interval in the real line, is presented. The results can be used to characterize the integrability of a fuzzy number-valued function without calculating the value of the integral. In addition, the notion, the elementary properties and the relation of the R and R* integrals of fuzzy number-valued functions defined on a compact interval in the real line are also presented.
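For reference, the classical Cauchy criterion on a compact interval [a, b], stated here for real-valued functions (the paper extends it to fuzzy number-valued functions), reads:

```latex
% f is Henstock-Kurzweil integrable on [a,b] iff for every eps > 0 there is
% a gauge delta(.) > 0 on [a,b] such that any two delta-fine tagged
% partitions P_1, P_2 have Riemann sums within eps of each other:
\forall \varepsilon > 0 \;\exists\, \delta(\cdot) > 0 :
\quad \left| S(f, P_1) - S(f, P_2) \right| < \varepsilon
\quad \text{for all } \delta\text{-fine tagged partitions } P_1, P_2,
\qquad \text{where } S(f, P) = \sum_i f(\xi_i)\,(x_i - x_{i-1}).
```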
Quick Object Extraction in Fuzzy Framework
Manish Kashyap (IIITM Gwalior, India); Mahua Bhattacharya (ABV Indian Institute of Information Technology & Management, India)
This paper presents a computationally efficient approach to object extraction in fuzzy settings. The proposed approach is a potential alternative (in terms of computational complexity) to two of the most popular and reliable image segmentation algorithms, namely Udupa and Samarasekera's fuzzy connectedness algorithm and the level set algorithm. The motivation behind the current work is reducing the computational time of these algorithms: although they have long been available and have proven their versatility, their computational complexity makes them unsuitable for real-time applications. The computational complexity of the proposed method amounts to computing a similarity/dissimilarity measure of the image (the gradient is used here for simplicity) and then thresholding it a finite number of times, which is far lower than that of the two state-of-the-art methods above. A heuristic justification of the equivalence among these three algorithms is then presented; proving rigorous equivalence among them is left as future work.
ppt file

S24: Mobile Computing and Wireless Communications-I

Room: 007-B Block E Ground Floor
Chair: Axel Sikora (University of Applied Sciences Offenburg, Germany)
Performance Evaluation of Image Transmission over MC-CDMA System using two Interleaving Schemes
Shikha Jindal and Diwakar Agarwal (GLA University, India)
At present there is a rapidly increasing demand for fast and reliable transmission of multimedia data through wireless channels. When multimedia data, especially an image, is transferred through a wireless channel, the image is highly degraded by channel impairments such as fading, interference and bursty errors. Multi-Carrier Code Division Multiple Access (MC-CDMA) is considered one of the most promising techniques for efficient wireless data transmission. This paper presents a performance comparison of image transmission over an MC-CDMA system with two interleaving techniques: helical interleaving and chaotic interleaving. At the receiver, Linear Minimum Mean Square Error (LMMSE) equalization is employed. System performance is measured through Peak Signal-to-Noise Ratio (PSNR) and Root Mean Square Error (RMSE) values. The results show that, for image transmission over wireless channels, the MC-CDMA system with chaotic interleaving and LMMSE equalization performs better than the system with helical interleaving.
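Chaotic interleavers are commonly built by rank-ordering a chaotic sequence; the sketch below uses a logistic map for this purpose, which may differ from the authors' exact construction.

```python
import numpy as np

def chaotic_interleaver(n, x0=0.3, r=3.99):
    """Permutation obtained by rank-ordering a logistic-map sequence;
    a common way to realize a chaotic interleaver (illustrative sketch)."""
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)                # chaotic logistic-map iteration
        seq[i] = x
    return np.argsort(seq)                   # interleaving permutation

bits = np.random.randint(0, 2, 16)
perm = chaotic_interleaver(bits.size)
interleaved = bits[perm]                     # transmitter side
deinterleaved = np.empty_like(bits)
deinterleaved[perm] = interleaved            # receiver inverts the permutation
assert (deinterleaved == bits).all()
```

Because the permutation is fully determined by the map parameters (x0, r), the receiver can regenerate and invert it without side information.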
Designing an Extensible Communication Platform for Rural Area
Abhishek Thakur (BITS Pilani, Hyderabad Campus, India); Chittaranjan Hota (Birla Institute of Technology & Science, Pilani, Hyderabad Campus, India)
Rural digital connectivity in developing nations has failed to sustain itself once subsidies are removed, and the demography of users does not demand Internet connectivity for the majority of their communication needs. This work proposes an extensible platform for rural communication utilizing delay tolerant networks. The platform allows quick development of localized applications for day-to-day communication needs. Using a data-driven paradigm, the solution simplifies and abstracts the client and server implementations of the applications; for more intensive applications, the platform allows modular expansion of both the client and server pieces. To demonstrate extensibility and operational efficiency, the authors present the design of multiple end-user scenarios, targeting a multimedia application for remote farm monitoring, video rental, and a scenario for conducting online evaluations of school tests. The key contributions of the proposal are the operational sustainability of the platform, ease of application development for simple communication needs, and the ability to handle complicated scenarios such as processing high-definition multimedia content over disconnected networks.
pptx file
A Novel Single Cavity Non-Degenerate Dual-Mode Dual-Band Resonator
Snehalatha L (Indian Institute of Technology, India); Nagendra Prasad Pathak and Sanjeev Manhas (Indian Institute of Technology, Roorkee, India)
This paper introduces a novel design of a dual-band resonator based on non-degenerate dual resonant modes of a simple rectangular cavity. The high-Q cavity resonator is fabricated using a hybrid configuration consisting of a PTFE substrate with a dielectric constant of 3.2 and a cavity engraved in an aluminum sheet. The targeted first and second resonant frequencies are 10 GHz and 13 GHz respectively. The measured results from the fabricated resonator validate the concept. The key advantage of using a hybrid substrate structure for realizing the cavity is that active devices can be mounted directly on the top substrate, enabling seamless integration of the high-Q resonator with the rest of the microwave integrated circuit (MIC).
pptx file
Design and Development of Bandstop Filter using Spiral Stubs
Rishikesh Ranjan and Anjini Kumar Tiwary (Birla Institute of Technology, Mesra, India); Nisha Gupta (Birla Institute of Technology, Mesra, India)
A new class of bandstop filter (BSF) consisting of circular/rectangular bent microstrip transmission lines with spiral stubs is presented in this paper. The filter achieves bandstop characteristics at 1.44 GHz. Prototype models are fabricated and tested, and both simulation and experimental results are presented and compared; a good agreement between the two is observed.
ppt file
Performance Analysis of Chebyshev UWB Bandpass Filter Using Defected Ground Structure
Yashika Saini and Mithilesh Kumar (Rajasthan Technical University, Kota, India)
This paper deals with the design and analysis of a Chebyshev ultra-wideband (UWB) bandpass filter (BPF) using a defected ground structure (DGS). The BPF is constructed from a step-impedance lowpass filter (LPF) and an optimum distributed highpass filter (HPF). Four rectangular DGS slots are etched on the ground plane. The lowpass and highpass filters are designed from their equivalent L-C circuits on an electromagnetic (EM) circuit simulator. The lumped element values are calculated and performance analysis curves are plotted for these filters, which are compared to their practical response obtained in CST. The analysis on the EM circuit simulator is done here for UWB, but the same can be done for any frequency band in the microwave range. The compact UWB bandpass filter is designed on an FR-4 substrate of thickness h = 1 mm and dielectric constant 4.05, and occupies a die area of 22.4 mm × 12 mm. The filter operates over the whole UWB passband of 3.1 GHz to 10.6 GHz. The design is simulated on CST Microwave Studio and shows the desired response.
pptx file
Visible Light Communication: A Smart Way Towards Wireless Communication
Manoj Bhalerao (Sathyabama University, Chennai & PVG's College of Engineering, India); Santosh Sonavane (Director, India)
This paper introduces the concept of visible light communication (VLC). In VLC, the visible-light portion of the electromagnetic (EM) spectrum is used to transmit information, analogous to conventional forms of wireless communication such as Bluetooth (BT) and Wireless Fidelity (Wi-Fi), which use radio frequency (RF) signals to transmit information over the wireless medium. An intensity-modulated light source such as an LED is used as the transmitter, and a photosensitive detector such as a photodiode (PD), which demodulates the light signal back into electrical form, is used as the receiver. The intensity of the light source is modulated in such a way that the modulation is undetectable to the eye. VLC technology is urgently needed to overcome the problems faced by RF communication. This paper provides a wide overview of the need for VLC, its applications, and the design challenges it faces.
ppt file
Fixed Point Digital Predistortion System Based on Indirect Learning Architecture
Namrata Dwivedi (IIIT Delhi, India); Vivek A Bohara (Indraprastha Institute of Information Technology, Delhi (IIIT-Delhi), India); Mazen Abi Hussein (ESIEE Paris, France); Olivier Venard (Université Paris-Est, ESIEE Paris, France)
In this paper, we analyze the effects of fixed point (FXP) implementation on a digital predistortion (DPD) system based on the indirect learning architecture (ILA). Unlike the conventional floating point implementation, in FXP ILA the digital predistorter (PD) and the coefficient estimation algorithm are implemented in FXP arithmetic. We quantify the impact of this FXP implementation on the overall performance of the DPD system so that good linearity performance can be achieved with the minimum number of bits for data, coefficients and arithmetic operations. The performance of the proposed FXP DPD system is evaluated in terms of adjacent channel power ratio (ACPR) and error vector magnitude (EVM) at the output of the power amplifier (PA) when a Long Term Evolution-Advanced (LTE-Advanced) signal is applied at the input. The reference PA model used for simulation is the Wiener model, and the PD is modeled as a memory polynomial.
pdf file
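The memory polynomial named in the abstract has a standard form; the floating-point reference below sketches it (the paper's contribution is the fixed-point quantization of data, coefficients and arithmetic, which is not reproduced here, and the toy signal and coefficients are our own).

```python
import numpy as np

def memory_polynomial(x, coeffs, K, Q):
    """y[n] = sum_{k=1..K} sum_{q=0..Q-1} a[k,q] * x[n-q] * |x[n-q]|^(k-1),
    the standard memory polynomial used for predistorters and PA models."""
    y = np.zeros_like(x, dtype=complex)
    for k in range(1, K + 1):
        for q in range(Q):
            xd = np.roll(x, q)
            xd[:q] = 0                       # zero-fill the first q delayed samples
            y += coeffs[k - 1, q] * xd * np.abs(xd) ** (k - 1)
    return y

x = np.exp(1j * np.linspace(0, 2 * np.pi, 64))      # toy complex baseband signal
a = np.zeros((3, 2), dtype=complex)
a[0, 0] = 1.0                                       # linear gain
a[2, 0] = -0.05                                     # mild 3rd-order nonlinearity
print(memory_polynomial(x, a, K=3, Q=2)[:4])
```

In the indirect learning architecture, the same model is fitted by least squares to the PA output (as postdistorter input) and the resulting coefficients are copied into the predistorter.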
Integrating Pervasive Computing, InfoStations and Swarm Intelligence to design Intelligent Context-aware Parking-space location mechanism
Abhishek Gupta (JECRC, India); Venimadhav Sharma (Rajasthan Technical University, India); Naresh Ruparam (JECRC, India); Surbhi Jain (JECRC, Jaipur, India); Abdulmalik Alhammad (De Montfort University, United Kingdom (Great Britain)); Md Afsar Kamal Ripon (DMU, United Kingdom (Great Britain))
With state-of-the-art developments in wireless communication and Internet technology and advances in the miniaturization of electronics, the recent trend in research has been to progressively embed computing and communication in various everyday objects. The ubiquitous availability of the Internet imparts computing and communication power to objects by adding small silicon chips to them, and this paradigm of pervasive computing is transforming conventionally designed vehicles into smart vehicles. In this paper we present a novel swarm-intelligence-based vehicle parking system that exploits context awareness and the wireless communication capabilities of smart vehicles. The parking zones are given web-based features using pervasive computing, wireless-enabled infrastructure and smart chips. The parking details become available on the Internet and can be communicated to a vehicle searching for a parking space. In the case of multiple available parking spaces, the shortest route to the nearest parking space is calculated using a particle swarm optimization algorithm and communicated to the vehicle driver. Mounting micro sensors and wireless sensors on these vehicles enables them to gather knowledge about their environment, perceive their surroundings and reason based on context. Partial automation of the driving task by intelligent location of vacant parking spaces would enhance drivers' knowledge and capabilities, leading to more efficient, safer and environment-friendly driving conditions and savings in fuel expenditure.
Dynamic Survivable Traffic Grooming with Effective Load Balancing in WDM All-Optical Mesh Networks
Abhishek Bandyopadhyay and Mohtasham Raghib (Asansol Engineering College, India); Uma Bhattacharya (Bengal Engineering & Science University, India); Monish Chatterjee (Asansol Engineering College & West Bengal University of Technology, India)
Traffic grooming in WDM optical networks is a scheme for aggregating several low-speed traffic streams from users onto a high-speed lightpath. In such networks, an optical fiber carries a large number of lightpaths and each individual lightpath carries the traffic of a large number of connection requests, so the failure of a single fiber link, even for a brief period, is a serious event; survivability of user connections is therefore extremely important. Since the problem of survivable traffic grooming in WDM mesh networks is NP-complete, we propose a polynomial-time heuristic, HDSTG (Heuristic Dynamic Survivable Traffic Grooming), that can be effectively used for dynamic traffic grooming in WDM all-optical mesh networks. Our heuristic is designed to provide guaranteed survivability of connection requests for any single link failure. We also propose two strategies for effective load balancing to improve dynamic survivable traffic grooming, namely TGMHL (Traffic Grooming with Minimized Hops and Load) and TGML (Traffic Grooming with Minimized Load). Performance comparisons demonstrate that the proposed strategies are better for network cost reduction as well as throughput enhancement.
pptx file
TCA-PCA Based Hybrid Code Assignment for Enhancement and Easy Deployability of CDMA Protocol
Shweta Malwe (Indian School of Mines, Dhanbad, India); G P Biswas (ISM Dhanbad, India)
Code Division Multiple Access (CDMA) enables simultaneous transmissions among mobile stations using orthogonal spreading codes, where spatial reuse of the codes is exploited to reduce the number of codes required. Code assignment should improve the communication performance of the network while permitting no code assignment interference. Three code assignment techniques exist, namely TCA, RCA and PCA, and several efficient code assignment methods have been published in the literature. In this paper, we reinvestigate TCA-based code assignment for easy deployment, absence of secondary interference, and easy implementability of RCA (being equivalent to TCA). Since TCA requires approximately ∆ times the roughly ∆ codes required by PCA, where ∆ is the maximum degree of the CDMA network graph, we incorporate PCA with TCA in a systematic manner to complete interference-free code assignment while reducing the code requirement to only 2∆+1 codes. The proposed technique may be called a TCA-PCA hybrid code assignment scheme, presented for the first time in this paper. Details of the scheme and its MATLAB simulation are provided. The results show that the proposed technique outperforms the existing TCA-based techniques, and that it not only eases RCA implementation but also significantly simplifies the PCA scheme.
pptx file

S25: Pattern Recognition, Signal and Image Processing-I

Room: 016 Block E Ground Floor
Chair: Mandeep Singh (Thapar University, India)
A Steganographic Technique Based on VLSB Method Using RC4 Stream Cipher
Osmita Bardhan (Advanced Computing and Microelectronics Unit & Indian Statistical Institute, India); Ansuman Bhattacharya (National Institute of Technology Meghalaya, India); Bhabani P Sinha (Indian Statistical Institute, India)
This paper presents a steganographic technique for embedding a secret file (of any type, e.g. audio or video) into a color image. We use the RC4 technique to generate pseudo-random positions for embedding the secret data. After determining the positions, embedding is done using the Variable Least Significant Bit (VLSB) method, in which the number of bits embedded into a pixel depends on the pixel value itself. An N × N cover image is divided into q blocks of size b × b, and the RC4 method is applied to each block. The 2q key values used for random position generation over the q blocks constitute the secret key. As the RC4 algorithm randomizes the embedding positions, the method ensures a higher level of security with good embedding capacity. Moreover, because the number of bits embedded into a pixel depends purely on the pixel itself, the image quality is less distorted.
pdf file
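The RC4 keystream generation used to derive pseudo-random embedding positions is standard; the block size and key values below are hypothetical.

```python
def rc4_keystream(key, n):
    """Standard RC4: key-scheduling algorithm (KSA) followed by the
    pseudo-random generation algorithm (PRGA), yielding n keystream bytes."""
    S = list(range(256))
    j = 0
    for i in range(256):                     # KSA
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = [], 0, 0
    for _ in range(n):                       # PRGA
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return out

b = 8                                        # hypothetical block side length
ks = rc4_keystream([0x1F, 0x2E, 0x3D], b * b)
# Rank-order the keystream to obtain a pseudo-random embedding order
positions = sorted(range(b * b), key=lambda p: ks[p])
print(positions[:10])
```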
Digital Image Inpainting using Speeded Up Robust Feature
Trupti Chavan and Abhijeet Vijay Nandedkar (Shri Guru Gobind Singhji Institute of Engineering and Technology, India)
This paper focuses on the inpainting of damaged digital images using a relevant image and speeded-up robust features (SURF). A concept is presented wherein the missing information is restored using a relevant image, which may be a snapshot of the same location from a different viewpoint or under a geometrical transformation. The proposed algorithm is divided into three main stages. Initially, key feature points of the damaged and relevant images are detected. In the second stage, the relation between the damaged and relevant images is found in terms of affine transforms (i.e. scale, rotation and translation). Finally, the inverse transformation is applied to reconstruct the damaged area. PSNR is used to compare the proposed method with the existing exemplar-based method [4] and Hays' scene completion method [9]. The experimental results demonstrate that the proposed inpainting method is efficient in terms of quality and speed.
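A rough sketch of the three-stage pipeline under stated assumptions: ORB features stand in for SURF (which requires OpenCV's non-free module), a partial affine transform covers the scale/rotation/translation relation, and all inputs are hypothetical grayscale arrays.

```python
import cv2
import numpy as np

def inpaint_from_relevant(damaged, relevant, mask):
    """Stage 1: detect keypoints in both images. Stage 2: estimate the
    affine relation between the views from matched keypoints. Stage 3:
    warp the relevant image into the damaged frame and copy it into the
    masked (damaged) region. Illustrative sketch, not the authors' code."""
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(damaged, None)
    k2, d2 = orb.detectAndCompute(relevant, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k2[m.trainIdx].pt for m in matches])
    dst = np.float32([k1[m.queryIdx].pt for m in matches])
    M, _ = cv2.estimateAffinePartial2D(src, dst)  # scale, rotation, translation
    warped = cv2.warpAffine(relevant, M, damaged.shape[1::-1])
    out = damaged.copy()
    out[mask > 0] = warped[mask > 0]              # restore the damaged area
    return out
```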
Geometric Distortion Correction In Images Using Proposed Spy Pixel And Size
Navnath S. Narawade (University of Pune, India); Rajendra Khanphade (University of Pune, India)
The performance of reversible watermarking schemes is questionable due to their vulnerability to geometric attacks: few methods are resistant to geometric distortions, and most are vulnerable. Here we present a geometric distortion correction method, henceforth called the spy pixel and size method, which improves robustness to geometric distortions. Geometric distortion correction restores an affected image to an approximation of the original, so extraction of the watermark and image from the restored watermarked image becomes easy; in some cases the extraction even becomes completely reversible. Experimental results show the effectiveness of the proposed scheme. We have combined this method with existing popular watermarking and reversible methods, which has given the best results.
A Comparative Analysis of Remote Sensing Image Classification Techniques
Pushpendra Sisodia, Vivekanand Tiwari and Anil Kumar Dahiya (Manipal University Jaipur, India)
In this paper, we compare the accuracy of four supervised classification techniques, namely Mahalanobis, Maximum Likelihood Classification (MLC), minimum distance and parallelepiped classification, on remote sensing Landsat images from different time periods and sensors. We use Landsat Multispectral Scanner (MSS), Thematic Mapper (TM) and Enhanced Thematic Mapper+ (ETM+) images of 1972, 1998 and 2013 respectively, covering Jaipur district, Rajasthan, India. Accuracy is calculated using producer accuracy, user accuracy, overall accuracy and Kappa statistics. We find that remote sensing images from different time periods and sensors produce different results when classified with supervised algorithms: minimum distance classification produced better accuracy than the other three classifiers on the Landsat MSS image, while Maximum Likelihood Classification produced better accuracy on the Landsat TM and ETM+ images.
pptx file
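The accuracy measures used in the comparison can be computed from a confusion matrix as follows; the example matrix is made up for illustration.

```python
import numpy as np

def accuracy_metrics(cm):
    """Producer accuracy, user accuracy, overall accuracy and Cohen's kappa
    from a confusion matrix (rows = reference classes, columns = mapped)."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    diag = np.diag(cm)
    producer = diag / cm.sum(axis=1)     # per-class, omission-error view
    user = diag / cm.sum(axis=0)         # per-class, commission-error view
    overall = diag.sum() / total
    expected = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / total ** 2
    kappa = (overall - expected) / (1.0 - expected)
    return producer, user, overall, kappa

cm = [[50, 3, 2], [4, 45, 6], [1, 5, 40]]    # hypothetical counts
prod, user, oa, kappa = accuracy_metrics(cm)
print(f"overall accuracy = {oa:.3f}, kappa = {kappa:.3f}")
```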
Fuzzy Algorithm for Segmentation of Images in Extraction of Objects From MRI
Jan Kubicek (VSB-Technical University of Ostrava & Faculty of Electrical Engineering and Computer Science, Czech Republic); Marek Penhaker (VSB - Technical University of Ostrava, Czech Republic)
The paper discusses a suitable segmentation method for extracting specific objects from Magnetic Resonance Imaging (MRI). Particular attention is paid to the detection and extraction of articular tissues from knee images. This is a pressing issue for physicians because MRI often reveals damage to articular cartilage only as a minor change on the brightness scale. Image segmentation can provide a detailed colour map showing the distribution of tissue densities. The algorithm is based on detecting local extremes in the histogram and uses a membership function to allocate each image density to an output set; each such set is given a colour from a predefined colour spectrum. This procedure can easily differentiate between tissue structures based on their densities.
ppt file
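A minimal sketch of the histogram-driven idea, assuming a triangular membership to the nearest histogram peak; the paper's membership function and extreme detection may differ.

```python
import numpy as np

def fuzzy_colour_map(img, n_bins=256):
    """Find local maxima of the grey-level histogram, label each pixel with
    its nearest peak, and attach a fuzzy membership that decays linearly
    with distance from the peak. Illustrative sketch only."""
    hist, _ = np.histogram(img, bins=n_bins, range=(0, n_bins))
    hist = np.convolve(hist, np.ones(5) / 5, mode="same")  # light smoothing
    peaks = [i for i in range(1, n_bins - 1)
             if hist[i - 1] <= hist[i] >= hist[i + 1] and hist[i] > 0]
    peaks = np.array(peaks if peaks else [n_bins // 2])
    labels = np.abs(img[..., None] - peaks).argmin(axis=-1)   # nearest peak
    d = np.abs(img - peaks[labels])
    membership = np.clip(1.0 - d / (n_bins / len(peaks)), 0.0, 1.0)
    return labels, membership

img = np.random.randint(0, 256, (64, 64))    # stand-in for an MRI slice
labels, mu = fuzzy_colour_map(img)
print(labels.shape, float(mu.min()), float(mu.max()))
```

Each label can then be mapped to a colour from a predefined spectrum to produce the density colour map described above.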
Performance Improvement of HEVC Using Adaptive Quantization
Merlin Paul (National Institute of Technology Calicut, India); Abhilash Antony (Muthoot Institute of Technology and Science, India); Sreelekha G (National Institute Of Technology, India)
High Efficiency Video Coding (HEVC) is the most recent video compression standard that achieves higher encoding efficiency over earlier popular standards like MPEG-2 and H.264/AVC. By adopting a variety of coding efficiency enhancement and parallel processing tools, HEVC is in a position to provide up to 50% more bit-rate reduction over its precursor H.264/AVC. In an HEVC encoder, transforms are applied to the residual signal resulting from inter or intra-frame prediction. The transformed coefficients in a Transform Unit (TU) are then equally quantized based on the value of the Quantization Parameter (QP) to provide the input for the entropy encoder. The quantization process does not consider the directional bias in the energy distribution of transformed coefficients, nor the scanning pattern that follows the quantization stage. This paper presents an Adaptive Quantization technique that adjusts the level of quantization based on the intra prediction mode, the type of component (luma or chroma) and the block size. The proposed method provides an average BD rate reduction of -0.35% and -0.46% along with a BD PSNR improvement of 0.065dB and 0.038dB for Intra Main and Intra Main 10 configurations respectively, without Rate Distortion Optimized Quantizer (RDOQ).
pdf file
HAP Antenna Radiation Pattern for Providing Coverage and Service Characteristics
Saeed Alsamhi; N s Rajput (IIT BHU, India)
This paper examines the antenna radiation pattern, a key element of wireless communication for High Altitude Platforms (HAPs). The antenna radiation pattern is the graphical representation of the radiation properties of the antenna in space, and it determines the shape and size of the cells. The quality of coverage depends on the cell size, and the QoS depends on the coverage area that HAPs can provide; the antenna radiation pattern therefore plays the most important role for HAPs. HAPs represent an alternative technology for wireless communication systems that can provide large coverage, low propagation delay, line of sight and broadband services. The study focuses on the behavior of the cells or tiers provided by a HAP and the probability of service. Different antenna radiation patterns are shown to yield different cell sizes. A steerable antenna can be adjusted to provide constant coverage and improve the QoS; since a steerable antenna is used on the HAP, HAPs could be deployed in different parts of the sky while the antenna boresights still point at the desired coverage area. Results are shown for each antenna pattern. A single HAP is considered in this study.
pptx file
Competitive Analysis of Existing Image Quality Assessment Methods
Prerana Markad (University of Pune, India); Rushikesh Borse (University of Pune & Indian Institute of Technology, India)
Image quality assessment (IQA) has long been an essential part of image processing applications. IQA aims at evaluating the quality of an image; its goal is to provide a quality metric that can predict perceived image quality automatically. Evaluation of an IQA metric gives optimal results when the human visual system is taken into consideration. In this paper, we describe the current standard image quality measures and perform a comparative analysis of these metrics. It is important to analyze the performance of these metrics in a comparative setting and to analyze the strengths and weaknesses of each method, since comparison of available image quality metrics is critically important in deciding which metric is better for a particular application.
pptx file

Thursday, September 25, 14:30 - 18:00 (Asia/Calcutta)

S26: Data Management, Exploration and Mining-I

Room: 015 Block E Ground Floor
Chair: Veena B. Mendiratta (NOKIA Bell Labs, USA)
Structural Analysis and Regular Expressions Based Noise Elimination From Web Pages for Web Content Mining
Amit Dutta, Sudipta Paria and Tanmoy Golui (West Bengal University of Technology, India); Dipak Kumar Kole (St. Thomas' College of Engineering. and Technology, India)
Commercial websites usually contain noisy information blocks along with the main content. Noisy information degrades the performance of web content mining, which is used for discovering useful knowledge or information from web pages. In this paper, we propose a noise elimination method that uses tag-based filtering followed by structural analysis of the web page. The proposed tag-based filtering method is implemented with regular expressions. First, the filtering method removes several predefined HTML tags present in the web page. The resulting concise web page then undergoes structural analysis to remove the remaining noise. Noisy blocks usually share the same contents and layouts or presentation styles on every web page of a website; in the structural analysis phase, we compare the HTML contents of the web pages crawled from a website to capture and remove such common blocks. The filtering method eliminates a considerable amount of noisy content before structural analysis, so noisy content in the crawled web pages is significantly reduced, and the overall space and time complexity is lower than that of other noise elimination approaches. The experiment is conducted on several popular commercial websites and the results demonstrate the efficiency of the proposed method.
pptx file
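The first, tag-based phase can be realized with a couple of regular expressions; the particular tag set below is an assumed example of the paper's predefined list.

```python
import re

# Predefined tag-based filters: remove script/style/comment blocks and some
# typically noisy containers before the structural-analysis phase
NOISE_BLOCKS = re.compile(
    r"<(script|style|nav|footer|aside)\b.*?</\1>|<!--.*?-->",
    re.IGNORECASE | re.DOTALL)
TAGS = re.compile(r"<[^>]+>")

def filter_page(html):
    """Phase 1 sketch: regex removal of predefined noisy HTML blocks,
    then tag stripping of whatever remains."""
    html = NOISE_BLOCKS.sub("", html)
    return TAGS.sub(" ", html)

page = "<html><nav>menu</nav><p>Main article text.</p><script>ads()</script></html>"
print(filter_page(page).split())   # ['Main', 'article', 'text.']
```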
Projected Clustering with Subset Selection
Anoop S Babu (Amrita Vishwa Vidyapeetham University, India); Ramachandra Kaimal (Amrita University, India)
It has always been a major challenge to cluster high-dimensional data, given the inherent sparsity of the data points. Our model uses attribute selection and handles the sparse structure of the data effectively. The subset selection is done by two different methods. In the first method, we select the subset of the most informative attributes that preserve cluster structure, using LASSO (Least Absolute Shrinkage and Selection Operator). Though there are other methods for attribute selection, LASSO has the distinctive property of selecting the most correlated set of attributes in the data. In the second method, we select a subset of linearly independent attributes using QR factorization. The model also identifies dominant attributes of each cluster which retain their predictive power. The quality of the projected clusters formed is also assured by the use of LASSO.
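A sketch of the two selection routes under stated assumptions: LASSO needs a response, so a pseudo-target is used here purely for illustration, and scikit-learn/scipy stand in for whatever tooling the authors used.

```python
import numpy as np
from scipy.linalg import qr
from sklearn.cluster import KMeans
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))                   # toy high-dimensional data
y = X[:, 3] - 2.0 * X[:, 17] + 0.1 * rng.normal(size=100)  # pseudo-target

# Method 1: LASSO zeroes out uninformative attributes
selected = np.flatnonzero(Lasso(alpha=0.1).fit(X, y).coef_)
print("LASSO-selected attributes:", selected)

# Method 2: QR with column pivoting ranks linearly independent attributes
_, _, pivots = qr(X, pivoting=True)
print("leading independent attributes:", pivots[:5])

# Projected clustering on the reduced attribute set
labels = KMeans(n_clusters=3, n_init=10).fit_predict(X[:, selected])
```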
Evolution of Regular Directed Patterns in Dynamic Social Networks
Hardeo Kumar Thakur (Netaji Subhas Institute of Technology Delhi & Delhi University, India); Anand Gupta (Netaji Subhas Institute Of Technology, India); Payal Goel (Netaji Subhas Institute of Technology, India)
Existing dynamic graph mining algorithms typically focus on finding patterns in undirected, unweighted and weighted dynamic networks, ignoring the fact that some networks are also directed. In this paper, we focus on finding regular evolution patterns in the edges, outdegree and indegree of all nodes, featuring consecutively at fixed time intervals during the growth of an unweighted, directed dynamic graph. Such regular patterns help in finding characteristics of the nodes, such as popularity and inactiveness. A methodology based on occurrence rules is proposed to determine regular evolution patterns, which are considered regular if they follow the same occurrence rule; the methodology is also used to find patterns in the outdegree and indegree of all nodes. These patterns exhaustively describe the neighbourhood properties of dynamic graphs such as social networks. To ensure its practical feasibility, the method has been applied to a real-world dataset from a Facebook-like forum network, and the results show that 37.6% of the edges are directed regular edges, and that 59% of the total users, who do not have indegree patterns, are unpopular users.
pptx file
A New Similarity Function for Information Retrieval Based on Fuzzy Logic
Yogesh Gupta and Ashish Saini (Dayalbagh Educational Institute, India); Ajay Saxena (India)
In this paper, a novel approach is presented for constructing a similarity function to make information retrieval efficient. The approach is based on the components of a term-weighting schema: term frequency, inverse document frequency and normalization. The proposed similarity function uses fuzzy logic to determine the similarity score of a document against a query. All experiments are done with the CACM benchmark data collection. The experimental results reveal that the proposed similarity function performs much better, in terms of precision and recall, than the fuzzy-based ranking function developed by Rubens as well as the widely used Okapi-BM25 similarity function.
pptx file
Geo Skip List Data Structure - Storing Spatial Data and Efficient Search of Geographical Locations
Amol Barewar (VNIT, Nagpur); Mansi Radke (VNIT, Nagpur, India); Umesh Deshpande (Visvesvaraya National Institute of Technology (VNIT) Nagpur, India)
Existing data structures that facilitate storage and retrieval of geographical data include R-trees, R* trees and KD trees; the most widely used and accepted among these is the R-tree. A drawback of R-trees is that they represent regions as fictitious rectangles which do not correspond to actual geographical regions. R-trees also do not represent hierarchy well: for example, New York City belongs to the state of New York, is in the country United States of America, and is part of the North American continent, and this kind of information is not brought out naturally by R-trees. Moreover, R-trees require merging and splitting when a rectangle underflows or overflows, which increases the complexity of the structure. To overcome these problems, we propose a structure called the Geo skip list, inspired by the skip list data structure. It is a simple, dynamic, partly deterministic and partly randomized data structure. We have compared the results of our structure with those of the R-tree and have found an improvement in search efficiency.
pptx file
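For background, a plain skip list works as sketched below; a Geo skip list would additionally hang the geographic hierarchy (continent, country, state, city) off the nodes, which is omitted here.

```python
import random

class Node:
    def __init__(self, key, level):
        self.key = key
        self.forward = [None] * (level + 1)   # one pointer per level

class SkipList:
    """Minimal skip list: expected O(log n) search via randomized towers."""
    MAX_LEVEL = 8

    def __init__(self):
        self.head = Node(None, self.MAX_LEVEL)
        self.level = 0

    def insert(self, key):
        update = [self.head] * (self.MAX_LEVEL + 1)
        node = self.head
        for lvl in range(self.level, -1, -1):
            while node.forward[lvl] and node.forward[lvl].key < key:
                node = node.forward[lvl]
            update[lvl] = node                # last node before key per level
        height = 0
        while random.random() < 0.5 and height < self.MAX_LEVEL:
            height += 1                       # randomized tower height
        self.level = max(self.level, height)
        new = Node(key, height)
        for i in range(height + 1):
            new.forward[i] = update[i].forward[i]
            update[i].forward[i] = new

    def search(self, key):
        node = self.head
        for lvl in range(self.level, -1, -1):
            while node.forward[lvl] and node.forward[lvl].key < key:
                node = node.forward[lvl]
        node = node.forward[0]
        return node is not None and node.key == key

sl = SkipList()
for k in [12, 7, 25, 3, 18]:
    sl.insert(k)
print(sl.search(18), sl.search(9))   # True False
```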
HFRECCA for Clustering of Text Data From Travel Guide Articles
Amrita Manjrekar (Shivaji University, India); Seema Wazarkar (Department of Technology, Shivaji University, Kolhapur, India)
Text clustering is advantageous for extracting text data from web applications such as e-newspapers, collections of research papers, blogs, news feeds on social networks, etc. This paper presents a text clustering method, the Hierarchical Fuzzy Relational Eigenvector Centrality-based Clustering Algorithm (HFRECCA), which combines fuzzy clustering, divisive hierarchical clustering and the PageRank algorithm. Travel guide articles are pre-processed with stop-word removal and stemming. Then a similarity matrix is generated using word distance computation. HFRECCA applies divisive hierarchical clustering with the Fuzzy Relational Eigenvector Centrality-based Clustering Algorithm (FRECCA) as a subroutine; FRECCA outputs cluster membership values on the basis of PageRank scores and generates clusters accordingly. HFRECCA has features of both hierarchical and fuzzy clustering, as it creates a hierarchy of clusters and an object can belong to multiple clusters. Since the structure of information residing in text documents is hierarchical, HFRECCA is useful for clustering data from natural language documents.
pptx file
Profiling User Behaviour for Efficient and Resilient Cloud Management
Cathryn Peoples (Ulster University, United Kingdom (Great Britain)); Gerard P. Parr and Bryan W. Scotney (University of Ulster, United Kingdom (Great Britain)); Sanat Sarangi (Tata Consultancy Services, India); Subrat Kar (Indian Institute of Technology, Delhi, India)
User behaviour profiling can be used within network management schemes to indicate the capabilities required from management proxies in terms of the way(s) and rate at which they should be aware of real-time network state and changes in resource demands/requirements. A management proxy in a cloud, for example, needs the ability to monitor changes in the most popular webpages associated with a website, or the files associated with an application, so that this detail may influence the caching strategy for optimised performance and operation. When resources are provisioned dynamically across a cloud, the strategy will accommodate efficiency and security objectives, and also take into account the ways in which users are demanding services, to maximise the chances that their requirements are met. Many online companies now operate in this way and analyse customer behaviour to improve services by meeting predictable requirements and utilising unpredictable behaviour. It is to this gap that we respond in this work. A network management model is developed around behaviour profiles of user activities associated with the Wireless Sensor Knowledge Archive (Wisekar) website hosted at the Indian Institute of Technology in Delhi, India. Trends in user access to and activities with the website are identified and, in response, a management framework is proposed. A Certainty Factor quantifies the confidence with which management is applied, such that the actions enforced accommodate both predictable and unpredictable user behaviour.
ppt file

Thursday, September 25, 15:00 - 18:00 (Asia/Calcutta)

T3: Tutorial 3 - Web Application Security

Mr. Manu Zacharia, C|EH, C|HFI, CCNA, MCP, Certified ISO 27001-2005 Lead Auditor, MVP-Enterprise Security (2009-2012), ISLA-2010 (ISC)
Room: 003 Block E Ground Floor

Topics:

Intro to Web Application Security
Web Application Architecture
Web Application Security Testing / Penetration Testing
OWASP
OWASP Top 10 vulnerabilities
Injection Attacks
Cross-Site Scripting (XSS)
Broken Authentication and Session Management
Insecure Direct Object References
Cross-Site Request Forgery (CSRF)
Security Misconfiguration
Insecure Cryptographic Storage
Failure to Restrict URL Access
Insufficient Transport Layer Protection
Un-validated Redirects and Forwards
Incident management
Log analysis

Buffer Topics

Other Vulnerabilities
File upload Vulnerabilities
Shells
Web Application Denial-of-Service (DoS) Attack
Buffer Overflow

T4: Tutorial 4 - Protocols for Internet of Things

Dr. Mukesh Taneja, Cisco Systems, Bangalore, India
Room: 004 Block E Ground Floor

More things are connecting to the Internet than people — over 12.5 billion devices in 2010 alone. 50 billion devices are expected to be connected by 2020. Yet today, more than 99 percent of things in the physical world remain unconnected. How will having lots of things connected change everything? The growth and convergence of processes, data, and things on the Internet will make networked connections more relevant and valuable than ever before, creating unprecedented opportunities for industries, businesses, and people. The Internet of Things (IoT) is the next technology transition when devices will allow us to sense and control the physical world. It is also part of something even bigger. The Internet of Everything (IoE) is the networked connection of people, process, data, and things. Its benefit is derived from the compound impact of these connections and the value it creates as "everything" comes online. IoT solutions on devices, gateways and infrastructure nodes include the following: connectivity layer (such as that provided by networks that use IEEE802.15.4, LTE/3G/2G, WiFi, Ethernet, RS485, Power Line Communication and IP based protocols), service layer (middleware such as being specified by oneM2M) and application layer.

T5: Tutorial 5 - Implementing a Private Cloud Environment with the use of Open Nebula and Virtual Box

Ms. Sanchika Gupta, IIT Roorkee, India & Mr. Gaurav Varshney, Qualcomm India
Room: 009 Block E Ground Floor

The tutorial will give a detailed description of the Cloud and its services, and will provide a practical demo of how a private Cloud can be built with Open Nebula and Virtual Box. The Cloud and its security aspects, with known attacks and vulnerabilities, will also be covered briefly, together with an explanation of existing solutions. A remote desktop session will be provided to the audience at the end of the tutorial to see an existing implementation of a private Cloud. An intrusion detection approach for securing the Cloud against file integrity attacks, malware and DDoS attacks will also be discussed.

Outline:

Introduction to Cloud: describing the Cloud, why it is needed and what services it provides; architecture of the Cloud and the minimal resource needs for deployment of a private Cloud
Introduction to private Cloud using Open Nebula and Virtual Box: private Cloud deployment with the use of Open Nebula in a Virtual Box virtualization environment; design of a private Cloud using Open Nebula and Virtual Box
Implementation details: a step-by-step guide to implementing a private Cloud
Remote session to one of the implemented private Clouds using Open Nebula and Virtual Box
Discussion of Cloud security aspects: Cloud attacks and analysis
Discussion of a complete and lightweight intrusion detection at Cloud

T6: Tutorial 6 - Watermarking techniques for scalable coded image and video authentication

Dr. Deepayan Bhowmik, Heriot-Watt University, Edinburgh UK & Dr. Arijit Sur, Indian Institute of Technology, Guwahati, India
Room: 010 Block E Ground Floor

Due to the increasing heterogeneity among end-user devices for playing multimedia content, scalable image and video communication has attracted significant attention in recent years. Such advancements are duly supported by recent scalable coding standards for multimedia content, i.e., JPEG2000 for images, the MPEG advanced video coding (AVC)/H.264 scalable video coding (SVC) extension for video, and the MPEG-4 scalable profile for audio. In scalable coding, high-resolution content is encoded to the highest visual quality and the bit-streams are adapted to cater for various communication channels, display devices and usage requirements. However, protection and authentication of this content remain challenging and, not surprisingly, attract attention from researchers and industry. Digital watermarking, which has seen considerable growth over the last two decades, has been proposed in the literature as a solution for scalable content protection and authentication. Watermarking of scalable coded image and video faces a unique set of challenges posed by scalable content adaptation. The tutorial will share the research problems and solutions associated with image and video watermarking techniques in this field, and will help participants understand 1) image and video watermarking and its properties, 2) watermarking strategies for scalable coded image and video, and 3) recent developments and open questions in this field.

Outline:

Digital watermarking (properties and applications) and frequency domain transforms used in watermarking, e.g., the Discrete Wavelet Transform (DWT) (25 mins)
Scalable image and video coding and its application in multimedia signal processing (25 mins): JPEG2000, MJPEG2000, MC-EZBC, and MPEG-AVC / H.264-SVC image and video coding
Research techniques for image watermarking for JPEG2000 content adaptation (40 mins): imperceptibility issues; robustness issues
Research techniques for video watermarking for content adaptation (50 mins): imperceptibility issues (particularly flicker in video watermarking); robustness issues; real-time watermarking
Recent developments and open questions in this field (20 mins)

T7: Tutorial 7 - QoS and QoE in the Next Generation Networks and Wireless Networks

Dr. Pascal Lorenz, University of Haute Alsace, France
Room: 006 Block E Ground Floor

Emerging Internet Quality of Service (QoS) mechanisms are expected to enable widespread use of real-time services such as VoIP and videoconferencing. Quality of Experience (QoE) is a subjective measure of a customer's experience with a service. The "best effort" Internet delivery model cannot be used for the new multimedia applications; new technologies and new standards are necessary to offer QoS/QoE for them, so new communication architectures integrate mechanisms that allow guaranteed QoS/QoE services as well as high-rate communications. The emerging Internet QoS architectures, differentiated services and integrated services, do not consider user mobility. QoS mechanisms enforce a differentiated sharing of bandwidth among services and users; thus, there must be mechanisms available to identify traffic flows with different QoS parameters and to make it possible to charge users based on the requested quality. The integration of fixed and mobile wireless access into IP networks presents a cost-effective and efficient way to provide seamless end-to-end connectivity and ubiquitous access in a market where the demand for mobile Internet services has grown rapidly and is predicted to generate billions of dollars in revenue.

This tutorial covers the issues of QoS provisioning in heterogeneous networks and Internet access over future wireless networks, as well as the ATM, MPLS, DiffServ and IntServ frameworks. It discusses the characteristics of the Internet, mobility, and QoS provisioning in wireless and mobile IP networks. It also covers routing, security, the baseline architecture of the inter-networking protocols, end-to-end traffic management issues, and QoS for mobile/ubiquitous/pervasive computing users.

Outline

Concepts of QoS/QoE
Traffic mechanisms, congestion
Generations of the Internet
Mechanisms and architectures for QoS
ATM networks (IP over ATM, WATM)
New communication architectures
Mechanisms allowing QoS: MPLS, DiffServ, IntServ
QoS in wireless networks
Mobile Internet applications
Quality for mobile/ubiquitous/pervasive computing users in gaining network access and satisfying their service requirements
Mobile, satellite and personal communications
Mobile and wireless standardization: IEEE 802.11, IEEE 802.16, IEEE 802.20, WLL, WPAN

Thursday, September 25, 16:30 - 18:30 (Asia/Calcutta)

S27: Poster/Demo Tracks

Room: Lawn Area Block E
Chairs: Sumitra Purushottam Pundlik (MIT College Of Engineering Kothrud Pune, India), Sanjeev Yadav (Govt. Women Engineering College Ajmer, India)
A Dual Band Compact Circularly Polarized Asymmetrical Fractal Antenna for Bluetooth and Wireless Applications
Ruchika Choudhary (Govt. Engineering College Ajmer, India); Sanjeev Yadav (Govt. Women Engineering College Ajmer, India); Krishna Rathore (Rajasthan Technical University Kota, India); Mahendra Mohan Sharma (Malaviya National Institute of Technology & Principal, Govt Engineering College Ajmer, India)
In this paper, a dual-band compact fractal antenna with circular polarization (CP) is proposed. The antenna is designed to operate in dual bands with bandwidths of 1 GHz and 8 GHz at resonant frequencies of 2.4 GHz and 12.9 GHz respectively, for Bluetooth and wireless applications, with good return loss and radiation pattern characteristics. The asymmetrical antenna is formed by truncating the sides of a square patch. The proposed antenna is fed by a 50-Ω microstrip line and fabricated on a low-cost Rogers RT5880 substrate with dimensions 50 (L) × 50 (W) × 3.2 (h) mm³, Ɛr = 2.2 and tanδ = 0.0009. The antenna shows acceptable gain with omni-directional radiation patterns in the frequency band.
ppt file
A New Fragile Watermarking Approach for Tamper Detection and Recovery of Document Images
Chetan KR (JNN College of Engineering, India); S Nirmala (J. N. N. College of Engineering, India)
In this paper, a fragile watermarking scheme is proposed for tamper detection and recovery of digital images. In this scheme, the source document image is divided into blocks of uniform size and a pseudo-random mapping of blocks is generated. For each block, a watermark consisting of authentication and recovery information is generated and embedded into the corresponding mapping block. The authentication data is formed using the Least Significant Bit (LSB) of each pixel value in the block. The recovery information consists of sub-sampled versions of two moment-preserving thresholds of the block. At the receiver side, the watermark is extracted from the corresponding mapping blocks for each block of the watermarked document image. A dual-option parity check method is applied on the authentication data in the extracted watermark to check whether the watermarked image block is tampered or not. If the image block is found to be tampered, the recovery information is extracted from the watermark. From the experimental results it is found that the average value of tamper detection rate and recovery rate for various types of intentional attacks on the document images corpus is 93.8% and 93.6% respectively. Further, the average Peak Signal to Noise Ratio (PSNR) of the recovered images is 61.24%. The comparative analysis reveals that the proposed scheme outperforms the existing system [16].
ppt file
A Perspective Study of Virtual Machine Migration
Christina Joseph (National Institute of Technology, Karnataka, India); Chandra Sekaran K (National Institute of Technology Karnataka, India); Robin Cyriac (Rajagiri School of Engineering and Technology, India)
Cloud computing is one of the leading technologies. As a solution to many of the challenges faced by Cloud providers, virtualization is employed in the Cloud, and virtual machine migration is a tool for utilizing virtualization well. This paper gives an overview of the different works in the literature that consider virtual machine migration; these works are classified into different categories, and some that consider less-explored areas of virtual machine migration are discussed in detail.
ppt file
A System for Intelligent Context Based Content Mode in Camera Applications
Tasleem Arif (Samsung India Software Operations, India); Prassanita Singh (Samsung R&D Institute India, India); Joy Bose (Samsung R&D Institute India, Bangalore, India)
This paper proposes a content-based mode in camera applications and image viewers, where the system recognizes the type of content in an image and automatically links it to other applications. When the device is in content mode, all the images viewed or accessed have an extra context-based menu based on the detected data (such as phone numbers and email addresses). The user can switch between the normal mode and the content mode while using the device by performing a simple gesture. This mode can also be plugged into any number of services available on the device. Such a feature would make it much easier for users to perform useful actions directly from the image, saving several intermediate steps. We provide methods for the user to manipulate and share the content accessed through the camera, including browsing links, saving a new contact, and so on. We review existing applications where parts of this feature have been implemented, especially for calling cards. We then describe the system architecture and algorithm for implementing the content mode generally in all applications on the device, as well as APIs useful for third-party application developers. We also provide sample user interfaces where this mode can be used.
ppt file, pdf file
A Trade-off Between Complexity and Performance Over Multi-core Systems
Igli Tafa, IT (Polytechnic University of Tirana, Information Technology Faculty & AED Company, Albania); Vilma Tomco (Statistics and Applied Informatics at Faculty of Economy in Tirana University, Albania)
This paper investigates the development of multi-core systems and performs a comprehensive performance evaluation. Over the years the performance of multi-core systems has improved steadily, but they have become more and more complex. Multi-core systems are still being developed, and they provide the most efficient computers with the computation power they need. The main goal of this paper is to evaluate the performance of different available multi-core systems against single-core systems. To achieve this goal, two scheduling algorithms are implemented to schedule a Java application using multi-threaded programming. The application is executed first on a single-core system and then on a multi-core system, and the execution time, the determining parameter for comparing performance, is measured on both. The results show that multi-core systems perform better, but at the cost of design and implementation complexity.
ppt file
Accelerating Information Diffusion in Social Networks Under the Susceptible-Infected-Susceptible Epidemic Model
Kundan Kandhway (TCS Innovation Labs Chennai, India); Joy Kuri (Indian Institute of Science, India)
Standard Susceptible-Infected-Susceptible (SIS) epidemic models assume that a message spreads from the infected to the susceptible nodes due to only susceptible-infected epidemic contact. We modify the standard SIS epidemic model to include direct recruitment of susceptible individuals to the infected class at a constant rate (independent of epidemic contacts), to accelerate information spreading in a social network. Such recruitment can be carried out by placing advertisements in the media. We provide a closed form analytical solution for system evolution in the proposed model and use it to study campaigning in two different scenarios. In the first, the net cost function is a linear combination of the reward due to extent of information diffusion and the cost due to application of control. In the second, the campaign budget is fixed. Results reveal the effectiveness of the proposed system in accelerating and improving the extent of information diffusion. Our work is useful for devising effective strategies for product marketing and political/social-awareness/crowd-funding campaigns that target individuals in a social network.
zip file
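Our reading of the modification described above, as a single-population mean-field equation (the authors' exact formulation may differ), is:

```latex
\frac{di(t)}{dt} \;=\; \underbrace{\beta\, i(t)\bigl(1 - i(t)\bigr)}_{\text{epidemic contact}}
\;-\; \underbrace{\gamma\, i(t)}_{\text{recovery}}
\;+\; \underbrace{u\,\bigl(1 - i(t)\bigr)}_{\text{direct recruitment (ads)}}
```

where i(t) is the informed (infected) fraction, β the contact rate, γ the recovery rate, and u the constant recruitment rate acting on the susceptible fraction 1 - i(t).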
Agricultural Aid for Mango Cutting (AAM)
Sandeep Konam (Rajiv Gandhi University of Knowledge Technologies, R. K. Valley, India)
Mango cultivation methods currently adopted are ineffective and yield low productivity despite consuming huge manpower. Advancements in robust unmanned aerial vehicles (UAVs), high-speed image processing algorithms and machine vision techniques reinforce the possibility of transforming the agricultural scenario to modernity within prevailing time and energy constraints. This paper introduces the Agricultural Aid for Mango cutting (AAM), an Agribot that could be employed for precision mango farming. It is a quadcopter empowered with vision and cutter systems, complemented with the necessary ancillaries; it can hover around trees, detect ripe mangoes, and cut and collect them. The paper also sheds light on the available Agribots, which have mostly been limited to research labs. The AAM robot is the first of its kind; once implemented, it could pave the way for a next generation of Agribots capable of increasing agricultural productivity and justifying the existence of intelligent machines.
ppt file
Automatic Text Summarizer
Annapurna Patil (M S Ramaiah Institute of Technology, India); Shivam Dalmia (M. S. Ramaiah Institute of Technology, India); Syed Ansari (M. S. Ramaiah Institute of Technology); Tanay Aul and Varun Bhatnagar (M. S. Ramaiah Institute of Technology, India)
In today's fast-growing information age we have an abundance of text, especially on the web. New information is constantly being generated. Often due to time constraints we are not able to consume all the data available. It is therefore essential to be able to summarize the text so that it becomes easier to ingest, while maintaining the essence and understandability of the information. We aim to design an algorithm that can summarize a document by extracting key text and attempting to modify this extraction using a thesaurus. Our main goal is to reduce a given body of text to a (user-defined) fraction of its size, maintaining coherence and semantics.
ppt file
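
The extract-then-rewrite design the abstract outlines can be illustrated with a frequency-based extractive step; the thesaurus-based rewriting stage is not reproduced here, and this sketch is ours, not the authors' algorithm.

# A minimal extractive-summarization sketch: score sentences by word
# frequency and keep a user-defined fraction, in original order.
import re
from collections import Counter

def summarize(text, fraction=0.3):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(words)
    # Score each sentence by the total frequency of its words.
    scores = [(sum(freq[w] for w in re.findall(r"[a-z']+", s.lower())), i, s)
              for i, s in enumerate(sentences)]
    keep = max(1, int(len(sentences) * fraction))
    top = sorted(sorted(scores, reverse=True)[:keep], key=lambda t: t[1])
    return " ".join(s for _, _, s in top)   # original order preserves coherence

print(summarize("Text summarization shortens documents. It keeps key text. "
                "Coherence matters. Key text carries the main meaning.", 0.5))
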
Behavioural Analysis for Prevention of Intranet Information Leakage
Krishnashree Achuthan (Amrita Center for Cybersecurity Systems and Networks & Amrita University, India); Neenu Manmadhan (Amrita University, India)
User authentication in web applications such as email is often performed at a single point in time, such as during login. Information leakage occurs when the user's authentication is compromised. Although authentication systems verify different aspects of the user, a deviation in user behaviour is not captured by such validations. This paper proposes a mechanism to model user behaviour and capture deviations in the intention of the user that could lead to information leakage. Through the design of an application plugin developed for an intranet email application, the proposed model assists in preventing the leakage of classified emails. Password authentication and behavioural-analysis-based authentication are both used to architect this plugin, which features continuous monitoring and authentication of the user throughout the session.
ppt file
Cuffless BP Measurement Using a Correlation Study of Pulse Transient Time and Heart Rate
Niranjan Kumar (Indraprastha Institute of Information Technology, Delhi, India); Amogh Agrawal (Indian Institute of Technology Ropar, India); Sujay Deb (IIIT Delhi, India)
The recent advancements in computing, signal processing and communication technologies have immensely improved public health care systems. As a result, new diagnostic instruments with better precision and compatibility with machines, gadgets and databases have been developed. However, many of these technical marvels are young, still at the research level, and not yet well known commercially. In the present study, cuff-less blood pressure (BP) measurement was performed using a correlation study of pulse transient time and heart rate. Among the various vital biomedical signals that can be collected non-invasively, the Electrocardiogram (ECG) and Photoplethysmograph (PPG) were recorded simultaneously to capture the electrical activity of the heart (ECG) and the condition of the arteries (PPG); together, these can be related to the arterial pressure of the systemic circulation (BP). This promises a composite, non-invasive overview of the cardiovascular system. The BP estimation was done using a non-linear 2nd-order curve-fitting regression model. The final objective is to develop a portable, non-invasive biomedical signal acquisition system that can not only monitor, store and communicate the above-mentioned biomedical signals but also predict BP by processing them. The device will be used for continuous monitoring, especially for patients with cardiovascular complaints, and will be reduced to a mobile attachment that is easy to use and can be operated by paramedical staff (ASHA workers) with minimal training.
rar file, ppt file
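
The 2nd-order regression the abstract mentions can be sketched with a quadratic fit of BP against pulse transient time (PTT). The data below is synthetic and illustrative; the paper's actual calibration data and regressors differ.

# A hedged sketch of 2nd-order curve fitting: BP as a quadratic in PTT.
import numpy as np

ptt = np.array([0.28, 0.30, 0.32, 0.34, 0.36])       # seconds, ECG-PPG delay
sbp = np.array([132.0, 126.0, 121.0, 117.0, 114.0])  # systolic BP, mmHg

coeffs = np.polyfit(ptt, sbp, deg=2)   # [a, b, c] for a*PTT^2 + b*PTT + c
model = np.poly1d(coeffs)

print(f"estimated BP at PTT=0.31 s: {model(0.31):.1f} mmHg")
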
Effect of Holling type-II Function on Dynamics of Discrete Prey-Predator System with Scavenger
Sudipa Chauhan (Amity University & Amity Institute of Applied Sciences, India); Sumit Bhatia and Surbhi Gupta (Amity University, India)
A discrete prey-predator model with scavengers is proposed. The model is built from the interactions among three species, prey, predators and scavengers, using difference equations. The stability analysis of the boundary fixed points is carried out, showing the survival of the prey population, and of the prey and predator populations together, in the presence of the scavenger.
ppt file
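
For intuition, here is an illustrative discrete-time sketch, not the paper's exact equations, of a prey-predator-scavenger system with a Holling type-II functional response f(x) = a*x / (1 + a*h*x), where a is the attack rate and h the handling time; all parameter values are ours.

# Illustrative prey-predator-scavenger map with Holling type-II response.
def holling2(x, a, h):
    return a * x / (1.0 + a * h * x)

def step(prey, pred, scav, r=1.2, a=0.8, h=0.5, c=0.6, d=0.4, e=0.1, m=0.3):
    prey_next = prey + r * prey * (1 - prey) - holling2(prey, a, h) * pred
    pred_next = pred + c * holling2(prey, a, h) * pred - d * pred
    # Scavenger feeds on predator mortality (carrion) at efficiency e.
    scav_next = scav + e * d * pred * scav - m * scav
    return prey_next, pred_next, scav_next

state = (0.5, 0.2, 0.1)
for _ in range(200):
    state = step(*state)
print("state after 200 generations:", [round(v, 3) for v in state])
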
Efficient Regression Test Selection and Recommendation Approach for Component Based Software
Janhavi Puniani and Ashima Singh (Thapar University, India)
A component-based software system may contain external components as well as in-house-built components. During the maintenance phase, the components are altered or modified very often. The type of testing which not only ensures that the modified component works correctly but also ensures that the changes have no adverse effects on the rest of the system is called regression testing. Due to a lack of knowledge about third-party components and the modifications made to them, it is difficult for component users or testers to make an appropriate selection of test cases from the original test suite for testing the altered system. Thus there is a need for an efficient regression test selection approach that results in a reduced and effective regression test suite. The paper proposes an approach, 'Regression Test Selection and Recommendation', which uses UML state-chart diagrams and sequence diagrams to identify the changes, which are further used for classification of the initial test suite, selection and recommendation of test cases, and development of a regression test suite. The aim is to reduce the size of the test suite effectively while preserving the quality of regression testing. The approach is validated using a case study of an Automated Teller Machine.
ppt file
Exploiting Machine Learning Algorithms for Cognitive Radio
Veeru Sharma (IIIT Delhi, India); Vivek A Bohara (Wireless Systems Lab, IIIT Delhi)
Cognitive radio is an intelligent radio that has the ability to sense and learn from its environment. The core of a cognitive radio contains a learning engine, which plays an important role in every application of cognitive radio, from spectrum sensing to spectrum management. The learning engine implements different learning algorithms. In this paper we discuss various learning algorithms and their application in solving specific problems of cognitive radio. Some of the prominent learning algorithms discussed in this paper are Genetic Algorithms (GA), Artificial Neural Networks (ANN) and Hidden Markov Models (HMM).
pdf file

Friday, September 26

Friday, September 26, 09:00 - 10:30 (Asia/Calcutta)

R3: Conference Registrationgo to top

Room: Block E, Ground Floor (Reception)

Friday, September 26, 09:30 - 10:10 (Asia/Calcutta)

K9: Keynote - Distributed MIMO: Realizing the Full MIMO PotentialDetailsgo to top

Prof. Soura Dasgupta, University of Iowa, USA
Room: Auditorium Block D Ground Floor

Friday, September 26, 09:30 - 14:00 (Asia/Calcutta)

S30: Third International Symposium on Natural Language Processing (NLP'14)/ International Workshop on Authorship Analysis in Forensic Linguistics (AFL-2014)/ International Workshop on Language Translation in Intelligent Agents (LTIA-2014)go to top

Room: 215 Block E Second Floor
Chairs: Rajkumar Rathore (Uttar Pradesh Technical University, Lucknow (INDIA) & Galgotias College of Engineering & Technology, Greater Noida (INDIA), India), Rajeev RR (IIITM-K, India)
Exploration of Robust Features for Multiclass Emotion Classification
Bincy Thomas (SCMS School of Engineering and Technology); Dhanya A (SCMS School of Engineering & Technology, India); Vinod P (SCMS School of Engineering & Technology, India)
Classification of emotion from sentences requires the classifier to be trained on relevant features. This paper focuses on different features: (a) bag-of-words, (b) part-of-speech tags, (c) sentence length and (d) lexical emotion features. Extensive evaluation on variable feature lengths for classifying textual emotions is carried out to understand their role in model performance. Experiments show that bag-of-words features provide better accuracy as a boolean representation than as term frequency.
ppt file
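
The boolean-versus-term-frequency comparison the abstract reports can be sketched as follows; the toy texts and labels are ours, not the paper's corpus, and scikit-learn is assumed to be available.

# Boolean bag-of-words vs. term-frequency features for emotion classification.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

texts = ["i am so happy today", "this is terrifying and scary",
         "what a joyful surprise", "i feel scared and anxious"]
labels = ["joy", "fear", "joy", "fear"]

for binary in (True, False):   # True -> boolean presence, False -> counts
    vec = CountVectorizer(binary=binary)
    X = vec.fit_transform(texts)
    clf = LinearSVC().fit(X, labels)
    print("binary" if binary else "tf", "->",
          clf.predict(vec.transform(["so scary today"]))[0])
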
Semiautomatic Annotation Scheme for Demonstrative Pronoun Considering Indirect Anaphora for Hindi
Pardeep Singh (National Institute of Technology & Hamirpur, India); Kamlesh Dutta (NIT Hamirpur, India)
Annotation is a tedious and time-consuming process. Natural language processing requires extensive analysis of, and information about, words and word segments. Information about words can be gathered using POS taggers, parsers and other tools, but owing to the scarcity of language resources, genre annotation is still required for further studies. In this working paper we propose a semiautomatic method that annotates three tags: the pronoun pattern, the case marker/connector and the semantic category of a genre. Of the ten tags, seven are annotated manually using Botley's annotation scheme; the remaining three are proposed for automation. The experiment was performed on the EMILEE corpus: the input is an EMILEE file and the output is a fully annotated Unicode file.
pdf file, pptx file
Improving Keyword Detection Rate Using a Set of Rules to Merge HMM-based and SVM-based Keyword Spotting Results
Akram Shokri (Audio and Speech Processing Lab, IUST., Iran); Mohammad H. Davarpour (Azad University, Semnan Branch, Iran); Ahmad Akbari (Iran University of Science and Technology, Iran)
Evaluating the accuracy of HMM-based and SVM-based spotters in detecting keywords and recognizing the true place of keyword occurrence shows that the HMM-based spotter detects the place of occurrence more precisely than the SVM-based spotter. On the other hand, the SVM-based spotter performs much better in detecting keywords and has a higher detection rate. In this paper, we propose a rule-based combination method for merging the output of these two keyword spotters, in order to benefit from the features and advantages of each method and overcome their weaknesses and drawbacks. Experimental results from applying this combination method on both clean and noisy test sets show that its recognition rate improves considerably over each individual method.
SentiMa - Sentiment Extraction for Malayalam
Deepu S. Nair, Jisha Jayan and Rajeev RR (IIITM-K, India); Sherly Elizabeth (IIITM-K, Technopark, Trivandrum, India)
Sentiment Analysis is one of the most active research areas in NLP; it analyzes people's opinions, sentiments, evaluations, attitudes and emotions from written language. The growing importance of sentiment analysis coincides with the growth of social media such as reviews, forum discussions, blogs and social networks. Sentiment analysis enables computers to automate the activities performed by humans in making decisions based on the sentiment of opinions, which has wide applications in data mining, Web mining and text mining. This work finds the sentiments in Malayalam film reviews. In this paper, a rule-based Negation Rule approach is applied for extracting the sentiments from a given text. This work would help to rank newly released films by popularity and would also allow users to express their impressions after watching new films.
pdf file
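
A negation rule of the kind the abstract describes can be illustrated with a minimal polarity-flipping sketch; English stand-in words are used here, while the paper works on Malayalam film reviews with its own lexicon and rules.

# Minimal rule-based negation handling for sentiment extraction.
POSITIVE = {"good", "excellent", "superb"}
NEGATIVE = {"bad", "boring", "weak"}
NEGATORS = {"not", "never", "no"}

def sentiment(review):
    score, negate = 0, False
    for word in review.lower().split():
        if word in NEGATORS:
            negate = True            # flip polarity of the next opinion word
            continue
        polarity = 1 if word in POSITIVE else -1 if word in NEGATIVE else 0
        score += -polarity if negate else polarity
        if polarity:
            negate = False
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("the film was not boring and the songs were good"))
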
Pattern Based Pruning of Morphological Alternatives of Bengali Wordforms
Biswanath Barik (Tata Consultancy Services, India); Sudeshna Sarkar (IIT, Kharagpur, India)
Multiple morphological interpretations of word-forms are a bottleneck for different levels of syntactico-semantic analysis of Natural Language (NL) sentences. Common vocabulary words of morphologically rich languages typically have more than one morphological analysis. However, if a word has multiple morphological alternatives, only one of them is appropriate with respect to the context where the word is used. Different text-processing tasks require the morphological features of words; therefore, an efficient procedure is needed to choose the appropriate morphological analysis of each word in a given context. In this paper, we propose a method to identify the correct morphological analysis of each Bengali word-form by cancelling (or pruning) morphological analyses that show context incompatibility.
pdf file
Machine Learning Approach for Correcting Preposition Errors Using SVD Features
Anuja Aravind (Amrita Vishwa Vidyapeeham, India); Anand Kumar M (Amrita Vishwa Vidyapeetham, India)
Non-native writers of English often make preposition errors. The most commonly occurring preposition errors are preposition replacement, missing prepositions and unwanted prepositions. In this work, a system is developed for finding and handling English preposition errors in the replacement case. The proposed method applies the 2-Singular Value Decomposition (SVD2) concept for data decomposition, resulting in fast calculation, and the resulting features are given to a Support Vector Machine (SVM) classifier, which obtains an overall accuracy above 90%. Features are retrieved using a novel SVD2-based method applied to trigrams that have a preposition in the middle of the context. A matrix with the left and right vectors of each word in the trigram is computed for applying the SVD2 concept, and these features are used for supervised classification. Preliminary results show that this novel feature extraction and dimensionality reduction method is appropriate for handling preposition errors.
pptx file
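
The general pipeline, SVD-reduced context features fed to an SVM, can be sketched as below. The exact SVD2 construction is specific to the paper; this shows only the generic reduce-then-classify idea on stand-in data.

# Hedged sketch: SVD-based feature reduction followed by SVM classification.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 50))          # stand-in trigram context features
y = rng.integers(0, 3, size=40)        # stand-in preposition classes

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 10
X_reduced = U[:, :k] * s[:k]           # rank-k projection of the features

clf = SVC(kernel="linear").fit(X_reduced, y)
print("training accuracy:", clf.score(X_reduced, y))
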
Joint Layer Based Deep Learning Framework for Bi-lingual Machine Transliteration (For English and Tamil Languages)
Sanjanasri Jp (Amrita Vishwa Vidyapeeham, India); Anand Kumar M (Amrita Vishwa Vidyapeetham, India)
With the growth of the Internet and World Wide Web (WWW) and the emergence of social networking sites such as Friendster and Myspace, the information society has faced mounting challenges in language technology applications such as Machine Translation (MT) and Information Retrieval (IR). Researchers have worked on machine translation of real-time information for over 50 years, since the first computers appeared, but the need to translate data has grown as the world has come together through social media. Translating proper nouns and technical terms, in particular, has become an openly challenging task in machine translation. Machine transliteration emerged as part of information retrieval and machine translation projects to translate named entities, based on phonemes and graphemes, that are not registered in the dictionary. Many researchers have used approaches such as conventional graphical models and have also adopted other machine translation techniques for machine transliteration, which has generally been treated as a machine learning problem. In this paper, we apply deep learning, a recent area of machine learning, to improve the bilingual machine transliteration task for Tamil and English with a limited corpus. The system is built on a Deep Belief Network (DBN), a generative graphical model that has been shown to work well on other machine learning problems. We obtained 79.46% accuracy for the English-to-Tamil transliteration task and 78.4% for Tamil-to-English transliteration.
ppt file
Hybrid Part of Speech Tagger for Malayalam
Merin Francis (Mahatma Gandhi University, Kottayam, Kerala, India); K n Ramachandran Nair (Viswajyothi College of Engineering and Technology, India)
The process of assigning a part of speech to every word in a given sentence according to its context is called part-of-speech tagging. Part-of-speech (POS) tagging has a crucial role in different fields of natural language processing (NLP), including speech recognition, speech synthesis, natural language parsing, information retrieval, multi-word term extraction, word sense disambiguation and machine translation. This paper proposes an efficient and accurate POS tagging technique for the Malayalam language using a hybrid approach: a Conditional Random Fields (CRF) based method integrated with a rule-based method. We use an SVM-based method to compare the accuracy. The corpora, both tagged and untagged, used for training and testing the system are in Unicode format, and the tagset developed by IIIT Hyderabad for Indian languages is used. The system is tested on selected books of the Bible and performs with an accuracy of 94%.
ppt file
Evaluation of Some English-Hindi MT Systems
Nisheeth Joshi and Iti Mathur (Banasthali University, India); Hemant Darbari and Ajai Kumar (DIT, MIT, Govt. of India, India); Priyanka Jain (DIT, MIT, Govt. of India & C-DAC, India)
MT evaluation is a very important activity in MT system development. Evaluation of MT systems can help MT developers understand the shortcomings of their systems and focus clearly on the problem areas, so that system performance can improve. In this paper we discuss the evaluation of some English-Hindi MT engines. For this, we have applied both human and automatic evaluation of these systems, using automatic evaluation metrics across linguistic levels.
pptx file
Speaker Identification Using FBCC in Malayalam Language
Drisya Vasudev (MG University, India); Anish Babu K K (Rajiv Gandhi Institute of Technology, Kottayam, India)
Speaker identification attempts to determine the best possible match from a group of known speakers for any given input speech signal. A text-independent speaker identification system identifies the speaker regardless of what is said. The first step in speaker identification is feature extraction. In the proposed method, Bessel features are used as an alternative to popular techniques such as MFCC and LPCC. The quasi-stationary nature of the speech signal is more efficiently represented by damped sinusoidal basis functions, which are more natural for voiced speech; since Bessel expansions use damped sinusoids as basis functions, they are a natural choice for representing speech signals. Here, Bessel features derived from the speech signal are used to create Gaussian mixture models for text-independent speaker identification. A set of ten speakers is modelled using Gaussian mixtures. The proposed system is tested on a Malayalam database, obtaining a promising accuracy of 98%.
pdf file

Friday, September 26, 10:15 - 11:10 (Asia/Calcutta)

K10: Keynote - Information Discovery in Wireless Sensor NetworksDetailsgo to top

Dr. Robin Doss, (Associate Head of School (Development & International)), School of Information Technology, Deakin University, Australia
Room: Auditorium Block D Ground Floor

The data gathering capabilities of wireless sensor networks (WSNs) are particularly attractive for mission-critical operations deployed in unattended and hostile environments, such as battlefield surveillance, military reconnaissance and emergency response, where intelligence can be gathered without the risk of human casualties. Traditional approaches to information discovery in WSNs have assumed a many-to-one communication pattern in which sensors gather information and then push it to a central data repository, the "sink". However, mission-critical applications on WSNs are intended to work without a main control centre (such as a sink), and they demand life/time-critical information and support for unique traffic patterns that maximize the network lifetime. Further, the nature of such mission-critical applications imposes high quality of service (QoS) requirements on the information discovery process. Many emerging applications for WSNs require dissemination of information to interested clients within the network and support for differing traffic patterns. These requirements make information discovery a challenging task, because the complexity and energy constraints of wireless sensors make this a non-trivial problem. Early approaches to information discovery, such as flooding and gossiping with push-pull strategies, use broadcast communication. For instance, in a military application, a sensor network might be deployed to enhance soldiers' awareness when visibility is low. Sensors that detect an event can "push" this information out to every sensor in the network (e.g., sensors detect tanks and enemies and can periodically "push" that information to the other sensors in the network), or they can wait and allow a sensor to "pull" this information through querying (e.g., a soldier sends a query such as "Where are the tanks or enemies?"). The efficiency of "push" or "pull" methods varies and depends on the demand for information. However, when the frequency of events and queries is not taken into consideration, pure pull-based or push-based methods are inefficient in real deployments. Recent approaches to information discovery, such as Comb-Needle, Double Ruling and Cross Roads, aim to balance push and pull approaches to improve the QoS in terms of efficiency and lifetime of the WSN.

In this talk we will look at the challenges of information discovery in a multi-dimensional wireless sensor network. A multi-dimensional WSN is defined as a WSN that is deployed to gather and store data related to multiple attributes.

Friday, September 26, 11:30 - 12:50 (Asia/Calcutta)

K11: Keynote - Searchable Encryption for the Real World: Theory and PracticeDetailsgo to top

Dr. Michael Steiner, IBM Research - India
Room: Auditorium Block D Ground Floor

In this talk, I will address the problem of performing private database queries, motivated by real-world requirements, e.g., protecting sensitive data in the cloud or allowing law enforcement and intelligence agencies to privately yet accountably query third-party databases. I will discuss the technical difficulties, which might lead to lower bounds and necessitate, e.g., the co-development of the cryptographic protocols with the overall system design. I will present corresponding practical solutions which provide real-world scalability and discuss interesting aspects of these solutions.

Friday, September 26, 12:50 - 13:40 (Asia/Calcutta)

K13: ISI'14 Keynote: Software Coverage: An intelligent Technique using Nature inspired ComputingDetailsgo to top

Dr. Praveen Ranjan Srivastava, Indian Institute of Management (IIM), Rohtak, India
Room: Auditorium Block D Ground Floor

Software coverage is one of the most challenging and arduous phases of the software development life cycle, and it helps determine software quality. State and code coverage are widely used paradigms that describe the degree to which the state/code has been tested. The aim of the current discussion is to propose an optimised code-coverage algorithm with the help of emerging nature-inspired, meta-heuristic techniques such as Intelligent Water Drop (IWD), Cuckoo Search, Ant Colony and Firefly. These approaches use dynamic parameters to find all the optimal paths using basic properties of natural phenomena. The technique aims at exhaustive coverage with minimal repetition, ensuring that all transitions and all paths are covered at least once with a minimal number of repetitions of states and transitions. The algorithm works by maximising an objective function that focuses on the most error-prone parts of the program, so that critical portions can be tested first.

Friday, September 26, 13:30 - 14:30 (Asia/Calcutta)

L3: Lunch Breakgo to top

Room: Lawn Area Block E

Friday, September 26, 14:30 - 19:00 (Asia/Calcutta)

S28-A: Security, Trust and Privacy-Igo to top

Room: 007-A Block E Ground Floor
Chair: Michael Steiner (IBM T.J. Watson Research Center, USA)
Detection of Metamorphic Viruses: A Survey
Ankur Bist (KIET, Ghaziabad, India)
Computer viruses are a major security problem. It is essential to differentiate between reproducing programs and their similar forms, since reproducing programs will not necessarily harm the system. The classification of metamorphic viruses is an emerging research issue: the orientation and expansion of metamorphic viruses are quite critical, and the general problem of detecting metamorphic viruses is NP-hard. In this paper, emerging methodologies are discussed along with their efficiency and working patterns.
pptx file
A Group-based Multilayer Encryption Scheme for Secure Dissemination of Post-Disaster Situational Data Using Peer-to-Peer Delay Tolerant Network
Souvik Basu (Heritage Institute of Technology & Indian Institute of Engineering Science & Technology, Shibpur, India); Siuli Roy (Heritage Institute of Technology, India)
In the event of a disaster, the communication infrastructure can be partially or totally destroyed, or rendered unavailable due to high congestion. Today's smart-phones, which can communicate directly via Bluetooth or WiFi without using any network infrastructure, can be used to create an opportunistic post-disaster communication network where situational data can spread quickly, even in the harshest conditions. However, the presence of malicious and unscrupulous entities that forward sensitive situational data in such a network may pose serious threats to the accuracy and timeliness of the data. Therefore, providing basic security features, like authentication, confidentiality and integrity, to all communications occurring in this network becomes inevitable. But in such an opportunistic network, which uses short-range and sporadic wireless connections, no trusted third party can be used, as it would not be accessible locally at runtime. As a result, traditional security services like cryptographic signatures, certificates, authentication protocols and end-to-end encryption become inapplicable. Moreover, since disaster management is generally a group-based activity, a forwarding entity may be better authenticated through verification of its group membership. In this paper, we propose a Group-based Distributed Authentication Mechanism that enables nodes to mutually authenticate each other as members of valid groups, and we also propose a Multilayer Hashed Encryption Scheme in which rescue groups collaboratively contribute towards preserving the confidentiality and integrity of sensitive situational information. The schemes provide authentication, confidentiality and integrity in a fully decentralized manner to suit the requirements of an opportunistic post-disaster communication network. We emulate a post-disaster scenario in the ONE simulator to show the effectiveness of our schemes in terms of delivery ratio, average delay and overhead ratio.
pptx file
Detection of Alphanumeric Shellcodes Using Similarity Index
Nidhi Verma, Vishal Mishra and Varinder Pal Singh (Thapar University, India)
Shellcodes are widely used to exploit applications and can breach security and privacy to an unimaginable extent. Poor programming results in various bugs that give an attacker a chance to exploit an application. Exploiting an application allows an attacker to inject malicious code and transfer control of the program to the injected code; the malicious code injected during exploitation is usually a shellcode. Detection of the shellcodes used for exploitation is an issue that concerns all anti-virus companies. Attackers generally write shellcode in a way that bypasses anti-virus engines; one such type is the alphanumeric shellcode. Despite considerable advances in detection technology, detecting alphanumeric shellcodes is still not possible. This paper presents a study of alphanumeric shellcodes and the effectiveness of current technology in detecting them, and also presents a novel approach to detect alphanumeric shellcodes.
pptx file
An Approach for Secure Wireless Communication Using Friendly Noise
Parag Aggarwal (Indraprastha Institute of Information Technology, Delhi (IIIT-Delhi), India); Aditya Trivedi (ABV-Indian Institute of Information Technology and Management Gwalior, India)
As the need for highly secure data transmission in wireless communication has increased rapidly, physical layer security has gained a lot of attention recently. The transmission of confidential data between two legitimate users in the presence of a passive eavesdropper over a quasi-static Rayleigh channel is considered. A new technique is proposed in which friendly noise is incorporated with the confidential data-bearing signal such that it preserves the integrity of the signal and does not degrade the legitimate link; this friendly noise acts as a jamming signal for the eavesdropper. The analytical expression for the secrecy outage probability of the proposed technique is derived for both single- and multiple-antenna eavesdroppers. The secrecy performance of the proposed technique is compared with the traditional direct transmission and cooperative jamming schemes. The asymptotic behaviour of the outage probability is also studied for different parameters. Numerical results verify the analytical expressions of the proposed model and show that the proposed technique is better than the traditional direct transmission and cooperative jamming schemes.
pdf file
Secure Yoking Proof Protocol for RFID Systems
Saravanan Sundaresan and Robin Doss (Deakin University, Australia)
In this paper, we propose a secure yoking proof protocol for RFID passive tags based on zero knowledge. The protocols proposed earlier are either found to be vulnerable to certain attacks or do not comply with the EPC standard for passive tags because they use complex encryption schemes. Also, the unique design requirements of yoking/grouping proofs have not been fully addressed by many. Our protocol addresses these important security and design gaps in yoking proofs. The proposed protocol uses pseudo-random squares and quadratic residuosity to realize the zero-knowledge property. Tag operations are limited to functions such as modulo (MOD), exclusive-or (XOR) and 128-bit Pseudo Random Number Generators (PRNG). Passive tags are capable of these operations, and hence the protocol achieves EPC compliance while meeting the necessary security requirements.
pptx file
A Trust-Based Probing to Secure Data Routing
Thouraya Bouabana-Tebibel (Ecole Nationale Supérieure d'Informatique, Algeria); Nidal Tebibel and Selma Zemmouri (USTHB, Algeria)
Mobile ad hoc networks rely on trustworthy cooperation between nodes. Packet dropping is a classical weakness that may be caused either by misbehaving nodes or simply by faulty links. Isolating the nodes behind such effects is an efficient solution for decreasing packet loss. We propose, in this paper, a new secure protocol to monitor, detect, and safely isolate misbehaving nodes. The proposed solution is based on authentication and trust. First, it improves the probing approach with a new technique that allows detection of the nodes responsible for the dropping. Next, it enhances route security by applying trust-based mechanisms to the faulty nodes. The proposed protocol is analyzed using the NS-2 simulator.
pdf file
Cloud Workflow and Security: A Survey
Anupa J (National Institute of Technology, India); Chandra Sekaran K (National Institute of Technology Karnataka, India)
The cloud revolution has helped enterprises improve their business and performance by providing them computing power, storage capabilities and a variety of services for little or no infrastructure at reasonable cost. It also enables the scientific and academic communities to run complex applications involving large data sets, high performance or distributed resources. Workflow Management Systems (WfMSs) help enterprises automate their business processes and thus help management take critical decisions quickly. Cloud workflows combine the advantages of both cloud computing and WfMSs. In spite of the advantages of the cloud, security is a major area of concern. The use of WfMSs for critical and strategic applications, which is common in the business and scientific communities, gives rise to major concerns regarding threats against integrity, authorization, availability, etc. The concept of running secure workflow instances on public cloud processing platforms is still in its infancy. This paper gives an overview of workflow management systems, cloud computing, cloud workflows and security in these areas, and provides a survey of security mechanisms for WfMSs and cloud workflows.

S28-B: Security, Trust and Privacy-IIgo to top

Room: 007-A Block E Ground Floor
Chair: Michael Steiner (IBM T.J. Watson Research Center, USA)
Defense Against Packet Dropping Attacks in Opportunistic Networks
Asma'a Ahmad, Majeed Alajeely and Robin Doss (Deakin University, Australia)
Opportunistic networks (OppNets) are an interesting research area with a promising future. Many protocols have been developed to accommodate the features of OppNets such as frequent partitions, long delays, and the absence of an end-to-end path between the source and destination nodes. Embedding security into these protocols is challenging and has attracted much research attention. One of the attacks that OppNets are exposed to is the packet dropping attack, where a malicious node drops some packets and forwards an incomplete set of packets, resulting in the distortion of the message. To increase the security level in OppNets, this paper presents an algorithm developed to detect packet dropping attacks and to find the malicious node that mounted the attack. The algorithm detects the attack by using an indicative field in the header section of each packet; the indicative field has three sub-fields: the identification field, the flag field, and the offset field. These three fields are used to determine whether a node has received the complete, original number of packets from the previous node. The algorithm has the advantage of detecting packets dropped by each intermediate node, which helps overcome the difficulty of detecting malicious nodes at the destination node alone.
pptx file
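
The identification/flag/offset check the abstract describes can be sketched as a completeness test over received packets. The field semantics below follow IP-style fragmentation conventions and are an assumption, not the paper's exact encoding.

# Simplified completeness check using identification, flag and offset fields.
def received_complete(packets):
    """packets: list of dicts with 'ident', 'more' (flag), 'offset', 'length'.
    Returns True if every fragment of each message identifier is present."""
    by_ident = {}
    for p in packets:
        by_ident.setdefault(p["ident"], []).append(p)
    for frags in by_ident.values():
        frags.sort(key=lambda p: p["offset"])
        expected = 0
        for f in frags:
            if f["offset"] != expected:     # gap => a packet was dropped
                return False
            expected = f["offset"] + f["length"]
        if frags[-1]["more"]:               # last fragment still flagged
            return False
    return True

msg = [{"ident": 7, "more": True, "offset": 0, "length": 512},
       {"ident": 7, "more": False, "offset": 512, "length": 300}]
print(received_complete(msg))   # True: no packets dropped for ident 7
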
Smart Card Based Password Authentication and User Anonymity Scheme Using ECC and Stegnography
Vineeta Singh (Manipal University, India); Priyanka Dahiya (Manipal University Jaipur, India); Sanjay Singh (Manipal Institute of Technology, India)
In this information age, providing security over the Internet is a major issue. Internet security is all about trust at a remote distance, because we deal with everyone remotely and cannot confirm identity or authenticity in the traditional sense. To strengthen password authentication, Chun-Ta Li proposed a smart-card-based password authentication and update scheme that provides user anonymity and eviction of unauthorized users. In our research work we cryptanalyze Chun's scheme and show that it is vulnerable to various types of attacks: insider attack, offline password-verifier attack, stolen-verifier attack and impersonation attack. To overcome the security vulnerabilities of Chun's scheme, we propose an advanced scheme for password authentication and user anonymity using Elliptic Curve Cryptography (ECC) and steganography. The proposed scheme also provides privacy to the client. Based on performance criteria such as immunity to known attacks and functional features, we conclude that the proposed scheme is more efficient and resists several serious security threats.
Network Security Function Virtualization (NSFV) Towards Cloud Computing with NFV Over Openflow Infrastructure: Challenges and Novel Approaches
Laxmana Rao Battula (Freescale & Freescale, India)
Cloud computing is emerging to host different services on high-end compute nodes of data centers, driven by the increasing demands of social networking, video streaming, big data processing and other Internet applications. With this approach, resource sharing is achieved for compute, networking and storage to reduce OPEX and CAPEX. Network Function Virtualization (NFV) is a big part of cloud computing and has evolved from an operators' proposal for hosting network services as network functions implemented in software, which may be launched on virtual machines (VMs) dynamically based on demand, in the form of VNFaaS. With the elasticity and agility of vendor-independent network services coming up and going down in the cloud, the chaining of network services is affected in end-to-end (E2E) deployment. Software-defined networking (SDN) is a complementary technology to NFV for achieving a unified network abstraction that expands data-flow control in the form of a virtual switch, and SDN plays a major role in the NFV infrastructure. SDN makes networks more flexible, dynamic and cost-efficient, while greatly simplifying operational complexity. This paper's main focus is to define, review and evaluate architectural framework approaches for a scalable compute node with NFV and SDN, addressing various data-center challenges in the form of Network Security Function Virtualization (NSFV) over an OpenFlow infrastructure. Key challenges in the data center include network service provisioning and monitoring, E2E security, network function virtualization in multi-tenant environments, software-defined networking control for service chaining of virtual network security functions, and performance. The main aim of this paper is to provide a comprehensive global forum for researchers to present the challenges and novel approaches of ongoing research in the state-of-the-art areas of cloud computing architectures with NFV and SDN relevant to the data center.
pptx file
An Approach to Cryptographic Key Distribution Through Fingerprint Based Key Distribution Center
Subhas Barman (Govt. College of Engineering and Textile Technology, Berhampore, India); Samiran Chattopadhyay (Jadavpur University, India); Debasis Samanta (Indian Institute of Technology, Kharagpur, India)
In information and communication technology, the security of information is provided through cryptography, where key management is an important part of the whole system since security rests on the secrecy of the cryptographic key. Symmetric cryptography uses the same key (secret key) for message encryption and ciphertext decryption, and distribution of the secret key is its main challenge. In symmetric cryptography, a key distribution center (KDC) takes the responsibility of distributing the secret key between communicating parties to establish secure communication among them. In a traditional KDC, a unique key is used between communicating parties for the purpose of distributing session keys. In this respect, our proposed approach uses the fingerprint biometrics of the communicating parties to generate the unique key, and distributes the session key using the fingerprint-based key of the user. As the key is generated from the user's fingerprint, there is no scope for attacks that break the unique key. In this way, the unique key is associated with the biometric data of the communicating party, and the party does not need to remember the key. This approach converts the knowledge-based authentication of the KDC into biometric-based authentication. At the same time, our approach protects the privacy of the fingerprint identity, as the identity of the user is not disclosed even if the KDC is compromised.
pdf file
Provably Secure Peer-Link Establishment Protocol for Wireless Mesh Networks
Rakesh Matam (Indian Institute of Information Technology Guwahati, India); Somanath Tripathy (IIT Patna, India); Swathi Bhumireddy (Indian Institute of Technology Patna, India)
The existing peer-link establishment protocol documented in the IEEE 802.11s standard is not secure and is vulnerable to relay and wormhole attacks. To address this issue, an efficient technique using location information is proposed in this work. The security of the proposed technique is analysed using the simulation paradigm.
pdf file
X-ANOVA and X-Utest Features for Android Malware Analysis
Vinod P (Malaviya National Institute of Technology, India); Rincy Raphael (SCMS School of Engineering and Technology, India)
In this paper we propose a static analysis framework to classify Android malware. Three different features, namely (a) opcodes, (b) methods and (c) permissions, are extracted from each Android .apk file. The dominant attributes are aggregated by modifying two ranked-feature methods: ANOVA, extended to X-ANOVA, and the Mann-Whitney U-test, extended to X-Utest. These two statistical feature-ranking methods retrieve the significant features by removing irrelevant attributes based on their scores. The accuracy of the proposed system is computed using three different classifiers (J48, AdaBoost and Random Forest) as well as a voted classification technique. X-Utest exhibits better accuracy than X-ANOVA: the highest accuracy of 89.36% is obtained with opcodes when applying X-Utest, while X-ANOVA shows a high accuracy of 87.81% with methods as the feature. The permission-based model acquired the highest accuracy in the independent (90.47%) and voted (90.63%) classification models.
pdf file
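
The baseline ranking idea behind X-ANOVA and X-Utest can be sketched by scoring each feature with an ANOVA F-test and a Mann-Whitney U-test between malware and benign samples. The paper's "extended" variants modify these tests; this sketch, on synthetic stand-in data, shows only the standard versions.

# Statistical feature ranking with ANOVA and Mann-Whitney U scores.
import numpy as np
from scipy.stats import f_oneway, mannwhitneyu

rng = np.random.default_rng(1)
malware = rng.normal(1.0, 1.0, size=(30, 20))   # stand-in feature matrices
benign = rng.normal(0.0, 1.0, size=(30, 20))

scores = []
for j in range(malware.shape[1]):
    f_stat, _ = f_oneway(malware[:, j], benign[:, j])
    u_stat, _ = mannwhitneyu(malware[:, j], benign[:, j])
    scores.append((j, f_stat, u_stat))

top_by_anova = sorted(scores, key=lambda t: t[1], reverse=True)[:5]
print("top features by ANOVA F-score:", [j for j, _, _ in top_by_anova])
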

S29: Sensor Networks, MANETs and VANETs - Igo to top

Room: 007-B Block E Ground Floor
Chair: Foued Melakessou (University of Luxembourg, Luxembourg)
An Intelligent Medical Monitoring System Based on Sensors and Wireless Sensor Network
Hasna Boudra (University of Quebec at Montreal, Canada); Abdel Obaid (Université de Québec à Montréal, Canada); Anne Marie Amja (University of Quebec at Montreal, Canada)
Due to advances in wireless network technology, new applications are being designed in the medical and health care domain. The efficiency of medical personnel has increased through the use of these new tools and applications. Patient monitoring of young and elderly people, as well as smart houses equipped with sensors and information technologies, are subjects of discussion. Patients wear sensors that monitor vital signs reported in real time to their doctors, which improves the quality of health care and saves money. In our work, we have developed a prototype of medical monitoring for elderly people using several technologies such as REST, the Jess rule engine and Android, in order to obtain an application that remains flexible to future changes.
A Distributed Energy Efficient and Energy Balanced Routing Algorithm for Wireless Sensor Networks
Deepika Singh (Indian School of Mines, India); Pratyay Kuila (National Institute of Technology Sikkim, India); Prasanta Kumar Jana (Indian Institute of Technology(ISM) Dhanbad, India)
The main function of any wireless sensor network (WSN) is to route sensed data to the remote base station. However, the major bottleneck for such operation is the limited and irreplaceable power source of the sensor nodes. In order to minimize energy consumption during data routing, data packets are generally forwarded through a path that consumes minimum energy, which leads to uneven energy consumption among the sensor nodes and, as a result, partitions the network. Therefore, a routing algorithm should consider not only the energy consumption of the sensor nodes but also their energy balancing. In this paper, we present a distributed routing algorithm which takes care of both of these issues; moreover, the algorithm is shown to be fault tolerant. We use the energy density of the sensor nodes to balance energy consumption, and the distance between nodes to conserve energy. Experiments are performed on a diverse set of network scenarios, and the results show that the proposed work is superior to existing routing algorithms.
ppt file
Wireless Power Transfer Using Microstrip Antenna Assigned with Graded Index Lenses for WSN
Vikram Singh Chauhan (Mumbai University & V J T I, India); Asawari Kukde (Veermata Jijabai Technological Institute, Mumbai, India); Chirag Warty (Intelligent Communication lab & Director, Quantspire, India)
A Wireless Power Transfer (WPT) system is seen as an alternative to modern-day wired power transmission. In a long-range wireless power transmission system, power is transmitted through microwaves, and effective power transmission relies on a highly directive antenna system. This paper discusses a microstrip rectangular patch antenna incorporating a graded-refractive-index metamaterial lens. Using a microstrip antenna considerably reduces the size of the WPT system compared to other microwave antennas. In this paper the wireless power transmission system is modeled for a 2.4 GHz frequency. The proposed antenna with the metamaterial lens enhances the amount of power transferred to a wireless sensor network with less radiation loss.
pptx file
TDEC: Threshold-sensitive Deterministic Energy Efficient Clustering Protocol for Wireless Sensor Networks
Prabhleen Kaur (Punjab Institute of Technology, Kapurthala(Punjab Technical University Main Campus), India); Rajdeep Singh (PIT, Kapurthala (PTU Main Campus), India)
The deployment of wireless sensor networks has been increasing tremendously over the last few years due to their wide applicability in various fields, ranging from the engineering industry to home environment technology. A wireless sensor network consists of tiny, low-powered sensor nodes, and the network is alive only as long as the sensor nodes have energy. Efficient utilization of the sensor nodes' energy is a major factor in increasing network lifetime and stability period. We propose a threshold-sensitive deterministic energy-efficient clustering protocol which is reactive, self-adaptive and distributed. Cluster heads are selected based only on the residual-energy parameter, and data is transmitted only if the sensed parameter satisfies the hard and soft thresholds. We analyze and simulate our protocol in MATLAB for a simple temperature-sensing application. Our simulation analysis shows better results than existing protocols with respect to stability period and network lifetime in both homogeneous and heterogeneous environments.
pptx file
Use of Big Data Technology in Vehicular Ad-hoc Networks
Punam Bedi and Vinita Jindal (University of Delhi, India)
Big Data technology is becoming ubiquitous and is attracting keen attention from researchers in almost all areas. A VANET is a special form of MANET that uses vehicles as the nodes of a network. By applying Big Data technologies to Vehicular Ad-hoc Networks (VANETs), one can gain useful insight from the huge amount of operational data and improve traffic management processes such as planning, engineering and operations. VANETs access large volumes of data during real-time operation. In this paper we map VANET characteristics to the Big Data attributes stated in the literature. Further, we evaluate the performance of the Dijkstra algorithm used for routing in vehicular networks on a standalone Hadoop MapReduce framework as well as on multi-node clusters with 2, 3, 4 and 5 nodes respectively. The results obtained confirm that as the number of nodes in the Hadoop framework increases, the processing time of the algorithm is greatly reduced.
ppt file
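
For background, here is a hedged pure-Python sketch of how single-source shortest paths is typically expressed as iterated MapReduce rounds (a Bellman-Ford-style relaxation rather than classical Dijkstra, which is how such searches are usually ported to Hadoop). The function names and toy graph are ours, not the paper's.

# One MapReduce round of single-source shortest paths, iterated to a fixpoint.
INF = float("inf")

def map_phase(node, record):
    dist, edges = record
    yield node, ("NODE", dist, edges)          # re-emit graph structure
    if dist < INF:
        for neigh, w in edges:                 # relax outgoing edges
            yield neigh, ("DIST", dist + w, None)

def reduce_phase(node, values):
    best, edges = INF, []
    for kind, d, e in values:
        if kind == "NODE":
            edges = e
        best = min(best, d)
    return node, (best, edges)

graph = {"A": (0, [("B", 4), ("C", 1)]), "B": (INF, []), "C": (INF, [("B", 2)])}
for _ in range(len(graph) - 1):                # iterate until distances settle
    shuffled = {}
    for node, rec in graph.items():
        for k, v in map_phase(node, rec):
            shuffled.setdefault(k, []).append(v)
    graph = dict(reduce_phase(k, vs) for k, vs in shuffled.items())
print({n: d for n, (d, _) in graph.items()})   # {'A': 0, 'B': 3, 'C': 1}
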
An Enhanced Cluster-Head Selection Scheme for Distributed Heterogeneous Wireless Sensor Network
Divyanshu Gupta (JS Institute of Technology); Rajesh Verma (MNNIT, India)
Cluster-head selection is a critical and energy-constraining process in a wireless sensor network; it requires a significant amount of energy, affecting the performance and operation of the network. A heterogeneous wireless sensor network has the advantage of providing different types of data from a variety of sensors in the same network, but its complex network operations degrade performance. For enhanced performance of a wireless sensor network, improvements are needed in critical parameters such as energy efficiency, network lifetime, node deployment, fault tolerance and latency. The proposed cluster-head selection scheme deals with two-level heterogeneous wireless sensor networks. The improved cluster-head selection process results in lower energy consumption, which prolongs the network lifetime and stability.
An Effective Black Hole Attack Detection Mechanism Using Permutation Based Acknowledgement in MANET
Dhaval Dave (National Institute of Technology Warangal, India); Pranav Dave (Gujarat Technological University & LDRP Institute, India)
With the evolution of wireless technology and the use of mobile devices, the Mobile Ad-hoc Network has become a popular topic for researchers to explore. A mobile ad-hoc network (MANET) is a self-configuring network of mobile routers (and associated hosts) connected by wireless links. The routers and hosts are free to move randomly and organize themselves arbitrarily, allowing mobile nodes to communicate directly without any centralized coordinator. Such network scenarios cannot rely on centralized and organized connectivity. This makes MANETs vulnerable, owing to their dynamic network topology, as any node can become untrusted at any time. The black hole attack is one such security risk, in which a malicious node advertises itself as having the shortest path to any destination, in order to forge data or mount a DoS attack. In this paper, to detect such nodes effectively, we propose a permutation-based acknowledgement for the most widely used reactive protocol, Ad-hoc On-demand Distance Vector routing (AODV). This mechanism is an enhancement of Adaptive Acknowledgement (AACK) and TWO-ACK; we show the efficiency gain achieved by decreasing the number of messages routed in the network.
pptx file
Dynamic and Distributed Channel Congestion Control Strategy in VANET
Sulata Mitra (Indian Institute of Engineering Science and Technology, India); Atanu Mondal (Camellia Institute of Technology, Kolkata, India)
Congestion control is an important research issue to ensure safe and reliable vehicle to vehicle communication by using the limited resource available in vehicular ad hoc network. It supports the communication of safe and unsafe messages among vehicles. The present work controls channel congestion dynamically by reducing the rate of transmission of messages among vehicles. The transmission rate of messages is reduced by allowing only the authentic vehicles to utilize the available resources in the network, by revoking the attackers and also by controlling the channel load dynamically. The performance of the proposed channel congestion control strategy is studied with and without the revocation of attackers.
pptx file

S31: Mobile Computing and Wireless Communications-IIgo to top

Room: 108-B Block E First Floor
Chair: Bob Gill (British Columbia Institute of Technology, Canada)
Studies on the Suitability of LT Codes with Modified Degree Distribution (MDD) for Fading Channels
Joe Louis Paul I (SSN College of Engineering, India); S Radha (SSN College of Engineering); Raja J (Sri Sai Ram Engineering College)
The objective of this paper is to study and analyze the suitability of Luby Transform (LT) codes for fading channels such as Rayleigh and Rician channels. The performance of LT codes is governed by a well-defined degree distribution function used in the LT process. Hence, this paper presents a novel approach called Modified Degree Distribution (MDD) for improving the delay and Bit Error Rate (BER) performance of LT codes over fading channels. This work mainly discusses the significance of varying the proportion of degree-1 encoding symbols on the delay and BER performance of LT codes. Simulation results show a significant improvement in the performance of MDD-based LT codes over Rician channels compared to Rayleigh channels in terms of BER and encoding/decoding delay.
pdf file
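
The knob the paper studies, the proportion of degree-1 encoding symbols, can be illustrated by re-weighting a standard LT degree distribution. The sketch below starts from the ideal soliton distribution and fixes the degree-1 mass explicitly; the paper's actual MDD construction may differ in detail.

# An LT degree distribution with an adjustable degree-1 probability.
import random

def modified_degree_distribution(k, p1):
    """k: number of input symbols; p1: desired probability of degree 1."""
    ideal = [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
    rest = sum(ideal[1:])
    # Fix the degree-1 mass to p1 and renormalize the remaining degrees.
    return [p1] + [(1 - p1) * p / rest for p in ideal[1:]]

def sample_degree(probs):
    r, acc = random.random(), 0.0
    for d, p in enumerate(probs, start=1):
        acc += p
        if r <= acc:
            return d
    return len(probs)

probs = modified_degree_distribution(k=100, p1=0.05)
print("sampled encoding-symbol degrees:",
      [sample_degree(probs) for _ in range(8)])
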
An Energy Efficient Topology Control Scheme with Connectivity Learning in Wireless Networks
Jisha Shanavas (Amrita School of Engineering, India); Simi S (Amrita University, India)
In wireless networks, the network topology changes over time due to varying environmental and link characteristics. Distributed topology control of nodes in dynamic networks is a major factor that affects the connectivity and lifetime of the network. Nodes in wireless networks have limited resources, and topology control algorithms help to improve energy utilization, reduce interference between nodes and extend the lifetime of networks operating on battery power. This paper proposes a strategy for topology control and maintenance by learning the network link characteristics. The system learns the varying link characteristics using a reinforcement learning technique and gives an optimal choice of paths to be followed for packet forwarding. The algorithm calculates bounds on the number of neighbors per node, which helps to reduce power consumption and interference effects. The algorithm also ensures strong connectivity in the network, so that reachability between any two nodes is guaranteed. Analysis and simulation results illustrate the correctness and effectiveness of the proposed algorithm.
pdf file
Performance Dependence of Line-of-Sight Multiuser Multi-Input Multi-Output System on Allocated User Bandwidth in an Office Environment
Satinder Gill and Brent R. Petersen (University of New Brunswick, Canada)
In this paper, the line-of-sight (LOS) multiuser multi-input multi-output (MU-MIMO) system present in an office environment is considered. The main idea behind the proposed work is to demonstrate the effect of allocated user bandwidth on the performance of a bandlimited MU-MIMO system in an office environment. A hypothesis is proposed to estimate the average available MU-MIMO bandwidth of an office environment. The computer simulation results for the 2 ⨯ 2 LOS MU-MIMO scenario show great improvement in the performance of the MU-MIMO system under optimal user bandwidth allocation. Finally, the presented measurement results confirm the theoretical and simulated performance-improvement predictions for the MU-MIMO system.
zip file
Performance Boosting Approach of S-Random Interleaver for IDMA System Using Walsh Code
Sonam Sharma (G L A University Mathura, India); Paresh Chandra Sau (GLA University, India); Aasheesh Shukla (GLA University, India)
This paper presents an S-random interleaver based Interleave Division Multiple Access (IDMA) scheme. The S-random interleaver provides a good spreading parameter, which produces less-correlated extrinsic values and lowers the error floor, hence improving system performance. Since interleavers in IDMA are used to distinguish the signals of different users, S-random interleavers are employed in IDMA for user separation and also provide a significant improvement in BER performance compared to the conventional random interleaver. In this paper, the performance of the S-random and random interleavers for an iterative IDMA system is evaluated with BPSK modulation over a Rayleigh fading channel. Further, the system performance can be improved significantly by including an optimal spreading code such as the Walsh-Hadamard code. Simulation results are given in terms of the spreading parameter and BER performance.
pptx file
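
Classical S-random interleaver generation, which the paper builds on, accepts a random permutation only if each new index differs by more than S from each of the previous S accepted indices. Below is a rejection-sampling sketch with illustrative parameters; convergence is easy when S is at most about sqrt(n/2).

# S-random interleaver generation by constrained random selection.
import random

def s_random_interleaver(n, s, max_restarts=1000):
    for _ in range(max_restarts):
        pool, out = set(range(n)), []
        while pool:
            fits = [c for c in pool
                    if all(abs(c - p) > s for p in out[-s:])]
            if not fits:
                break                      # dead end: restart from scratch
            choice = random.choice(fits)
            out.append(choice)
            pool.remove(choice)
        if len(out) == n:
            return out
    raise RuntimeError("no S-random permutation found; reduce s")

perm = s_random_interleaver(n=64, s=4)
print(perm[:10])
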
Design of Multi Resonance Loop Shape Micro Strip Antenna for Ultra Wide Band Wireless Communication Applications
Kailas Kantilal Sawant (DIAT (DU), India); Suthikshn Kumar (DIAT, Deemed University, India); Sujit Dharmpatre (Pune University, Pune, India)
A novel multiresonance loop-shape microstrip antenna (MRLMSA) for ultra-wide-band wireless communication applications is presented. The radiating antenna consists of two loop-shaped ring resonators around a single monopole hexagonal patch and is fed through a stepped co-planar waveguide (CPW). The stepped feed improves the antenna's electrical characteristics at the centre and higher frequencies, and the stepped feed and ground formation also provide impedance matching with the radiating patch. The antenna is characterised over the ultra-wide-band frequency range from 3.1 GHz to 10.6 GHz. It is designed and simulated on an FR4_epoxy substrate of dimensions (L) x (W) = 72.25 mm x 51 mm, with dielectric constant εr = 4.4, relative permeability 1, thickness t = 1.53 mm, loss tangent tan δ = 0.002 and Landé g-factor 2. The size of the radiating patch is specified by length x width (L1) x (W1) with edge length (S1) for impedance matching, constant gain, steady radiation patterns and constant group delay over the UWB frequency range. The two loop rings around the monopole hexagon provide the multiresonance radiation characteristics; the speciality of the design and its dimensions is that, with two loop rings and a monopole patch, it gives a multiresonance response across the UWB range from 3.1 GHz to 10.6 GHz. An optimized MRLMSA is designed and simulated in Ansoft HFSS simulation software, and a comparison of results is also presented. This antenna is useful in UWB wireless communication systems, i.e., MIMO (Multi-Input Multi-Output) UWB systems for short-range, higher-power transmission with higher bandwidth requirements, and also in other wireless systems such as WLAN and WiMAX for notch (filter) applications.
pptx file
Analysis of Resonance-based Wireless Power Transmission Using Circuit Theory Approach
Asawari Kukde (Veermata Jijabai Technological Institute, Mumbai, India); Vikram Singh Chauhan (Mumbai University & V J T I, India); Chirag Warty (Intelligent Communication lab & Director, Quantspire, India)
Wireless Power Transmission (WPT) can provide solutions for power transfer in complex environments and topologically challenging locations. This technology is mainly limited by the range and efficiency of transmission; in particular, bulk power transfer using WPT is a challenging task. WPT using resonant coupling can provide an effective solution for power transfer to sensors and monitoring meters in temperature-variant environments, and to battery-powered static and mobile devices. However, its effects related to simultaneously charging multiple units, and its directional field pattern when introduced on both the transmitting and receiving sides of the system, are still to be studied. This system can be expanded to transmit power from a single transmitter to multiple receivers.
Performance Analysis of Host Based and Network Based IP Mobility Management Schemes Over IPv6 Network
Riaz Khan (National Institute of Technology Srinagar, India)
Mobile communication is growing very fast in order to meet today's needs and desires. Portable devices are proliferating rapidly, and people move from one place to another frequently, changing their attachment points to communication networks (Mobile IP based networks, Wireless Local Area Networks (WLAN) and Wireless Personal Area Networks (WPAN)). People carrying mobile gadgets want to remain connected to the network all the time and expect uninterrupted services. There are standardized mobility management protocols of two kinds: host based and network based. These protocols carry the mobility of a Mobile Node (MN) with minimum handover delay and provide secure connections (IPsec is inbuilt support in IPv6 enabled networks) with the MN's destination. In this paper we evaluate the performance of both host based and network based mobility approaches. Through simulation in Network Simulator 2 (NS2), different performance parameters were calculated for both categories of mobility schemes. We found that in some cases where host based mobility schemes are not suitable (as in Wireless Sensor Networks), network based mobility schemes provide fruitful and acceptable results.
pptx file
Handoff Performance Analysis of FNEMO and SINEMO for Vehicle-to-Infrastructure Communications
Palash Kundu (Jadavpur University, India)
One of the major goals of the IETF's NEtwork MObility Basic Support Protocol (NEMO BSP) is to support seamless and uninterrupted connectivity of mobile hosts using a specialized mobile router (MR) that directly connects the whole mobile network to the Internet. Seamless IP diversity based NEMO (SINEMO) outperforms NEMO BSP in terms of handoff latency and related packet loss by utilizing the advanced loss recovery mechanism and multi-homing feature of the Stream Control Transmission Protocol (SCTP). FMIPv6 was adopted to reduce the handoff latency and related packet loss of MIPv6; to improve the handoff performance of NEMO BSP, FMIPv6 based NEMO (FNEMO) has been proposed. In this paper, the handoff performance of the two modes of FNEMO, Predictive FNEMO (Pre-FNEMO) and Reactive FNEMO (Re-FNEMO), and of SINEMO is analytically compared based on handoff latency, handoff blocking probability and packet loss during handoff. The numerical results show that Pre-FNEMO outperforms SINEMO in terms of the above metrics, and Re-FNEMO also performs better than SINEMO in terms of packet loss during handoff.
pdf file, ppt file
PAPR Reduction in Wavelet Based SC-FDMA Using PTS Scheme for LTE Uplink Transmission
Ishu Singla (UIET, India)
SC-FDMA has become a promising technique for LTE uplink transmission. SC-FDMA is often referred to as DFT-spread OFDMA. The reason for using SC-FDMA for uplink transmission is its lower PAPR, which makes the system power efficient. Partial Transmit Sequence (PTS) is a technique used for reducing PAPR in different schemes. In this paper, wavelet based SC-FDMA is proposed and its PAPR performance is analysed using the PTS scheme. The analysis is carried out using different wavelets and different numbers of carriers, and shows that further PAPR reduction takes place in wavelet based SC-FDMA using the PTS scheme. Thus wavelet based SC-FDMA gives better PAPR performance than DFT based SC-FDMA.
pptx file
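A minimal Python sketch of the PTS idea, assuming QPSK symbols, interleaved sub-block partitioning and phase factors drawn from {±1, ±j}; the paper's wavelet transform stage is not modelled here.

    import numpy as np
    from itertools import product

    def papr_db(x):
        p = np.abs(x)**2
        return 10 * np.log10(p.max() / p.mean())

    def pts_papr(X, v=4, phases=(1, -1, 1j, -1j)):
        """Partial Transmit Sequences: split the frequency-domain symbol
        X into v interleaved sub-blocks, try all phase-factor
        combinations (first block fixed), keep the lowest PAPR."""
        n = len(X)
        blocks = np.zeros((v, n), dtype=complex)
        for i in range(v):
            blocks[i, i::v] = X[i::v]          # interleaved partitioning
        t = np.fft.ifft(blocks, axis=1)        # per-block time signals
        best = None
        for b in product(phases, repeat=v - 1):
            x = t[0] + sum(bi * ti for bi, ti in zip(b, t[1:]))
            cand = papr_db(x)
            best = cand if best is None else min(best, cand)
        return best

    rng = np.random.default_rng(1)
    X = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]), size=256)  # QPSK
    print("original:", round(float(papr_db(np.fft.ifft(X))), 2), "dB")
    print("with PTS:", round(float(pts_papr(X)), 2), "dB")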

S32: Computer Architecture and VLSI-II

Room: 105 Block E First Floor
Chair: Badri Patro (Indian Institute of Technology Bombay, India)
FPGA Implementation of Energy Efficient Multiplication Over GF(2m) for ECC
Ravi Kishore Kodali and Lakshmi Boppana (National Institute of Technology, Warangal, India)
Public key cryptography (PKC) is highly secure against threats compared to symmetric key cryptography (SKC). One of the PKC techniques, elliptic curve cryptography (ECC), has been gaining wider attention compared to the popular RSA due to its smaller key size requirement for a similar security level. This paper details the hardware implementation of modular multiplication over the binary field GF(2^m). Efficient scalar point multiplication is a crucial part of elliptic curve cryptography. A scalar point multiplication consists of point doubling and point addition operations, both of which inherently depend on addition, multiplication, squaring and inversion. Among these, inversion is the most time consuming operation, and the computation of the multiplicative inverse primarily consists of modular multiplication and modular squaring operations. This paper proposes an efficient scalar multiplication using the iterative Karatsuba-Ofman multiplication algorithm (KMA) over GF(2^m). The performance comparison is based on a Xilinx Virtex-6 FPGA implementation for the NIST recommended binary field.
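A minimal sketch of the Karatsuba-Ofman recursion over GF(2)[x], with polynomials packed into Python integers so that addition is simply XOR; the reduction modulo the NIST field polynomial, which the hardware would perform afterwards, is omitted.

    import random

    def gf2_mul_schoolbook(a, b):
        """Carry-less (GF(2)[x]) multiplication of bit-packed polynomials."""
        r = 0
        while b:
            if b & 1:
                r ^= a
            a <<= 1
            b >>= 1
        return r

    def karatsuba_gf2(a, b, bits=233):
        """Karatsuba-Ofman over GF(2)[x]: three half-size products
        replace four, and the middle combination is pure XOR."""
        if bits <= 32:
            return gf2_mul_schoolbook(a, b)
        h = bits // 2
        mask = (1 << h) - 1
        a0, a1 = a & mask, a >> h
        b0, b1 = b & mask, b >> h
        p0 = karatsuba_gf2(a0, b0, h)
        p2 = karatsuba_gf2(a1, b1, bits - h)
        p1 = karatsuba_gf2(a0 ^ a1, b0 ^ b1, bits - h)
        return p0 ^ ((p1 ^ p0 ^ p2) << h) ^ (p2 << (2 * h))

    a, b = random.getrandbits(233), random.getrandbits(233)   # B-233 size
    print("matches schoolbook:",
          karatsuba_gf2(a, b) == gf2_mul_schoolbook(a, b))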
FPGA Implementation of a BCH Codec for Free Space Optical Communication System
Shriharsha Koila, Goutham Simha G D and Muralidhar Kulkarni (National Institute of Technology Karnataka, India); Udupi Sripati (NITK, Surathkal, India)
Future Free Space Optical (FSO) communication systems have the potential of communicating data at very high rates with very high levels of integrity over distances of up to a few kilometers (for terrestrial links). This technology has also been a candidate for setting up very high speed (~3 Gbps) and highly reliable (BER ~ 10^-9) communication links between satellites in geo-synchronous orbits and ground stations. Since the free space optical medium can induce many forms of distortion (atmospheric turbulence effects, optical beam wander etc.), the use of a channel code to detect and correct errors during information transfer over the channel is essential. A correctly designed channel code can reduce the raw BER from unacceptable values to values that can be tolerated in many applications. In this paper, we have designed a codec (encoder/decoder) pair for a (31, 16, 3) Bose, Ray-Chaudhuri and Hocquenghem (BCH) code on the Nexys-4 FPGA platform. The performance of this BCH codec has been tested over an indoor FSO channel and the improvement in terms of BER has been quantified. An improved syndrome computation circuit, a parallel Chien search implementation and an improved method for calculating inverses in a finite field are the new features incorporated in this paper. We have been able to design and implement circuits which use this optimized approach and deliver real time encoding and decoding at an information transfer rate of 2 Mbps, extensible up to 418 Mbps.
zip file
A Composite Data Prefetcher Framework for Multilevel Caches
Harsh Arora (V. I. T University & Ex. Sr. Mgmt/Engineering -R&D Professional : Mentor Graphics Inc, Cadence Inc & Motorola Inc, India); Suvechhya Banerjee (V. I. T University, India); Davina V. (V. I. T University, India)
The expanding gap between processor speed and DRAM performance has led to an aggressive need to hide memory latency and reduce memory access time, since the processor otherwise remains stalled on memory references. Data prefetching is a technique that fetches the next instruction's data in parallel with the current instruction's execution in a typical processor-cache-DRAM system: a prefetcher anticipates a cache miss that might take place in the next instruction and fetches the data before the actual memory reference. The goal of prefetching is to eliminate as many cache misses as possible. In this paper we present a detailed summary of the different prefetching techniques, and implement a composite prefetcher prototype that combines sequential, stride and distance prefetching.
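As an illustration of the stride component alone (sequential and distance prefetching follow the same table-driven pattern), a toy PC-indexed stride prefetcher might look like this; the confidence threshold and table layout are assumptions, not the paper's design.

    from collections import defaultdict

    class StridePrefetcher:
        """PC-indexed stride prefetcher: remember the last address and
        stride per load instruction; after seeing the same stride twice,
        predict addr + stride."""
        def __init__(self):
            self.table = defaultdict(
                lambda: {"last": None, "stride": None, "conf": 0})

        def access(self, pc, addr):
            e, prefetch = self.table[pc], None
            if e["last"] is not None:
                stride = addr - e["last"]
                if stride == e["stride"] and stride != 0:
                    e["conf"] = min(e["conf"] + 1, 3)
                    if e["conf"] >= 2:          # confident: issue prefetch
                        prefetch = addr + stride
                else:
                    e["conf"] = 0
                e["stride"] = stride
            e["last"] = addr
            return prefetch

    pf = StridePrefetcher()
    for addr in range(0x1000, 0x1100, 16):      # strided array walk
        hint = pf.access(pc=0x400, addr=addr)
        if hint:
            print(hex(addr), "-> prefetch", hex(hint))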
FPGA Implementation of Stream Cipher Using Toeplitz Hash Function
Saptadeep Pal (Indian Institute of Technology, India); K K Soundra Pandian and Kailash Chandra Ray (Indian Institute of Technology Patna, India)
Hardware efficient stream ciphers and hash functions are widely used in cryptographic applications. The one-wayness and low hardware complexity of a hash function make it a good candidate for the authentication operation of crypto systems. On the other hand, stream ciphers such as RC4 are among the most popular primitives in cryptology, but they generally use a static key stream for the crypto process. The main motive of this work is to integrate hash function based key generation with the RC4 stream cipher block so as to provide a dynamic key to the encryption system, realizing a robust security hardware prototype. The proposed method is designed for a 5-bit hash key and stream cipher using Verilog HDL and simulated using the Xilinx ISE 14.2 simulator. Further, the design targets the commercially available Xilinx Spartan 3E fg320-4 FPGA device. The quality of the random key values generated by the proposed method is validated using the statistical tests proposed by the National Institute of Standards and Technology (NIST).
rar file
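A software sketch of the overall idea, dynamic key generation feeding RC4; here an HMAC over a per-message nonce stands in for the paper's Toeplitz hash, purely for illustration.

    import hmac, hashlib

    def rc4(key: bytes, data: bytes) -> bytes:
        """Plain RC4: key-scheduling (KSA) followed by the PRGA keystream."""
        s = list(range(256))
        j = 0
        for i in range(256):                    # KSA
            j = (j + s[i] + key[i % len(key)]) & 0xFF
            s[i], s[j] = s[j], s[i]
        out, i, j = bytearray(), 0, 0
        for byte in data:                       # PRGA
            i = (i + 1) & 0xFF
            j = (j + s[i]) & 0xFF
            s[i], s[j] = s[j], s[i]
            out.append(byte ^ s[(s[i] + s[j]) & 0xFF])
        return bytes(out)

    # dynamic session key from a keyed hash over a per-message nonce
    # (HMAC-SHA256 here is an illustrative stand-in for the Toeplitz hash)
    master, nonce = b"master-secret", b"message-0001"
    session_key = hmac.new(master, nonce, hashlib.sha256).digest()[:16]
    ct = rc4(session_key, b"hello sensor")
    assert rc4(session_key, ct) == b"hello sensor"   # RC4 is symmetric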
Evolution of Conventional Antilogarithmic Approach and Implementation in FPGA Through VHDL
Kousik Dan (NIT Calicut, India)
An antilog is the inverse function of a logarithm; in modern mathematics the term "antilog" has largely been replaced by "exponential". The binary logarithm is often used in computer science and information theory because it is closely connected to the binary numeral system, to the analysis of algorithms, to single-elimination tournaments, and so on. An efficient system should therefore perform the antilogarithm, like other operations, at high speed and low power consumption with minimal area requirements. In this paper, the calculation of the antilogarithm of a number with any base is proposed through four different approaches, where each approach is a modified version of the previous one; the modifications are made such that area, power and delay improve at each stage. FPGA implementation of each method is carried out so that the four methods can be compared on the basis of the simulation results; Xilinx version 13.2 is used. The VHDL implementation works in binary fixed point with base 2, but any other base can be handled by the same algorithm with some modifications, as explained later. Area, power, delay and error analyses are performed, and finally some possible optimization techniques are proposed for future work.
pptx file
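One common software analogue of such a fixed-point antilogarithm, splitting the input into integer and fractional parts and approximating 2^f with a short series, is sketched below; the Q16.16 format and the term count are illustrative choices, not the paper's.

    def antilog2_fixed(x_q16: int, terms: int = 4) -> int:
        """Fixed-point antilogarithm 2**x for Q16.16 input/output:
        split x into integer part k and fraction f, approximate
        2**f = e**(f ln 2) with a truncated Taylor series, shift by k."""
        ONE = 1 << 16
        LN2 = int(0.6931471805599453 * ONE)      # ln 2 in Q16.16
        k, f = x_q16 >> 16, x_q16 & (ONE - 1)
        y = f * LN2 >> 16                        # y = f*ln2 in Q16.16
        result, term = ONE, ONE
        for n in range(1, terms + 1):            # 1 + y + y^2/2! + ...
            term = (term * y >> 16) // n
            result += term
        return result << k if k >= 0 else result >> -k

    x = (3 << 16) + (1 << 15)                    # x = 3.5 in Q16.16
    print(antilog2_fixed(x) / 65536, "vs", 2 ** 3.5)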
Generic and Programmable Timing Generator for CCD Detectors
Parth Shah and Bhavesh Soni (Ganpat University, India); Mohammad Waris and Rajiv Kumaran (Space Application Centre, India); Sanjeev Mehta (Space Application Centre & ISRO, India); Arup Chowdhury (Space Application Centre, India)
Charge Coupled Device (CCD) detectors are frequently used in imaging payloads developed for satellite applications such as space based astronomy and earth observation. CCDs are preferred for onboard/satellite applications as they provide lower noise and higher dynamic range than CMOS detectors. CCDs are available in various architectures, so the design of a timing generator must be planned around the CCD requirements. This paper discusses a design methodology for a generic timing generator which is completely programmable and supports various CCD architectures. The aim of the design is to provide flexibility in the number of different types of clocks, the effective image area and the readout features with respect to various CCD architectures. The supported CCD architectures, overall clock requirements and required readout features are studied and a design architecture is worked out. The RTL design of the timing generator is done in VHDL and block level verification is done in Verilog. The design is targeted to a Xilinx Virtex-6 LX FPGA.
pptx file
FPGA Implementation of Dynamically Tunable Filters
Senthil Kumar E (Karunya University, India); Manikandan J (PES University (PESU), India); Agrawal VK (Director- CORI, PESIT, India)
Digital signal processing techniques are extensively used in a large number of applications such as communication and multimedia, and filtering is considered one of the basic elements needed for digital signal processing. This has motivated the design of digital filters for digital signal processors (DSPs) and Field Programmable Gate Array (FPGA) based system designs. The cut-off frequencies of these filters vary based on the requirements of the application. In this paper, an FPGA implementation of a dynamically tunable Finite Impulse Response (FIR) filter is proposed, wherein the cut-off frequency can be changed on-the-fly without any need to reprogram the FPGA. The proposed work covers high pass, low pass, band pass and band stop filters. The performance of the designed filters is evaluated for a direct form structure and an optimized structure using a Virtex-5 FPGA board.
pdf file
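In software terms, on-the-fly retuning amounts to recomputing the coefficient set while the filtering datapath stays fixed. A windowed-sinc low-pass sketch (illustrative tap count and window, not the paper's design) makes the idea concrete:

    import numpy as np

    def lowpass_taps(cutoff, fs, n_taps=63):
        """Windowed-sinc low-pass FIR design; recomputing this at run
        time is a software analogue of tuning the cut-off on the fly."""
        fc = cutoff / fs                          # normalised cut-off
        n = np.arange(n_taps) - (n_taps - 1) / 2
        h = 2 * fc * np.sinc(2 * fc * n)          # ideal low-pass response
        h *= np.hamming(n_taps)                   # window tames Gibbs ripple
        return h / h.sum()                        # unity DC gain

    fs = 8000.0
    t = np.arange(0, 0.05, 1 / fs)
    x = np.sin(2*np.pi*200*t) + np.sin(2*np.pi*3000*t)
    for cutoff in (500.0, 3500.0):                # retune without redesign
        y = np.convolve(x, lowpass_taps(cutoff, fs), mode="same")
        print(f"cutoff {cutoff:6.0f} Hz -> output RMS "
              f"{np.sqrt((y**2).mean()):.3f}")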
Memristor-Capacitor Based Startup Circuit for Voltage Reference Generators
Mangal Das (ABES Engineering College, India); Sonal Singhal (Shiv Nadar University, India)
This paper presents the design of a memristor-capacitor based startup circuit. The memristor is a novel device with many advantages over conventional CMOS devices, such as no leakage current and ease of manufacture. In this work the switching characteristics of the memristor are utilized. First, the theoretical equations describing the switching behavior of the memristor are derived. To prove the switching capabilities of the memristor, a startup circuit based on a series combination of a memristor and a capacitor is proposed. This circuit is compared with a reference circuit (which uses a resistor in place of the memristor) and with previously reported MOSFET based startup circuits. Simulation results show that the memristor based circuit switches from the on state (I = 2.25 mA) to the off state (I = 10 μA) in 2.8 ns, while the MOSFET based startup circuits take 55.56 ns to switch from on (I = 1 mA) to off (I = 10 μA). However, no significant difference in switching time was observed compared with the resistor based startup circuit; there the benefit comes in terms of area, because a much larger die area is required for manufacturing a resistor than for fabricating a memristor.
ppt file
Floating Point Coprocessor for Distributed Array Controllers
Himanshu Patel and B Raman (Indian Space Research Organisation (ISRO), India); Nilesh M. Desai (Space Applications Centre (ISRO), India)
This paper describes a novel architecture for an IEEE-754 compatible Floating Point Coprocessor (FPC) interfaced to an 8-bit microcontroller soft core for a distributed array controller ASIC. The FPC register bank is mapped as a dual port memory shared with the microcontroller to minimize data transfer overhead. The FPC contains a 256x32-bit LUT for storage of trigonometric or user defined functions. The LUTs and instruction memory are mapped as "stack" SFRs with the microcontroller, so they can be initialized by multiple "push" operations to a single Special Function Register (SFR). Space borne distributed array controller ASICs utilize 8-bit microcontroller cores due to their advantages in memory size, area and power consumption, but these cores are slow at floating point computations. The FPC enables real time floating point computation without the need for 32-bit microcontrollers. The FPC architecture is generic, so it can also be used for other applications with similar computational requirements. The FPC IP core has been implemented in VHDL and its performance has been compared for different cases. Simulation results show that the FPC gives a 40x improvement in run time for distributed control applications.
pdf file

S33: Pattern Recognition, Signal and Image Processing-II

Room: 104 Block E First Floor
Chair: Pascal Lorenz (University of Haute Alsace, France)
Automatic Knee Cartilage Segmentation and Visualization
Houda Bakir (Ecole National Superieur d'Ingenieur de Tunis, Tunisia); Jalel Zrida (Ecole Supérieure des Sciences et Techniques de Tunis, Tunisia)
In this paper we propose a fully automatic segmentation of the knee cartilage from magnetic resonance images (MRI). The new segmentation approach is based on a combination of Vector Field Convolution (VFC) image features and a radial search algorithm. The proposed approach provides an automatic segmentation and a 3D visualization of the knee cartilage.
ppt file
Rule Induction Based Object Tracking
Rahul Roy (Machine Intelligence Unit & Indian Statistical Institute, India); Ashish Ghosh (ISI Kolkata, India)
In this work, an object tracking method using a rule mining/induction technique is presented. Initially, a rule based classification algorithm is employed to classify the target frame into object and background. A sequential covering algorithm is used to extract the rules from the candidate frame. The extracted rules are then used for classifying the test samples obtained from the search region of the target frame. The classified test samples form a classification map, which is used for calculating the new centroid to locate the object in the target frame. Temporal coherence (between frames) is maintained by updating the rule set during the rule extraction phase. The efficiency of the proposed method is established both qualitatively and quantitatively by comparing it with some state-of-the-art algorithms.
pptx file
Efficient Pitch Detection Algorithms for Pitched Musical Instrument Sounds: A Comparative Performance Evaluation
Chetan Pratap Singh and Kishore T (NIT Warangal, India)
Pitch detection has been a popular research topic for a number of years. Pitch is one of the more important perceptual features, as it conveys much information about the sound signal; it is closely related to the physical feature of fundamental frequency f0, and for musical instrument sounds the f0 and the perceived pitch are practically equivalent. In this paper we propose four pitch detection algorithms for pitched musical instrument sounds. The goal of this paper is to investigate how these algorithms should be adapted to the analysis of pitched musical instrument sounds and to provide a comparative performance evaluation of the most representative state-of-the-art approaches. The study is carried out on a large database of pitched musical instrument sounds comprising four instrument types: violin, trumpet, guitar and flute. The algorithmic performance is assessed according to the ability to estimate the pitch contour accurately.
pptx file
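For concreteness, one classic detector of the kind such comparisons include, an autocorrelation-based estimator, can be sketched as follows (the search range and test tone are illustrative):

    import numpy as np

    def pitch_autocorr(x, fs, fmin=50.0, fmax=2000.0):
        """Estimate f0 as the lag maximising the autocorrelation within
        the plausible pitch range - one classic detector among many."""
        x = x - x.mean()
        r = np.correlate(x, x, mode="full")[len(x) - 1:]
        lo, hi = int(fs / fmax), int(fs / fmin)
        lag = lo + np.argmax(r[lo:hi])
        return fs / lag

    fs = 44100
    t = np.arange(0, 0.1, 1 / fs)
    tone = np.sin(2*np.pi*440*t) + 0.5*np.sin(2*np.pi*880*t)  # A4 + harmonic
    print(round(pitch_autocorr(tone, fs), 1), "Hz")           # ~441 Hz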
Design of a Synthetic ECG Signal Based on the Fourier Series
Jan Kubicek (VSB-Technical University of Ostrava & Faculty of Electrical Engineering and Computer Science, Czech Republic); Radana Kahankova (VSB-TU Ostrava, Czech Republic); Marek Penhaker (VSB - Technical University of Ostrava, Czech Republic)
The main objective of this work is to create a synthetic ECG signal in MATLAB based on Fourier series analysis. The individual elements of the ECG signal are approximated by a mathematical model, which is thoroughly described, explained and then applied. The output is a synthetic model of an ECG. Our approach to modeling biological signals allows the input parameters (amplitude and period of the significant ECG elements) to be changed. Synthetic models of biological signals can be used for demonstration purposes, but mainly serve as test material for detectors that measure and predict the lengths of ECG waves and intervals.
ppt file
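A Python analogue of the idea (the paper itself works in MATLAB): build one beat from parameterized raised-cosine waves, take its Fourier-series coefficients, and resynthesize from a truncated harmonic sum. All wave positions, widths and amplitudes below are assumed placeholders, not the paper's model values.

    import numpy as np

    def beat_template(t, T):
        """One ECG beat on [0, T): P, QRS complex and T wave modelled as
        raised-cosine bumps (centre, width, amplitude are parameters)."""
        waves = [(0.20*T, 0.10*T, 0.15),   # P
                 (0.37*T, 0.03*T, -0.15),  # Q
                 (0.40*T, 0.04*T, 1.00),   # R
                 (0.43*T, 0.03*T, -0.25),  # S
                 (0.65*T, 0.16*T, 0.30)]   # T
        y = np.zeros_like(t)
        for c, w, a in waves:
            m = np.abs(t - c) < w / 2
            y[m] += a * 0.5 * (1 + np.cos(2*np.pi*(t[m] - c)/w))
        return y

    T, N, K = 0.8, 4000, 120                  # period, samples, harmonics
    t = np.linspace(0, T, N, endpoint=False)
    x = beat_template(t, T)
    c = np.fft.rfft(x) / N                    # Fourier-series coefficients
    synth = c[0].real + sum(2*np.abs(c[i]) *
                            np.cos(2*np.pi*i*t/T + np.angle(c[i]))
                            for i in range(1, K))
    print("max reconstruction error:", float(np.max(np.abs(synth - x))))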
A Framework for Face Classification Under Pose Variations
Jagdish Sarode (University of Pune & Maharashtra Institute of Technology Pune, India); Alwin Anuse (Pune University, India)
Automatically verifying a person from a video frame or a digital image using a computer application is known as face recognition. Face appearance changes drastically with changes in facial pose, so recognition of faces under pose variations has proved to be a difficult problem. In this paper a model based approach is used, and moment based feature extraction techniques (Hu, Zernike and Legendre moments) are implemented on three different face databases containing different poses of the face. The paper proposes a new method called "Genetic Algorithm based Transfer Vectors" for generating the features of a frontal face from the features of different poses of the image; the generated frontal features are then matched with the actual frontal features. The paper also introduces a new unconstrained human face database called "My Unconstrained Database" (MUDB), which is inspired by IMFDB. Extracted features are classified by three different methods, a kNN classifier, LDA and Genetic Algorithm based Transfer Vectors, and the Correct Recognition Rate is calculated.
pdf file
A Radix-2 DIT FFT with Reduced Arithmetic Complexity
Shaik Qadeer (Muffkham Jah College of Engineering and Technology (MJCET), India); Mohammed Zafar Ali Khan (Indian Institute of Technology, Hyderabad, India); Syed Sattar (RITS, Hyderabad, India); Ahmed Ahmed (Muffakham Jah College of Engineering and Technology, India)
The efficient computation of the Discrete Fourier Transform (DFT) is an important issue, as it is used in almost all fields of engineering for signal processing. In this paper we present an alternate form of the radix-2 Fast Fourier Transform (FFT) based on decimation in time (DIT), discuss its implementation issues and derive its signal to quantization noise ratio (SQNR); the variant further reduces the number of multiplications of the power-of-two DFT without increasing the number of additions. This is achieved by simple scaling of the twiddle factors (TF) using a special scaling factor. The modification not only reduces the total flop count from 5N log2 N to ~(14/3)N log2 N (6.66% fewer than the standard radix-2 FFT algorithm) but also improves the SQNR from 1/(2N·2^(-2b)) to 9/(2N·2^(-2b)) (1.6 dB more than the standard radix-2 FFT algorithm).
pdf file
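For reference, the unmodified radix-2 DIT recursion that the paper starts from can be written compactly as follows; the authors' twiddle-factor scaling is not reproduced here.

    import numpy as np

    def fft_dit(x):
        """Textbook radix-2 decimation-in-time FFT (length must be a
        power of two); the paper's variant additionally rescales the
        twiddle factors to trade multiplications for a final scaling."""
        n = len(x)
        if n == 1:
            return x
        even, odd = fft_dit(x[0::2]), fft_dit(x[1::2])
        tw = np.exp(-2j * np.pi * np.arange(n // 2) / n)  # twiddle factors
        return np.concatenate([even + tw * odd, even - tw * odd])

    x = np.random.default_rng(0).standard_normal(64).astype(complex)
    print(np.allclose(fft_dit(x), np.fft.fft(x)))          # True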
Implementation of MFCC Based Hand Gesture Recognition on HOAP-2 Using WEBOTs Platform
Neha Singh, Neha Baranwal and Gora Nandi (Indian Institute of Information Technology, Allahabad, India)
Hand gestures are a primary means of communication and interaction for the hearing impaired. This paper proposes a computer vision based technique to identify hand gestures from a library of Indian Sign Language (ISL) gestures and the Sheffield Kinect Gesture (SKIG) dataset. Mel Frequency Cepstral Coefficients (MFCC) are used as the feature vector due to their high discriminating power between classes. A minimum distance classifier (Euclidean distance metric) is used for classification of different gestures of the same person under two different lighting conditions, yellow light and white light, as well as on the SKIG dataset. The performance of the proposed technique is evaluated on ten types of ISL gestures (5 dynamic and 5 static) and five types of SKIG Kinect gestures, and compared with existing techniques that have also been evaluated on the SKIG and ISL datasets. The performance analysis shows better results than the orientation histogram based technique. The ISL gestures are simulated on the HOAP-2 robot in the Webots platform to establish interaction between robot and human.
ppt file
Realization of Fractional Power Over Wideband in z Domain
Swati Tyagi (Netaji Subhas Institute of Technology, India); Dharmendra K. Upadhayay (University of Delhi, India)
In this paper, the modified s-to-z transformations of the VVGS and VVG-Al-SKG rules are expanded for fractional powers using continued fraction expansion (CFE). It is observed that at low frequencies the magnitude response is improved compared to the VVGS and VVG-Al-SKG operators. Generalized Al-Alaoui based half differentiators are also designed for 3rd and 4th order. MATLAB results are compared with the theoretical results of the continuous time ideal differentiator and other existing operators. An Al-Alaoui-Schneider operator based half-differentiator discretization has also been suggested for 3rd and 4th order. The results reveal that the half differentiator based on the Al-Alaoui-Schneider operator also performs well in the higher range of frequencies.
ppt file
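As a rough numerical check of what such CFE-based rational designs approximate, one can evaluate the fractional power of an s-to-z mapping directly on the unit circle and compare it against the ideal (jω)^0.5. The sketch below uses the Al-Alaoui operator s ≈ (8/(7T))(1 − z⁻¹)/(1 + z⁻¹/7) and an assumed sampling period; it is an illustration, not the paper's VVGS/VVG-Al-SKG construction.

    import numpy as np

    T = 0.01                                     # sampling period (assumed)
    w = np.linspace(0.1, np.pi / T, 400)         # frequencies up to Nyquist
    z = np.exp(1j * w * T)

    # Al-Alaoui operator: weighted mix of rectangular/trapezoidal rules
    s_op = (8 / (7 * T)) * (1 - 1 / z) / (1 + 1 / (7 * z))

    ideal = (1j * w) ** 0.5                      # ideal half differentiator
    approx = s_op ** 0.5                         # fractional power of mapping
    err_db = 20 * np.log10(np.abs(approx) / np.abs(ideal))
    print("magnitude error (dB): max", round(float(err_db.max()), 2),
          "min", round(float(err_db.min()), 2))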
Design and Implementation of Novel Image Segmentation and BLOB Detection Algorithm for Real-Time Video Surveillance Using DaVinci Processor
Badri Patro (Indian Institute of Technology Bombay, India)
A video surveillance system is primarily designed to track key objects, or people exhibiting suspicious behavior, as they move from one position to another, and to record this for possible future use. The critical parts of an object tracking algorithm are object segmentation, image cluster detection, and identification and tracking of these image clusters. The major roadblocks for a tracking algorithm arise from abrupt object shape changes, ambiguity in the number and size of objects, background and illumination changes, image noise, contour sliding, occlusions and real-time processing constraints. This paper explains a solution to the object tracking problem in three stages. In the first stage, a novel object segmentation and background subtraction algorithm is designed; it handles salt-and-pepper noise and changes in scene illumination. In the second stage, the problems of abrupt object shape, object size and object count are solved using image clusters detected and identified as BLOBs (Binary Large OBjects) in the image frame. In the third stage, a centroid based tracking method is designed to improve robustness with respect to occlusion and contour sliding. A variety of optimizations, both at algorithm level and code level, are applied to the video surveillance algorithm; the code level optimizations significantly reduce memory accesses and memory occupancy and improve execution speed. Object tracking happens in real time at 30 frames per second (fps) and is robust to occlusion, contour sliding, and background and illumination changes. Execution times for the different blocks of the object tracking algorithm were estimated, and the accuracy of the detection was verified using the debugger and profiler provided by the TI (Texas Instruments) Code Composer Studio (CCS). We demonstrate that this algorithm, with code and algorithm level optimization on TI's DaVinci multimedia processor (TMS320DM6437), provides at least a two times speedup over the unoptimized version and is able to track a moving object in real time.
pdf file
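A drastically simplified single-frame analogue of stages one and two (background subtraction with de-speckling, then BLOB labelling and centroid extraction) can be sketched with SciPy; the threshold, filter size and minimum area are illustrative assumptions.

    import numpy as np
    from scipy import ndimage

    def track_blobs(frame, background, thresh=30, min_area=20):
        """One tracking step: background subtraction, thresholding, a
        3x3 median filter for salt-and-pepper noise, then BLOB labelling
        and centroid extraction."""
        diff = np.abs(frame.astype(int) - background.astype(int)) > thresh
        diff = ndimage.median_filter(diff.astype(np.uint8),
                                     size=3).astype(bool)   # de-speckle
        labels, n = ndimage.label(diff)                     # BLOB detection
        return [ndimage.center_of_mass(diff, labels, i)
                for i in range(1, n + 1)
                if (labels == i).sum() >= min_area]

    bg = np.zeros((120, 160), dtype=np.uint8)
    frame = bg.copy()
    frame[40:60, 70:90] = 200                               # an "object"
    print(track_blobs(frame, bg))                           # ~[(49.5, 79.5)]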
ROC Analysis of Class Dependent and Class Independent Linear Discriminant Classifiers Using Frequency Domain Features
Swarna Kuchibhotla (Acharya Nagarjuna University, India); Hima Deepthi Vankayalapati, BhanuSree Yalamanchili and Koteswara Rao Anne (VRSEC, India)
Emotional speech recognition aims at classifying human emotional states, viz. happy, neutral, angry, sad etc. To classify these emotions we need to extract reliable acoustic features such as prosodic and spectral features. Time domain features are much less accurate than frequency domain features, so in this paper Mel Frequency Cepstral Coefficients (MFCC) are extracted from the Berlin emotional speech corpus and classified using Class Dependent and Class Independent Linear Discriminant Analysis (CD-LDA and CI-LDA). The results obtained show the performance variation of the classifiers with respect to the emotional states.
pptx file
To Study Non Linear Features in Circadian Heart Rate Variability Amongst Healthy Subjects
Kapil Tajane and Rahul Pitale (University of Pune, India); Leena Phadke (SKN Medical College, India); Aniruddha Joshi (National Chemical Lab, India); Jayant Umale (University of Pune, India)
The ECG signal is used for diagnosis of ailments of the heart, and HRV is used as a predictive and prognostic marker of autonomic dysfunction. The ANS is known to influence the heart, and any dysfunction of this system leads to cardiac disorders. The ANS has an endogenous circadian rhythm; circadian rhythms are responsible for the physical, mental and behavioral changes that follow a roughly 24-hour cycle. Previous studies have shown large inter- and intra-individual differences in HRV, which has led to difficulties in establishing standard norms. The aim of our study is therefore to establish a brief protocol for HRV analysis with which we can extract, from a shorter recording, features representative of the 24-hour fluctuations in HRV. In this paper we study different linear as well as non-linear techniques to analyze circadian HRV. 24-hour ECG recordings of 15 subjects, acquired using the Minimum Activity Protocol, were subjected to HRV analysis.
pptx file
Mixed Positioning System Guaranteeing the Continuity of Indoor/Outdoor Tracking
Hanen Kabaou and Pascal Lorenz (University of Haute Alsace, France); Sami Tabbane (Sup Telecom, Tunisia)
Global Positioning System (GPS) and other Global Navigation Satellite Systems (GNSS) are not always the best positioning systems, particularly in indoor environments. In these situations, other technologies already present in consumer devices are used, such as Wireless Fidelity (Wi-Fi). Wi-Fi positioning techniques include fingerprinting, in which the pattern of observations from multiple Wi-Fi transmitters is compared to previously mapped locations, and trilateration, in which received power is used as an indication of distance from each transmitter and a geometric calculation against known transmitter locations is used to locate the device. As a solution that optimizes both outdoor and indoor localization, the proposed system integrates three different systems, GPS, Wi-Fi and Simultaneous Localization And Mapping (SLAM), using a coefficient of confidence that qualifies the accuracy and quality of the positioning data inside the program. The idea comes from relay racing, where each runner passes the baton to the next runner of the same team; in our case, it is the transition from the outdoor positioning system to the indoor one. Our solution switches from one system to another without producing a sudden "jump". For this, we create a "unified environment" between the three systems.
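The trilateration step mentioned above reduces to a small least-squares problem once per-transmitter distances are available; a sketch with assumed access point positions and noisy distances:

    import numpy as np

    def trilaterate(anchors, dists):
        """Least-squares trilateration: linearise the circle equations
        by subtracting the first anchor's equation from the rest."""
        p0, d0 = anchors[0], dists[0]
        A = 2 * (anchors[1:] - p0)
        b = (d0**2 - dists[1:]**2
             + np.sum(anchors[1:]**2, axis=1) - np.sum(p0**2))
        pos, *_ = np.linalg.lstsq(A, b, rcond=None)
        return pos

    aps = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # AP positions
    true = np.array([3.0, 4.0])
    d = (np.linalg.norm(aps - true, axis=1)
         + np.random.default_rng(0).normal(0, 0.1, 3))      # noisy ranges
    print(trilaterate(aps, d))                              # ~[3, 4]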

S34: ISI-2014 - Intelligent Distributed Computing-I

Room: 210 Block E Second Floor
Chairs: Vikrant Bhateja (Shri Ramswaroop Memorial Group of Professional Colleges, Lucknow (UP), India), Shruti Kohli (Birla institute of Technology, India)
A Heuristic for Link Prediction in Online Social Network
Ajeet Panwar and Rajdeep Niyogi (Indian Institute of Technology Roorkee, India)
Due to advancements in technology, it is very easy to stay connected with others: people interact with each other and create, share and exchange information and ideas. Social networks have been one of the most attractive research areas in recent years, and link prediction is a key research problem within them. In our proposed method we study link prediction using a heuristic approach. Most previous papers considered only the network topology; they did not consider the properties of nodes individually, treating them merely as passive entities in a graph. In our proposed method we consider different node parameters that define the behavior of the nodes. One important group we consider is new researchers, because they particularly benefit from help in identifying potential collaborators; thus our focus is also on new researchers. We also provide a quantitative analysis of the performance of the different existing methods and study some domain specific heuristics that improve the quality of prediction.
pptx file
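For context, the purely topological baselines the paper contrasts itself with (common neighbours, Adamic-Adar, preferential attachment) are easy to state; the node-attribute weighting the authors add on top is not modelled here.

    import math
    import networkx as nx

    def scores(g, u, v):
        """Three standard topology-based link-prediction scores for a
        candidate pair (u, v)."""
        cn = set(g[u]) & set(g[v])
        common = len(cn)
        adamic_adar = sum(1 / math.log(g.degree(w))
                          for w in cn if g.degree(w) > 1)
        pref_attach = g.degree(u) * g.degree(v)
        return common, adamic_adar, pref_attach

    g = nx.karate_club_graph()
    print(scores(g, 0, 33))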
A Result Verification Scheme for MapReduce Having Untrusted Participants
Gaurav Pareek (National Institute of Technology, Goa, India); Chetanya Goyal (Bipin Tripathi Kumaon Institute of Technology & Uttarakhand Technical University, India); Mukesh Nayal (Bipin Tripathi Kumaon Institute of Technology & Uttarakhand Technical University)
The MapReduce framework is a widely accepted solution for performing data intensive computations efficiently. The master node prepares the input to be distributed among multiple mappers, which distribute the reduced tasks to the reducers; the reducers then perform an identical set of computations on the reduced data independently. If any one of the reducers works maliciously and does not produce the results desired by the end user, a significant error in the final output can be observed. MapReduce does not provide any mechanism to detect such lazy cheating attacks by a computation provider. In this paper, we propose a generalized defense against this type of attack on statistical computations. The solution does not involve redundant computations on the data to prove a worker malicious. Implementation results on Hadoop show the detection rate of such cheating behavior by the proposed scheme, and the accompanying theoretical analysis proves that the solution does not noticeably affect the timeliness and accuracy of the original service.
pptx file
A Survey on Reduction of Load on the Network
Rajender Nath (Kurukshetra University, India); Naresh Kumar and Sneha Tuteja (GGSIPU, India)
This paper examines the ever increasing load on networks, caused largely by web crawlers and inefficient search mechanisms. Although a great deal of research has already been done on the problem, no feasible solution has been found yet. By surveying previous research, the paper tries to identify the remaining gaps so that a more practicable and workable approach to lessening the load on the network can be developed.
rar file
Classification Mechanism for IoT Devices Towards Creating a Security Framework
VJ Jincy and Sudharsan Sundararajan (Amrita Vishwa Vidyapeetham, India)
IoT systems and devices are being used for various applications, ranging from households to large industries, on a very large scale. The design of complex systems comprising different IoT devices involves meeting the security requirements of the whole system. Creating a general security framework for such interconnected systems is a challenging task, and currently we do not have standard mechanisms for securing them. The first step towards developing such a framework is to build a classification mechanism which can identify the security capabilities or parameters of the different entities comprising an IoT system. In this paper we describe one such mechanism, which takes user input to classify the different components of a complex system and thereby determines their capability to support security mechanisms of different degrees. This in turn enables designers to decide what kind of security protocols they need to adopt to achieve end-to-end security for the whole system.
pdf file
Customization of Recommendation System Using Collaborative Filtering Algorithm on Cloud Using Mahout
Senthilkumar Thangavel (Amrita School of Engineering, India)
A recommendation system helps people in decision making regarding an item or person. The growth of the World Wide Web and e-commerce has been the catalyst for recommendation systems. Due to the large size of the data, recommendation systems suffer from scalability problems, and Hadoop is one of the solutions to this problem. Collaborative filtering is a machine learning algorithm, and Mahout is an open source Java library which supports collaborative filtering in a Hadoop environment. The paper discusses how a recommendation system using collaborative filtering can be built in the Mahout environment, and presents the performance of the approach in terms of speedup and efficiency.
Dynamic Job Scheduling Using Ant Colony Optimization for Mobile Cloud Computing
V. Vaithiyanathan (SASTRA University, India); Rathnakar Achary, Sr. (Alliance Business Academy, India); S. Nagarajan (SASTRA University, India)
Cloud computing has been considered one of the important computing paradigms. Its main purpose is to provide software as a service (SaaS), platforms to run applications (PaaS), infrastructure (IaaS) and networks (NaaS). There is no doubting the incredible impact that mobile technologies have had on both scientific and commercial applications: employees prefer to use smart phones not just for communication or entertainment, but also to access their company's key applications. The integration of the emerging cloud computing concept and mobile communication services is known as Mobile Cloud Computing (MCC). A prominent challenge in using mobile devices with the mobile cloud [1] is the resource constraints of these handheld devices: compared to desktop computers they have smaller screens, less memory, lower processing capacity and limited battery backup. Due to these resource limitations, most of the processing and data handling is carried out in the cloud, known as the SaaS cloud, and the smart phones access cloud resources through a browser. The performance of this mobile cloud is impaired by time varying characteristics of the wireless channel such as latency, jitter and bandwidth. In this research we propose a modified task scheduling mechanism based on Ant Colony Optimization (ACO) to address the performance issues of mobile devices [5] used in a cloud environment with Hadoop; the existing task scheduling in the MCC model uses the built-in FIFO algorithm, which becomes a bottleneck for large numbers of tasks. The proposed Ant Colony Optimization algorithm improves the task scheduling process by scheduling tasks dynamically, improving the throughput and quality of service (QoS) of MCC.
ppt file
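A toy version of ACO applied to task-to-machine scheduling (minimizing makespan) is sketched below; the pheromone update rule, the least-loaded heuristic and all parameters are illustrative choices rather than the paper's exact design.

    import random

    def aco_schedule(tasks, n_machines, ants=20, iters=50, rho=0.1, q=1.0):
        """Pheromone tau[t][m] biases assigning task t to machine m; the
        heuristic favours the currently least-loaded machine; the best
        schedule found so far deposits pheromone each iteration."""
        tau = [[1.0] * n_machines for _ in tasks]
        best, best_span = None, float("inf")
        for _ in range(iters):
            for _ in range(ants):
                load, assign = [0.0] * n_machines, []
                for t, cost in enumerate(tasks):
                    w = [tau[t][m] / (1.0 + load[m])
                         for m in range(n_machines)]
                    m = random.choices(range(n_machines), weights=w)[0]
                    assign.append(m)
                    load[m] += cost
                span = max(load)
                if span < best_span:
                    best, best_span = assign, span
            for t, m in enumerate(best):        # evaporate + reinforce
                for mm in range(n_machines):
                    tau[t][mm] *= (1 - rho)
                tau[t][m] += q / best_span
        return best, best_span

    print(aco_schedule([5, 3, 8, 2, 7, 4, 6, 1], 3))  # makespan near 12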
Computer Network Optimization Using Topology Modification
Archana Khedkar (University of Pune, India)
Computer network optimization is vital for reducing the cost of networks and achieving efficiency, robustness and uniform distribution of traffic. For a network of computers, optimization targets various aspects such as the cost of data transfer, maximum data transfer per unit time, capacity utilisation and uniform traffic distribution. One of the important aspects of computer networks is the network topology, represented using graph theoretic concepts; graph theory provides a strong mathematical framework for optimization of the topology. In this paper, the network topology is optimised for uniform node degree distribution. Uniform degree distribution is achieved purely by redistribution of links, without any deletion or addition of links, to ensure that traffic is uniformly distributed throughout the network and every node is fully utilised without much deviation from the average network traffic load. A uniform degree distribution helps avoid congestion, distributes the traffic load and helps utilise the network to its fullest capacity. Owing to this importance, the computer network topology is optimised based on node degree distribution.
pptx file
Dir-DREAM: Geographical Routing Protocol for FSO MANET
Savitri Devi and Anil Sarje (Indian Institute of Technology Roorkee, India)
Wireless networks form an important part of communication. MANETs (Mobile Ad-hoc NETworks) are a much discussed field because of their abundant applications. The majority of MANETs currently work in the RF spectrum, but the rise of multimedia applications and smart handheld devices has led to demand for higher bandwidth, and hence to research into alternative communication technologies such as Free Space Optics (FSO). In this paper we provide a solution for MANET routing that uses knowledge of node locations and the directional transmission capability of FSO. The paper develops and simulates a geographical routing protocol that we call Dir-DREAM (Directional Distance Routing Effect Algorithm for Mobility) for a mobile ad-hoc network in which nodes have multiple FSO antennas. The proposed protocol uses node location information and past information about the interfaces over which packets from nodes were received for routing. We also perform ns-2 simulations to compare the performance of Dir-DREAM with FSO interfaces against DREAM over an RF interface. Our proposed protocol performs well with multiple FSO interfaces, increases the data packet delivery ratio and decreases end to end delay, and is observed to perform well for varying node speed ranges.
pptx file
Data Owner Centric Approach to Ensure Data Protection in Cloud Environment
Kanupriya Dhawan (Punjab Technical University Jalandhar, India); Meenakshi Sharma (Punjab Technical Univerisity Jalandhar, India)
In the last few years, the latest trend in computing has been cloud computing. The cloud has brought remarkable advancement to individuals as well as the IT sector, but still many organisations lag behind in using cloud services; the major issues affecting them relate to data protection in the cloud and the fear of sensitive data leakage by intruders. To solve this problem, a data protection model has been designed so that the data owner feels free to use cloud services. The proposed model is highly secure and the data remains under the control of the data owner itself. Re-encryption, HMAC and identity based user authentication techniques are used in this model to make it more effective and attractive for real world use.
pptx file
Enhancement of Data Level Security in MongoDB
Shiju Sathyadevan and Nandini Muraleedharan (Amrita Vishwa Vidyapeetham, India); Sreeranga P Rajan (Fujitsu Laboratories of America, USA)
Recent developments in information and web technologies have resulted in a huge data outburst. This has posed challenging demands in efficiently storing and managing large volumes of structured and unstructured data. Traditional relational models exposed their weaknesses to such an extent that the need for new data storage and management techniques became highly desirable, resulting in the birth of NoSQL databases. Several businesses that churn out large volumes of data have successfully used NoSQL databases to store the bulk of their data. Since the prime objective of such databases was efficient data storage and retrieval, core security features like data security techniques and proper authentication mechanisms were given the least priority. MongoDB is among the most popular NoSQL databases: a document oriented NoSQL database which helps empower businesses to be more agile and scalable. As MongoDB gains popularity in the IT market, more and more sensitive information is being stored in it, so security issues are becoming a major concern; it does not guarantee the privacy of the information stored in it. This paper is about enabling security features in MongoDB for safe storage of sensitive information through the "MongoKAuth" driver, a new MongoDB client side component developed to automate many manual configuration steps.
Enhancing the Security of Dynamic Source Routing Protocol Using Energy Aware and Distributed Trust Mechanism in Mobile Ad Hoc Networks
Deepika Kukreja (University School of Information and Technology & Netaji Subhas Institute of Technology, India); Sanjay Kumar Dhurandher (Netaji Subhas Institute of Technology, India); B Reddy (GGSIPU, India)
A routing protocol for the detection of malicious nodes and the selection of the most reliable, secure, trustworthy and close to shortest path for routing data packets in Mobile Ad Hoc Networks (MANETs) is introduced. The Dynamic Source Routing (DSR) protocol is extended and termed Energy Efficient Secure Dynamic Source Routing (EESDSR). The protocol is based on an efficient, power aware and distributed trust model that enhances the security of DSR. The model identifies nodes exhibiting malicious behaviors such as gray hole behavior, malicious topology change behavior, dropping data packets, dropping control packets and modifying packets. The monitoring mechanism is suitable for MANETs as it focuses on power saving, has a distributed nature and adapts to dynamic network topology. The new routing protocol is evaluated using Network Simulator 2 (NS2). Through extensive simulations, it has been shown that the EESDSR protocol performs better than the standard DSR protocol.
ppt file
Evaluating Travel Websites Using WebQual: A Group Decision Support Approach
Oshin Anand (IIM, Rohtak, India); Abhineet Mittal (Indian Institute of Management Rohtak, India); Kanta Moolchandani (Indian Institute of Management, Rohtak, India); Munezasultana Kagzi (Indian Institute of Management Rohtak, India); Arpan K Kar (Indian Institute of Technology Delhi, India)
Increased internet penetration and the travel industry's growing use of online facilities have flooded the online arena with websites, creating the need to find the best among the options and the factors that determine it. This article explores which travel website performs best in the Indian context, based on the evaluation parameters highlighted by WebQual(TM). The work highlights both the leading contributors to website reuse and their inter-relationships. The analysis is done using a fuzzy extension of the Analytic Hierarchy Process for group decision making.
pptx file
Extending Lifetime of Wireless Sensor Network Using Cellular Automata
Manisha Sunil Bhende (University of Pune, India); Sanjeev Wagh (University of Pune & KJCOEMR, India)
The focus of this paper is the use of cellular automata to simulate a series of topology control algorithms in wireless sensor networks under various environments. We introduce the use of cellular automata in wireless sensor networks for topology control. In such networks, sensor nodes are deployed redundantly in the same area, and due to this redundant deployment many nodes remain in their active state simultaneously. This drains the global energy of the network and reduces its lifetime. The main purpose of the topology control algorithms is therefore to reduce the initial topology of the wireless sensor network, avoiding interference and extending the lifetime of the network.
pptx file

S35: ISI-2014 - Intelligent Distributed Computing-II

Room: 211 Block E Second Floor
Chairs: Ranjan Das (Indian Institute of Technology Ropar, India), Senthilkumar Thangavel (Amrita School of Engineering, India)
Hash Based Incremental Optimistic Concurrency Control Algorithm in Distributed Databases
Dharavath Ramesh (Department of Computer Science & Engineering, Indian School of Mines (ISM), Dhanbad, India); Harshit Gupta, Kuljeet Singh and Kumar Chiranjeev (Indian School of Mines, India)
In this paper, we present a methodology for concurrency control that deals with anomalies while assuring the reliability of the data both before read-write transactions and after they successfully commit. The method is based on calculating a hash value of the data and comparing the current hash value with the previous hash value every time before a write operation takes place. We show that this method avoids inefficiencies like unnecessary restarts and improves performance. This work addresses the need for an adaptive optimistic concurrency control method in distributed databases: a new hash based optimistic concurrency control (HBOCC) approach is presented which is expected to produce reliable results, and its performance is compared with existing approaches.
ppt file
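The core validation step can be sketched in a few lines: a writer commits only if the record's current hash still matches the hash observed at read time. Everything else here (SHA-256, the in-memory store) is an illustrative assumption, not the paper's implementation.

    import hashlib

    class HashOCC:
        """Hash-based optimistic concurrency control sketch: a hash
        mismatch at write time means a concurrent commit happened, so
        the transaction restarts instead of overwriting."""
        def __init__(self):
            self.store = {}                       # key -> value

        @staticmethod
        def _h(value) -> str:
            return hashlib.sha256(repr(value).encode()).hexdigest()

        def read(self, key):
            v = self.store.get(key)
            return v, self._h(v)                  # value + version hash

        def write(self, key, new_value, read_hash) -> bool:
            if self._h(self.store.get(key)) != read_hash:
                return False                      # conflict: retry
            self.store[key] = new_value
            return True

    db = HashOCC()
    db.store["x"] = 10
    v, h = db.read("x")
    db.store["x"] = 99                            # concurrent writer sneaks in
    print(db.write("x", v + 1, h))                # False -> restart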
Hybrid Genetic Fuzzy Rule Based Inference Engine to Detect Intrusion in Networks
Kriti Chadha and Sushma Jain (Thapar University, India)
With the drastic increase in internet usage, various categories of attacks have also evolved. Conventional intrusion detection techniques fail to counter these attacks, so substantial systems are needed to eliminate them before they inflict huge damage. Given the ability of computational intelligence systems to adapt, to exhibit fault tolerance and high computational speed, and to remain resilient against noisy information, a hybrid genetic fuzzy rule based inference engine has been designed in this paper. Fuzzy logic constructs precise and flexible patterns, while the genetic algorithm, based on evolutionary computation, helps in attaining an optimal solution; their collaboration increases the robustness of the intrusion detection system. The proposed network intrusion detection system is able to classify normal behavior as well as anomalies in the network. A detailed analysis has been done on the DARPA-KDD99 dataset to specify the behavior of each connection.
pptx file
Localization in Wireless Sensor Networks with Ranging Error
Puneet Gour and Anil Sarje (Indian Institute of Technology Roorkee, India)
In wireless sensor networks (WSNs), localization is very important because many applications depend on the location of the sensor nodes. Among the various WSN localization techniques, ranging methods based on received signal strength (RSS) are the most popular because of their simplicity and lack of additional hardware requirements. However, RSS ranging suffers under various environmental conditions and can give an erroneous range for positioning a node, so it is necessary to deal with ranging error efficiently. In order to find the position of a node efficiently in the presence of RSS ranging error, we introduce a novel localization technique in this paper. To apply our positioning concept, our method selects the three most suitable reference nodes according to RSS and the geometry of the reference nodes. We compare our simulation results with localization with dynamic circle expanding mechanism (LoDCE), and they clearly show that our method outperforms it.
pptx file
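For context, RSS ranging typically inverts a log-distance path-loss model, and the sketch below shows how shadowing noise translates directly into ranging error; the path-loss exponent and reference power are assumed calibration values.

    import numpy as np

    def rss_to_distance(rss_dbm, p0_dbm=-40.0, d0=1.0, n=2.7):
        """Log-distance path-loss inversion: RSS = P0 - 10 n log10(d/d0),
        so d = d0 * 10**((P0 - RSS) / (10 n)). P0 and n are assumed,
        environment-dependent calibration constants."""
        return d0 * 10 ** ((p0_dbm - rss_dbm) / (10 * n))

    rng = np.random.default_rng(2)
    true_d = 8.0
    rss = -40.0 - 10 * 2.7 * np.log10(true_d) \
          + rng.normal(0, 2.0, 5)                 # shadowing noise
    est = rss_to_distance(rss)
    print("estimates:", np.round(est, 2), "mean:", round(float(est.mean()), 2))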
Location-based Mutual and Mobile Information Navigation System: Lemmings
Simon Fong and Renfei Luo (University of Macau, Macao); Suash Deb (Cambridge Institute of Technology, India); Sabu Thampi (Indian Institute of Information Technology & Management, India)
Location-based Mutual and Mobile Information Navigation System (LEMMINGS) - Real-time Collaborative Recommender
Mobile Sensor Localization Under Wormhole Attacks: An Analysis
Gaurav Pareek (National Institute of Technology, Goa, India); Ratna Kumari (JNTU, India); Aitha Nagaraju (CURAJ, India)
In many application contexts, the nodes in a sensor network may be required to gather information relevant to their locations. This process of location estimation, or localization, is a critical aspect of all location related applications of sensor networks: it helps nodes find their absolute position coordinates. Like possibly every system, localization systems are prone to attacks. Through this study we carry out a low-level identification and analysis of a broad, large-scale threat to mobile sensor localization systems. In this paper, we study the behaviour of some well-known basic localization schemes under by far the most dangerous attacks on localization, wormhole attacks. The network and attacker models assumed in the paper are chosen so that the analysis reveals the possibility of a resilient solution to the wormhole attack problem that is independent of the nodes not under the effect of attacks.
pdf file
Neural Network Based Early Warning System for an Emerging Blackout in Smart Grid Power Networks
Sudha Gupta (University of Mumbai, India); Faruk Kazi (VJTI-Mumbai University, India); Sushma Wagh (VJTI Mumbai University, India); Ruta Kambli (Mumbai University, India)
Worldwide power blackouts have attracted great attention from researchers towards early warning techniques for cascading failures in power grids. The key issue is how to analyse, predict and control cascading failures in advance and protect the system against emerging blackouts. This paper proposes a model which analyses the power flow of the grid and predicts cascading failures in advance by integrating an Artificial Neural Network (ANN) machine learning tool. The key contribution of this paper is to introduce machine learning into an early warning system for cascading failure analysis and prediction. The integration of power flow analysis with an ANN has the potential to make the present system more reliable and to protect the grid against blackouts. An IEEE 30 bus test bed system has been modeled in PowerWorld and used in this paper for preparing historical blackout data and validating the proposed model. The proposed model is a step towards realizing the smart grid via an intelligent ANN prediction technique.
pdf file
OwlsGov: An OWL-S Based Framework for E-Government Services
Hind Lamharhar and Laila Benhlima (Mohammed V-Agdal University, Mohammadia School of Engineering, Morocco); Dalila Chiadmi (Mohammed V-Agdal University, Mohammadia School of Engineering)
The development of e-government services has become a big challenge for many countries of the world. In a distributed environment such as e-government, many interactions take place between heterogeneous systems; a system that enables developing, integrating, discovering and executing these services is therefore necessary. For this purpose, we present in this paper an approach for efficiently developing e-government services based on semantic web services (SWS) technology and Multi-Agent Systems (MAS). The SWS enrich web services with semantic information (meaning) to facilitate the discovery, integration, composition and execution of services, while the MAS provide an environment in which public administrations can publish their services, users (e.g. citizens) can express their needs, and services can be discovered. We present our framework for the semantic description of e-government services based on SWS, and on the OWL-S framework in particular, as well as the architecture of a MAS which improves the dynamic usage of e-government services, such as their integration and discovery.
pptx file
Predictive Rule Discovery for Network Intrusion Detection
Kanubhai Patel (Charotar University of Science and Technology (CHARUSAT), India); Bharat Buddhadev (Malaviya National Institute of Technology, India)
A good number of rule based intrusion detection systems (IDS) are widely available to identify computer network attacks. The problem with these IDS is that domain experts are required to constitute the rules manually; the rules are not learned by these IDSs automatically. This paper presents a novel technique for predictive rule discovery using a genetic algorithm (GA) approach to create the rule base for an IDS. The motivation for applying a GA to rule discovery is that it is a robust and adaptive search technique that performs a global search of the solution domain. The KDD Cup 99 training and testing datasets were used to generate and test rules. We obtained a 98.7% detection rate during testing for various types of attacks.
rar file, pptx file
P-Skip Graph: An Efficient Data Structure for Peer-to-Peer Network
Shalini Batra (Thapar University, India); Amrinderpreet Singh (Samsung Engineering Lab (R & D), Noida, India)
Peer-to-peer networks display interesting characteristics such as fast queries, updates, deletions and fault tolerance, while lacking any central authority. Adjacency matrices, skip-webs, skip-nets, skip lists, distributed hash tables and many more data structures are candidates for peer-to-peer networks. Of these, the skip graph (an evolved version of the skip list) has some of the best characteristics, as it helps to search for and locate a node in a peer-to-peer network efficiently, with O(log n) time complexity. However, when a hotspot node is searched and queried again and again, the skip graph does not learn or adapt to the situation and still performs the traditional search with O(log n) complexity. In this paper we propose a new data structure, the P-skip graph, a modified version of the skip graph, which drastically reduces the search time for a hotspot node from the initial O(log n). Simulations of a skip graph based peer-to-peer application demonstrate that the proposed approach can effectively decrease the search time to O(1).
pptx file
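The hotspot idea can be caricatured in a centralized setting: a sorted index searched in O(log n), fronted by a small cache that answers repeated queries in O(1). The real P-skip graph achieves this within a distributed structure; the sketch below only illustrates the access-pattern adaptation.

    from bisect import bisect_left

    class HotspotIndex:
        """Sorted index searched in O(log n), fronted by a small cache
        so repeatedly queried hot-spot keys are answered in O(1)."""
        def __init__(self, keys, cache_size=4):
            self.keys = sorted(keys)
            self.cache = {}                       # key -> position
            self.cache_size = cache_size

        def search(self, key):
            if key in self.cache:                 # hot path: O(1)
                return self.cache[key]
            i = bisect_left(self.keys, key)       # cold path: O(log n)
            pos = i if i < len(self.keys) and self.keys[i] == key else None
            if pos is not None:
                if len(self.cache) >= self.cache_size:
                    self.cache.pop(next(iter(self.cache)))  # crude eviction
                self.cache[key] = pos
            return pos

    idx = HotspotIndex(range(0, 1000, 3))
    print(idx.search(300), idx.search(300))       # second lookup hits cache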
Quantifying Direct Trust for Private Information Sharing in an Online Social Network
Agrima Srivastava (BITS Pilani, Hyderabad Campus, India); Geethakumari G and KP Krishna Kumar (BITS-Pilani, Hyderabad Campus, India)
Online Social Networks (OSNs) are actively used by a large fraction of people, who extensively share a wealth of their information online. This content, if retrieved, stored, processed and spread beyond scope without the user's consent, may result in a privacy breach. Adopting a coarse grained privacy mechanism, such as sharing information only with a group of "close friends" or the strong ties of the network, is one way to minimize the risk of unwanted disclosure, but it does not fully protect privacy: there is a high probability of unwanted information disclosure even if the information is shared only with strong ties. Most of the privacy literature does not take online sharing behavior into consideration when building trust. Hence, in this paper we propose and implement a privacy preserving model in which such unwanted and unintentional private information disclosures are minimized by further refining the trusted community of strong ties with respect to their privacy quotient.
ppt file
SemCrawl: Framework for Crawling Ontology Annotated Web Documents for Intelligent Information Retrieval
Vandana Dhingra (University of Pune, India); Komal Kumar Bhatia (YMCA University of Sc. & Tech., India)
The Web is considered the largest information pool, and the search engine is the principal tool for extracting information from it; however, due to the unorganized structure of the Web, it is becoming difficult to use search engines to find relevant information. Future search tools will not be based merely on keyword search; rather, they will interpret the meaning of web contents to produce relevant results. Designing such tools requires extracting information from contents in a form that supports logic and inferential capability. This paper discusses the conceptual differences between the traditional web and the semantic web, and establishes the need for crawling semantic web documents. A framework is proposed for crawling ontologies/semantic web documents, and it is implemented and validated on different collections of web pages. The system extracts heterogeneous documents from the web, filters the ontology-annotated pages, and extracts triples from them, which supports better inferential capability.
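A minimal sketch of the filter-and-extract-triples step, assuming the third-party rdflib library and RDF/XML-annotated pages (the framework's own components are not spelled out in the abstract, so this is purely illustrative):

    from rdflib import Graph

    def extract_triples(document_text):
        """Return RDF triples if the fetched page carries ontology
        annotations, or an empty list for a plain (non-semantic) page."""
        g = Graph()
        try:
            g.parse(data=document_text, format="xml")   # RDF/XML content
        except Exception:
            return []                                   # not a semantic document
        return [(str(s), str(p), str(o)) for s, p, o in g]

    rdf = """<?xml version="1.0"?>
    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:dc="http://purl.org/dc/elements/1.1/">
      <rdf:Description rdf:about="http://example.org/doc">
        <dc:title>An annotated document</dc:title>
      </rdf:Description>
    </rdf:RDF>"""
    for triple in extract_triples(rdf):
        print(triple)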
Smart Human Security Framework Using Internet of Things, Cloud and Fog Computing
Vivek Kumar Sehgal, Anubhav Patrick, Ashutosh Soni and Lucky Rajput (Jaypee University of Information Technology, India)
Human security is becoming a grave concern with each passing day. Daily we hear news regarding gruesome and heinous crimes against elders, women and children, and accidents and industrial mishaps have become commonplace. Computers and gadgets have progressed a lot during the past decades, but little has been done to tackle the challenging yet immensely important field of physical security of people. With the advent of pervasive computing, the Internet of Things (IoT), the omnipresent cloud computing and its extension fog computing, it has now become possible to provide a security cover to people and thwart any transgression against them. In this paper, we provide a security framework incorporating pervasive and wearable computing, IoT, cloud and fog computing to safeguard individuals and preclude any mishap.
pptx file

S36: ISI-2014: Data Mining, Clustering and Intelligent Information Systems -Igo to top

Room: 006 Block E Ground Floor
Chairs: Nampuraja Enose (Principal Consultant & Infosys Technologies Limited, India), Praveen Ranjan Srivastava (Indian Institute of Management (IIM), India)
Knowledge Transfer Model in Collective Intelligence Theory
Saraswathy Shamini Gunasekaran and Mohd Sharifuddin Ahmad (Universiti Tenaga Nasional, Malaysia); Salama Mostafa (Universiti Tun Hussein Onn Malaysia, Malaysia)
In a multi-agent environment, a series of interactions emerges that determines the flow of actions each agent should execute in order to accomplish its individual goal. Ultimately, each goal realigns to manifest the agents' common goal. In a collective environment, these agents retain only one common goal from the start, which is achieved through a series of communication processes involving discussions, group reasoning, decision-making and performing actions. Both the reasoning and decision-making phases diffuse knowledge, in the form of proven beliefs, between these agents. In this paper, we describe the concepts of discussion, group reasoning and decision-making, followed by their corresponding attributes, in proposing a preliminary Collective Intelligence theory.
Misalignment Fault Prediction of Motor-Shaft Using Multiscale Entropy and Support Vector Machine
Alok Kumar Verma (Rolls-Royce @NTU Corporate Lab, Nanyang Technological University, Singapore & Indian Institute of Technology Patna, India); Somnath Sarangi and Mahesh Kolekar (Indian Institute of Technology Patna, India)
Rotating machines constitute a major portion of the industrial sector. In rotating machines, misalignment has been observed to be one of the most common faults; it decreases efficiency and can eventually lead to failure. To date, researchers have dealt only with vibration samples for misalignment fault detection, whereas in the present work both stator current samples and vibration samples are used as the diagnostic media. A multiscale entropy (MSE) based statistical approach for feature extraction, combined with support vector machine (SVM) classification, makes the proposed algorithm more robust, so any non-linear behavior in the diagnostic media is easily handled. The proposed work presents an approach to analyze features that distinguish the vibration and current samples of a normal induction motor from those of a misaligned one. The results show that the proposed approach is very effective in predicting the misalignment fault for the induction motor.
pdf file
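A sketch of the general MSE-plus-SVM pipeline the abstract outlines, assuming scikit-learn and synthetic signals in place of the real vibration/current records; the scale range, entropy parameters and the cap on undefined entropies are illustrative choices, not the authors':

    import numpy as np
    from sklearn.svm import SVC

    def coarse_grain(x, scale):
        n = len(x) // scale
        return x[:n * scale].reshape(n, scale).mean(axis=1)

    def sample_entropy(x, m=2, r=0.2):
        r = r * np.std(x)
        def count_matches(mm):
            t = np.array([x[i:i + mm] for i in range(len(x) - mm)])
            d = np.max(np.abs(t[:, None] - t[None, :]), axis=2)
            return (np.sum(d <= r) - len(t)) / 2.0   # exclude self-matches
        b, a = count_matches(m), count_matches(m + 1)
        return -np.log(a / b) if a > 0 and b > 0 else 10.0  # cap undefined values

    def mse_features(signal, max_scale=5):
        return [sample_entropy(coarse_grain(signal, s))
                for s in range(1, max_scale + 1)]

    # X: one signal record per row; y: 0 = aligned, 1 = misaligned (synthetic)
    rng = np.random.default_rng(0)
    X = np.array([mse_features(rng.standard_normal(300)) for _ in range(20)])
    y = rng.integers(0, 2, 20)
    clf = SVC(kernel="rbf").fit(X, y)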
Multi Objective Cuckoo Search Optimization for Fast Moving Inventory Items
Achin Srivastav and Sunil Agrawal (PDPM IIITDM Jabalpur, India)
The paper focuses on managing the most important (class A) fast-moving inventory items. A multi-objective cuckoo search optimization is used to determine trade-off solutions for a continuous-review, order-point, order-quantity stochastic inventory system. A numerical problem is considered to illustrate the results. The results show that a number of Pareto-optimal points are obtained by the multi-objective cuckoo search algorithm in a single run, which gives practitioners the flexibility to choose an optimal point.
pptx file
Multiobjective Mixed Model Assembly Line Balancing Problem
Sandeep Choudhary (Indian Institute of Information Technology Design and Manufacturing Jabalpur, India); Sunil Agrawal (PDPM IIITDM Jabalpur, India)
The main objective of this paper is to improve the performance measures of a multi-objective mixed-model assembly line. Our work is motivated by the paper of Zhang and Gen (2011), in which the assembly line problem is solved using a genetic algorithm. The mathematical solutions of their work show a balance efficiency (Eb) of 86.06 percent, a cycle time (Ct) of 54 minutes, a work content (Twc) of 185.8 minutes, and a production rate (Rp) of 1.11E, where E is line efficiency. When the same mathematical model is reconstructed by changing the decision variables (changing the variables that hold the relationship between task-station-models to task-worker-models) without changing the meaning of the constraints, and solved with the branch and bound (B&B) method using Lingo 10 software, there is a significant improvement in the performance factors of the assembly line: balance efficiency (Eb) increases by 3.86 percent, cycle time (Ct) decreases by 25.55 percent, work content (Twc) decreases by 22.17 percent, and production rate (Rp) decreases by 34.23 percent.
pptx file
New Approach for Function Optimization: Amended Harmony Search
Chhavi Gupta (Madhav Institute of Technology & Science, India); Sanjeev Jain (LNCT, India)
The harmony search (HS) algorithm is an emerging population-oriented stochastic metaheuristic inspired by the music improvisation process. This paper introduces an Amended Harmony Search (AHS) algorithm for solving optimization problems. In AHS, an enhanced approach is employed for generating better solutions, improving the accuracy and convergence speed of HS. The effects of various parameters on the harmony search algorithm are analyzed, and the proposed approach performs fine-tuning of two parameters: the bandwidth and the pitch adjustment rate. The proposed algorithm is demonstrated on various complex benchmark functions, and the results are compared with two recent variants of HS, improved harmony search (IHS) and highly reliable harmony search (HRHS). The results suggest that AHS has strong convergence and a better balance between exploration and exploitation.
pptx file
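The two parameters AHS fine-tunes appear in the standard HS improvisation step, sketched below under generic assumptions (the amended update rules themselves are not reproduced here; hmcr, par and bw values are conventional defaults):

    import random

    def improvise(memory, lower, upper, hmcr=0.9, par=0.3, bw=0.05):
        dim = len(memory[0])
        new = []
        for j in range(dim):
            if random.random() < hmcr:              # draw pitch from memory
                x = random.choice(memory)[j]
                if random.random() < par:           # pitch adjustment by bw
                    x += random.uniform(-1, 1) * bw
            else:                                   # random consideration
                x = random.uniform(lower[j], upper[j])
            new.append(min(max(x, lower[j]), upper[j]))
        return new

    memory = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(10)]
    print(improvise(memory, [-5, -5], [5, 5]))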
Novel Research in the Field of Shot Boundary Detection - A Survey
Raahat Devender Singh (Panjab University, Chandigarh); Naveen Aggarwal (Panjab University, India)
Segregating a video sequence into shots is the first step toward video-content analysis and content-based video browsing and retrieval. A shot may be defined as a sequence of consecutive frames taken by a single uninterrupted camera. Shots are the basic building blocks of videos, and their detection provides the basis for higher-level content analysis, indexing and categorization. The problem of detecting where one shot ends and the next begins is known as Shot Boundary Detection (SBD). Over the past two decades, numerous SBD techniques have been proposed in the literature. This paper presents a brief survey of the major novel and latest contributions in this field of digital video processing.
pdf file
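As background for the surveyed techniques, the classic histogram-difference hard-cut detector, on which much of the SBD literature builds, can be sketched as follows (a generic baseline, not any one surveyed method; the bin count and threshold are arbitrary):

    import numpy as np

    def shot_boundaries(frames, bins=64, threshold=0.5):
        """frames: iterable of grayscale frames as 2-D uint8 arrays.
        A boundary is declared where consecutive normalized histograms
        differ by more than the threshold (L1 distance)."""
        cuts, prev = [], None
        for i, frame in enumerate(frames):
            hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
            hist = hist / hist.sum()
            if prev is not None and np.abs(hist - prev).sum() > threshold:
                cuts.append(i)                     # new shot starts at frame i
            prev = hist
        return cuts

    # Two synthetic "shots": dark frames followed by bright frames.
    frames = ([np.full((32, 32), 40, np.uint8)] * 5
              + [np.full((32, 32), 200, np.uint8)] * 5)
    print(shot_boundaries(frames))   # -> [5]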
Quality Metrics for Data Warehouse Multidimensional Models with Focus on Dimension Hierarchy Sharing
Anjana Gosain (Indraprastha University, India); Jaspreeti Singh (Guru Gobind Singh Indraprashta University, New Delhi, India)
Data warehouses, based on multidimensional models, have emerged as a powerful tool for strategic decision making in organizations, so it is crucial to assure their information quality, which largely depends on the quality of the multidimensional model. A few researchers have proposed useful metrics to assess the quality of multidimensional models. However, certain characteristics of dimension hierarchies (such as the relationships between dimension levels, or the sharing of hierarchy levels within a dimension or among various dimensions) have not been considered so far and may contribute significantly to the structural complexity of multidimensional data models. The objective of this work is to propose metrics to compute the structural complexity of multidimensional models, focusing on the sharing of levels among dimension hierarchies, as it may elevate the structural complexity of these models, thereby affecting their understandability and, in turn, their maintainability.
pptx file
Time-efficient Tree-based Algorithm for Mining High Utility Patterns
Chiranjeevi Manike and Hari Om (Indian School of Mines, India)
Mining high utility patterns from transaction databases is an important research area in the field of data mining. The unavailability of a downward closure property among the utilities of itemsets makes it a great challenge to researchers. Even though an efficient pruning strategy, the transaction-weighted utility downward closure property, is used to reduce the number of candidate itemsets, the total time to generate and test candidate itemsets remains high. In view of this, in this paper we propose a time-efficient tree-based algorithm (TTBM) for mining high utility patterns from transaction databases. We construct conditional pattern bases to generate high transaction-weighted utility patterns in the second pass of our algorithm, and we use an efficient tree structure called the HP-Tree together with a tracing method for storing high transaction-weighted utility patterns and discovering high utility patterns, respectively. We compared the performance against the Two-Phase and HUI-Miner algorithms; the experimental results show that the execution time of our approach is better.
pdf file
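The transaction-weighted utility (TWU) pruning the abstract relies on can be sketched as follows: an item whose TWU falls below the utility threshold cannot occur in any high utility itemset, so it is removed before tree construction (a generic illustration, not the TTBM code itself):

    def twu_prune(transactions, min_utility):
        """transactions: list of dicts mapping item -> utility in that
        transaction; returns the database with unpromising items removed."""
        twu = {}
        for t in transactions:
            tu = sum(t.values())                 # transaction utility
            for item in t:
                twu[item] = twu.get(item, 0) + tu
        promising = {i for i, u in twu.items() if u >= min_utility}
        return [{i: u for i, u in t.items() if i in promising}
                for t in transactions]

    db = [{"a": 5, "b": 2}, {"a": 3, "c": 1}, {"b": 4, "c": 2}]
    # TWU: a = 7+4 = 11, b = 7+6 = 13, c = 4+6 = 10, so 'c' is pruned.
    print(twu_prune(db, min_utility=11))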
User Feedback Based Evaluation of a Product Recommendation System Using Rank Aggregation Method
Shahab Saquib Sohail and Jamshed Siddiqui (Aligarh Muslim University, India); Rashid Ali (AMU Aligarh, India)
The proliferation of the Internet has changed the daily life of the common man; its influence has changed the way we live and even the way we think. The use of the Internet for purchasing products for daily needs has grown exponentially in recent years, and customers now prefer online shopping for acquiring various products. But the huge e-business portals and the increasing number of online shopping sites make it difficult for customers to decide on a particular product, and it is very common for a customer to want the opinions of other consumers who have already acquired the same product. We therefore involve human judgment in recommending products to users through implicit user feedback, and apply a rank aggregation algorithm to these recommendations. In this paper we chose a few products, with their respective ranks taken from previous work. To capture users' purchase activities, a vector of feedback is taken from each user; products are scored on the basis of this feedback and re-ranked, giving each user's ranking. We propose a rank aggregation algorithm and apply it to the individual rankings to obtain an aggregated final users' ranking. Finally, we evaluate the system's performance using false negative rates, false positive rates and precision; these measures show the effectiveness of the proposed method.
pptx file
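The paper proposes its own aggregation algorithm; as a standard point of reference, a Borda-count aggregation of several users' product rankings looks like this (product names are invented for illustration):

    def borda_aggregate(rankings):
        """rankings: list of lists, each ordering the same products
        best-first; a product earns (n - position) points per ranking."""
        scores = {}
        n = len(rankings[0])
        for ranking in rankings:
            for pos, product in enumerate(ranking):
                scores[product] = scores.get(product, 0) + (n - pos)
        return sorted(scores, key=scores.get, reverse=True)

    user_rankings = [
        ["camera", "phone", "laptop"],
        ["phone", "camera", "laptop"],
        ["camera", "laptop", "phone"],
    ]
    print(borda_aggregate(user_rankings))   # -> ['camera', 'phone', 'laptop']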
Word Sense Disambiguation for Punjabi Language Using Overlap Based Approach
Preeti Rana and Parteek Kumar (Thapar University, India)
Word Sense Disambiguation (WSD) is the task of disambiguating text so that a computer can select the appropriate sense of a word, something that is not difficult for a human. It is motivated by its use in many crucial applications such as information retrieval, information extraction and machine translation. Our WSD approach uses the Punjabi WordNet, which helps in searching for the appropriate sense by providing information such as synonyms, examples, concepts and semantic relations for an ambiguous word. India is a multilingual country where people speak many different languages, which results in communication barriers; this motivated the building of IndoWordNet, which contains wordnets of the major Indian languages. The expansion approach has been used by many Indian languages to develop their wordnets from the Hindi WordNet. Millions of people in India know the Punjabi language, but little computational work has been done for it, so it is worthwhile to build a Punjabi lexical resource (WordNet) that can capture the richness of the language. Our word sense disambiguation uses Lesk's algorithm, in which the context of the ambiguous word is compared with the information drawn from the WordNet and the best-matching sense wins; the output is the sense number designating the appropriate sense of the ambiguous word. The evaluation has been carried out on a Punjabi corpus and the results are encouraging.
pptx file
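The overlap scoring at the heart of Lesk's algorithm can be sketched as below, with plain English strings standing in for Punjabi WordNet glosses (the sense IDs and glosses are illustrative):

    def lesk(context_words, senses):
        """senses: dict mapping sense_id -> gloss/example text; returns the
        sense whose gloss shares the most words with the context."""
        context = set(context_words)
        def overlap(gloss):
            return len(context & set(gloss.split()))
        return max(senses, key=lambda s: overlap(senses[s]))

    senses = {
        "bank#1": "financial institution that accepts deposits money loan",
        "bank#2": "sloping land beside a river water edge",
    }
    print(lesk(["river", "water", "fishing"], senses))   # -> bank#2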
Combining Different Differential Evolution Variants in an Island Based Distributed Framework - An Investigation
Shanmuga Sundaram Thangavelu and C. Shunmuga Velayutham (Amrita Vishwa Vidyapeetham, India)
This paper proposes to combine three different Differential Evolution (DE) variants, viz. DE/rand/1/bin, DE/best/1/bin and DE/rand-to-best/1/bin, in an island-based distributed Differential Evolution (dDE) framework. The resulting novel dDEs, with different DE variants in their islands, have been tested on 13 high-dimensional benchmark problems (of dimensions 500 and 1000) to observe their performance efficacy as well as to investigate the potential of combining such a complementary collection of search strategies in a distributed framework. Simulation results show that the rand and rand-to-best strategy combination variants display superior performance over the rand, best, rand-to-best combination as well as the best, rand-to-best combination variants, while the rand and best strategy combinations displayed the poorest performance. The simulation studies indicate a definite potential in combining a complementary collection of search characteristics in an island-based distributed framework to realize highly cooperative, efficient and robust distributed Differential Evolution variants capable of handling a wide variety of optimization tasks.
pptx file
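One generation of DE/rand/1/bin, the first of the three island variants combined here, follows the standard operator (a generic sketch; the F, CR values and the sphere test function are arbitrary choices, not the paper's settings):

    import random

    def de_rand_1_bin(pop, fitness, F=0.5, CR=0.9):
        dim = len(pop[0])
        new_pop = []
        for i, target in enumerate(pop):
            r1, r2, r3 = random.sample(
                [p for j, p in enumerate(pop) if j != i], 3)
            jrand = random.randrange(dim)          # force one mutated gene
            trial = [r1[j] + F * (r2[j] - r3[j])
                     if random.random() < CR or j == jrand else target[j]
                     for j in range(dim)]
            # Greedy selection (minimization).
            new_pop.append(trial if fitness(trial) <= fitness(target) else target)
        return new_pop

    sphere = lambda x: sum(v * v for v in x)
    pop = [[random.uniform(-5, 5) for _ in range(10)] for _ in range(20)]
    for _ in range(100):
        pop = de_rand_1_bin(pop, sphere)
    print(min(sphere(p) for p in pop))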
Correlation Based Anonymization Using Generalization and Suppression for Disclosure Problems
Amit Thakkar (Charotar University of Science & Technology, India); Aashiyana Arifbhai Bhatti (Charotar University of Science & Technology); Jalpesh Vasa (Charotar University of Science and Technology & Chandubhai S Patel Institute Of Technology, India)
Huge volumes of detailed personal data are regularly collected, and sharing these data has proved beneficial for data mining applications. Data covering shopping habits, criminal records, credit records and medical history are very necessary for an organization to perform analysis and predict trends and patterns, but many privacy regulations may prevent data owners from sharing them. In order to share data while preserving privacy, the data owner must come up with a solution that achieves the dual goals of privacy preservation and accurate data mining results. In this paper a k-anonymity based approach is used to provide privacy for individual data by masking attribute values using generalization and suppression. Due to some drawbacks of the existing model, it needs to be modified to fulfill this goal. The proposed model tries to prevent the data disclosure problem by using the correlation coefficient, which estimates the amount of correlation between attributes and helps to automate the attribute selection process for generalization and suppression. The main aim of the proposed model is to increase the privacy gain while maintaining the accuracy of the data after anonymization.
pdf file
Design and Implementation of a Novel Eye Gaze Recognition System Based on Scleral Area for MND Patients Using Video Processing
Sudhir Rao Rupanagudi and Varsha Bhat (WorldServe Education, India); Karthik R, Roopa P and Manjunath M (Yellamma Dasappa Institute of Technology, India); Glenn Ebenezer, Shashank S, Hrishikesh Pandith and Nitesh R (Dr AIT, India); Amrit Shandilya (WorldServe Education, India); Ravithej P (BNMIT, India)
In this modern era of science and technology, several innovations exist for the benefit of the differently-abled and the diseased. Research organizations worldwide are striving hard to identify novel methods to help this group of society converse freely, move around, and enjoy the benefits that others do. In this paper, we concentrate on assisting people suffering from one such deadly disease, Motor Neuron Disease (MND), wherein a patient loses control of his/her complete mobility and is capable of only oculographic movements. By utilizing these oculographic movements, commonly known as the eye gaze of an individual, several day-to-day activities can be controlled just by the motion of the eyes. This paper discusses a novel and cost-effective setup to capture the eye gaze of an individual, and elaborates a new methodology to identify the eye gaze utilizing the scleral properties of the eye that is immune to variations in background and head tilt. All algorithms were designed on the MATLAB 2011b platform, and an overall accuracy of 95% was achieved in trials conducted over a large set of test cases with various eye gazes in different directions. A comparison with the popular Viola-Jones method shows that the algorithm presented in this paper is more than 3.8 times faster.
pptx file
Enhancing Frequency Based Change Proneness Prediction Method Using Artificial Bee Colony Algorithm
Deepa Godara and Rakesh Kumar Singh (Uttarakhand Technical University, India)
In the field of software engineering, during the development of Object Oriented (OO) software, knowing which classes are more prone to change is an important problem nowadays. To solve this problem, several methods have been introduced for predicting changes in software early, but those methods do not provide very good prediction results. This research work proposes a novel approach for predicting changes in software. Our probabilistic approach uses the behavioral dependency generated from UML diagrams, as well as other code metrics such as time and trace events generated from source code. These measures, combined with the frequency of method calls and popularity, can be used in an automated manner to predict change-prone classes. Thus all five features (time, trace events, behavioral dependency, frequency and popularity) are obtained by our proposed work and given as input to the ID3 (Iterative Dichotomiser 3) decision tree algorithm to effectively classify classes as change-prone or not. If a class is classified as change-prone, the value of its change proneness is also obtained by our work.
pptx file
Evaluation of Data Warehouse Quality From Conceptual Model Perspective
Rakhee Sharma (Guru Gobind Singh Inderaprastha University, India); Hunny Gaur (Guru Gobind Singh Indraprastha University & Ambedkar Institute Of Advanced Communication Technologies & Research, India); Manoj Kumar (Ambedkar Institute of Technology, GGSIPU University, India)
Organizations are adopting the Data Warehouse (DW) for making strategic decisions. A DW consists of a huge and complex set of data, so its maintenance and quality are equally important; using improper, misunderstood or disregarded data quality will highly impact the decision-making process as well as its performance. DW quality depends on the data model quality, the DBMS quality and the data quality itself. In this paper we survey two aspects of DW quality: how researchers have improved the quality of data, and how data model quality is improved. The paper discusses how metrics are real quality indicators of DWs; they help designers obtain a good quality model that allows us to guarantee the quality of the DW. Our focus has been on surveying research papers concerning the quality of the multidimensional conceptual model of the DW. Having surveyed various papers, we compare all the proposals concerning the theoretical and empirical validation of conceptual model metrics for assessing DW model quality.
pptx file

S37: ISI-2014: Data Mining, Clustering and Intelligent Information Systems -IIgo to top

Room: 205 Block E Second Floor
Chairs: A. F. M. Sajidul Qadir (Samsung R&D Institute-Bangladesh, Bangladesh), Alok Kumar Verma (Rolls-Royce @NTU Corporate Lab, Nanyang Technological University, Singapore & Indian Institute of Technology Patna, India)
A Learning Based Emotion Classifier with Semantic Text Processing
Vajrapu Anusha and Sandhya Banda (MVSR Engineering College, India)
In this modern era, we depend more and more on machines for day-to-day activities. However, there is a huge gap between computers and humans in emotional thinking, which is a central factor in human communication. This gap can be bridged by computational approaches that induce emotional intelligence into a machine. Emotion detection from text is one such method for making computers emotionally intelligent, because text is one of the major media for communication among humans and with computers. In this paper, we propose an approach that adds natural language processing techniques to improve the performance of a learning-based emotion classifier by considering the syntactic and semantic features of text. We also present a comprehensive overview of the emerging field of emotion detection from text.
ppt file
A Lexicon Pooled Machine Learning Classifier for Opinion Mining From Course Feedbacks
Rupika Dalal and Ismail Safhath (South Asian University, India); Rajesh Piryani (South Asian University, New Delhi, India); Divya Rajeswari Kappara (South Asian University, India); Vivek Kumar Singh (Banaras Hindu University, India)
This paper presents our algorithmic design for a lexicon-pooled approach to opinion mining from course feedbacks. First, we performed an empirical evaluation of both the machine learning classifier and the lexicon-based approaches for opinion mining, and then designed a hybrid approach. The proposed method incorporates lexicon knowledge into the machine learning classification process through multinomial pooling of the lexicon-based approach. The algorithmic formulation has been evaluated on three datasets obtained from ratemyprofessor.com. The results show that the lexicon-pooled approach obtains higher accuracy than standalone implementations of the machine learning and lexicon-based approaches. The paper thus proposes and demonstrates how a lexicon-pooled hybrid approach may be a preferred technique for opinion mining from course feedbacks.
pdf file
A Method to Induce Indicative Functional Dependencies for Relational Data Model
Sandhya Harikumar and R. Reethima (Amrita Vishwa Vidyapeetham, India)
The relational model is one of the most extensively used database models. However, with contemporary technologies, high-dimensional data, which may be structured or unstructured, must be analyzed for knowledge interpretation. One of the significant aspects of this analysis is exploring the relationships existing between the attributes of high-dimensional data. In the relational model, the integrity constraints corresponding to these relationships are captured by functional dependencies. Processing high-dimensional data to discover all the functional dependencies is computationally expensive. More specifically, the functional dependencies of the most prominent attributes are of the most significant use and can reduce the search space of functional dependencies to be explored. In this paper we propose a regression model to find the most prominent attributes of a given relation. The functional dependencies of these prominent attributes are then discovered; they are indicative and lead to faster results in a decreased amount of time.
pdf file
A Novel Way of Assigning Software Bug Priority Using Supervised Classification on Clustered Bugs Data
Neetu Goyal (Panjab University, Chandigarh, India); Naveen Aggarwal (Panjab University, India); Maitreyee Dutta (National Institute of Technical Teachers Training & Research, Chandigarh, India)
Bug triaging is an important part of the testing process in software development organizations, but it takes up a considerable amount of the bug triager's time, costing the organization time and resources. Hence it is worthwhile to develop an automated system to address this issue. Researchers have addressed various aspects of this problem using data mining techniques such as classification, and there is a study claiming that when classification is performed on data that has previously been clustered, performance improves significantly. In this work, this approach is used for the first time in the field of software testing to predict the priority of software bugs, to find out whether classifier performance improves when preceded by clustering. In this system, clustering was performed on the problem-title attribute of the bugs to group similar bugs together, and classification was then applied to the resulting clusters to assign priority to the bugs based on their severity or component attributes. We then studied which combination of clustering and classification algorithms provided the best results.
ppt file
A Two-Stage Genetic K-harmonic Means Method for Data Clustering
Anuradha Thakare (University of Pune & PCCOE, India); Chandrashkehar Dhote (Amravati University, India)
Clustering techniques aim to partition the input space into disconnected sets whose members are highly connected. K-harmonic means (KHM) is a well-known data clustering technique, but it runs into local optima. A two-stage genetic clustering method using KHM (TSGKHM) is proposed in this research, which can automatically cluster the input data points into an appropriate number of clusters. Combining the best features of both algorithms, TSGKHM overcomes local optima in the first stage, yielding optimal cluster centers, and produces optimal clusters in the second stage. The proposed method is executed on four globally accepted real data sets, and intermediate results are reported. The performance analysis shows that TSGKHM performs significantly better.
ppt file
An Empirical Study of Robustness and Stability of Machine Learning Classifiers in Software Defect Prediction
Arvinder Kaur and Kamaldeep Kaur (Guru Gobind Singh Indraprastha University, India)
Software is one of the key drivers of twenty-first-century business and society, and delivering high-quality software systems is a challenging task for software developers. Early software defect prediction based on software code metrics has been intensely researched by the software engineering community, and recent advancements in machine learning have been explored for the development of highly accurate automatic defect prediction models. This study contributes to the application of machine learning in software defect prediction by investigating the robustness and stability of 17 machine learning classifiers on 44 open source defect prediction data sets obtained from the PROMISE repository. The Area Under the Curve (AUC) of the Receiver Operating Characteristic (ROC) curve is obtained for each of the 17 classifiers on the 44 data sets, and the classifiers are ranked on robustness as well as stability. Our experiments show that Random Forests, logistic regression and KStar are robust as well as stable classifiers for software defect prediction applications. Further, we demonstrate that Naïve Bayes and Bayes Networks, which have been shown to be robust and comprehensible classifiers in previous research on software defect prediction, have poor stability in open source software defect prediction.
pptx file
An Empirical Study of Some Particle Swarm Optimizer Variants for Community Detection
Anupam Biswas (Indian Institute of Technology (BHU), Varanasi); Pawan Gupta, Mradul Modi and Bhaskar Biswas (Indian Institute of Technology (BHU), Varanasi, India)
Swarm-based intelligent algorithms are widely used in applications across almost all domains of science and engineering, and the ease and flexibility with which these algorithms fit into any application has attracted even more domains in recent years. Social computing is one such domain, which tries to incorporate these approaches for community detection in particular. We propose a method that uses Particle Swarm Optimization (PSO) techniques to detect communities in a social network based on the common interests of individuals in the network. We performed a rigorous study of four PSO variants with our approach on real data sets. We found that the orthogonal learning approach yields quality solutions but takes considerable computation time on all the data sets, while the cognitive avoidance approach yields average-quality solutions but, interestingly, takes very little computation time in contrast. The linear time-varying approach performs poorly in both respects, while linearly varying the weight along with the acceleration coefficients is competitive with the cognitive avoidance approach.
pptx file
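All four compared variants build on the canonical PSO velocity/position update, sketched below with generic coefficient values (the variants differ in how the coefficients and learning exemplars are chosen; w, c1, c2 here are conventional defaults, not the paper's settings):

    import random

    def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
        for i in range(len(positions)):
            for d in range(len(positions[i])):
                r1, r2 = random.random(), random.random()
                velocities[i][d] = (w * velocities[i][d]
                                    + c1 * r1 * (pbest[i][d] - positions[i][d])
                                    + c2 * r2 * (gbest[d] - positions[i][d]))
                positions[i][d] += velocities[i][d]
        return positions, velocities

    pos = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(5)]
    vel = [[0.0, 0.0] for _ in range(5)]
    pos, vel = pso_step(pos, vel, pbest=[p[:] for p in pos], gbest=[0.0, 0.0])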
An Extended Chameleon Algorithm for Document Clustering
Lekha N k (Amrita University, India); Veena G (Amrita Vishwa Vidyapeetham, India)
A lot of research work has been done in the areas of concept mining and document similarity in the past few years, but all of it has been based on the statistical analysis of keywords. The major challenge in this area is the preservation of the semantics of terms or phrases. Our paper proposes a graph model to represent concepts at the sentence level, where each concept follows a triplet representation. A modified DBSCAN algorithm is used to cluster the extracted concepts; this cluster forms a belief network, or probabilistic network, which we use for extracting the most probable concepts in the document. In this paper we also propose a new algorithm for document similarity, and an extended Chameleon algorithm for comparing belief networks.
pptx file
An Intelligent Modeling of Oil Consumption
Haruna Chiroma (Federal College of Education (Technical), Gombe, Malaysia); Sameem Abdul Kareem and Sanah Abdullahi Muaz (University of Malaya, Malaysia); Adamu Abubakar (International Islamic University Malaysia & Integ lab, Malaysia); Edi Sutoyo and Mungad Mungad (University of Malaya, Malaysia); Younes Saadi (University Tun Hussein Onn Malaysia, Malaysia); Eka Novita Sari (AMCS Research Center, Indonesia); Tutut Herawan (Universiti Malaysia Pahang & Universitas Ahmad Dahlan, Malaysia)
In this chapter, we select the Middle East countries of Jordan, Lebanon, Oman and Saudi Arabia, and model their oil consumption using soft computing. The limitations associated with the Levenberg-Marquardt Neural Network (LM) motivated this research to optimize the parameters of the NN through Artificial Bee Colony searches (ABC-LM) to build a model for predicting oil consumption in the selected countries. The proposed model was able to predict oil consumption with improved accuracy and convergence speed, and ABC-LM performs better than the standard LM NN, a genetically optimized NN, and a back-propagation NN. Analysis-of-variance results indicated that oil consumption in Jordan is significantly higher than that of Lebanon, while oil consumption in Saudi Arabia is significantly higher than that of Oman. The approach proposed in this chapter can be applied by the countries in our case study for the formulation of both domestic and international policies related to oil consumption and economic development.
pptx file
Analysis and Evaluation of Discriminant Analysis Techniques for Multiclass Classification of Human Vocal Emotions
Swarna Kuchibhotla (Acharya Nagarjuna University, India); Hima Deepthi Vankayalapati, BhanuSree Yalamanchili and Koteswara Rao Anne (VRSEC, India)
Many of the classification problems in human-computer interaction applications involve multi-class classification. Support Vector Machines excel at binary classification problems and cannot be easily extended to multi-class classification, while the use of discriminant analysis has not been widely experimented with in the area of speech emotion recognition. In this paper, Linear Discriminant Analysis (LDA) and Regularized Discriminant Analysis (RDA) are implemented on the Berlin and Spanish emotional speech databases. Prosody and spectral features are extracted from the speech databases and applied individually as well as with feature fusion. Based on the results obtained, LDA's classification performance is poorer than RDA's due to the singularity problem. The results are analysed using ROC curves.
pptx file
Flipped Labs as a Smart ICT Innovation: Modeling Its Diffusion Among Interinfluencing Potential Adopters
Raghu Raman (Amrita University, India)
A smart ICT innovation like flipped classroom pedagogy frees up face-to-face, in-class teaching time for additional problem-based learning activities. But the focus of flipped classrooms is more on the theory side, with the related lab work in science subjects further marginalized. In this paper we propose Flipped Labs, a method of pedagogy premeditated as a comprehensive online lab learning environment outside the classroom, by means of tutorials, theory, procedure, animations and videos. Flipped labs have the potential to transform traditional methods of lab teaching by providing more lab time to students. An ICT educational innovation like flipped labs does not occur in isolation: two interrelated groups of potential adopters, teachers and students, influence each other, and both have to adopt for the innovation to be successful. In this paper we provide the theoretical framework for the diffusion and adoption patterns of flipped labs using the theory of perceived attributes, taking into account the important intergroup influence between teachers and students. The results of this analysis indicate that Relative Advantage, Compatibility, Ease of Use, Teacher Influence and Student Influence were positively related to acceptance of flipped labs.
Formulating Dynamic Agents' Operational State Via Situation Awareness Assessment
Salama Mostafa (Universiti Tun Hussein Onn Malaysia, Malaysia); Mohd Sharifuddin Ahmad (Universiti Tenaga Nasional, Malaysia); Muthukkaruppan Annamalai (Universiti Teknologi MARA, Malaysia); Azhana Ahmad and Saraswathy Shamini Gunasekaran (Universiti Tenaga Nasional, Malaysia)
Managing autonomy in a dynamic interactive system that contains a mix of human and software agent intelligence is a challenging task. In such systems, giving an agent complete control over its autonomy is a risky practice, while manually setting the agent's autonomy level is an inefficient approach. This paper addresses this issue by formulating a Situation Awareness Assessment (SAA) technique to assist in determining an appropriate operational state for agents. We propose four operational states of agents' execution cycles: proceed, halt, block and terminate, each of which is determined based on the agents' performance. We apply the SAA technique in a proposed Layered Adjustable Autonomy (LAA) model, which conceptualizes autonomy as a spectrum and is constructed in a layered structure. The SAA and LAA notions are applicable to collaborative environments of humans and agents, and we provide an experimental scenario to test and validate the proposed notions in a real-time application.
Fuzzy Based Approach to Develop Hybrid Ranking Function for Efficient Information Retrieval
Ashish Saini and Yogesh Gupta (Dayalbagh Educational Institute, India); Ajay Saxena (India)
A ranking function is used to compute the relevance score of every document in a collection against a query in an Information Retrieval system. In the present paper, a new fuzzy-based approach is proposed and implemented to construct hybrid ranking functions called FHSM1 and FHSM2. The performance of the proposed approach is evaluated and compared with other widely used ranking functions such as Cosine, Jaccard and Okapi-BM25; the proposed approach performs better than these ranking functions in terms of precision, recall, average precision and average recall. All experiments are performed on the CACM and CISI benchmark data collections.
pptx file
Inquiry Based Learning Pedagogy for Chemistry Practical Experiments Using OLabs
Prema Nedungadi (Amrita University, India); Malini Prabhakaran (Amrita Vishwavidyapeetham University, Kollam, Kerala, India); Raghu Raman (Amrita University, India)
Our paper proposes a new pedagogical approach for learning chemistry practical experiments based on three modes of inquiry-based learning: structured, guided and open. Online Labs (OLabs) is a web-based learning environment for science practical experiments that includes simulations, animations, tutorials and assessments. Inquiry-based learning is a pedagogy that supports student-centered learning and encourages students to think scientifically; it develops evidence-based reasoning and creative problem-solving skills that result in knowledge creation and higher recall. We discuss the methodology and tools that OLabs provides to enable educators to design the three types of inquiry-based learning for chemistry experiments. The integration of inquiry-based learning into OLabs is aligned with the Indian Central Board of Secondary Education (CBSE) goal of nurturing higher-order inquiry skills for student-centered and active learning. Inquiry-based OLabs pedagogy also empowers teachers to provide differentiated instruction to students while enhancing student interest and motivation.
zip file
New Unification Matching Scheme for Efficient Information Retrieval Using Genetic Algorithm
Anuradha Thakare (University of Pune & PCCOE, India); Chandrashkehar Dhote (Amravati University, India)
This article presents a new Unification Matching Scheme (UMS) for information retrieval using a genetic algorithm. The selection of appropriate matching functions contributes to the performance of an information retrieval system. The proposed UMS executes the unification function on three classical matching functions for different threshold values; the main objective is to utilize all the base functions to increase the relevancy of users' queries with the data objects. The best results from each matching function define a new generation on which the other matching functions are applied, and the results from each generation are optimized using the genetic algorithm. The working of UMS is compared with the individual classical matching functions, and a significant improvement is seen in the experimental results in terms of precision and recall. The performance increases gradually in each generation, thereby producing relevant results.
pptx file

S38: ISI-2014: Pattern Recognition, Signal and Image Processing-Igo to top

Room: 110 Block E First Floor
Chairs: Ajinkya S. Deshmukh (Uurmi System Pvt. Ltd., India), Ravibabu Mulaveesala (Indian Institute of Technology Ropar, India)
Memory Based Multiplier Design in Custom and FPGA Implementation
Noor Mahammad Sk (Indian Institute of Information Technology Design and Manufacturing (IIITDM) Kancheepuram, India); Mohamed Basiri (IIITDM Kancheepuram, India)
Modern real-time applications like signal processing and filtering demand high-performance multiplier designs with fewer look-up tables in FPGA implementations. This paper proposes an efficient look-up table (LUT) based multiplier design for ASIC as well as FPGA implementation. In the proposed technique, both input operands of the multiplier are treated as variables, and the proposed LUT-based multiplier design is compared with other LUT-based multiplication schemes, such as LUT counter, LUT of squares and LUT of decomposed squares based multiplier designs. The performance results show that the proposed design achieves better improvement in depth and area compared with existing techniques. The proposed LUT-based 12×4-bit multiplier achieves an improvement of 34.61% in depth compared to the counter-LUT-based architecture, and the proposed 16×16-bit LUT-based multiplier achieves an improvement factor of 76.84% in circuit depth over the square-LUT-based multiplication technique using 45 nm technology.
pdf file
Moving Human Detection in Video Using Dynamic Visual Attention Model
Sanjay G, Amudha J and Julia Tressa Jose (Amrita Vishwa Vidyapeetham, India)
Visual attention algorithms have been extensively used for object detection in images; however, their use for video analysis has been less explored. Many of the proposed techniques, though accurate and robust, still require a huge amount of time to process large video data. This paper therefore introduces a fast and computationally inexpensive technique for detecting regions corresponding to moving humans in surveillance videos. It is based on the dynamic saliency model and is robust to noise and illumination variation. Results indicate successful extraction of moving human regions with minimum noise, and faster performance in comparison to other models. The model works best in sparsely crowded scenarios.
ppt file
Multi-Output On-Line ATC Estimation in Deregulated Power System Using ANN
R Prathiba (Anna University, India); M Balasingh Moses (Anna University, Trichy, India); Durairaj Devaraj and M. Karuppasamypandiyan (Kalasalingam University, India)
Fast and accurate evaluation of the Available Transfer Capability (ATC) is essential for the efficient use of networks in a deregulated power system. This paper proposes a multi-output feed-forward neural network for on-line estimation of ATC, trained using the back-propagation algorithm. The data sets for developing the Artificial Neural Network (ANN) models are generated using a Repeated Power Flow (RPF) algorithm. The effectiveness of the proposed ANN models is tested on the IEEE 24-bus Reliability Test System (RTS), and the results of the ANN model are compared with the RPF results. From the results, it is observed that the developed ANN model is suitable for fast on-line estimation of ATC.
pptx file
RBDT: The Cascading of Machine Learning Classifiers for Anomaly Detection with Case Study of Two Datasets
Goverdhan Reddy Jidiga (Jawaharlal Nehru Technological University, India); Porika Sammulal (JNTUH University, India)
Careless behavior by computer users and a lack of coding skills lead to malfunctioning applications, creating security breaches and leaving every online transaction vulnerable today. Anomaly detection has been a recognized part of information security since the early 1980s, but we still face potential abnormalities in real-time critical applications and are unable to model online, real-world behavior. Anomaly detection by conventional algorithms has been very poor, with an increased false positive rate (FPR). In this context, it is better to use refined machine learning techniques to improve the performance of an anomaly detection system (ADS). In this paper we present a new classifier called the rule-based decision tree (RBDT): a cascading of C4.5 and Naïve Bayes that uses the conjunction of C4.5 and Naïve Bayes rules to form a new machine learning classifier with improved results. Two case studies are used in the experimental work, one taken from the UCI machine learning repository and the other a real bank dataset. Finally, a comparative analysis is given by applying the datasets to decision trees (ID3, CHAID, C4.5, Improved C4.5, C4.5 Rule), neural networks, Naïve Bayes and RBDT.
pptx file
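A simplified stand-in for the cascading idea, assuming scikit-learn (which ships CART rather than C4.5): the tree decides when its leaf is confident, and uncertain cases fall through to Naïve Bayes. The confidence threshold and synthetic data are assumptions for illustration, not the RBDT rule-conjunction scheme itself:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.naive_bayes import GaussianNB

    class Cascade:
        def __init__(self, confidence=0.8):
            self.tree = DecisionTreeClassifier(min_samples_leaf=5)
            self.nb = GaussianNB()
            self.confidence = confidence

        def fit(self, X, y):
            self.tree.fit(X, y)
            self.nb.fit(X, y)
            return self

        def predict(self, X):
            tree_proba = self.tree.predict_proba(X)
            tree_pred = self.tree.classes_[tree_proba.argmax(axis=1)]
            nb_pred = self.nb.predict(X)
            confident = tree_proba.max(axis=1) >= self.confidence
            # Tree prediction when the leaf is confident, else Naive Bayes.
            return np.where(confident, tree_pred, nb_pred)

    rng = np.random.default_rng(1)
    X = rng.standard_normal((200, 4))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    print(Cascade().fit(X[:150], y[:150]).predict(X[150:])[:10])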
Software Analysis Using Cuckoo Search
Praveen Ranjan Srivastava (Indian Institute of Management (IIM), India)
Software analysis includes both code coverage and requirements coverage. In code coverage, automatic test sequences are generated from the control flow graph in order to cover all nodes. Over the years, a major problem in software testing has been automating the testing process in order to decrease the overall cost of testing. This paper presents a technique for complete software analysis using the metaheuristic optimization technique Cuckoo Search, whose search follows a quasi-random manner. In requirements coverage, test sequences are generated based on the state transition diagram. The optimal solutions obtained from Cuckoo Search show that it is far more efficient than other metaheuristic techniques like Genetic Algorithm and Particle Swarm Optimization.
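Cuckoo Search's quasi-random walk is driven by Lévy-flight steps; a generic sketch via Mantegna's algorithm follows (the step scale 0.1 and beta=1.5 are conventional choices, not taken from the paper):

    import math, random

    def levy_step(beta=1.5):
        # Mantegna's algorithm for a Levy-stable step length.
        sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
                 / (math.gamma((1 + beta) / 2) * beta
                    * 2 ** ((beta - 1) / 2))) ** (1 / beta)
        u = random.gauss(0, sigma)
        v = random.gauss(0, 1)
        return u / abs(v) ** (1 / beta)

    # New candidate = current position perturbed by a scaled Levy step.
    position = 10.0
    for _ in range(5):
        position += 0.1 * levy_step()
        print(round(position, 3))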
Spread Spectrum Audio Watermarking Using Vector Space Projections
Adamu Abubakar (International Islamic University Malaysia & Integ lab, Malaysia); Akram M. Zeki (International Islamic University Malaysia, Malaysia); Haruna Chiroma (Federal College of Education (Technical), Gombe, Malaysia); Sanah Abdullahi Muaz (University of Malaya, Malaysia); Eka Novita Sari (AMCS Research Center, Indonesia); Tutut Herawan (Universiti Malaysia Pahang & Universitas Ahmad Dahlan, Malaysia)
Efficient watermarking techniques guarantee inaudibility and robustness against signal degradation. The spread spectrum watermarking technique makes it harder for an unauthorized adversary to detect the position of the embedded watermark in the carrier file, because the watermark bits are spread throughout the carrier medium. Unfortunately, there is a high possibility that the synchronization of the watermark bits and carrier bits will go out of phase, which leads to a watermark detection problem in the carrier bit sequence. In this paper, we propose a vector space projections approach to spread spectrum audio watermarking, in which both the watermark bits and the carrier bits are represented as vectors. The similarity of a watermark vector to a carrier vector is resolved for embedding by the normalized dot product, the cosine of the angle between them. After embedding, extraction and some signal processing techniques were carried out. Our approach proved robust when compared with other audio watermarking techniques, giving good results in the performance tests.
pptx file
Studying the Effects of Metamaterial Components on Microwave Filters
Ahmed Reja and Syed Ahmad (Jamia Millia Islamia, India)
This paper presents compact stopband and bandpass filters using microstrip transmission lines coupled, on two parallel sides, with square split ring resonators (SRRs) and metallic via holes. SRRs etched on the upper plane of the microstrip line provide a negative effective permeability (µ<0) to the medium in a narrow band above their resonance frequency. A narrow bandpass filter with a more drastic size reduction can be obtained by using metallic via holes to obtain negative effective permittivity (ε<0) together with the negative effective permeability (µ<0) generated by the SRRs. Backward wave propagation, as in a left-handed material (LHM), is achieved when SRRs and via holes are applied together. These metamaterial components (SRRs and metallic via holes) are useful for compact narrow stopband and narrow bandpass filter applications at a 3 GHz resonance frequency. The effects of adding different numbers of SRRs and of varying the strip conductor width (W) and the gap between the resonators and the strip line are studied and simulated. The length (l) of the structures can be as small as 0.3 times the signal wavelength at the resonance frequency (fo).
ppt file
SV-M/D: Support Vector Machine-Singular Value Decomposition Based Face Recognition
Mukundhan Srinivasan (Indian Institute of Science, India)
In this paper, we present a novel method for Face Recognition (FR) applying a Support Vector Machine (SVM) to this Computer Vision (CV) problem. The SVM is a capable learning classifier, able to train polynomial, neural network and RBF classifiers. We use Singular Value Decomposition (SVD) for feature extraction and the SVM for classification. The proposed algorithm is tested on the FERET and FRGC Ver. 2.0 databases. We also verify the accuracy of the method on two other databases with external variation in pose and illumination, viz., CMU-PIE and the Indian Face Database. The results are compared with other well-known methods to establish the advantage of SV-M/D. The recall rate of the proposed system is about 90%.
pdf file
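A minimal sketch of the SVD-features-plus-SVM pipeline, assuming scikit-learn and random arrays in place of FERET/FRGC face crops (the feature count k, the kernel and C are illustrative choices):

    import numpy as np
    from sklearn.svm import SVC

    def svd_features(image, k=20):
        """Use the k largest singular values of a face image as its
        feature vector; singular values give a compact image descriptor."""
        s = np.linalg.svd(image.astype(float), compute_uv=False)
        return s[:k]

    rng = np.random.default_rng(0)
    faces = rng.random((40, 64, 64))            # stand-in for face crops
    labels = np.repeat(np.arange(8), 5)         # 8 subjects, 5 images each
    X = np.array([svd_features(f) for f in faces])
    clf = SVC(kernel="rbf", C=10).fit(X[::2], labels[::2])   # train on half
    print(clf.score(X[1::2], labels[1::2]))                  # test on the rest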
The Exploitation of Unused Spectrum for Different Signal's Technologies
Ammar Abdul-Hamed (Jamia Millia Islamia, India); Mainuddin Mainuddin (Engineering, Jamia Millia Islamia New Delhi, India); Mirza Tariq Beg (Jamia Millia Islamia New Delhi, India)
Technological advances and market developments in wireless communication have been astonishing during the last decade, and the mobile communication sector will continue to be one of the most dynamic technological drivers among comparable industries. This paper extends our previous work on signal detection and discrimination, and deals with a cognitive radio (CR) system that improves spectral efficiency for three signal types (WiMAX, Frequency Hopping and CDMA2000) by sensing the environment and then filling the discovered gaps of unused licensed spectrum with its own transmissions. We focus mainly on the energy-detector spectrum sensing algorithm. The simulations show that CR systems can work efficiently by sensing and adapting to the environment, demonstrating the ability to fill in spectrum holes and serve their users without causing harmful interference to the licensed user.
pptx file
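The energy-detector sensing rule the paper focuses on can be sketched as follows, assuming complex baseband samples and a simplistic threshold calibration (real detectors set the threshold from a target false-alarm probability):

    import numpy as np

    def energy_detect(samples, noise_power, margin=3.0):
        """Declare the channel occupied when measured energy exceeds a
        threshold derived from the noise floor (illustrative calibration)."""
        energy = np.mean(np.abs(samples) ** 2)    # test statistic
        threshold = noise_power * margin
        return energy > threshold, energy

    rng = np.random.default_rng(2)
    noise = rng.normal(0, 1, 1000) + 1j * rng.normal(0, 1, 1000)  # power ~ 2
    signal = 3 * np.exp(2j * np.pi * 0.1 * np.arange(1000)) + noise
    print(energy_detect(noise, noise_power=2.0))    # -> (False, ~2)
    print(energy_detect(signal, noise_power=2.0))   # -> (True, ~11)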
Artificial Immune System Based Image Enhancement
Sushmita Ganguli, Prasant Mahapatra and Amod Kumar (CSIR-Central Scientific Instruments Organisation, India)
The artificial immune system (AIS), inspired by the immune system of vertebrates, can be used for solving optimization problems. In this paper, the Negative Selection Algorithm (NSA), a model of AIS, is used for image enhancement, which is treated as an optimization problem. Image enhancement is performed by enhancing the pixel intensities of the images through a parameterized transformation function, and the main task is to achieve the best enhanced image by optimizing the parameters. The results prove better when compared with other standard enhancement techniques like Histogram Equalization (HE) and Linear Contrast Stretching (LCS).
pdf file
Benchmarking Support Vector Machines Implementation Using Multiple Techniques
Sukanya MV (Amrita Viswavidhyapeetham, India); Shiju Sathyadevan and Unmesha Sreeveni UB (Amrita Vishwa Vidyapeetham, India)
Data management becomes a complex task when hundreds of petabytes of data are being gathered, stored and processed on a day-to-day basis, and efficient processing of this exponentially growing data is inevitable in this context. This paper discusses the processing of a huge amount of data through the Support Vector Machine (SVM) algorithm using different techniques, ranging from a single-node linear implementation to parallel processing using distributed frameworks like Hadoop. The Map-Reduce component of Hadoop performs the parallelization, which is used to feed information to Support Vector Machines (SVMs), a machine learning algorithm applicable to classification and regression analysis. The paper also gives a detailed anatomy of the SVM algorithm and sets a roadmap for implementing it in both linear and Map-Reduce fashion. The main objective is to explain in detail the steps involved in developing an SVM algorithm from scratch using standard linear and Map-Reduce techniques, and to conduct a performance analysis across the linear implementation of SVM, SVM on single-node Hadoop, SVM on a Hadoop cluster, and a proven tool like R, gauging them with respect to the accuracy achieved, their processing pace on varying data sizes, and their capability to handle huge data volumes without breaking.
pptx file
Automatic Classification of Brain MRI Images Using SVM and Neural Network Classifiers
Natteshan N v s (College of Engineering & Anna University Chennai, India); Angel Arul Jothi J (Anna University, India)
Computer Aided Diagnosis (CAD) is a technique where diagnosis is performed in an automatic way. This work has developed a CAD system for automatically classifying a given brain Magnetic Resonance Imaging (MRI) image as 'tumor affected' or 'tumor not affected'. The input image is preprocessed using a Wiener filter and contrast limited adaptive histogram equalization (CLAHE). The image is then quantized and aggregated to obtain reduced image data, which is segmented into four regions (gray matter, white matter, cerebrospinal fluid and a high-intensity tumor cluster) using the Fuzzy C-Means (FCM) algorithm. The tumor region is extracted using an intensity metric, and a contour is evolved over the identified tumor region using an active contour model (ACM) to extract the exact tumor segment. Thirty-five features, including gray level co-occurrence matrix (GLCM) features, gray level run length matrix (GLRL) features, statistical features and shape-based features, are extracted from the tumor region, and neural network and Support Vector Machine (SVM) classifiers are trained on them. Results indicate that the SVM classifier with a quadratic kernel performs better than with an RBF kernel, and that the neural network classifier with fifty hidden nodes performs better than with twenty-five hidden nodes. It is also evident from the results that the average running time of FCM is lower when used on reduced image data.
Blending Concept Maps with Online Labs for STEM Learning
Raghu Raman (Amrita University, India); Mithun Haridas (Create@Amrita, India); Prema Nedungadi (Amrita University, India)
In this paper we describe the architecture of an e-learning environment that blends concept maps with Online Labs (OLabs) to enhance student performance in biology. In the Indian context, secondary school students' conceptual understanding of hard topics in biology is at risk because of the lack of qualified teachers and of the equipment needed in labs to conduct experiments. A concept map provides a visual framework which allows students to get an overview of a concept, its various sub-concepts and their relationships and linkages. OLabs, with its animations, videos and simulations, is an interactive, immersive approach to practicing science experiments. The blended e-learning environment was tested by systematically developing a concept map for the concept "Photosynthesis" and successfully integrating it into the OLabs environment. Our blended approach to science concept understanding has interesting implications for teacher training programs.
pptx file
Cognitive Load Management in Multimedia Enhanced Interactive Virtual Laboratories
Krishnashree Achuthan (Amrita Center for Cybersecurity Systems and Networks & Amrita University, India); Lakshmi Bose (Amrita Vishwa Vidyapeetham Kollam, India); Sayoojyam Brahmanandan (Amrita Vishwa Vidyapeetham, India)
Learning in multimedia-enhanced interactive environments has distinctly impacted the cognitive processing of information. Theoretical learning requires conceptual understanding, while experimental learning requires cognition of the underlying phenomena in addition to a firm grasp of procedures and protocols. Virtual laboratories have recently been introduced to supplement laboratory education. In this paper, an outline of the modes of knowledge representation for virtual laboratories is presented. The results from this work show how the combination of physical and sensory representations in virtual laboratories plays a key role in the overall understanding of the content; information processing through visual, auditory, pictorial and interactive modes offers unique pathways to cognition. An analysis of comprehension for N=60 students showed a significant change in the time taken to answer questions, as well as an overall improvement in scores, when students were exposed to multimedia enhanced interactive virtual laboratories (MEIVL). The study also showed a reduction in the perceived difficulty of understanding physics experiments. Statistical tests on various modes of assessment, conducted both online and in the classroom, quantify the extent of improvement in learning based on the enabling, facilitating and split-attention aspects of MEIVL.
Comparative Analysis of Radial Basis Functions with SAR Images in Artificial Neural Network
Abhisek Paul (NITA, India)
Radial Basis Functions (RBFs) are used to optimize many mathematical computations. In this paper we use the Gaussian RBF (GRBF), Multi-Quadratic RBF (MQ-RBF), Inverse Multi-Quadratic RBF (IMQ-RBF) and q-Gaussian RBF (q-GRBF) to approximate singular values of SAR (Synthetic Aperture Radar) color images. Simulations and mathematical comparisons show that the q-Gaussian RBF gives a better approximation than the other RBF methods in an Artificial Neural Network.
rar file
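A minimal sketch of the four radial basis functions being compared, assuming NumPy; the shape parameter eps and the value of q are illustrative:

    # Sketch of the four RBFs; q -> 1 recovers the Gaussian from the q-Gaussian.
    import numpy as np

    def gaussian(r, eps=1.0):            # GRBF
        return np.exp(-(eps * r) ** 2)

    def multiquadric(r, eps=1.0):        # MQ-RBF
        return np.sqrt(1.0 + (eps * r) ** 2)

    def inv_multiquadric(r, eps=1.0):    # IMQ-RBF
        return 1.0 / np.sqrt(1.0 + (eps * r) ** 2)

    def q_gaussian(r, eps=1.0, q=1.5):   # q-GRBF
        return np.maximum(1.0 - (1.0 - q) * (eps * r) ** 2, 0.0) ** (1.0 / (1.0 - q))

    r = np.linspace(0.0, 2.0, 5)
    for f in (gaussian, multiquadric, inv_multiquadric, q_gaussian):
        print(f.__name__, np.round(f(r), 4))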

S39: ISI-2014: Pattern Recognition, Signal and Image Processing-IIgo to top

Room: 110 Block E First Floor
Chairs: Gustavo Fernández Domínguez (AIT Austrian Institute of Technology, Austria), Bharathi R K (S. J. College of Engineering, Mysore, Karnataka & SJCE, Mysore, India)
A Fuzzy Regression Analysis Based No Reference Image Quality Metric
Indrajit De (MCKV Institute of Engineering, India); Jaya Sil (Bengal Engineering and Sc. University, India)
In this paper a quality metric for a test image is designed using fuzzy regression analysis, by modeling membership functions of an interval type-2 fuzzy set representing the quality class labels of the image. The output of the fuzzy regression equation is a fuzzy number, from which crisp outputs are obtained using the residual error, defined as the difference between the observed and estimated output of the image. In order to remove human bias in assigning quality class labels to the training images, the crisp outputs of the fuzzy numbers are combined using the weighted average method. Weights are obtained by exploring the nonlinear relationship between the mean opinion score (MOS) of the image and the defuzzified output. The resulting metric has been compared with existing quality metrics, producing satisfactory results.
pptx file
A New Single Image Dehazing Approach Using Modified Dark Channel Prior
Dehazing is a challenging problem because the quality of an image captured in bad weather is degraded by the presence of haze in the atmosphere, and a hazy image generally has low contrast. In this paper we propose a new method for single image dehazing using a modified dark channel prior and an adaptive Gaussian filter. In the proposed method, hazy images are first converted into the LAB color space, and Adaptive Histogram Equalization is applied to improve their contrast. The method then estimates the transmission map using the dark channel prior; this produces a more refined transmission map than the original dark channel prior method, and an adaptive Gaussian filter is employed for further refinement. Quantitative and visual results show that the proposed method can remove haze efficiently and clearly reconstruct the fine details of the original scene.
pptx file
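A minimal sketch of the dark channel prior step at the core of such methods, following He et al.'s standard formulation t(x) = 1 - omega * dark(I/A); NumPy and SciPy are assumed, and the patch size and atmospheric light are illustrative:

    # Sketch: dark channel prior and transmission map estimation.
    import numpy as np
    from scipy.ndimage import minimum_filter

    def dark_channel(img, patch=15):
        """Per-pixel min over RGB, then a local minimum filter."""
        return minimum_filter(img.min(axis=2), size=patch)

    def estimate_transmission(img, atmosphere, omega=0.95, patch=15):
        """t(x) = 1 - omega * dark_channel(I / A)."""
        normalized = img.astype(np.float64) / atmosphere
        return 1.0 - omega * dark_channel(normalized, patch)

    hazy = np.random.rand(64, 64, 3)      # stand-in for a hazy image
    A = np.array([0.9, 0.9, 0.9])         # crude atmospheric light estimate
    t = estimate_transmission(hazy, A)
    print("transmission range: %.3f .. %.3f" % (t.min(), t.max()))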
A Novel Image Encryption and Authentication Scheme Using Chaotic Maps
Amitesh Singh Rajput (Rajiv Gandhi Proudyogiki Vishwavidyalaya (RGPV), India); Mansi Sharma (SOIT, RGPV, India)
The paper presents an amalgamated approach for image encryption and authentication. An ideal image cipher should be such that an adversary cannot modify the image, and any modifications that are made can be detected. The proposed scheme is novel and presents a unique approach that provides two-level security for the image. Hashing and two chaotic maps are used in the algorithm: a hash of the plain image is computed, and the image is encrypted using key-dependent masking and diffusion techniques. The initial key length is 132 bits, which is extended to 148 bits. Performance and security analysis show that the proposed scheme is secure against different types of attacks and can be adopted for real-time applications.
pdf file
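For illustration of chaotic masking in general (not the authors' two-map, 148-bit design), a minimal logistic-map keystream sketch; the seed and map parameter are stand-ins for the secret key:

    # Illustrative chaotic-map keystream cipher: a logistic map generates
    # key-dependent bytes that mask the image via XOR.
    import numpy as np

    def logistic_keystream(n, x0=0.654321, r=3.99, burn=1000):
        """Iterate x -> r*x*(1-x); discard transients; quantize to bytes."""
        x, out = x0, np.empty(n, dtype=np.uint8)
        for _ in range(burn):
            x = r * x * (1.0 - x)
        for i in range(n):
            x = r * x * (1.0 - x)
            out[i] = int(x * 256) % 256
        return out

    img = np.random.randint(0, 256, (8, 8), dtype=np.uint8)  # toy "plain image"
    ks = logistic_keystream(img.size).reshape(img.shape)
    cipher = img ^ ks                        # masking (XOR diffusion step)
    assert np.array_equal(cipher ^ ks, img)  # decryption recovers the image
    print(cipher)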
A Sphere Decoding Algorithm for Underdetermined OFDM/SDMA Uplink System with an Effective Radius Selection
Ali CK (NIT Calicut, India); Shahnaz K V (National Institute of Technology, Calicut, India)
Multiuser Detection (MUD) techniques for orthogonal frequency division multiplexing/space division multiple access (OFDM/SDMA) systems remain a challenging area, especially when the number of transmitters exceeds the number of receivers. Maximum Likelihood (ML) detection is optimal but infeasible due to its high complexity when a large number of antennas is used together with a high-order modulation scheme. The Sphere Decoding (SD) algorithm, with lower complexity but performance near ML, has been explored widely for determined and overdetermined MIMO channels, but very few papers that efficiently deal with an underdetermined OFDM/SDMA channel have been published so far. In this paper a simple pseudo-antenna augmentation scheme is employed to utilize SD in the rank-deficient case. An effective radius selection method is also included.
pdf file
A Survey on Spiking Neural Networks in Image Processing
Julia Tressa Jose, Amudha J and Sanjay G (Amrita Vishwa Vidyapeetham, India)
Spiking Neural Networks (SNNs) are the third generation of Artificial Neural Networks and are fast gaining interest among researchers in image processing applications. This paper attempts to provide a state-of-the-art survey of SNNs in image processing. Several existing works are surveyed and probable research gaps are exposed.
ppt file
AI Based Automated Identification and Estimation of Noise in Digital Images
Karibasappa K G (BVB College of Engineering and Technology, India); Karibasappa K (Dayananda Sagar College of Engineering, India)
Noise identification, estimation and denoising are important and essential stages in image processing. In this paper, we propose an automated system for noise identification and estimation that adopts artificial intelligence techniques such as Probabilistic Neural Networks (PNN) and fuzzy logic. PNNs are used to identify and classify images affected by different types of noise by extracting the statistical features of the noise, and PNN performance is evaluated for classification accuracy. Fuzzy logic concepts, such as the Fuzzy C-Means clustering technique, are employed for estimating the noise affecting the image, and the results are compared with other existing estimation techniques.
pptx file
An Imperceptible Digital Image Watermarking Technique by Compressed Watermark Using PCA
Shaik Ayesha (Indian Institute of Information Technology Design and Manufacturing (IIITDM) Kancheepuram, India); Masilamani (IIITD&M Kancheepuram, India)
To provide secure communication, a modified digital watermarking scheme using the discrete cosine transform (DCT) and principal component analysis (PCA) is proposed. The scheme uses the DCT for watermarking and PCA, also known as the Karhunen-Loeve (KL) transform, for compressing the watermark. In this technique, PCA is used in addition to DCT for watermarking the digital image, in order to improve the quality of the watermarked image.
pdf file
An Investigation of fSVD and Ridgelet Transform for Illumination and Expression Invariant Face Recognition
Bhaskar Belavadi (SJB Institute of Technology, India); K Mahantesh (SJBIT, India); Geetha G p (SJB Institute of Technology, India)
This paper presents a simple yet effective framework for face recognition based on the combination of flustered SVD (fSVD) and the Ridgelet transform, improving on [21] in computational efficiency and in invariance to facial expression and illumination. First, fSVD is applied to an image by modelling the SVD and selecting a proportion of the modelled coefficients to derive an illumination-invariant image. The Ridgelet transform is then employed to extract discriminative features exhibiting linear properties at different orientations, by representing smoothness along the edges of the flustered image and mapping line singularities into point singularities, which improves the low-frequency information useful in face recognition. PCA is used to project the higher-dimensional feature vector onto a low-dimensional feature space to increase numerical stability. Finally, five different similarity measures are used for classification to obtain an average correctness rate. We demonstrate the proposed technique on the widely used ORL dataset and achieve a high recognition rate in comparison with several state-of-the-art techniques.
pdf file
Application of Fusion Technique in Satellite Images for Change Detection
Namrata Agrawal (Indian Institute of Technology Roorkee, India); Dharmendra Singh (Indian Institute of Technology, Roorkee, India); Sandeep Kumar (Computer Science and Engineering, IIT Roorkee, India)
The identification of land cover transitions and changes on a given region is required for environmental monitoring, agricultural surveys, etc. Many supervised and unsupervised change detection methods have been developed; the unsupervised approach analyzes a difference image by automatic thresholding. In this paper, an approach is proposed for automatic change detection that exploits the change information present in multiple difference images. Change detection is performed by automatically thresholding each difference image, thereby classifying pixels into changed and unchanged classes. Various techniques are available to create a difference image, but their results are greatly inconsistent and no single technique is applicable in all situations. In this work, the expectation maximization (EM) algorithm is used to determine the threshold that creates the change map, and an intersection method is selected to fuse the change-map information from the multiple difference images. MODIS 250-m images are used for identifying the land cover changes.
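A minimal sketch of the EM-based thresholding idea, assuming scikit-learn's two-component Gaussian mixture as the EM implementation; the difference image here is synthetic:

    # Sketch: fit a two-component Gaussian mixture to difference-image values
    # with EM, then label pixels as changed/unchanged.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(1)
    # Synthetic difference image: mostly unchanged pixels plus a changed patch.
    diff = rng.normal(10, 3, (64, 64))
    diff[20:30, 20:30] = rng.normal(60, 5, (10, 10))

    gm = GaussianMixture(n_components=2, random_state=0)
    labels = gm.fit_predict(diff.reshape(-1, 1)).reshape(diff.shape)
    changed = labels == np.argmax(gm.means_.ravel())   # high-mean component
    print("changed pixels:", int(changed.sum()))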
Application of Simulated Annealing for Inverse Analysis of a Single-Glazed Solar Collector
Ranjan Das (Indian Institute of Technology Ropar, India)
This work presents the application of a simulated annealing (SA)-based evolutionary optimization algorithm to solve an inverse problem of a single-glazed flat-plate solar collector. For a given configuration, the performance of a solar collector may be expressed by its heat loss factor. Four parameters, namely air gap spacing, glass cover thickness, thermal conductivity and the emissivity of the glass cover, are simultaneously estimated by SA to meet a given heat loss factor distribution. Many possible combinations of the unknowns are observed to satisfy the same requirement, which results in satisfactory reconstruction of the required heat distribution.
ppt file
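A minimal sketch of the simulated annealing loop for such an inverse problem; the four-parameter least-squares objective is a stand-in, not the collector's actual heat-loss model:

    # Sketch: Metropolis acceptance with geometric cooling over 4 parameters.
    import math
    import random

    random.seed(0)
    target = [0.02, 0.004, 1.0, 0.88]   # hypothetical "true" parameters

    def cost(p):
        """Misfit between candidate parameters and the target response."""
        return sum((a - b) ** 2 for a, b in zip(p, target))

    p = [0.05, 0.002, 0.5, 0.5]          # initial guess
    T = 1.0
    while T > 1e-6:
        q = [a + random.gauss(0, 0.01) for a in p]      # random perturbation
        dE = cost(q) - cost(p)
        if dE < 0 or random.random() < math.exp(-dE / T):
            p = q                                        # accept move
        T *= 0.999                                       # geometric cooling
    print("estimated parameters:", [round(a, 4) for a in p])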
Coslets: A Novel Approach to Explore Object Taxonomy in Compressed DCT Domain for Large Image Datasets
K Mahantesh (SJBIT, India); Manjunath Aradhya (Sri Jayachamarajendra College of Engineering, India); Niranjan SK (Sri Jayachamarajendra College of Engineering (SJCE) - Mysore, India)
The main idea of this paper is to carry forward our earlier work on image segmentation [11] and to propose a novel transform technique, known as Coslets, derived by applying a 1D wavelet in the DCT domain to categorize objects in large multiclass image datasets. First, k-means clustering is applied to an image in a complex hybrid color space to obtain multiple disjoint regions based on the color homogeneity of pixels. The DCT then brings out the low-frequency components expressing the image's visual features, and wavelets decompose these coefficients into multi-resolution sub-bands, giving the advantage of spectral analysis for developing robust and geometrically invariant structural object features. The set of observed data (i.e., transformed coefficients) is mapped onto a lower-dimensional feature space with a transformation matrix using PCA. Finally, different distance measures are used for classification to obtain an average correctness rate for object categorization. We demonstrate the proposed method on two very challenging datasets and obtain leading classification rates in comparison with several benchmark techniques from the literature.
pdf file
Design of Multiband Metamaterial Microwave BPFs for Microwave Wireless Applications
Ahmed Reja and Syed Ahmad (Jamia Millia Islamia, India)
This work proposes an end-coupled half-wavelength resonator dual bandpass filter (BPF). The filter is designed to have a 10.9% fractional bandwidth (FBW) at a center frequency (fo) of 5.5 GHz. A dual-band BPF with a more drastic size reduction can be obtained by using metallic vias acting as shunt-connected inductors to obtain negative permittivity (-ε). The process of etching rectangular split ring resonators (SRRs), instead of open-end microstrip transmission lines (TLs), to provide negative permeability (-µ) in a planar BPF is presented. The primary goals of these ideas are size reduction and dual- and multi-band frequency responses. These metamaterial transmission lines are suitable for microwave filter applications where miniaturization and dual- and multi-narrow pass-bands are required. Numerical results for the end-coupled microwave BPF designs are obtained, and the filters are simulated using the HFSS software package. The designs are implemented on Roger RO3210 substrate material with dielectric constant εr = 10.8 and substrate height h = 1.27 mm.
ppt file
Grayscale to Color Map Transformation for Efficient Image Analysis on Low Processing Devices
Shitala Prasad (NTU Singapore, India); Piyush Kumar (IIIT Allahabad, India); Kumari Priyanka Sinha (NIT PATNA, India)
This paper presents a novel method to convert a grayscale image to a color image for quality image analysis. Grayscale image processing operations are very challenging and limited, and the information extracted from such images can be inaccurate. Therefore, the input image is transformed using a reference color image by reverse engineering: the gray levels of the grayscale image are mapped to the color image in all three layers (red, green, blue), and the mapped pixels are used to reconstruct the grayscale image as a three-dimensional color matrix. The algorithm is simple and accurate enough to be used in domains such as medical imaging, satellite imaging and real-scene agriculture/environment analysis. The algorithm has also been implemented and tested on low-cost mobile devices, and the results are found to be appreciable.
pptx file
Incorporating Machine Learning Techniques in MT Evaluation
Nisheeth Joshi and Iti Mathur (Banasthali University, India); Hemant Darbari and Ajai Kumar (DIT, MIT, Govt. of India, India)
From a project manager's perspective, Machine Translation (MT) evaluation is the most important activity in MT development: using its results, one can assess the progress of the MT development task. Traditionally, MT evaluation is done either by human experts who know both the source and target languages, or by automatic evaluation metrics. Both techniques have their pros and cons. Human evaluation is very time consuming and expensive, but it provides a good and accurate assessment of MT engines; automatic evaluation metrics provide very fast results but lack the precision of human judges. Thus a need is felt for a mechanism that can produce fast results along with a good correlation with the results of human evaluation. In this paper, we address this issue by showing the implementation of machine learning techniques in MT evaluation, and we compare the results of this evaluation with human and automatic evaluation.
pptx file

S40: SSCC-2014: Authentication and Access Control Systems; Encryption and Cryptographygo to top

Room: 015 Block E Ground Floor
Chair: Shunmuganathan K l (RMK Engineering College, India)
Ideal and Computationally Perfect Secret Sharing Schemes for Generalized Access Structures
Dileep Pattiapti (University of Hyderabad, India); Appala Naidu Tentu (CR Rao AIMSCS, India); China Vadlamudi (University of Hyderabad, India)
A secret sharing scheme is proposed in this paper. The scheme is ideal, uses the computationally perfect concept, employs a one-way function and realizes generalized access structures. The scheme is useful for non-ideal access structures: for example, Stinson identified eighteen possible non-isomorphic monotone access structures on four participants, of which fourteen admit ideal and perfect secret sharing schemes, while the remaining four cannot be made both perfect and ideal. By making use of the computationally perfect concept, we propose ideal schemes for those four access structures. The novelty of the scheme is that it is applicable for any number of participants and generates the least amount of public information; in fact, we show results establishing that the proposed scheme is optimal for access structures of four or fewer participants. Our scheme can be extended to multiple secrets; since some applications require that a secret sharing scheme designed for them be extended to the case of multiple secrets, our approach is useful in such scenarios.
pdf file
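For context, a minimal sketch of the classic Shamir t-out-of-n scheme that such generalized constructions extend, using only the Python standard library; the toy prime field and parameters are illustrative (the paper's computationally perfect, one-way-function construction is not shown):

    # Sketch: Shamir t-of-n secret sharing over a small prime field.
    import random

    P = 2087                      # small prime field (toy size only)
    random.seed(0)

    def make_shares(secret, t, n):
        """Random degree t-1 polynomial with f(0) = secret; shares are f(i)."""
        coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
        f = lambda x: sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
        return [(i, f(i)) for i in range(1, n + 1)]

    def reconstruct(shares):
        """Lagrange interpolation at x = 0 over GF(P)."""
        s = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            s = (s + yi * num * pow(den, P - 2, P)) % P
        return s

    shares = make_shares(secret=1234, t=3, n=5)
    print(reconstruct(shares[:3]))    # any 3 of 5 shares recover 1234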
Security Enhancement in Web Services by Detecting and Correcting Anomalies in XACML Policies At Design Level
Priyadharshini M (Anna University, India); Yowan J (Anna University, India); Baskaran R (Anna University, India)
The significance of XACML (eXtensible Access Control Markup Language) policies for access control is increasing immeasurably, particularly in web services. XACML policies are web access control policies used to permit genuine users to access resources and to deny sham users. Generating XACML policies correctly is a very important task in order to avoid security seepage. Detecting and correcting inconsistencies in access control policies is highly time consuming and tedious when the XACML policies are large, and doing so at execution time requires even more time and effort. The purpose of this work is to devise an anomaly detection and correction tool to be used at policy design time, so as to reduce time and effort. With the help of our XACML Policy Analyzer tool, a policy designer can easily discover and resolve inconsistencies such as conflicts and redundancies in XACML policies.
ppt file
Cheating Prevention Using Genetic Feature Based Key in Secret Sharing Schemes
L Jani Anbarasi (Research Scholar, Anna University); Modigari Narendra (Sri Ramakrishna Institute of Technology, India); Anandha Mala G. s (Anna University, Chennai, India)
Shamir proposed a t-out-of-n secret sharing scheme in which secrets are encrypted into scrambled images called shares or shadows. The secrets can be reconstructed when t or more participants pool their shares or shadow images together. A major drawback of such schemes is that if a forged share is pooled, reconstruction yields a wrong secret; some participants can thus cheat, deceiving the remaining participants by pooling forged shares. Many cheating prevention schemes have been proposed, using authentication bits, hash codes, etc. This paper proposes a new biometric personal authentication technique which prevents participant cheating. The results of the system and the security analysis show that the proposed scheme gives secret sharing participants confidence in the recovered original secret, without the need to worry about forged shadow images or dishonest participants.
ppt file
Design for Prevention of Intranet Information Leakage Via Emails
Krishnashree Achuthan (Amrita Center for Cybersecurity Systems and Networks & Amrita University, India); Neenu Manmadhan (Amrita University, India); Hari Narayanan (Amrita Vishwa Vidyapeetham, India); Jayaraj Poroor (Amrita Vishwa Vidyapeetham (Amrita University), India)
The ubiquitous presence of internet and network technologies has made electronic mail the primary medium of communication. Both between and within organizations, sensitive and personal information often transits through electronic mail systems undetected, and information leakage through this mode of communication has become a daunting problem. The mail volume within an organization is often so large that manual monitoring is impossible. In this paper, the integration of secure information flow techniques into intranet electronic mail systems is investigated. Categorization of emails based on sensitivity is accomplished effectively using machine learning techniques. Analyzing the information flow while mapping, categorizing and sorting emails in real time, prior to their receipt, is characterized in this study. Defining security policies and applying lattice models for the controlled exchange of emails is discussed. The paper proposes a secure architecture for an email web application; experimental analysis of the application's accuracy was performed using the Enron email dataset.
ppt file
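A minimal sketch of the sensitivity-categorization step described above, assuming scikit-learn; the tiny labeled corpus and class names are invented for illustration (the paper itself evaluates on the Enron dataset):

    # Sketch: TF-IDF features and a Naive Bayes classifier for email sensitivity.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    mails = [
        "quarterly revenue figures attached, do not forward",
        "password and VPN credentials for the new server",
        "team lunch on friday, see you there",
        "reminder: all-hands meeting in the auditorium",
    ]
    labels = ["sensitive", "sensitive", "public", "public"]

    clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
    clf.fit(mails, labels)
    print(clf.predict(["draft contract terms, keep confidential"]))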
Tag Digit Based Honeypot to Detect Shoulder Surfing Attack
Nilesh Chakraborty (Indian Institute of Technology Patna, India); Samrat Mondal (IIT Patna, India)
Traditional password-based authentication schemes are vulnerable to shoulder surfing attacks: if an attacker observes a legitimate user entering a password, the attacker can use those credentials later to log in to the system illegally and perform malicious activities. Many methodologies exist to prevent such attacks; these methods are either partially or fully observable to the attacker. In this paper, however, we focus on detection of shoulder surfing attacks rather than prevention. We introduce the concept of a tag digit to create a trap known as a honeypot. Using the proposed methodology, if shoulder surfers try to log in using others' credentials, there is a high chance that they will be caught red-handed. Experimental analysis shows that, unlike existing preventive schemes, the proposed methodology does not require much computation on the user's end. Thus, from the security and usability perspectives, the proposed scheme is quite robust and powerful.
pdf file
PWLCM Based Video Encryption Through Compressive Sensing
Abhishek Kolazi (National Institute of Technology Calicut, India); Sudhish N George (National Institute of Technology, Calicut, India); Deepthi P.p (NIT Calicut, India)
In this paper, a new approach for encrypting video data through compressive sensing is proposed. Even though Orsdemir's cryptographic-key-based measurement matrix (Φ matrix) generation technique [11] provides a robust encryption method for the CS framework, it cannot provide a large key space or strong security. Hence, the aim of this work is to improve the security and key space of the compressive sensing paradigm without affecting its basic features, such as good reconstruction performance, robustness to noise and low encoder complexity. To this end, a piecewise linear chaotic map (PWLCM) based Φ matrix generation technique is proposed. The PWLCM is run for a random number of iterations to form a random array, which is used as the initial seed for generating the secure Φ matrix. The initial value, the system parameter and the number of iterations of the PWLCM are kept as the secret key. The proposed Φ matrix generation technique is validated with popular CS-based video reconstruction techniques, and it is found that the proposed method improves the key space and security without affecting the basic features of compressive sensing.
pptx file
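A minimal sketch of PWLCM-driven Φ matrix generation in the spirit of the abstract; the seed, control parameter, warm-up length and quantization to ±1 Bernoulli entries are illustrative assumptions, not the paper's exact construction:

    # Sketch: iterate a piecewise linear chaotic map from a secret seed
    # and quantize its orbit into a signed measurement matrix.
    import numpy as np

    def pwlcm(x, p=0.27):
        """Piecewise linear chaotic map on (0, 1) with control parameter p."""
        if x >= 0.5:                      # symmetric upper half
            x = 1.0 - x
        return x / p if x < p else (x - p) / (0.5 - p)

    def phi_matrix(m, n, x0=0.3141, p=0.27, burn=500):
        x, vals = x0, []
        for _ in range(burn):             # secret number of warm-up iterations
            x = pwlcm(x, p)
        for _ in range(m * n):
            x = pwlcm(x, p)
            vals.append(1.0 if x > 0.5 else -1.0)   # Bernoulli +/-1 entries
        return np.array(vals).reshape(m, n) / np.sqrt(m)

    Phi = phi_matrix(32, 128)             # 32 measurements of a length-128 signal
    print(Phi.shape, Phi[0, :6])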
Cryptanalysis of Image Encryption Algorithm Based on Pixel Shuffling and Chaotic S-box Transformation
Pankaj Kumar Sharma (Jamia Millia Islamia, India); Musheer Ahmad (Jamia Millia Islamia, New Delhi, India); Parvez M Khan (Integral University & IUL, India)
Recently, Hussain et al. proposed an image encryption algorithm with three independent phases: (1) total pixel shuffling performed in the spatial domain with permutation sequences extracted from a chaotic skew tent map, (2) diffusion carried out using random codes generated from the same chaotic map, and (3) extra confusion induced through a substitution-box transformation. Though the encryption algorithm achieves optimal values of Shannon's confusion and diffusion and exhibits great encryption strength, a careful analysis unveils an inherent security flaw that leaves it vulnerable to cryptographic attack. In this paper, we analyze this security weakness and propose a chosen-plaintext attack with an inverse S-box that breaks the algorithm completely. It is shown that the plain image can be successfully recovered without knowing the secret key. A computer simulation of the chosen-plaintext attack highlights the ineptness of the Hussain et al. algorithm and shows that it is not advisable to deploy it for practical encryption of digital images.
A Mathematical Analysis of Elliptic Curve Point Multiplication
Ravi Kishore Kodali (National Institute of Technology, Warangal, India)
This work presents a mixed-coordinate-system-based elliptic curve point multiplication algorithm. It employs the width-w Non-Adjacent Form (NAF) algorithm for point multiplication and uses the Montgomery trick to pre-compute the odd points Pi = iP for i = 1, 3, ..., 2^w - 1 with only one field inversion.
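A minimal sketch of the width-w NAF recoding underlying the algorithm, under the convention stated in the abstract (nonzero digits odd, up to 2^w - 1); the elliptic-curve point arithmetic and the Montgomery-trick pre-computation are omitted:

    # Sketch: width-w NAF recoding of a scalar k.
    def wnaf(k, w=4):
        """Width-w NAF of k, least-significant digit first; nonzero digits
        are odd with |d| <= 2**w - 1 (one common convention)."""
        digits = []
        while k > 0:
            if k & 1:
                d = k % (1 << (w + 1))        # k mod 2^(w+1)
                if d >= (1 << w):
                    d -= 1 << (w + 1)         # map into the signed digit range
                k -= d
            else:
                d = 0
            digits.append(d)
            k >>= 1
        return digits

    digits = wnaf(1234567, w=4)
    # Sanity check: sum of digit_i * 2^i reconstructs k.
    assert sum(d * (1 << i) for i, d in enumerate(digits)) == 1234567
    print(digits)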
An Approach to Cryptographic Key Exchange Using Fingerprint
Subhas Barman (Govt. College of Engineering and Textile Technology, Berhampore, India); Samiran Chattopadhyay (Jadavpur University, India); Debasis Samanta (Indian Institute of Technology, Kharagpur, India)
Cryptography is the most reliable tool in network and information security. The security of cryptography depends on cryptographic key management, which consists of key generation, key storage and key sharing. A randomly generated long key (of 128, 192 or 256 bits) is difficult to remember and consequently needs to be stored in a secure place, with an additional knowledge- or token-based authentication factor used to control unauthorized access to the key. However, passwords are easy to break, tokens can be damaged or stolen, and knowledge- or token-based authentication does not assure the non-repudiation of a user. As an alternative, it is advocated to combine biometrics with cryptography, in what is known as a crypto-biometric system (CBS), to address the above limitations of traditional cryptography and to enhance network security. This paper introduces a CBS to exchange a randomly generated cryptographic key with the user's fingerprint data. The cryptographic key is hidden within the fingerprint data using a fuzzy commitment scheme, and it is extracted from the cryptographic construction upon production of genuine fingerprint data of that user. Our work also protects the privacy and security of the user's fingerprint identity using a revocable fingerprint template.
pdf file

S41: SSCC-2014: Security and Privacy in Networked Systemsgo to top

Room: 016 Block E Ground Floor
Chair: Deepayan Bhowmik (Sheffield Hallam University, United Kingdom (Great Britain))
Security Analysis of an Adaptable and Scalable Group Access Control Scheme for Managing Wireless Sensor Networks
Odelu Vanga (Indian Institute of Information Technology Chittoor, India); Ashok Kumar Das (International Institute of Information Technology, Hyderabad, India); A. Goswami (Indian Institute of Technology, Kharagpur, India)
Recently, Wu et al. proposed an adaptable and scalable group access control scheme (GAC) for managing wireless sensor networks (WSNs) [Telematics and Informatics, 30:144-157, 2013], claiming that the mechanism provides forward and backward secrecy and prevents man-in-the-middle attacks. However, in this paper we revisit Wu et al.'s scheme and show that it fails to provide forward secrecy as well as backward secrecy and does not prevent the man-in-the-middle attack. As a result, Wu et al.'s scheme is not suitable for practical applications.
pdf file
Secure Hierarchical Routing Protocol (SHRP) for Wireless Sensor Network
Sohini Roy (Arizona State University, USA); Ayan Das (Calcutta Institute of Engineering and Management, India)
Wireless Sensor Networks (WSNs) have emerged as an important supplement to modern wireless communication systems due to their wide range of applications. The communication of sensitive data and operation in hostile environmental conditions require security, but the energy constraints, limited computational ability and low storage capacity of sensor nodes make the implementation of security challenging. The proposed scheme adopts a level-based secure hierarchical approach to maintain energy efficiency. It incorporates lightweight security mechanisms such as nested hash-based message authentication codes (HMAC), the Elliptic-Curve Diffie-Hellman (ECDH) key exchange scheme and the Blowfish symmetric cipher. Simulation results show that the scheme performs better than the existing secure routing protocols FBSR and ATSR.
pptx file
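A minimal sketch of two of the lightweight primitives SHRP names (ECDH key agreement and HMAC message authentication), assuming the Python cryptography package plus the standard library; the curve, KDF info string and message are illustrative choices, not the protocol's specification:

    # Sketch: ECDH agreement between two nodes, then HMAC over a routing message.
    import hashlib, hmac
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # ECDH key exchange between two nodes.
    sk_a = ec.generate_private_key(ec.SECP256R1())
    sk_b = ec.generate_private_key(ec.SECP256R1())
    shared_a = sk_a.exchange(ec.ECDH(), sk_b.public_key())
    shared_b = sk_b.exchange(ec.ECDH(), sk_a.public_key())
    assert shared_a == shared_b

    # Derive a link key, then authenticate a routing message with HMAC.
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"shrp-link-key").derive(shared_a)
    msg = b"route-update: node7 -> node3"
    tag = hmac.new(key, msg, hashlib.sha256).digest()
    print(hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).digest()))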
Power Aware and Secure Dynamic Source Routing Protocol in Mobile Ad Hoc Networks
Mohit Miglani (Computer Sciences Corporation, India); Deepika Kukreja (University School of Information and Technology & Netaji Subhas Institute of Technology, India); Sanjay Kumar Dhurandher (Netaji Subhas Institute of Technology, India); B Reddy (GGSIPU, India)
Mobile Ad Hoc Networks (MANETs) perform well in circumstances where commonly used wireless networks fail to work. To make routing in MANETs secure, a number of security-based routing protocols have been proposed in the literature, but none of them is fully compliant with the MANET environment. We propose a protocol, termed Power-aware Secure Dynamic Source Routing (PS-DSR), that secures the standard Dynamic Source Routing protocol using a power-aware, trust-based approach. The monitoring operation is distributed among a small set of nodes called monitor nodes, which is re-selected sporadically, making the proposed method adaptable to the two focal concerns of MANETs: dynamic network topology and energy-constrained devices. The method detects malicious packet-dropping and packet-modification attacks. It ensures the trustworthy and authentic selection of routes by the PS-DSR protocol and improves the overall performance of the protocol in the presence of malicious nodes.
pptx file
Peers Feedback and Compliance Based Trust Computation for Cloud Computing
Jagpreet Sidhu, Er. (Panjab University, India); Sarbjeet Singh (Panjab University, Chandigarh, India)
Cloud computing is a new computing model where software, platform and infrastructure resources are delivered as services using the pay-as-you-go model. It provides an excellent way to lease numerous types of distributed resources, but it also makes security problems more complicated and more important for cloud users than before. The key barrier to extensive usage of cloud computing is the lack of confidence (trust) in cloud services among potential cloud users; for critical business applications and other sensitive applications, cloud service providers must be selected on the basis of a high level of trustworthiness. In this paper, we present a trust model to evaluate service providers in order to help cloud users select the most reliable providers and services. The model enables clients to determine the trustworthiness of service providers by taking into account three different types of trust, viz. interaction-based, compliance-based and recommendation-based trust; these can be assigned appropriate weights to designate precedence among them when computing total trust. The model has been simulated using MATLAB, and the simulation validates the design objectives of flexibility, robustness and scalability. The proposed model is an initial stride towards a trust model where diverse facets of trust contribute to the formation of trust (confidence) in the mind of the cloud user about a service provider and its offered services.
pptx file
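A minimal sketch of the weighted combination of the three trust facets described above; the weight vector and provider scores are invented for illustration:

    # Sketch: total trust as a weighted sum of three trust facets.
    def total_trust(interaction, compliance, recommendation,
                    weights=(0.5, 0.3, 0.2)):
        """Weighted sum of the three trust types; weights sum to 1."""
        w1, w2, w3 = weights
        return w1 * interaction + w2 * compliance + w3 * recommendation

    providers = {
        "provider-A": (0.9, 0.8, 0.7),
        "provider-B": (0.6, 0.9, 0.8),
    }
    for name, scores in providers.items():
        print(name, round(total_trust(*scores), 3))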
A Review on Mobile Sensor Localization
Jeril Kuriakose (St. John College of Engineering & Manipal University Jaipur, India); Amruth V (Bearys Institute Of Technology, India); Sandesh Ag. (Einsys Consulting Pvt. Ltd., India); Abhilash V (Freelancer, India); Prasanna Kumar (National Institute of Engineering, India); Nithin K (Assistant Professor, India)
Wireless sensor networks (WSNs) have been on a steady rise in the current decade because of progress in hardware design, resource efficiency, communication and routing protocols, and other aspects. Recently, people have started preferring mobile nodes in place of static nodes, which has brought mobile sensor networks into focus. Location information always plays a key role in a Mobile Wireless Sensor Network (MWSN), and precise localization has always been a challenge for mobile sensor nodes. Deploying a GPS receiver on each node would raise the deployment cost of a dense network, and the unavailability of GPS in indoor and underground environments puts its installation into question. Sensor nodes must therefore identify their location coordinates, or a location reference, without using GPS; this is achieved with the help of special nodes that know their own location coordinates, called beacon nodes, and associated protocols. This paper's goal is to discuss the different localization techniques used by mobile sensor nodes to identify their location information. Problems and future issues are also discussed.
pptx file
Research on Access Control Techniques in SaaS of Cloud Computing
Shabana Rehman (Salman bin Abdul Aziz University, Saudi Arabia); Rahul Gautam (Center of Development of Advanced Computing, India)
While the flexibility of cloud computing offers many usage possibilities to organizations, security threats stop them from fully relying on it. Among all security threats, the 'unauthorized access' threat is one of the most important and difficult to manage, and in SaaS, access control issues are of foremost concern. The aim of this paper is to explore the current trends cloud providers follow in implementing access control measures in SaaS. In this article, a critical review of these measures is presented, and their advantages and drawbacks are discussed. On the basis of ongoing research, future research directions in the area of SaaS access control are also identified.
pptx file
Fair-Trust Evaluation Approach (F-TEA) for Cloud Environment
Kandaswamy Gokulnath and Rhymend Uthaiaraj (Anna University, India)
The main objective of the current work is to evaluate the trust of a service provider in a fair manner. Fairness is introduced by considering the type of service (IaaS, PaaS or SaaS) accessed by the cloud user: while updating the trust value, the type of service accessed is not usually considered, whereas the proposed approach identifies the type of service and updates the trust accordingly. Several works on trust evaluation exist in the literature, but no attempt has been made to evaluate trust in a fair manner. In this work, quantitative approaches are proposed to evaluate trust fairly, and the recursive nature of the proposed method restricts its time complexity to linear. Since cloud computing provides three types of services, it becomes vital to consider the type of service along with the trust value; this enables future users to access precisely trusted providers according to their requirements. The dynamic nature of cloud computing challenges trust evaluation through frequently changing behavioral patterns; the quantitative metrics used in this work also address this problem. Simulation results show good performance improvement over available methods on QoS metrics.
pdf file
Quantifying the Severity of Blackhole Attack in Wireless Mobile Adhoc Networks
Satria Mandala (Universitas Telkom, Indonesia); Maznah Kamat (Universiti Teknologi Malaysia, Malaysia); Md Asri Ngadi (Universiti Teknologi Malaysia & UTM, Malaysia); Yahaya Coulibaly (Universiti Teknologi Malaysia, Malaysia); Kommineni Jenni (Universiti Teknologi Malaysia, Malaysia)
The blackhole attack is one of the most severe attacks on MANET routing protocols. Generating this attack is simple and requires no specific tools or sophisticated techniques, yet it seriously corrupts the routing tables of nodes in the network; even worse, it increases the chances of losing confidential data and can deny network services. Many researchers have proposed a variety of solutions to prevent this Conventional Blackhole Attack (CBA); unfortunately, none of them has measured the severity of the attack. Filling this gap, this research proposes new security metrics, namely Corruption of Routing Table (CRT), Compromising Relay Node (CRN) and Compromising Originator Node (CON). In addition, this research introduces a new blackhole attack, the Hybrid Black Hole Attack (HBHA), with two variants: Independent and Cooperative HBHA. The proposed metrics proved effective in measuring the severity of both CBA and the HBHAs. Simulation using the Java in Time Simulator/Scalable Wireless Ad hoc Network Simulator (JiST/SWANS) showed that the Independent HBHA is the most severe attack compared to the Cooperative HBHA and CBA, while the Cooperative HBHA is a more efficient attack than the Independent HBHA and CBA.
Cryptanalysis of an Efficient Biometric Authentication Protocol for Wireless Sensor Networks
Ashok Kumar Das (International Institute of Information Technology, Hyderabad, India)
In 2013, Althobaiti et al. proposed an efficient biometric-based user authentication scheme for wireless sensor networks. We analyze their scheme for security against known attacks. Though their scheme is computationally efficient, in this paper we show that it has several security pitfalls: (1) it is not resilient against node capture attack, (2) it is insecure against impersonation attack, (3) it is insecure against man-in-the-middle attack, and (4) it is insecure against privileged insider attack. Finally, we give some pointers for improving their scheme so that the resulting design is secure against the various known attacks.

S42: SSCC-2014: System and Network Securitygo to top

Room: 204 Block E Second Floor
Chair: Sabu M Thampi (Indian Institute of Information Technology and Management - Kerala, India)
Low Complex System for Physical Layer Security Using NLFG and QCLDPC Code
Celine Stuart (National Institute of Technology, India); Deepthi P.p (NIT Calicut, India)
In practical communication applications, the channels of intended users and eavesdroppers are not error-free, and Wyner's wiretap channel model deals with this scenario. Using this model, the security of a stand-alone stream cipher can be strengthened by exploiting the properties of the physical layer. In this paper, joint channel coding and lightweight cryptography for a Gaussian wiretap channel is proposed. The scheme is based on a keyed Quasi Cyclic Low Density Parity Check (QC-LDPC) encoder and a lightweight stream cipher based on a Linear Feedback Shift Register (LFSR). The significant contribution is that the highly complex non-linear function that provides security in a Non-Linear Filter Generator (NLFG) is replaced by a simple non-linear function without compromising security; enhanced security with lower complexity is achieved by embedding security in the channel encoder. Results show that an attacker cannot extract the secret key because of the errors introduced at the physical layer by the unknown structure of the channel encoder.
pdf file
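A minimal sketch of a non-linear filter generator of the kind discussed above: a Fibonacci LFSR supplies the state, and a small Boolean function filters tapped bits into the keystream. The register size, feedback taps and filter function are illustrative, not the paper's construction:

    # Sketch: 16-bit LFSR with a simple nonlinear filter over four tapped bits.
    def lfsr_nlfg(seed, taps=(15, 13, 12, 10), n=32):
        """Keystream bits = nonlinear function of 4 taps of a Fibonacci LFSR."""
        state = seed & 0xFFFF
        out = []
        for _ in range(n):
            x0, x1, x2, x3 = ((state >> t) & 1 for t in (1, 5, 9, 14))
            out.append(x0 ^ (x1 & x2) ^ (x2 & x3))   # simple nonlinear filter
            fb = 0
            for t in taps:                            # linear feedback
                fb ^= (state >> t) & 1
            state = ((state << 1) | fb) & 0xFFFF
        return out

    print(lfsr_nlfg(0xACE1))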
Design and Analysis of Online Punjabi Signature Verification System Using Grid Optimization
Ankita Wadhawan (DAV Institute of Engineering and Technology, India); Dinesh Kumar (DAV Institute of Engineering & Technology)
Signature verification is a major research topic in the area of biometric authentication. A signature is a behavioral attribute based on one's behavior: a given input is examined and either rejected as forgery or accepted as genuine. To the best of our knowledge, no work has been done on online signature verification for Indian languages. This paper deals with online verification of Punjabi signatures. A digitizing tablet with a stylus is used for acquiring signatures online, and support vector machines are used for recognition. The performance of the system is explored with a radial basis function kernel tuned by grid optimization. A number of experiments were performed with increasing numbers of samples, and it was found that the accuracy of the system increases as more samples are trained. Experiments were also performed with different gamma values to obtain error rates.
pptx file
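A minimal sketch of the grid-optimization step, assuming scikit-learn; the digits dataset stands in for the Punjabi signature features, and the gamma/C grid is illustrative:

    # Sketch: grid search over RBF-kernel gamma (and C) for an SVM.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    X, y = load_digits(return_X_y=True)
    grid = GridSearchCV(
        SVC(kernel="rbf"),
        param_grid={"gamma": [1e-4, 1e-3, 1e-2], "C": [1, 10, 100]},
        cv=5)
    grid.fit(X, y)
    print(grid.best_params_, round(grid.best_score_, 3))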
An Improved EMHS Algorithm for Privacy Preserving in Association Rule Mining on Horizontally Partitioned Database
Rachit Adhvaryu (Gujarat Technological University, India); Nikunj Domadiya (S. V. National Institute of Technology - Surat (Gujarat), India)
Advances in data mining techniques have played an important role in many areas and applications. In the context of privacy and security, the problems caused by the association rule mining technique have recently been investigated: misuse of the technique may disclose the database owner's sensitive information to others, so the privacy of individuals is not maintained. Many researchers have recently made efforts to preserve the privacy of sensitive knowledge or information in real databases. In this paper, we modify the EMHS algorithm to improve its efficiency by using Elliptic Curve Cryptography, employing the ElGamal technique of ECC for homomorphic encryption. Analysis of experiments on various datasets shows that the proposed algorithm is efficient compared to EMHS in terms of computation time.
rar file
A Novel Comparison Based Non Blocking Approach for Fault Tolerence in Mobile Agent Systems
Richa Mahajan (GNDU, India); Rahul Hans (D A V Institute of Engineering and Technology, India)
A mobile agent is an intelligent agent which acts on behalf of a user, and mobile agents have wide scope in the area of distributed computing. Security and fault tolerance are the two main issues in the progress of mobile agent computing: fault tolerance makes a system versatile and provides reliable execution even when faults occur. This paper proposes a novel fault tolerance approach for read-only as well as read/write applications. To achieve fault tolerance, it uses checkpointing and cloning of the original agent; to make it suitable for write applications, it integrates a mechanism that preserves exactly-once execution, together with a footprints approach that helps track the location of the agent. For implementation, the Aglets mobile agent platform is used to run an agent through its itinerary. The results have been evaluated on the basis of parameters such as checkpointing, round trip time and the exactly-once mechanism, and they show that the proposed approach is suitable for read-only as well as read/write applications.
pptx file
An Integrated Approach of E-RED and ANT Classification Methods for DRDoS Attacks
P Mohana Priya and Akilandeswari V (Anna University, India); G Akilarasu and Mercy Shalinie (Thiagarajar College of Engineering, India)
The main objective of this paper is to detect the Distributed Reflector Denial of Service (DRDoS) attack using a protocol-independent detection technique. The proposed system applies the Enhanced Random Early Detection (E-RED) algorithm and the Application-based Network Traffic (ANT) classification method in order to detect and classify DRDoS attacks according to their types. In the experimental analysis, the performance of the proposed system is evaluated on Transmission Control Protocol (TCP) and Domain Name System (DNS) response packets. It detects DRDoS attacks with a 99% true positive rate and a 1% false positive rate, and classifies the attack types with 98% classification accuracy. The results and discussion show that the proposed method detects and classifies reflected response traffic with higher probability than traditional methods.
Category Based Malware Detection for Android
Android, being the most popular operating system for mobile devices, has attracted a plethora of malware distributed through various applications (apps). Malware apps cause serious security and privacy concerns, such as accessing or leaking sensitive information and sending messages to premium numbers. As with traditional analysis and detection approaches for desktop malware, there have been many proposals to apply machine learning techniques to detect malicious apps. However, unlike classical desktop applications, Android apps available on "Google Play" [1] carry a "category" feature. In this initial work, we propose and investigate the possibility of improving the efficiency of machine learning approaches for Android apps by exploiting this category information. Experimental results over a large dataset are encouraging and show the effectiveness of our simple yet productive approach.
pdf file, pptx file
Framework of Lightweight Secure Media Transfer for Mobile Law Enforcement Apps
Suash Deb (Cambridge Institute of Technology, India); Simon Fong (University of Macau, Macao); Sabu Thampi (Indian Institute of Information Technology & Management, India)
Internal Hardware States Based Privacy Extension of IPv6 Addresses
Reshmi Tr (Ramanujan Computing Centre & Anna University, India); Shiney Manoharan and Krishnan Murugan (Anna University, India)
The usage of Internet Protocol version 6 (IPv6) has been booming in recent years due to the address scarcity of the existing protocol. IPv6 faces various security threats and has been under research for decades. Although IPsec is mandated for securing end-to-end IPv6 communication, it does not cover link-local communication, and link-local security issues are particularly important during autoconfiguration. SeND, the existing mechanism for securing autoconfiguration, faces issues related to algorithmic complexity, router functionality implications, key generation, etc. This paper proposes a privacy extension method for link-local address generation that uses the internal hardware states of the system, thus overcoming the existing issues. The prototype is implemented in a test bed and compared with SeND, and the proposed method is shown to outperform it in algorithmic strength while reducing complexity and time delay.
pptx file
DDoS Detection System Using Wavelet Features and Semi-Supervised Learning
Srihari V and R Anitha (PSG College of Technology, India)
Protection of critical information infrastructure is a major task for network security experts in any part of the globe. Certain threats never fade away despite sophisticated advancements in defense strategy; among them, Distributed Denial of Service (DDoS) attacks have witnessed continual growth in scale, frequency and intensity. The impact of DDoS attacks can be devastating, creating severe ripples across the cyberworld, and attackers now turn to different variants of DDoS attacks to escape detection mechanisms. To acknowledge them, a novel defense mechanism with a detection scheme is proposed. First, wavelet-based features are extracted and classified using semi-supervised learning to detect DDoS attacks; different wavelet families are studied, and a combination of them proves robust and efficient and is hence used as the feature set. Machine learning algorithms are highly appreciated in many classification problems, but they place considerable demand on labeled datasets; to bridge the gap between labeled and unlabeled data, a semi-supervised learning algorithm is employed to separate attack traffic from normal traffic. Extensive analysis is performed through experiments on a real-time dataset, and the results obtained are convincing, so the approach can be modeled for real-time use.
pptx file
Secure Communication Using Four-wing Hyper-chaotic Attractor
Arti Dwivedi and Ashok Mittal (University of Allahabad, India); Suneet Dwivedi (University of Allahabd, India)
It is shown how a four-wing hyper-chaotic attractor can be used for secure communication using parameter convergence. Using some variables for complete replacement and others for feedback control and unknown-parameter adaptation, two hyper-chaotic attractors are synchronized in a time shorter than the time scale of their chaotic oscillations. This synchronization is used for secure communication of digital messages. The coding parameter of the transmitting system changes so rapidly that an intruder cannot infer any information about the attractors corresponding to the two coding parameters. The scheme presented in this paper is more secure than other similar schemes, as demonstrated by comparison with an existing scheme based on parameter adaptation and Lyapunov stability theory.
ppt file
Watermark Detection in Spatial and Transform Domains Based on Tree Structured Wavelet Transform
Ivy Prathap (PSG College of Technology, Coimbatore, India); Ramalingam Anitha (PSG College of Technology, India)
This paper presents an efficient, robust and blind approach to detect watermarks embedded in the spatial and frequency domains of images. Spatial and transform domain energy features are extracted from the images using the Tree Structured Wavelet Transform, and an efficient classifier, TotalBoost, is used to classify images as watermarked or unwatermarked. In addition, the proposed detector can detect watermarks even after various image processing and signal processing attacks. Simulation results show the effectiveness of the proposed scheme in terms of specificity, sensitivity and accuracy, and comparison with state-of-the-art schemes demonstrates its efficiency.
pptx file
Forensic Analysis for Monitoring Database Transactions
Harmeet Khanuja (University of Pune, India); Dattatraya S Adane (Ramdeobaba K.N.Engg., college, RTM Nagpur University, India)
Database forensics aids in the qualification and investigation of databases and enables a forensic investigator to prove a suspected crime; it can be used to prevent illegitimate banking transactions. Banks deal in public money but unfortunately become vulnerable by receiving illegal money in the guise of legitimate business, and the absence of preventive measures to monitor such scams would one day prove perilous: by violating relevant laws and regulatory guidelines, banks can unknowingly enable money laundering practices in their systems. In this article we propose a forensic methodology for private banks to run an ongoing monitoring system, as per Reserve Bank of India (RBI) guidelines for financial transactions, which checks their database audit logs on a continuous basis to mark any suspected transactions. These transactions are then precisely analyzed and verified with the Dempster-Shafer theory of evidence to automatically generate the suspicious-transaction reports required by the Financial Intelligence Unit.
pdf file
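A minimal sketch of Dempster's rule of combination as it might fuse two pieces of evidence about one transaction; the frame of discernment and mass values are invented for illustration:

    # Sketch: Dempster's rule over sets represented as frozensets.
    from itertools import product

    def combine(m1, m2):
        """Combine two mass functions; k is the conflict to renormalize away."""
        k = sum(m1[a] * m2[b] for a, b in product(m1, m2) if not a & b)
        out = {}
        for a, b in product(m1, m2):
            c = a & b
            if c:
                out[c] = out.get(c, 0.0) + m1[a] * m2[b] / (1.0 - k)
        return out

    S, L = frozenset({"suspicious"}), frozenset({"legitimate"})
    theta = S | L                                    # total ignorance
    m_amount = {S: 0.6, theta: 0.4}                  # evidence: unusual amount
    m_velocity = {S: 0.5, L: 0.2, theta: 0.3}        # evidence: rapid transfers
    print(combine(m_amount, m_velocity))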

S43: SSCC-2014: Work-in-Progressgo to top

Room: 108-A Block E First Floor
Chairs: Vikrant Bhateja (Shri Ramswaroop Memorial Group of Professional Colleges, Lucknow (UP), India), Shunmuganathan K l (RMK Engineering College, India)
Piracy Control Using Secure Disks
Jitendra Lulla (Chelsio Communications, India); Varsha Sharma (Wipro Technologies, India)
A lot of money and effort has been invested in controlling the unauthorized sharing of copyrighted content, but the problem persists. This paper introduces a method to control and limit disk duplication. Confidentiality and authenticity measures are employed to ensure that a copied disk is unusable and unauthentic, and that any attempt to tamper with the contents of the disk while copying also results in an unusable copy. The method requires two constraints to be fulfilled: first, the disk is playable only on special physical drives, and second, the disk is made available to the authorized end user in a restricted manner. The paper also notes that the approach described is generic, so different disk and disk player vendors can ensure compatibility.
pptx file
Authentication of Trusted Platform Module Using Processor Response
Vikash Kumar Rai (Defence Institute of Advanced Technology, Pune, India); Arun Mishra (Defence Institute of Advanced Technology, India)
Authentication is the process which allows both communicating entities to validate each other; it is the basis for trust between two communicating parties. A Trusted Platform Module (TPM) can be used to securely store artifacts such as passwords, certificates, encryption keys or measurements required to authenticate a platform. In the present scenario there is no concrete mechanism to authenticate the TPM chip itself. In this project, a method is proposed to enable the user of a system to authenticate the TPM chip of a communicating system. The proposed system uses the public endorsement key of the TPM chip and the unique response the processor gives while executing a program with a predefined set of step delays.
Design of Security System of Portable Device: Securing XML Web Services with ECC
Gopinath V (Sathyabama University & TCS ltd, India); Raghuvel Subramaniam Bhuvaneswaran (Anna University, Chennai, India)
In this paper, the design of a security system for portable drives is proposed, using ECC to secure XML web services and associated applications. An XML web service component is integrated into an existing VPN gateway to provide a security solution for both XML web services and traditional network-based applications; both share the same ECC digital key, which is utilized by the XML web service security component. The high-level design of an SSL VPN application on a portable device helps in connecting peer-to-peer networks; it secures data transmission over the entire network route from the client to the remote server and greatly improves the data processing speed of the server. The design is experimented with customized Java code, and its performance is analyzed and compared with an existing scheme; the design is found to be more secure and faster based on the preliminary results.
ppt file
Analyzer Router: An Approach to Detect and Recover From OSPF Attacks
Deepak Sangroha and Vishal Gupta (AIACT&R, India)
Open Shortest Path First (OSPF) is the most widely deployed interior gateway routing protocol on the Internet. We present an approach to detect the attacks to which OSPF is vulnerable. As a security feature, OSPF uses a "fight-back" mechanism to detect false LSAs flooded in the network and take appropriate action; however, a few attacks have been proposed which bypass or overtake this mechanism to inject false LSAs, and a few attacks fall outside the range of this mechanism altogether. We implement our approach to detect and mitigate these attacks. The approach is reactive, so it may take a small interval of time to detect an attack and recover the network, but it is effective in doing so and in securing the infrastructure.
ppt file
Vulnerability of MR-ARP in Prevention of ARP Poisoning and Solution
Mukul Tiwari (Trinity College Dublin, Ireland); Sumit Kumar (ABB Global Industries and Services, India)
In this paper we discuss ARP poisoning and its solution over a Local Area Network. Enhanced ARP (MITM-Resistant Address Resolution Protocol, MR-ARP) prevents ARP poisoning by using two tables to cross-check any ARP request, plus a voting process when a new request arises. The first is a long-term table whose entries are retained for 60 minutes; the other is a short-term table, which is the same as a normal ARP table. In some cases the update policy of the long-term table of MR-ARP enables an attack over the LAN, and ARP poisoning becomes possible: an MITM attack on MR-ARP is quite feasible when a node goes offline for some time and another node wants to attack it. Here we discuss the attack on MR-ARP and some corrective measures.
ppt file
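To make the table mechanics concrete, here is a minimal sketch of an MR-ARP-style cache as described in the abstract above; the voting procedure is abstracted behind a callback, the 60-minute window is the one stated in the abstract, and everything else is invented for illustration.

    import time

    LONG_TERM_WINDOW = 60 * 60        # the 60-minute validity stated above

    long_term = {}                    # ip -> (mac, time of last confirmation)

    def handle_arp_reply(ip, mac, vote_fn):
        now = time.time()
        if ip in long_term:
            known_mac, seen = long_term[ip]
            if mac == known_mac:
                long_term[ip] = (mac, now)        # refresh the binding
                return True
            if now - seen < LONG_TERM_WINDOW:
                return False                      # conflicting reply: drop it
        # Unknown or expired binding: let neighbours vote on it.
        if vote_fn(ip, mac):
            long_term[ip] = (mac, now)
            return True
        return False

The attack discussed in the paper targets exactly the expiry path: once a victim's entry ages out of the long-term table while the node is offline, a forged binding only has to win the vote.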
A Heuristic Model for Performing Digital Forensics in Cloud Computing Environment
Digambar Povar (BITS Pilani Hyderabad, India); Geethakumari G (BITS-Pilani, Hyderabad Campus, India)
Cloud computing is a relatively new model in the computing world, arriving after several computing paradigms like personal, ubiquitous, grid, mobile, and utility computing. Cloud computing is synonymous with virtualization, which is about creating virtual versions of the hardware platform, the operating system or the storage devices. Virtualization poses challenges to the implementation of security as well as to cybercrime investigation in the cloud. Although several researchers have contributed to identifying digital forensic challenges and methods of performing digital forensic analysis in the cloud computing environment, we feel that finding the most appropriate methods to evaluate the uncertainty in digital evidence is a must. This paper emphasizes methods of finding and analysing digital evidence in the cloud computing environment with respect to both the cloud user and the provider. We propose a heuristic model for performing digital forensics in the cloud environment.
pdf file
CAVEAT: Credit Card Vulnerability Exhibition and Authentication Tool
Ishu Jain (GGSIPU, India); Rahul Johari (GGSIP University, India); R Ujjwal (GGSIPU, Delhi, India)
Online banking (or Internet banking or e-banking) enables people to carry out financial transactions on a secured website. It allows users to manage their money without going to their respective banks. Today, users can perform the financial transactions of their daily life, like bill payments, shopping, and booking movie, train, air and various other event tickets, through online banking. Since online banking involves the circulation of money, it must be secure; but as the use of online banking increases, the security threats to banking applications also increase. In this paper, we show the exploitation of Injection (OWASP Top 10-2013 A1 vulnerability) using an SQL injection attack, and of Broken Authentication (part of OWASP Top 10-2013 A2 vulnerability) using a brute-force attack and a dictionary attack, and the prevention of all these attacks by storing the data in our database in encrypted form using the AES algorithm.
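The SQL injection half of such a demonstration rests on a standard contrast between concatenated and parameterized queries. A minimal, self-contained illustration (ours, using sqlite3 and toy credentials; the paper's AES-encrypted storage is not reproduced here):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, pwd TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

    def login_unsafe(name, pwd):
        # Vulnerable: the classic ' OR '1'='1 payload bypasses the check.
        q = "SELECT * FROM users WHERE name='%s' AND pwd='%s'" % (name, pwd)
        return conn.execute(q).fetchone() is not None

    def login_safe(name, pwd):
        # Parameterized query: input is bound as data, never parsed as SQL.
        return conn.execute("SELECT * FROM users WHERE name=? AND pwd=?",
                            (name, pwd)).fetchone() is not None

    print(login_unsafe("alice", "x' OR '1'='1"))   # True  (injection succeeds)
    print(login_safe("alice", "x' OR '1'='1"))     # False (injection blocked)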
Detection of Active Attacks on Wireless IMDs Using Proxy Device and Localization Information
Monika Darji (Gujarat Technological University, India); Bhushan H Trivedi (GLS Institute Of Computer Technology, India)
Implantable Medical Devices (IMDs) are used to monitor and treat physiological conditions within the body. They communicate telemetry data to an external reader/programmer device and receive control commands over a wireless medium. Wireless communication for IMDs increases cost-effectiveness, flexibility and ease of use, and also enables remote configuration and monitoring. However, it makes IMDs vulnerable to passive and active attacks. While passive attacks on IMDs can be addressed using encryption techniques, active attacks like replay, message injection and MITM need more advanced techniques to be detected and prevented. For other wireless devices one can deal with these security issues by installing one or more security mechanisms, but the same cannot be applied to IMDs. This is due to their positioning inside the human body, which makes replacement and recharging extremely difficult; their miniaturization, which makes them storage-, processing- and power-scarce; their unusual access requirements during a device or patient emergency; and their incapability of renewing shared secrets. It is advisable to reserve the resources of IMDs for life-critical medical care and minimalist communication. This leads to the implied option of using an external proxy device which can offload security-related processing from IMDs. Therefore, to address the problem of active attacks, we propose the use of an RF-signal-based localization technique which leverages a multi-antenna proxy device to profile the directions from which the reader/programmer signal arrives, and the use of triangulation techniques to construct a signature that uniquely distinguishes an authorized reader/programmer from an unauthorized one.
SQL Filter - SQL Injection Prevention and Logging Using Dynamic Network Filter
Jignesh C Doshi (L J Institute of Management Studies, Ahmedabad, India); Maxwell Christian (GLS University & Gujarat Technological University, India); Bhushan H Trivedi (GLS Institute Of Computer Technology, India)
The web has become a buzzword for business in recent times. With the increase in attacks, web database applications have become more vulnerable. Structured Query Language (SQL) is most commonly used for database attacks; as per OWASP, 5 of the top 10 attacks are related to SQL. Database attack solutions fall into two categories: defensive coding and filters. The focus of such attacks is data manipulation, theft and authorization bypass. In this paper the authors present a dynamic network filter to detect and prevent database attacks.
pptx file
Results on (2,n) Visual Cryptographic Schemes
Praveen K and Sethumadhavan M (Amrita Vishwa Vidyapeetham, India)
In the literature many studies propose (2, n) VCS using either XOR or OR operations, but the existing schemes must compromise on either pixel expansion or contrast. A previous study in the literature achieved ideal-contrast (2, n) VCS with reversing using combined OR and NOT operations. Ideal contrast means that after reconstruction the original secret image is obtained without change in resolution. This paper gives a construction of an ideal-contrast (2, n) VCS using combined XOR and OR operations with fewer transparencies (less pixel expansion) than the ideal-contrast (2, n) VCS with reversing that uses OR and NOT operations. This paper also gives a construction of a non-expandable (2, n) VCS which perfectly reconstructs the white pixels and probabilistically reconstructs the black pixels using the XOR operation.
ppt file
Zero Distortion Technique: An Approach to Image Steganography Using Strength of Index-Based Chaotic Sequence
Shivani Sharma (ABES-EC, Ghaziabad, India); Virendra Kumar Yadav (ABES Engineering College, India); Saumya Batham (ABES Engineering College, Ghaziabad, India)
Steganography is the art of hiding information. There are several existing approaches, of which LSB is the best-known technique. While performing image steganography there are certain limitations, in terms of time, robustness, distortion, quantity of data to hide, etc. A common major limitation of these approaches is that altering the pixel values of the image leads to distortion in the cover image, which can easily be detected from the histogram and PSNR value. The Zero Distortion Technique (ZDT) is proposed to overcome this limitation, as no changes are reflected in the histogram or PSNR value between the cover and the stego image. Experimental results on several images show that the proposed algorithm gives refined results. The proposed technique is robust, fast and helpful in protecting confidential data.
pptx file
Attack Graph Generation, Visualization and Analysis: Issues and Challenges
Ghanshyam Bopche (Institute for Development and Research in Banking Technology (IDRBT) & School of Computer and Information Sciences (SCIS), University of Hyderabad, India); Babu Mehtre (IDRBT - Institute for Development and Research in Banking Technology & Reserve Bank of India, India)
In the current scenario, even well-administered enterprise networks are extremely susceptible to sophisticated multi-stage cyber attacks. These attacks combine multiple network vulnerabilities and use the causal relationships between them to gain incremental access to enterprise-critical resources. Detection of such multi-stage attacks is beyond the capability of present-day vulnerability scanners. These correlated "multi-host, multi-stage" attacks are potentially much more harmful than single-point, isolated attacks. Security researchers have proposed an attack-graph-based approach to detect such correlated attack scenarios. An attack graph is a security analysis tool used extensively in networked environments to automate the process of evaluating a network's susceptibility to "multi-host, multi-stage" attacks. In the last decade, a lot of research has been done in the area of attack graph generation, visualization and analysis. Despite significant progress, there are still issues and challenges before the security community that need to be addressed. In this paper, we have tried to identify current issues and important avenues of research in the area of attack graph generation, visualization and analysis.
pdf file

Friday, September 26, 15:00 - 18:00 (Asia/Calcutta)

T10: Tutorial 10- IEEE 802.11ah, an Enabling Technology for the Internet of Things. How Does It Work?Detailsgo to top

Dr. Evgeny Khorov, Senior Researcher, MIPT and IITP RAS, Russia
Room: 004 Block E Ground Floor

Smart technologies play a key role in sustainable economic growth. They transform houses, offices, factories, and even cities into autonomic, self-controlled systems that often act without human intervention, sparing humans the routine of collecting and processing information. Some analysts forecast that by 2020 the total number of smart devices connected together in a network, called the Internet of Things (IoT), will reach 50 billion. Apparently, the best way to connect such a huge number of devices is wirelessly. Unfortunately, state-of-the-art wireless technologies cannot provide connectivity for such a huge number of devices, most of which are battery-powered. 3GPP, IEEE and other international organizations are currently adapting their standards to the emerging IoT market. For example, the IEEE 802 LAN/MAN Standards Committee (LMSC) has formed the IEEE 802.11ah Task Group (TGah) to extend the applicability of IEEE 802.11 networks by designing an energy-efficient protocol that allows thousands of indoor and outdoor devices to operate in the same area. In this tutorial, we will focus on the very promising, revolutionary changes introduced by TGah and adopted in November 2013 in the first draft standard of the Low Power Wi-Fi (IEEE 802.11ah) technology. From the tutorial, you will learn how IEEE 802.11ah operates. We will also pay attention to some research challenges in this area.

T11: Tutorial 11 - High Efficiency Video CodingDetailsgo to top

Mr. Shailesh Ramamurthy, Arris India
Room: 009 Block E Ground Floor

High Efficiency Video Coding (HEVC) is the latest video compression standard from ISO/IEC MPEG and ITU-T VCEG, and promises to be a spectacular successor to H.264/MPEG-4 AVC. It targets twice the compression efficiency of H.264/MPEG-4 AVC when benchmarked at the same video quality, and is well suited to resolutions such as 4K and 8K Ultra High Definition.

The tutorial would cover the following modules:

Introduction to compressing and delivering visual media of the next generation

Enabling Techniques: Overview of Tree Block Structures, Intra and inter prediction techniques, Entropy Coding, Motion Compensation, Motion Vector Prediction, Transform Techniques, Deblocking and Sample Adaptive Offset filters

Parallel Processing Tools

Applicability in end-to-end use-cases

Scalable coding and 3D extensions

This tutorial will benefit participants from academia and industry interested in understanding HEVC. Theoretical concepts will be linked to end-to-end use-cases to drive home the applicability of various tools and techniques.

T12: Tutorial 12- Design Automation for Quantum Computing CircuitsDetailsgo to top

Dr. Amlan Chakrabarti, University of Calcutta, India
Room: 010 Block E Ground Floor

Harnessing the power of the quantum mechanical properties of atomic and sub-atomic particles to perform useful computation creates the new paradigm of quantum computation. The motivation for quantum computing was initiated by pioneers like Richard Feynman and Charles H. Bennett. Though new, quantum computing has created a lot of excitement amongst computer scientists due to its power in solving some important computational problems faster than present-day classical machines. Quantum phenomena like superposition, interference and entanglement are the key players in enabling quantum machines to outperform classical machines. Quantum algorithms can be applied in a variety of areas, to name a few: systems of linear equations, number theory, database search, physical simulation, chemistry and physics. Quantum algorithms are usually described in the commonly used circuit model of quantum computation, which acts on some input quantum state and terminates with a measurement.

This tutorial will give an overview of quantum computing algorithms and circuits with a brief insight on the design automation for quantum circuit design. The key steps involved in the quantum circuit design for a given quantum algorithm for the different target quantum technologies will be addressed in this lecture.

Outline:

  1. Introduction

Why Quantum Computing?

Classical vs. Quantum

Key aspects of power of Quantum Computing

QC Technologies of today

  2. How to design Quantum Computers

  3. Quantum Logic

Basic Gates

Universal set of gates

Reversible Logic

Quantum gate cost model

Circuits for Quantum Algorithms

  4. Design Automation for Quantum Circuit Synthesis

Quantum Algorithm Description (QCL)

Quantum Assembly Format (QASM)

Logic Synthesis

  • Reed-Muller Synthesis

  • Multi-Controlled Toffoli Synthesis

  • Nearest Neighbor Synthesis

Technology Mapping

  • PMD specific identities

  • PMD specific Fault Tolerant Quantum Logic Synthesis

Quantum Error Correcting Codes

Layout for Quantum Circuits

T8: Tutorial 8 - Design-for-testability automation of analog and mixed-signal integrated circuitsDetailsgo to top

Dr. Sergey Mosin, Vladimir State University, Vladimir, Russia
Room: Auditorium Block D Ground Floor

Testing occupies an important place in the processes of electronic circuit design and implementation. About 40-60% of the total time required for IC development is spent on test procedures. According to the "rule of ten", the cost of testing increases tenfold at each subsequent manufacturing stage. This high expenditure stems from the increasing complexity of ICs and the complication of test-related efforts. Increasing the efficiency of test preparation and execution for analog and mixed-signal integrated circuits is therefore a pressing task. The following factors contribute to the complexity of IC testing: changes in technological processes, increasing scale of integration, high functional complexity of developed devices, limited access to the internal components of an IC, etc. Test cost (both time and money) may be reduced by developing and applying new, efficient test strategies. Design-for-Testability (DFT) is one of the promising approaches; it ensures the selection of a proper test solution already at the early stages of IC design.

This tutorial will focus on design-for-testability automation issues. Four key processes of design-for-testability automation will be considered: simulation, test generation, testing sub-circuit generation and decision making. Simulation provides calculation of the main parameters and characteristics of a designed circuit using sets of mathematical models and methods for the numerical modeling of electronic circuits. Test generation provides selection of controlled parameters, test nodes and test stimuli for a designed circuit, fault dictionary construction and efficiency estimation for the obtained test patterns. Testing sub-circuit generation provides selection and inclusion in the original circuit of test structures (the DFT solution) for analog and digital sub-circuits, ensuring a reduction of test complexity for the manufactured mixed-signal integrated circuit as a whole. Decision making provides comparison of the proposed DFT solutions based on a cost model and fault coverage, and selection of a reasonable DFT solution for a designed IC, taking into account such features as the integration technology used for manufacturing, production volume, chip area, effective wafer radius, etc.

Outline:

Importance of IC testing: Defects and faults. The role and place of testing in life cycle of IC and system. Verification, testing and diagnosis. Rule of ten. The mission of design-for-testability.

Methodology of design-for-testability automation: Design flow of analog and mixed-signal IC. Design-for-testability automation. Selection of test nodes. Selection of test signals and test patterns. Fault dictionary construction. Econometrical (cost) models. Criteria of selecting the testing circuitries for analog and digital sub-circuits.

Approaches to testing analog sub-circuits: Increasing observability and controllability of internal nodes by applying in-circuit multiplexers and demultiplexers. Oscillation built-in self-test technique. Signature analyzer. Test bus IEEE 1149.4.

Approaches to testing digital sub-circuits: Built-in self-test: LFSR, MISR, BILBO. Test bus IEEE 1149.1

T9: Tutorial 9 - Network Security and Beyond: Network Anomaly Detection in the FieldDetailsgo to top

Dr. Christian Callegari, University of Pisa, Italy
Room: 003 Block E Ground Floor

This tutorial provides an overview of the most relevant approaches to network anomaly detection, as well as of the main challenges in applying anomaly detection to "real world" scenarios. The tutorial is structured into three main parts: in the first one, starting from the seminal work by Denning, the basic concepts about anomaly detection will be introduced. Then, in the second part, some of the most recent and relevant works about statistical anomaly detection will be discussed. For each of the presented methods, the description of the theoretical background, focusing on why the method should be effective in detecting network anomalies and attacks, will be accompanied by a discussion on the anomalies that can be detected and on the achievable results, as well as on the main limitations of the method. Finally, the third part of the tutorial will focus on the challenges that arise when applying Anomaly Detection in the field, e.g., how to deal with huge quantities of data or with the privacy concerns typical of highly distributed scenarios.

Outline of the presentation

I. Introduction and Motivation (10 min)

II. Basics of Statistical Intrusion Detection Systems (20 min)

  • General Concepts about Anomaly Detection

  • IDES Intrusion Detection Expert System: the use of a statistical approach to detect anomalies in the network traffic was first introduced by Denning. The author proposed an early, abstract model of an Intrusion Detection Expert System (IDES), based on the statistical characterization of the behavior of a subject with respect to a given object.

III. Statistical approaches for anomaly detection (90 min)

  • Snort: does the most famous IDS perform anomaly detection? On what basis, and to what extent?

  • Clustering: clustering is a well-known technique, usually applied to classification problems. In the context of anomaly detection, two distinct approaches have been developed, both of which will be discussed.

  • Heavy Hitters and Heavy Changes: monitoring the changes in the distribution of the heavy hitters of the network traffic can be used to detect DoS/DDoS attacks, as well as other distributed attacks (e.g., Botnets).

  • CUSUM: CUSUM-based approaches, which aim at detecting abrupt changes in the time series given by the temporal evolution of several traffic parameters (e.g. number of received bytes), can be used to detect anomalies in the network traffic (a minimal code sketch follows this outline).

  • PCA: principal component analysis is effectively used to tackle the problem of high dimensional datasets, which usually affects network monitoring systems. In this field, PCA is often used as a detection scheme, applied to reduce the dimensionality of the audit data and to detect anomalies, by means of a classifier that is a function of the principal components. In spite of being one of the most widely applied tools for Anomaly detection (also in commercial products), it presents many limitations.

IV. Anomaly Detection in the Field (90 min)

  • Dealing with traffic seasonality: seasonality of the traffic poses several problems in the application of most of the anomaly detection techniques. Some of the most classical approaches (e.g., Wavelet analysis) to pre-filter such seasonal components will be discussed, highlighting the improvements (and the drawbacks) introduced in the system.

  • Dealing with huge quantities of data: the explosive growth of traffic poses several problems when applying techniques that need to process the whole traffic. We will discuss the pros and cons of several data mining techniques (e.g., Sketch and Reversible Sketch) that permit analyzing a data flow, almost in real time, without storing all the data.

  • Dealing with distributed environments: highly distributed, multi-domain environments pose several constraints on the application of any traffic monitoring technique (e.g., privacy concerns). We will discuss how to deal with them, so as to respect the legislation while still being able to effectively perform anomaly detection.

V. Discussion and perspectives (30 min)
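
As referenced in the CUSUM item above, a one-sided CUSUM detector is compact enough to show in full. This is a generic textbook sketch, not material from the tutorial; the traffic samples, reference mean, slack k and threshold h are invented.

    # One-sided CUSUM on a traffic counter (e.g. bytes per interval):
    # alarm when the cumulative positive deviation from the mean exceeds h.
    def cusum(samples, mean, k, h):
        g, alarms = 0.0, []
        for t, x in enumerate(samples):
            g = max(0.0, g + (x - mean - k))   # accumulate excess over mean+k
            if g > h:
                alarms.append(t)
                g = 0.0                        # restart after an alarm
        return alarms

    # toy trace with a burst starting at t=5: alarms at t=6 and t=7
    print(cusum([10, 11, 9, 10, 10, 40, 45, 50], mean=10, k=2, h=30))

The drift parameter k sets how much excess over the mean is tolerated before deviations start accumulating, which is what makes the statistic robust to small fluctuations while still reacting to abrupt changes.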

Intended Audience: This tutorial is addressed to all researchers and practitioners working in the field of networking who may be interested in detecting anomalous behavior in the network, and in particular to those dealing with intrusion detection systems, anomaly detection, and DoS/DDoS attack detection. In addition, the tutorial may be of interest to those dealing with statistical approaches for traffic monitoring and classification.

Since all the theoretical notions necessary to understand the covered topics will be provided in the tutorial, no particular knowledge is required of attendees, except for some basics of networking (TCP/IP architecture).

Friday, September 26, 17:30 - 19:15 (Asia/Calcutta)

S44: Poster/Demo Tracksgo to top

Room: Lawn Area Block E
Fingerprint Recognition Password Scheme Using BFO
Harpreet Singh Brar (Panjab University, India); Varinder Pal Singh (Thapar University, India)
Fingerprint recognition is the most widely used biometric technique for identifying a person. It relies on the unique pattern of ridges and valleys on the surface of the finger. Accuracy plays a vital role in fingerprint recognition: the database architecture should be trained so efficiently that it can recognize the fingerprint in all gestures and circumstances. This paper presents a bacterial foraging optimization (BFO) method for fingerprint recognition on the basis of extracted minutiae points, whose mechanism is explained in the subsections of the paper. The paper also presents a study comparing the accuracy of BFO and SVM classifiers.
pptx file
Gray Testing Support in Software Repository with Keyword Based Extraction
Inderjit Singh (Thapar University, India)
In the software industry, manufacturers generally develop a component and use it only once, resulting in a lot of wasted effort, cost and time in developing software. The solution is to reuse software components in the development of other similar kinds of products, or products having the same functionality. A successful component storage structure is therefore required to increase the availability of reusable components: a repository must be developed that supports essential features like efficient storage and retrieval. In this paper a gray testing tool has been implemented with an efficient storage structure, which provides both black-box and white-box testing support for reusable components and stores all the test cases generated while performing gray testing of a component for future use.
ppt file
Group Search Optimizer Algorithm in Wireless Sensor Network Localization
R Krishnaprabha (MES College of Engineering, Kuttippuram, India); Gopakumar Aloor (MES College of Engineering, Kuttippuram & University of Calicut, India)
Localization is one of the main challenges in Wireless Sensor Networks. It is performed to determine the location information of sensor nodes in the network. When the sensors collect data and report events, it is important to know the origin of data and events. In this work, a nature-inspired population based optimization algorithm called Group Search Optimizer (GSO) is proposed for locating sensor nodes in a distributed WSN environment. WSN localization is formulated as a non-linear optimization problem and solved using GSO. Performance evaluation of GSO based localization algorithm is carried out through simulations.
ppt file
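For readers unfamiliar with the formulation, range-based localization is typically cast as minimizing the squared mismatch between measured anchor distances and the distances implied by a candidate position; a GSO-style population search would evaluate this kind of fitness function for each member. A sketch under that assumption (anchor layout and measurements invented; this is not the authors' code):

    import math

    def localization_error(pos, anchors, measured):
        """Sum of squared range residuals for a candidate 2-D position."""
        x, y = pos
        return sum((math.hypot(x - ax, y - ay) - d) ** 2
                   for (ax, ay), d in zip(anchors, measured))

    anchors  = [(0, 0), (10, 0), (0, 10)]
    measured = [7.07, 7.07, 7.07]          # ranges to a node near (5, 5)
    print(localization_error((5, 5), anchors, measured))   # close to 0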
GoldenCrops: A Software Tool for Analysis of a Social Network
Bashim Akram Khan and Ash Mohammad Abbas (Aligarh Muslim University, India)
In this paper, we describe the design of a software tool that we call GoldenCrops for analysis of a social network. We analyse the evolution or growth of a social network using GoldenCrops. We compare the evolution or growth of a social network computed using GoldenCrops with the evolution of some real life social networks. We observe that the trend of evolution of the real life social networks is similar to that obtained using GoldenCrops. Further, using GoldenCrops we analyze the graph-theoretical properties of a social network such as distribution of degree of nodes and clustering coefficient.
ppt file
Health Cloud - Healthcare as a Service (HaaS)
Nimmy John and Sanath Shenoy (Siemens Technology & Services Pvt. Ltd)
The health care industry has come a long way, from Hospital Information Systems (HIS) and Electronic Medical Records (EMR) to computer-assisted surgeries and remote patient care, since the advent of information technology in the health care domain. With the advances in information technology, healthcare in all kinds of markets is becoming more digital, more collaborative, more patient-centered and more data-driven, and aims at accessing information anytime, anywhere. The traditional technology infrastructure of the health care sector will not be able to cater to this massive amount of generated data and the various health care services to be offered to patients. Cloud computing is a fast-growing trend that includes several services, all offered on demand over the internet in a pay-as-you-go model. It promises to increase the speed with which applications are deployed and to lower costs. Cloud computing can play a critical role in managing the current trend of digital data growth and the anywhere-anytime availability of medical services. It can also contribute significantly to containing healthcare integration costs, optimizing resources and ushering in a new era of innovation in healthcare. This paper examines, in brief, a few of the digital data challenges that the healthcare industry is facing. The paper describes a system capable of offering various health care services that utilize cloud computing, and presents the implementation of one service offered as part of the described system.
ppt file
Hybrid Model to Improve Bat Algorithm Performance - Solution for Software Cost Optimization
Rajan Gupta and Neeharika Chaudhary (University of Delhi, India); Saibal K. Pal (DRDO, India)
The Bat Algorithm is one of the successful meta-heuristic algorithms used prominently for optimization. But its parameters do not change across iterations, which makes it less appropriate for optimizing software cost estimation techniques like COCOMO. The current study therefore proposes a hybrid model that improves the Bat Algorithm by enhancing the global search, thus helping to optimize the fitness function by generating new solutions. The data set used for testing is NASA 63, and the fitness function used for cost estimation is the Mean Magnitude of Relative Error (MMRE). The simulations are done using MATLAB version R2010a. Results show a better MMRE for the hybrid model compared to the original Bat Algorithm when optimizing COCOMO II for software cost estimation.
pptx file
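MMRE, the fitness function named above, has a standard definition: the mean over all projects of |actual - predicted| / actual. For concreteness (toy effort values, not the NASA 63 data):

    def mmre(actual, predicted):
        """Mean Magnitude of Relative Error over paired observations."""
        return sum(abs(a - p) / a
                   for a, p in zip(actual, predicted)) / len(actual)

    # relative errors 0.10, 0.20, 0.10 -> MMRE of about 0.133
    print(mmre([100, 250, 40], [90, 300, 44]))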
Image Steganography by Closest Pixel-pair Mapping
Adnaan Ahmed (Student of Heritage Institute of Technology, India); Nitesh Agarwal and Sabyasachee Banerjee (Heritage Institute of Technology, India)
Steganography is one of the important and elegant tools used to securely transfer a secret message in an imperceptible manner. Visual steganography is an added feature of it: the steganographic method involves multimedia files like images, video, etc. to hide a secret message. However, this method may distort the colour frequencies of the cover image, which is detectable by analysis. In this paper we propose a steganography method that results in no distortion of the cover image. The proposed method is independent of the sizes of the cover image and the secret image, i.e., a larger image can be hidden in a smaller image. The proposed method also uses AES encryption for secure transfer of the stego-key. The nexus of this cover image and the encrypted data serves the purpose of secure transfer of secret data.
ppt file
Impact of Ge Substrate on Drain Current of Trigate N-FinFET
Vimal Mishra (MADAN MOHAN MALAVIYA ENGG COLLEGE GORAKHPUR, India)
In this paper, the impact of a lightly doped Ge substrate on the drain current of an N-FinFET is investigated. The results are compared with those of a heavily doped Si substrate N-FinFET. The highest peak saturated drain current was found to increase by 25% in the input characteristics and 8% in the output characteristics. The fin height was kept constant, hence the area consumed is unaffected. The simulation was done using VISUAL TCAD.
pdf file
Myself: Local Perturbation for Location Privacy in LBS Applications
Balaso Jagdale (University of Pune, India); Jagdish Bakal (University of Mumbai, India)
Location security in current location-based services (LBS) faces a threat: mobile users have to report their actual location to the LBS provider in order to get their desired POIs (Points of Interest). We consider location privacy techniques that work using obfuscation operators and provide different information services using different cloaking techniques, without any trusted components other than the client's mobile device. The techniques are grouped according to their randomization category: they blur the accurate user location (i.e., a point with coordinates) and replace it with a well-shaped cloaked region (e.g. circle, rectangle, pentagon, etc.). We propose methods in which, instead of communicating with peers, the user communicates directly with the LBS. We present techniques where the first provides different privacy levels using obfuscation operators, the second generates query-processing regions of different shapes, and the third demonstrates regional cloaking; two more new ideas are also presented. We show the effectiveness and performance of these techniques.
ppt file
Performance Evaluation of High Quality Image Compression Techniques
Amol Baviskar (Mumbai University & Universal College of Engineering, Vasai, Maharashtra,., India); Jaypal Jagdish Baviskar (VJTI, Mumbai & Veermata Jijabai Technological Institute, Mumbai, India)
The area of image processing has offered remarkable ideas and competent compression algorithms in the past few decades. The accretion of data generated by applications justifies the use of different image compression schemes to decrease the storage space and the time for transferring images over a link. Compression schemes differ in the image quality metrics they achieve, viz. PSNR, MSE, compression ratio, normalized cross-correlation, normalized absolute error, etc. This paper implements the 3D Discrete Cosine Transform (DCT) for compressing high-resolution images with a substantial amount of background. It presents an evaluation of the implemented algorithm, along with a performance comparison with other compression schemes, viz. JPEG lossy, sub-band replacement DWT and K-means. It proves to be a dominant technique for multi-spectral images with improved quality parameters, achieving 40-54 dB PSNR.
ppt file
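PSNR and MSE, two of the metrics listed above, follow standard definitions for 8-bit images; a small self-contained computation with toy pixel values:

    import math

    def mse(a, b):
        """Mean squared error between two equal-length pixel sequences."""
        return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

    def psnr(a, b, peak=255):
        """Peak signal-to-noise ratio in dB for 8-bit data."""
        m = mse(a, b)
        return float("inf") if m == 0 else 10 * math.log10(peak ** 2 / m)

    original      = [52, 55, 61, 66, 70]
    reconstructed = [50, 56, 60, 68, 69]
    print(psnr(original, reconstructed))   # roughly 44.7 dB

The toy result lands around 45 dB, inside the 40-54 dB range the abstract reports.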
Poster: Real-time Simulator of Collaborative Autonomous Vehicles
Farid Bounini (University of Sherbrooke, Canada); Denis Gingras (Université de Sherbrooke, Canada); Vincent Lapointe (Opal-RT Technologies Inc, Canada); Dominique Gruyer (LIVIC-IFSTTAR, France)
Collaborative autonomous vehicles will appear in the near future and will deeply transform road transportation systems, addressing in part many issues such as safety, traffic efficiency, etc. Validation and testing of complex scenarios involving sets of autonomous collaborative vehicles is becoming an important challenge. Each vehicle in the set is autonomous and acts asynchronously, receiving and processing huge amounts of data in real time coming from the environment and other vehicles. Simulating such scenarios in real time requires huge computing resources. This poster presents a simulation platform combining the real-time OPAL-RT technologies for processing and parallel computing with the Pro-SiVIC vehicular simulator from Civitec for realistic simulation of vehicle dynamics, road/environment, and sensor behaviors. The two platforms are complementary, and combining them allows us to propose a real-time simulator of collaborative autonomous systems.
pdf file
Proposal for Integrated System Architecture in Utilities
Rajan Gupta and Sunil K Muttoo (University of Delhi, India); Saibal K. Pal (DRDO, India)
Various public utilities exist and operate in many countries, the major ones being in the sectors of power, gas and water distribution. Their main role is to distribute essentials to customers and in turn provide customer support and billing through offline and online modes. Most fall under government control but may be managed through a franchise or public-private partnership model. All of them exist individually in Delhi, India. We propose a common architecture through which the information systems and related processes of these utilities can be merged, to the benefit of customers. A security analysis of the proposed architecture is carried out with respect to various threats.
ppt file
Raspberry Pi for the Automation of Water Treatment Plant
Sonali Lagu (University of Mumbai, India)
Automation of water treatment plants has already been developed and is widely used in many countries, but mostly using programmable logic controllers (PLCs). This paper focuses on an innovative and intelligent control and monitoring system for a water treatment plant using the Raspberry Pi as an effective alternative to PLCs for the automation of small water treatment plants. The Raspberry Pi is a minicomputer with the ability to control the system, and it comes with advantages like low cost and compact size.
ppt file
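For a flavor of how such a controller differs from a PLC program, here is a hypothetical fragment using the common RPi.GPIO library; the pin assignments and the one-sensor/one-pump loop are invented for illustration and are far simpler than a real treatment plant.

    # Hypothetical control loop (illustrative only, not the paper's code).
    import time
    import RPi.GPIO as GPIO

    LEVEL_SENSOR = 17      # input: high when the tank level is above threshold
    DOSING_PUMP  = 27      # output: relay driving the chemical dosing pump

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(LEVEL_SENSOR, GPIO.IN)
    GPIO.setup(DOSING_PUMP, GPIO.OUT)

    try:
        while True:
            # Dose only while the tank holds enough water.
            GPIO.output(DOSING_PUMP, GPIO.input(LEVEL_SENSOR))
            time.sleep(1.0)
    finally:
        GPIO.cleanup()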
Real Time Jitters and Cyber Physical System
Hemangi Gawand (Homi Bhabha National Institute, India); Anup Bhattacharjee and Kallol Roy (Bhabha Atomic Research Centre, India)
Embedded controllers form a vital part of industrial control systems. In a computer-driven control system, a computer behaves like a controller that executes various tasks, such as control and monitoring, using a real-time kernel or real-time operating system (RTOS). Delay in task execution can lead to jitter and unexpected behavior of the system under control. This paper investigates the causes of jitter, and the software implementations of control tasks that can lead to jitter and denial-of-service attacks. TrueTime, a Simulink-based simulator, was used for simulating jitter in a real-time control system.
ppt file
RFID Network Administration and Control
Devendra Rohila (Rajasthan Technical University, Kota, India); Neha Jain (Lecturer, India)
Network administration provides administrative control over issues like unsecured authorization in networking applications. This document presents a Radio Frequency Identification and Detection (RFID)-based network application, which is accessed remotely from the administrator's system to all systems present in its network via a client-server application architecture. The application allows the administrator to exercise remote control through TCP/IP socket programming. The duties of the client and server systems are distributed: the server application listens and runs on the remote computer, while the client provides the interface to access the application on the remote computer. Since this application provides remote administration, it is termed the Remote Administration Program Interface (RAPI).
pptx file
GridSys - A State of the Art Grid Framework
Sanath Shenoy (Siemens, India); Nimmy John (Siemens Technologies and Services Private Limited, India); Raghavendra Eeratta (Siemens Technology and Services Pvt Ltd, India); Ranjith Nair (Siemens Technologies and Services Pvt Limited, India)
Today many applications are developed using distributed technologies such as cluster, cloud and grid computing. These applications demand more resources for computation and storage, as well as flexible scaling and improved performance. Applications nowadays can make use of multiple nodes (machines) to get tasks completed. In this paper we discuss the implementation details of a grid computing framework known as GridSys. This framework provides a fast and easy way to program a grid: it helps an application break a problem into compute-intensive tasks, distributes these tasks to different nodes of the grid efficiently, easily aggregates the results of these tasks, and provides fault tolerance and reliability.
ppt file
Special Projection Interfacing Device for Enhanced Routing-S.P.I.D.E.R
Charu Gandhi and Parth Gargava (Jaypee Institute of Information Technology, India); Sridhar Sharma, Ayush Arora and Akshay Jain (Student, Jaypee Institute of Information Technology, India)
In urban areas, local commuting is mostly done using automobiles, comprising cars and motorcycles. With complex road networks, it becomes essential to use a device (smart phone or tablet) for viewing routes on the Global Positioning System (GPS) to reach a destination. However, repeatedly looking at the device while driving is risky and cumbersome. Addressing this problem, this project simplifies directions to the destination, along with notifications such as missed calls, text messages and weather updates, projected onto a small part of a car's windshield using an inexpensive mini projector. This paper discusses how this was achieved by acquiring navigation images, applying digital image processing using Matrix Laboratory (MATLAB), and increasing its speed with the help of the Compute Unified Device Architecture (CUDA).
ppt file
Survey of Fast Block Motion Estimation Algorithms
Shaifali Madan Arora (Guru Gobind Singh Inderpratha University, Dwarka, New Delhi & Maharaja Surajmal Institute of Technology, India); Navin Rajpal (GGSIP University, India)
Tremendous advancements in video capture and display technologies, and increased video applications in all arenas of life, have raised the demand for enhancements in the field of video compression. Motion estimation, a key component of most video data processing applications, is the subject of ongoing research in this field. Many algorithms for block-based motion estimation, and criteria for finding the best matching block, have been developed. The current work reviews the advantages, disadvantages and various issues pertaining to these algorithms. The factors that impact the accuracy and efficiency of motion estimation, such as the block-matching criterion, edge matching in blocks, correlation between neighboring blocks, pixel sub-sampling, size of the search window, size of blocks and zero-motion pre-judgment, are also discussed. In addition, the applicability, advantages and disadvantages of various block-matching criteria are reviewed.
ppt file
Synthesizing Perception Based on Analysis of Cyber Attack Environments
Sherin Sunny (Amrita Center for Cyber Security); Krishnashree Achuthan (Amrita Center for Cybersecurity Systems and Networks & Amrita University, India); Vipin Pavithran (Amrita Centre for Cybersecurity Systems & Networks, Amrita Vishwa Vidyapeetham, Amrita University, India)
Analyzing cyber attack environments yields tremendous insight into adversary behavior, strategy and capabilities. Designing cyber-intensive games that promote offensive and defensive activities to capture or protect assets assists in the understanding of cyber situational awareness. There exist tangible metrics for characterizing games such as CTFs that resolve the intensity and aggression of a cyber attack. This paper synthesizes the characteristics of InCTF (India CTF) and provides an understanding of the types of vulnerabilities that have the potential to cause significant damage by trained hackers. Two metrics, toxicity and effectiveness, and their relation to the final performance of each team are detailed in this context.
ppt file
Sys-log Classifier for Complex Event Processing System in Network Security
Keerthi Jayan (Amrita Vishwa Vidyapeetham & Amrita University, India); Archana K Rajan (Amrita University, India)
The Internet is growing very rapidly, and so are its security issues. A wide variety of attacks are possible on networked machines: DoS attacks, buffer overflow attacks, cross-site attacks and DNS exploit attacks, to name a few. Without security measures and controls in place, networks and data may be subjected to attacks. The commonly deployed security devices are firewalls, IDS, IPS, anti-virus, etc. A considerable number of threats still pervade, formulated as attacks by combining many unnoticed primitive events. The best solution is to install a Complex Event Processing (CEP) system which can analyze multiple devices to infer attack patterns. Log information from network devices is the best choice for analysis, but in a large network millions of events are logged, and correlated analysis of this huge volume of logs is the main challenge for a CEP system. We describe a method to reduce the input to the CEP system using a Support Vector Machine (SVM) classifier. Our experiment shows that the input size can be considerably reduced using the classifier, thereby improving the performance of the CEP system.
ppt file
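A minimal sketch of the pre-filtering idea, using scikit-learn (our library choice; the abstract does not name one) with toy log lines: train an SVM to keep security-relevant lines and discard routine ones before they reach the CEP engine.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC

    logs = ["Failed password for root from 10.0.0.5",
            "Accepted publickey for deploy from 10.0.0.9",
            "CRON: session opened for user root",
            "Possible SYN flood on eth0 detected"]
    labels = [1, 0, 0, 1]   # 1 = security-relevant, forward to the CEP engine

    vec = TfidfVectorizer()
    clf = LinearSVC().fit(vec.fit_transform(logs), labels)

    new = ["Failed password for admin from 172.16.3.2"]
    print(clf.predict(vec.transform(new)))   # expected: [1], keep this line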
System for Dynamic Configuration of TCP Buffers Based on Operator
Vidhi Goel (Samsung India Electronics Pvt. Ltd. & Samsung Research Institute, India); Deep Shikha Aggarwal (Samsung India Electronics Limited, India); Arun Nirwan (Samsung India Electronics Pvt. Ltd & SRI Noida, India)
For any network connection, the data throughput of the end device is related to its TCP (Transmission Control Protocol) buffer size, network latency and network bandwidth. In regions where open-market devices are popular, like the European and Asian markets, devices come with static buffer sizes that are independent of the operator's network conditions, consequently resulting in either low throughput or, in low-bandwidth networks, wastage of kernel memory. To solve this problem, we propose configuring the TCP buffer sizes dynamically based on values obtained from the operator's server. The optimal buffer sizes remain the same for an operator network in a region under specific network conditions, and can be calculated on the basis of the bandwidth and round-trip time. Experimental results presented later demonstrate the significance of setting buffer sizes concordant with the operator's configuration.
ppt file
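The bandwidth and round-trip time enter through the usual bandwidth-delay product: a sensible receive buffer is roughly bandwidth/8 x RTT bytes. A sketch with invented operator values (the paper's exact formula is not given in the abstract):

    def tcp_buffer_bytes(bandwidth_bps, rtt_ms):
        # bandwidth-delay product: bytes "in flight" on the path
        return int(bandwidth_bps / 8 * rtt_ms / 1000.0)

    # e.g. a 20 Mbit/s operator network with 60 ms RTT -> 150000 bytes
    print(tcp_buffer_bytes(20_000_000, 60))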
Texture Classification by Rotational Invariant DCT Masks (RIDCTM) Features
T Ray (Tata Steel Ltd., India); Pranab K. Dutta (IIT Kharagpur, India)
Features extracted from a texture database after convolution with the zero- and ninety-degree flipped versions of the original sub-masks of 8x8 Discrete Cosine Transform (DCT) basis filtering masks are proposed as Rotational Invariant DCT Mask (RIDCTM) features. Based on these features, query images are classified excellently by a minimum-distance classifier. The proposed rotation-invariant feature extraction technique has also been applied to segment captured images of coal particles belonging to different size-range categories. Although the proposed technique only about equals the classification accuracy of a recent rotation-invariant technique based on the Gabor transform, its efficacy lies in easier implementation and a lower computational burden, like any real transform.
ppt file
Video Based Indoor Navigation Using Smart Devices
Shweta MA and Nimmy John (Siemens Technologies and Services Private Limited, India); Rahul Raj (Siemens Technology and Services Pvt. Ltd., India)
Mobile technology has revolutionized the modern world in countless ways. Smart devices have transformed nearly every aspect of our day-to-day lives, and mobile technology has left an everlasting imprint on every industry, domain and technology. One field that has undergone tremendous change with the advent of mobile technology is navigation: with its help, we can navigate to any corner of the world. Smart devices have brought navigation to the fingertips of users by providing built-in navigation and map applications. These applications facilitate accurate outdoor navigation using the Global Positioning System (GPS). Though there are several applications and techniques for outdoor navigation, accurate navigation within an indoor premises is still a challenge. This paper puts forth a video-based indoor navigation solution using a portable device.
pptx file
Virtual Machine for Android Based Smart Phone
Rakhi Joshi (University of Pune, India)
This paper presents a virtual machine implementation for Android-based smart phones. It aims to virtualize the Android environment and offload it to a cloud-based server. Users access the Android environment on the server through their Android smart phones. Offloading computation and storage minimizes the use of the user's smart phone resources, leading to increased battery life. The client module installed on the smart phone requires a fast internet connection for seamless operation of the virtual machine. All user inputs are executed on the virtual machine on the cloud server, and the results of these operations are sent back to the user's smart phone via image transfers.
ppt file
XDSched: A Synchronized Web Based and Mobile Based Solution for Examiner's Duty Schedule
Maxwell Christian (GLS University & Gujarat Technological University, India)
Allotment and notification of examination duties to the concerned examiners is of prime importance for the smooth and regular conduct of examinations. If this process involves time lags and communication gaps, major issues can arise. We have therefore designed and implemented a solution to narrow the communication gap: a synchronized website and mobile application used both for duty allocation and for notifications.
pdf file
Empirical Analysis of Factors Influencing ERP Implementation in Indian SMEs
Prashant Deshmukh and Gopakumar Thampi (Mumbai University, India)
Small and medium-scale enterprises (SMEs) are opting for ERP (enterprise resource planning) implementation so as to remain competitive in the global market. SMEs opt for ERP implementation principally to manage and standardize business processes effectively, so as to streamline the variety of local and global compliance requirements of customers. In developing countries like India, SMEs represent the backbone of the economy, contributing about 17% of India's Gross Domestic Product (GDP) and 45% of aggregate industrial output. However, SMEs face challenges in providing high-quality products at low cost. The literature documents successful implementations of ERP in large-scale enterprises but offers little guidance on ERP implementation in Indian SMEs. This paper focuses on identifying factors for the successful implementation of ERP in Indian SMEs through an exhaustive industrial survey of different types of industries, ranging from manufacturing to the service sector. The paper further analyses the identified factors and prioritizes them for successful implementation of ERP in SMEs.
ppt file
Analysing Cohesion and Coupling for Modular Ontologies
Shriya Sukalikar (Indian Institute of Technology Roorkee, India); Sandeep Kumar (Computer Science and Engineering, IIT Roorkee, India); Niyati Baliyan (Indian Institute of Technology Roorkee, India)
Ontologies are an essential component of the Semantic Web and in recent times, the significance of modular ontologies is largely increasing due to their superiority over monolithic ontologies. Out of the multiple modularization choices available, the one which guarantees best system design and performance should be applied. Some attributes of modular structure such as cohesion and coupling, determine the goodness of modularization technique applied. Few works are available in the field of assessing modular ontology in terms of cohesion and coupling. Most of such works are either syntax based or do not handle the structure of the ontological hierarchy during evaluation of cohesion and coupling. We propose an approach for analysing cohesion and coupling of modular ontology, which may be used in order to formulate measures for the same. The extent of dependence among the components of an ontology is analysed for quantification, which acknowledges subtle differences among relationship types. Moreover, our approach accounts for structural dependencies in ontology, thus making it a comprehensive model.
ppt file, pdf file

Friday, September 26, 19:00 - 20:30 (Asia/Calcutta)

C1: Banquetgo to top

Room: Lawn Area Block E

Saturday, September 27

Saturday, September 27, 04:30 - 08:45 (Asia/Calcutta)

S56: Third International Symposium on Natural Language Processing (NLP'14)/ International Workshop on Authorship Analysis in Forensic Linguistics (AFL-2014)/ International Workshop on Language Translation in Intelligent Agents (LTIA-2014) - IIgo to top

Room: 016 Block E Ground Floor
Chair: Rajeev RR (IIITM-K, India)
A Modified Technique for Word Sense Disambiguation Using Lesk Algorithm in Hindi Language
Radhike Sawhney and Arvinder Kaur (Chandigarh University, India)
Word Sense Disambiguation (WSD) is a key factor in the written and verbal communication side of natural language processing. It is a method of selecting the appropriate sense of an ambiguous word in a given context. This paper aims at determining the correct sense of a given ambiguous word in the Hindi language. A modified Lesk approach is used, based on the concept of a dynamic context window: the number of words to the left and right of the ambiguous word. The basic assumption of this approach is that a target word with the same meaning must share a common topic with its neighborhood. Furthermore, the improvement in precision shows that the proposed algorithm gives better results than previous approaches, which use a fixed-size context window.
pptx file
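The overlap computation at the heart of any Lesk variant fits in a few lines. This sketch uses toy English glosses rather than Hindi WordNet data, and a fixed context rather than the paper's dynamic window:

    def lesk(context_words, senses):
        # senses: {sense_name: list of gloss words}; pick the gloss with
        # the largest word overlap with the context.
        return max(senses,
                   key=lambda s: len(set(context_words) & set(senses[s])))

    senses = {"bank_river":   ["sloping", "land", "beside", "water"],
              "bank_finance": ["institution", "money", "deposits", "loans"]}
    print(lesk(["deposit", "money", "account"], senses))   # bank_finance

A dynamic context window, as proposed in the paper, would vary the size of context_words per target word instead of fixing it in advance.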
A Rule Based Bengali Stemmer
Md. Redowan Mahmud and Mahbuba Afrin (University of Dhaka, Bangladesh); Md. Abdur Razzaque (University of Dhaka & Bangladesh, Bangladesh); Ellis Miller and Joel Iwashige (Code Crafters International, Bangladesh)
One of the biggest challenges in doing word lookups in Bengali is deriving the appropriate base word for any given word. The basic approach to this problem is to eliminate inflections from a given word to derive its stem. Stemmers attempt to reduce a word to its root form through a stemming process, which reduces an inflected or derived word to its stem or root form. Existing works in the literature use lookup tables either for stem words or for suffixes, increasing the overheads in terms of memory and time. This paper develops a rule-based algorithm that eliminates inflections stepwise, without continuously searching for the desired root in a dictionary. To the best of our knowledge, this paper is the first to show that, in Bengali morphology, for a large set of inflections the stem can be computed algorithmically by cutting down the inflections step by step. The proposed algorithm is independent of inflected word length, and our evaluation shows around 88% accuracy.
pptx file
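Stepwise suffix stripping of the kind described can be sketched as an ordered list of rule groups, peeling at most one suffix per step. The transliterated suffixes and the example word below are illustrative inventions, not the paper's actual rules, which operate on Bengali script and are more involved:

    SUFFIX_STEPS = [["gulo", "guli"],     # step 1: plural markers
                    ["der", "ke", "te"],  # step 2: case markers
                    ["er", "e", "r"]]     # step 3: shorter inflections

    def stem(word, min_stem=3):
        for step in SUFFIX_STEPS:         # strip at most one suffix per step
            for suffix in step:
                if word.endswith(suffix) and len(word) - len(suffix) >= min_stem:
                    word = word[:-len(suffix)]
                    break
        return word

    print(stem("boiguli"))   # -> "boi" (illustrative only)

The min_stem guard is one common way to avoid over-stripping short words; no dictionary lookup is needed anywhere, which is the paper's stated advantage.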
Speech Re-synthesis From Spectrogram Image Through Sinusoidal Modelling
Mayank Garg (Bits Pilani, Pilani Campus & IEEE Student Member, India); Rahul Singhal (BITS Pilani, India)
A novel method to extract parameters, i.e., frequencies and their bandwidths, for intelligible speech synthesis is presented in this paper. The parameters are extracted from the spectrogram image of pre-recorded male and female voice samples and used to re-synthesize speech by employing sinusoidal signals. Phase continuity is preserved by quantifying the time scale and identifying the phase at temporal boundaries for a given frequency. The amplitude distribution of the sinusoids follows a Gaussian distribution, and frequency overlap is used to extend the bandwidth from 4 kHz to 6 kHz to improve the clarity of the synthesized speech. The synthesized speech is further passed through a weighting filter to improve the envelope of the re-synthesized time-domain signal. The synthesized speech sounds synthetic but is noticeably intelligible.
A Fused Forensic Text Comparison System Using Lexical Features, Word and Character N-grams: A Likelihood Ratio-based Analysis in Predatory Chatlog Messages
Shunichi Ishihara (The Australian National University, Australia)
This study investigates the degree to which the performance of a likelihood ratio (LR)-based forensic text comparison (FTC) system improves when logistic-regression fusion is applied to LRs that were separately estimated by three different procedures, involving lexical features, word-based N-grams and character-based N-grams. This study uses predatory chatlog messages. The number of words used for modelling each group of messages is 500. The performance of the FTC system is assessed in terms of its validity (= accuracy) and reliability (= precision) using the log-likelihood-ratio cost (Cllr) and 95% credible intervals (CI), respectively. It is demonstrated that 1) of the three procedures, the lexical features procedure performed best in terms of Cllr; and 2) the fused system outperformed all three single procedures. The Cllr value of the fused system is better than that of the lexical features procedure by 0.14. It is also reported that the validity and reliability of a system are negatively correlated; the fused system that yielded the best result in terms of Cllr has the worst CI value.
pdf file
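The log-likelihood-ratio cost used for validity has a standard closed form: Cllr = 1/2 [ mean over same-source LRs of log2(1 + 1/LR) + mean over different-source LRs of log2(1 + LR) ]. A direct computation with toy LR values (not the paper's data):

    import math

    def cllr(same_source_lrs, diff_source_lrs):
        p_ss = sum(math.log2(1 + 1 / lr)
                   for lr in same_source_lrs) / len(same_source_lrs)
        p_ds = sum(math.log2(1 + lr)
                   for lr in diff_source_lrs) / len(diff_source_lrs)
        return 0.5 * (p_ss + p_ds)

    # a useful system pushes Cllr well below 1
    print(cllr([20.0, 8.0, 3.0], [0.10, 0.40, 0.05]))   # about 0.22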
Semi-Automatic RDFization of Hindi Agricultural Words Using IndoWordNet
Megha Garg (DEITY India, India); Bhaskar Sinha (W3C India, India); Somnath Chandra (DEITY, India)
Generating a semantically meaningful ontology of Indian-language terms, concepts and their relations, especially for the Hindi language, is a tedious task to approach without disambiguation, since a single change of relation changes a specific individual sub-node of the ontology. This paper presents an approach for the semi-automatic conversion of agricultural Hindi terms, concepts and their relations into a structured ontology, and its subsequent RDFization. The structured format uses its powerful mechanisms of inheritance and reasoning to infer relations between loosely coupled and strongly bonded nodes. This increases reusability and enhances data sharing on the web, making data machine-readable and interoperable. Hence, linked data in machine-readable format may prove to be of great significance, specifically where information and data are highly shared. This paper explores and collects unstructured terms, concepts and relations of the agriculture domain using IndoWordNet and generates an ontology, which is finally converted into RDF/OWL format in accordance with W3C standards.

Saturday, September 27, 09:00 - 10:30 (Asia/Calcutta)

R4: Conference Registration

Room: Block E, Ground Floor (Reception)

Saturday, September 27, 09:30 - 13:45 (Asia/Calcutta)

S45: Sensor Networks, MANETs and VANETs - II

Room: 003 Block E Ground Floor
Chair: Sanat Sarangi (Tata Consultancy Services, India)
Reviving Communication in Post Disaster Scenario Using ZIGBEE/GSM Heterogeneous Network
Debopriyo Banerjee (IIEST, Shibpur (formerly Bengal Engineering and Science University, Shibpur)); Sipra DasBit (Indian Institute of Engineering Science and Technology, Shibpur, India)
A catastrophic natural disaster disrupts all communication systems, and recovery takes considerable time. A temporary ad hoc communication system is therefore required for post-disaster emergency rescue operations. In developing countries, where only a small fraction of the population owns expensive smart devices, a cost-effective network solution is needed. This paper presents the design of a Cell Phone Extension that extends low-end cell phones with ZigBee or WiFi connectivity. We also propose three alternative architectures for a heterogeneous ad hoc personal area network: ZigBee nodes combined with ZigBee/GSM dual-radio nodes; smartphones only; and a combination of smartphones and laptops. The proposed personal area networks deliver text messages opportunistically to the desired recipient in the absence of a cellular network. Of the three alternatives, we focus on the first architecture, in which nodes can be air-dropped from aircraft onto affected areas for deployment. All experiments are performed on a test-bed, and the performance of our scheme is evaluated in an outdoor environment. The results show the viability of the solution, in terms of delay and message-delivery success rate, in reviving communication when the communication infrastructure is partially damaged.
pptx file
Cryptanalysis and Enhancement of A Distributed Fine-grained Access Control in Wireless Sensor Networks
Santanu Chatterjee (Research Center Imarat & Defence Research and Development Organization, Hyderabad, India); Sandip Roy (Asansol Engineering College, India)
Fine-grained access control assigns each user a unique access privilege for the relevant information. Recently, Yu et al. and Ruj et al. proposed fine-grained access control schemes based on public-key cryptography; these schemes exploit KP-ABE and a cryptographic technique based on bilinear pairing over elliptic curve groups. In this paper, we first show that although these schemes are efficient, both suffer from a fatal weakness: vulnerability to insider attacks, specifically key-abuse attacks by genuine users. A user with a lower access privilege can thereby access secret data intended for a user with a higher privilege, contradicting the basic objective of fine-grained access control; information sent to a particular user can also be revealed to an adversary. To remedy this weakness, we propose simple countermeasures that prevent key-abuse insider attacks while leaving the merits of the existing fine-grained access control schemes unchanged. Further, our scheme is unconditionally secure against various attacks such as man-in-the-middle, replay and denial-of-service attacks. While providing these extra security features, our scheme incurs no additional communication, computation or storage overhead compared to the existing schemes.
ppt file, pptx file
Investigating the Security Threats in Vehicular Ad Hoc Networks (VANETs): Towards Security Engineering for Safer On-Road Transportation
Parul Tyagi (JECRC, India); Deepak Dembla (JECRC University Jaipur, India)
The state-of-the-art improvements in cellular communication and the ubiquitous availability of the internet have led to significant breakthroughs in intelligent transportation systems, where connectivity, autonomous driving and infotainment play a pivotal role in an enhanced driving experience. Vehicular Ad Hoc Networks (VANETs) have emerged as a distinguished branch of wireless communication pertaining to transportation systems. VANETs are intended to provide on-road vehicle safety and to improve the comfort of drivers, passengers and other commuters. While VANETs offer exciting applications and explore unfamiliar dimensions in transportation, concerns regarding VANET security continue to intensify. The security of vehicular networks and the authenticity and integrity of disseminated data remain concerns of utmost significance in VANET deployment. By virtue of the abundance of networked vehicles, the VANET architecture is susceptible to illegal use, unauthorized access, protocol tunneling, eavesdropping and denial of service, as vehicles are unknowingly exposed to illegitimate information from unidentified adversaries. This paper investigates the security aspects of VANETs and the attacks and vulnerabilities to which the architecture is prone. The study of security features and flaws is expected to lead to improved broadcasting and routing services, adding to the quality of service. Owing to vehicle mobility, large-scale networks, rapidly restructuring nodes and a frequently changing topology, a fundamental requirement of VANETs is to ensure safe transmission of time-critical data. The paper examines various security threats in VANETs, analyses how they are carried out and assesses their impact on the VANET security architecture. A few gaps in existing VANET security frameworks are also highlighted for future work.
ppt file
Energy Efficient Unequal Clustering and Routing Algorithms for Wireless Sensor Networks
Srikanth Jannu (Vaagdevi Engineering College, India); Prasanta Kumar Jana (Indian Institute of Technology(ISM) Dhanbad, India)
Sensor nodes deployed in wireless sensor networks (WSNs) are severely energy constrained, so maximizing network lifetime is a primary goal of algorithm design. In many applications, nodes closer to the sink are overburdened with heavy traffic, since data from the entire region are forwarded through them to reach the sink. As a result, their energy is exhausted quickly and the network becomes partitioned; this is commonly known as the hot-spot problem. On the other hand, equal-size clusters waste power depending on network density, which is called the equal clustering problem. In this paper, we address both problems and present unequal-size clustering and routing algorithms designed for energy efficiency. The algorithms are tested on various WSN scenarios, and experimental results show that they outperform existing algorithms in terms of network lifetime, average energy consumption per sensor node and the number of active sensor nodes.
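The abstract does not give the authors' exact rule, but unequal clustering schemes commonly shrink clusters near the sink so their heads retain energy for relaying. A generic EEUC-style sketch in Python (the constant c and all distances are illustrative):

    def competition_radius(d_to_sink, d_min, d_max, r_max, c=0.5):
        """Unequal cluster radius: clusters nearer the sink are smaller,
        leaving their heads energy for relaying (generic rule, not
        necessarily the exact formula used in the paper)."""
        frac = (d_max - d_to_sink) / (d_max - d_min)
        return (1.0 - c * frac) * r_max

    # nodes near the sink get half the maximum radius
    print(competition_radius(d_to_sink=20, d_min=20, d_max=100, r_max=40))   # 20.0
    print(competition_radius(d_to_sink=100, d_min=20, d_max=100, r_max=40))  # 40.0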
Compact Clustering Based Geometric Tour Planning for Mobile Data Gathering Mechanism in Wireless Sensor Network
Indrajit Banerjee (Indian Institute of Engineering Science and Technology, Shibpur, India); Bishakha Datta, Anamika Kumari and Shrabani Mandal (IIEST, Shibpur, India)
In this paper we propose a new clustering-based data-gathering mechanism for large-scale wireless sensor networks. The mechanism first stores sensor locations using GPS information and then dispatches a mobile data collector (an autonomous robot or a vehicle equipped with a transceiver and battery) that moves through the sensing field and collects data from the static sensors like a movable base station. Our algorithm first divides the region into a number of compact sub-regions according to the range of the mobile collector and then determines a geometric routing path along which the collector can move and gather data from all sensors via single-hop transmission in minimal time. Since data packets are gathered directly, without relays or collisions, the lifetime of the sensors is expected to be prolonged. The algorithm focuses on maximizing network coverage, minimizing the overlap between sub-regions, maximizing the number of nodes attended in one poll by the mobile collector, and minimizing the path length so that the collector can cover the whole region in minimum time. Simulation results show a significant performance improvement for the proposed model.
zip file
Evaluating the Performance of Reactive I-LEACH
Vinay Kehar (Punjab Technical University, India); Rajdeep Singh (PIT, Kapurthala (PTU Main Campus), India)
Because sensor nodes have limited battery capacity, energy efficiency is a main constraint in wireless sensor networks. The main focus of the present work is therefore to minimize energy consumption and thereby extend the network's stability period and lifetime. To achieve this, we propose a technique in which the algorithm works reactively and the cluster head is selected on the basis of a three-level decision tree, with residual energy as one of its main parameters; a protection mechanism is also used to maintain balance between clusters. The location of the base station is optimized by placing it in the region of highest node density. The proposed algorithm is designed and implemented in MATLAB and provides better results than existing clustering protocols.
ppt file
Irrigation with Grid Architecture Sensor Network
Ravi Kishore Kodali and Lakshmi Boppana (National Institute of Technology, Warangal, India)
Various irrigation techniques, such as localized irrigation, sprinklers and sub-irrigation, are used to save water. With a wireless sensor network, only those specific areas that need water are irrigated, with a limited amount, to avoid over-irrigation. In this paper the WSN uses a grid routing technique in which the whole field area under observation is divided into grids of selected length and breadth. Soil water content readings are measured with a Watermark soil-moisture sensor and observed over a time period. The grid routing architecture is simulated in ns-3, and the average network energy, the dead-node pattern and the grid structure are plotted against the number of rounds.
An Improved Cluster Maintenance Scheme for Mobile AdHoc Networks
Sunil Pathak (JK Lakshmipat University, Jaipur, India); Nitul Dutta (MEF Group of Institutions, Rajkot, India); Sonal Jain (JK Lakshmipat University, India)
Cluster-based routing in Mobile Ad Hoc Networks (MANETs) improves routing performance by maintaining route information at a cluster head (CH). However, because CHs move, new CHs must be selected from time to time, introducing additional overhead. If the frequency of CH changes could be reduced, cluster-based routing would be a better choice for MANETs. This paper proposes an Improved Cluster Maintenance Scheme (ICMS) focused primarily on minimizing the frequency of CH changes, making clusters more stable. ICMS is simulated in ns-2 and compared with Least Cluster head Change (LCC), the Cluster Based Routing Protocol (CBRP) and the Incremental Maintenance Scheme (IMS) in terms of the number of cluster-head changes, the number of cluster-member changes and clustering overhead, varying the speed and pause time of mobile nodes. The simulation results show that ICMS performs better than LCC, CBRP and IMS.
ppt file
AnchLP: An Anchor Based Localization Protocol for Wireless Sensor Networks
Ash Mohammad Abbas and Hamzah Ali Abdulrahman Qasem (Aligarh Muslim University, India)
Devising a protocol for localization in a wireless sensor network is a challenging task. In this paper, we present a localization protocol in which a node computes its location from the locations of anchor nodes, or of nodes whose locations have already been computed, together with distance estimates to them. Our protocol is distributed and does not require the topology of the whole network to be available at a centralized node. It is asynchronous, as it does not require the clocks of nodes to be synchronized; each node relies on its local clock. It is also scalable and can be applied to networks with a relatively large number of nodes. To evaluate the protocol's performance, we carried out simulations studying the effect of transmission range and anchor density on localizability and localization error. Further, we provide an expression for the probability that a sensor in the network is localized.
ppt file
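A standard building block for such anchor-based protocols is linearized least-squares multilateration; the sketch below (illustrative, not the paper's protocol logic) estimates a node's 2-D position from three or more anchors and noisy distance estimates.

    import numpy as np

    def localize(anchors, dists):
        """Linearized least-squares position estimate from >= 3 anchors.

        Subtracting the first anchor's circle equation from the others
        linearizes the system: 2(xi-x0)x + 2(yi-y0)y =
        d0^2 - di^2 + xi^2 - x0^2 + yi^2 - y0^2.
        """
        anchors = np.asarray(anchors, float)
        dists = np.asarray(dists, float)
        x0, y0 = anchors[0]
        d0 = dists[0]
        A = 2 * (anchors[1:] - anchors[0])
        b = (d0**2 - dists[1:]**2
             + np.sum(anchors[1:]**2, axis=1) - x0**2 - y0**2)
        pos, *_ = np.linalg.lstsq(A, b, rcond=None)
        return pos

    print(localize([(0, 0), (10, 0), (0, 10)], [7.07, 7.07, 7.07]))  # ~ (5, 5)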
Scheduled Collision Avoidance in Wireless Sensor Network Using Zigbee
Dnyanesh S Mantri (Pune University, India & Sinhgad Institute of Technology, Lonavala, Denmark); Neeli Rashmi Prasad (ITU, Center for TeleInFrastructure (CTIF), USA); Ramjee Prasad (Aalborg University, Denmark)
Transmission reliability and energy consumption are two critical concerns in wireless sensor network design, as battery-powered sensor nodes are expected to operate autonomously for a long time. As transmission reliability increases, energy consumption rises, affecting network efficiency. This paper proposes the Scheduled Collision Avoidance (SCA) algorithm, which finds a trade-off between reliability and energy efficiency by fusing the CSMA/CA and TDMA techniques in ZigBee/IEEE 802.15.4. It uses multi-path data propagation for collision avoidance and effective channel utilization, providing efficient energy consumption. It also analyses different scheduling schemes to provide an appropriate solution for reducing collisions and improving network lifetime.
ppt file

S46: Sensor Networks, MANETs and VANETs - III

Room: 004 Block E Ground Floor
Chair: Dnyanesh S Mantri (Pune University, India & Sinhgad Institute of Technology, Lonavala, Denmark)
An Energy Aware Routing Design to Maximize Lifetime of a Wireless Sensor Network with a Mobile Base Station
Indrajit Banerjee (Indian Institute of Engineering Science and Technology, Shibpur, India); Suriti Chakrabarti (Indian Institute of Engineering, Science and Technology, Shibpur, India); Arunava Bhattacharyya (Indian Institute of Engineering Science and Technology, India); Utsav Ganguly (Indian Institute Of Engineering, Science And Technology, Shibpur, India)
Due to the many-to-one or many-to-few traffic patterns in Wireless Sensor Networks (WSNs), some critical nodes are overloaded and tend to exhaust their energy prematurely, bringing the whole network down. Most routing algorithms employ a reliable cost metric to route data but cannot ensure balanced energy consumption across all nodes in the WSN. In this paper, we employ an energy-aware routing algorithm to route data packets in a network with a mobile base station. The algorithm balances the energy drawn from hot-spot nodes and ensures proper utilization of the whole network. We have compared the performance of our algorithm with other contemporary routing algorithms, and the simulation results show a significant improvement in network lifetime.
pptx file
Energy Efficient Approach Through Clustering and Data Filtering in WSN
Nidhi Gautam (Panjab University Chandigarh, India); Renu Vig (Panjab University, India)
Wireless sensor networks are widely used in application areas such as patient care, habitat monitoring, sensing of physical parameters and traffic monitoring. The resource limitations of sensor nodes have forced researchers to devise new techniques for improving network lifetime, and many have been proposed: clustering, data fusion, data filtering and routing in both homogeneous and heterogeneous networks. Owing to resource limitations and the availability of different types of sensor nodes, the focus has shifted towards heterogeneous networks; limited mobility with a few mobile sensor nodes has also been suggested for network longevity. Clustering and data aggregation play an important role in heterogeneous wireless sensor networks. In this paper, a clustering and data-filtering approach is used in heterogeneous networks to extend network lifetime. Among clustering algorithms, a comparison of VAS (Voronoi Ant Systems) and LEACH-C (Low Energy Adaptive Clustering Hierarchy-Centralized) is presented; among data-filtering algorithms, a comparison of the MTWSW (Modified Two Way Sliding Window) and TWSW (Two Way Sliding Window) algorithms is presented. The approach is applicable to both critical and non-critical wireless sensor network applications.
Adaptive Learning Assisted Routing in Wireless Sensor Network Using Multi Criteria Decision Model
Suman Sankar Bhunia (National University of Singapore, Singapore); Sarbani Roy and Nandini Mukherjee (Jadavpur University, India)
The Wireless Sensor Network (WSN) is one of the most active topics in modern research. A WSN consists of sensor nodes densely deployed in the area to be sensed or monitored. Every tiny sensor node needs to transmit its sensed data to more powerful sink nodes, and the data may reach a sink over multiple hops. As sensor nodes are resource-constrained, most data-forwarding algorithms threshold key parameters and implement event-driven routes. Instead, we consider multiple parameters simultaneously when determining the route to the sink. In this paper, we propose a routing scheme based on a Multi Criteria Decision Making technique in which each criterion is assigned a weight, with an adaptive learning method used to determine the weights. The scheme ensures a robust packet reception ratio.
pdf file
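A minimal sketch of weighted-sum multi-criteria next-hop selection; the criteria, normalizations and fixed weights here are assumptions, whereas the paper learns the weights adaptively.

    def score(neighbor, weights):
        """Weighted-sum multi-criteria score for a candidate next hop."""
        criteria = {
            "residual_energy": neighbor["energy"] / neighbor["energy_max"],
            "link_quality":    neighbor["prr"],   # packet reception ratio
            "sink_progress":   1.0 - neighbor["dist_to_sink"] / neighbor["dist_max"],
        }
        return sum(weights[k] * v for k, v in criteria.items())

    weights = {"residual_energy": 0.4, "link_quality": 0.4, "sink_progress": 0.2}
    nbrs = [
        {"energy": 0.9, "energy_max": 1.0, "prr": 0.70, "dist_to_sink": 40, "dist_max": 100},
        {"energy": 0.5, "energy_max": 1.0, "prr": 0.95, "dist_to_sink": 25, "dist_max": 100},
    ]
    next_hop = max(nbrs, key=lambda n: score(n, weights))  # best-scoring neighbor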
Context-awareness based intelligent driver behavior detection: Integrating Wireless Sensor networks and Vehicle ad hoc networks
Abhishek Gupta (JECRC, India); Venimadhav Sharma (Rajasthan Technical University, India); Naresh Ruparam (JECRC, India); Surbhi Jain (JECRC, Jaipur, India); Abdulmalik Alhammad (De Montfort University, United Kingdom (Great Britain)); Md Afsar Kamal Ripon (DMU, United Kingdom (Great Britain))
State-of-the-art advancements in wireless communication and pervasive internet capabilities have led to significant breakthroughs in intelligent transportation systems, where connectivity, autonomous driving and infotainment play a key role in an enhanced driving experience. The Vehicular Ad Hoc Network (VANET) has emerged as a distinguished branch of wireless communication pertaining specifically to transportation systems. In recent years, internet utilities in transportation systems, particularly for on-road vehicles, have been applied to solve security and road-safety issues, and with the help of VANETs road transportation is evolving into a safer and more efficient architecture. This paper introduces the conceptual design of a novel VANET-based driver-behavior detection system that utilizes wireless sensors and the concept of context-awareness, marking a vital step towards improving road safety. Driver behavior is treated as an uncertain context characterized by constant interaction between the driver, the vehicle and the environment. The paper introduces a novel 3-tier architecture for real-time driver-behavior detection that uses Swarm Intelligence to reason about contextual uncertainty and deduce the driver's behavior. The system is designed to detect five styles of driving (drunk, wayward, rash, fatigued/drowsy and acceptable) and sends out an alert if the driver is detected in an undesired state. The system is proposed to be implemented in Network Simulator-3. The objective of this paper is to contribute towards developing suitable safety measures for road transport in which drivers, smart vehicles and road-side infrastructure work in collaboration using the available context information.
Adaptive Ant Colony Network Coding To Neighbour Topology Based Broadcasting Techniques in MANETs
Geet Kalani (Central University of Rajasthan, India); Kanakala Srinivas (Vaagdevi College of Engineering & Osmania University, India); Aitha Nagaraju (CURAJ, India)
In mobile ad hoc networks, broadcasting is the operation most frequently used at the network layer to forward control packets to all neighbour nodes. A source or intermediate node rebroadcasts every packet it receives to all other nodes, which can generate duplicate transmissions and leads to the significant 'broadcast storm' problem. Researchers have proposed 2-hop neighbour-based protocols such as DP, TDP, PDP and APDP to reduce broadcast storms in MANETs by choosing a minimum number of forwarding nodes through self pruning and dominant pruning. More recently, researchers have adapted the network coding idea (COPE) to neighbour-topology-based protocols, reducing the number of transmissions by XOR-ing packets together. In this paper, we introduce ant colony optimization into the COPE protocol with a pruning algorithm: a pheromone value, evaluated from the intersection of the sender's packet list and the pruning algorithm's forward-node packet list, decides the packet combination. We attempt to determine the network coding gain under high- and low-load conditions and in delay-tolerant applications, and we present simulation details in the results section.
ppt file
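The COPE-style coding the abstract builds on can be illustrated in a few lines of Python: packets are XOR-ed into one transmission, and a neighbour holding all but one of the originals recovers the missing packet by XOR-ing again.

    from functools import reduce

    def xor_encode(packets):
        """XOR several packets (bytes) into one coded transmission,
        padding to the longest packet with zero bytes."""
        size = max(len(p) for p in packets)
        padded = [p.ljust(size, b"\0") for p in packets]
        return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), padded))

    p1, p2 = b"hello", b"world"
    coded = xor_encode([p1, p2])
    # a node already holding p2 decodes p1 from the single coded packet:
    assert xor_encode([coded, p2.ljust(len(coded), b"\0")]) == p1.ljust(len(coded), b"\0")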
DDEC: Distance based Deterministic Energy Efficient Clustering Protocol for Wireless Sensor Networks
Prabhleen Kaur (Punjab Institute of Technology, Kapurthala(Punjab Technical University Main Campus), India); Rajdeep Singh (PIT, Kapurthala (PTU Main Campus), India)
Because wireless sensor networks are easy to deploy in unreachable terrain, they are attracting great attention in many fields. A wireless sensor network is composed of small, smart sensor nodes placed in a region to sense data and send it to the sink, either directly or via other sensor nodes. Sensor nodes have very limited energy, so its efficient utilization is a major factor in prolonging network lifetime. We propose a distance-based deterministic energy-efficient clustering protocol that is self-organizing, distributed, dynamic and energy efficient. It selects cluster heads based on residual energy and the approximate cluster radius, and it guarantees the uniform, well-distributed election of a fixed number of cluster heads. Simulations in MATLAB show that our protocol outperforms existing protocols in a heterogeneous environment in terms of stability period and network lifetime.
pptx file
Multi-Agent Data Aggregation in Wireless Sensor Network Using Source Grouping
Divya Lohani (Shiv Nadar University, India); Priti Singh (IIITA, India); Shirshu Varma (Indian Institute of Information Technology Allahabad, India)
Mobile agents in wireless sensor networks provide many advantages over the conventional client/server architecture. Mobile agents follow a code-to-data approach and thus perform data aggregation at the nodes rather than at the processing element. This reduces in-network data transmission, improving bandwidth usage and prolonging network lifetime. Using multiple mobile agents instead of a single one allows the task to be completed cooperatively. In this paper, we present a multi-agent solution for data aggregation using source grouping along with tree-based ordering. Extensive simulations show better results in terms of energy consumption and latency compared with other approaches.
pptx file
Performance Analysis of Topology based Routing in a VANET
Raj Bala (NITTTR, Chandigarh, India); Rama Krishna Challa (National Institute of Technical Teachers Training & Research & NITTTR, Sector 26 Chandigarh, India)
A Vehicular Ad Hoc Network (VANET) is a variant of the Mobile Ad Hoc Network (MANET) in which the communicating nodes are mainly vehicles. VANETs are heterogeneous networks that provide wireless communication among vehicles and between vehicles and Road Side Units (RSUs). They have become an interesting area of research, as they are intended to improve Intelligent Transport Systems (ITS). To exploit effective communication among vehicles, routing is the key factor to investigate. This paper analyzes the performance of the AODV routing protocol in a VANET across various scenarios and traffic conditions with respect to Packet Delivery Ratio (PDR) and Average End-to-End Delay (E2ED). Simulation is performed using NS-2.35 in combination with VanetMobiSim. AODV is found to perform better in the urban scenario than in the highway scenario in terms of PDR and E2ED, and its performance improves when IEEE 802.11p is used instead of IEEE 802.11.
pptx file
G-MOHRA:Green Multi-Objective Hybrid Routing Algorithm for Wireless Sensor Networks
Nandkumar Kulkarni (Aalborg University, Denmark); Neeli Rashmi Prasad (ITU, Center for TeleInFrastructure (CTIF), USA); Ramjee Prasad (Aalborg University, Denmark)
In Wireless Sensor Networks (WSNs), multi-objective optimization involves improving more than one objective function simultaneously, with compromises among two or more contradictory objectives; the optimization method must also be energy competent in terms of utilization and communication. This paper proposes a novel multi-objective optimization method, the Green (Energy Efficient) Multi-Objective Hybrid Routing Algorithm (G-MOHRA), for WSNs. G-MOHRA uses hierarchical clustering and dispatches information along the best path, chosen by a weighted average of several metrics, to achieve energy efficiency and energy stability across the entire network. The metrics used to identify the best path from source to sink include Average Energy Consumption (AEC), control overhead, reaction time, Link Quality Indicator (LQI) and hop count. G-MOHRA treats these as conflicting objective functions and provides Pareto-optimal solutions. Its performance is evaluated through intensive simulation and compared with the Simple Hybrid Routing Protocol (SHRP) and the Dynamic Multi-objective Routing Algorithm (DyMORA), using AEC, residual energy, Packet Delivery Ratio, jitter and Normalized Routing Load as comparison metrics. G-MOHRA is observed to outperform both: it improves the Packet Delivery Ratio by 18.72% compared to SHRP and 24.98% compared to DyMORA, and reduces Average Energy Consumption by 19.79% and 15.52% relative to SHRP and DyMORA, respectively.
pdf file
A 4-Stage Heterogeneous Network Model in WSNs
Samayveer Singh (NSIT, India); Satish Chand and Bijendra Kumar (Netaji Subhas Institute of Technology, India)
In this paper, we propose a 4-stage heterogeneous network model that defines stage-1, stage-2, stage-3 and stage-4 heterogeneity. We use the stable election protocol (SEP) to estimate network lifetime and accordingly call its implementations HSEP-1, HSEP-2, HSEP-3 and HSEP-4. HSEP-1 is the original SEP protocol, in which all nodes in the network have the same energy; HSEP-2, HSEP-3 and HSEP-4 contain two, three and four energy stages, respectively. As the number of energy stages increases, the network lifetime increases considerably: HSEP-2, HSEP-3 and HSEP-4 increase the network lifetime by 49.42%, 110.29% and 248.64%, corresponding to increases in network energy of 38.58%, 72.58% and 169.40% with respect to HSEP-1.

S49: Pattern Recognition, Signal and Image Processing-III

Room: 104 Block E First Floor
Chairs: Vikrant Bhateja (Shri Ramswaroop Memorial Group of Professional Colleges, Lucknow (UP), India), Angshul Majumdar (Indraprastha Institute Of Information Technology-Delhi & University of British Columbia, India)
Classification of White Blood Cells Based on Morphological Features
Anjali Gautam (Uttarakhand Technical University); Harvindra Bhadauria (Uttarakhand Technical University, India)
Extracting the nucleus from blood-smear images of white blood cells (WBCs) provides valuable information to doctors for identifying many diseases, since most diseases present in the body can be identified by analyzing blood. Manual segmentation of the nucleus, followed by classification, is tedious and tiresome; moreover, the instruments experts use for segmenting and classifying white blood cells are not affordable for every hospital and clinic, so an automatic system that reduces segmentation and classification time is preferable. In our research, the nucleus is segmented from blood-smear images using Otsu's thresholding, applied after contrast stretching and histogram equalization, followed by a minimum filter to reduce noise and increase the brightness of the nucleus; mathematical morphology then removes components that are not WBCs. Shape-based features are extracted, and a classification rule assigns each cell to one of the five WBC categories. Classifying the nucleus is necessary because each type of white blood cell is associated with different diseases, and classification also helps in the differential blood count.
pptx file
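A hedged OpenCV sketch of the described pipeline; the file name, structuring-element sizes and the circularity feature are placeholders, since the authors' exact parameters are not given in the abstract.

    import cv2
    import numpy as np

    img = cv2.imread("smear.png", cv2.IMREAD_GRAYSCALE)      # placeholder file
    stretched = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX)  # contrast stretch
    equalized = cv2.equalizeHist(stretched)                  # histogram equalization
    filtered = cv2.erode(equalized, np.ones((3, 3), np.uint8))  # minimum filter
    _, mask = cv2.threshold(filtered, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)  # Otsu (dark nuclei)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))  # cleanup

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        area = cv2.contourArea(c)
        perimeter = cv2.arcLength(c, True)
        if area > 0 and perimeter > 0:
            circularity = 4 * np.pi * area / perimeter**2    # one shape feature
            # ... feed (area, circularity, ...) into the classification rules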
Co-occurrence Matrix and Statistical Features as an Approach for Mass Classification
Jaya Sharma (Amity University, India); Jaynendra Kumar Rai (Amity School of Engineering and Technology, Amity University Uttar Pradesh, India); Ravi Tewari (MNNIT, Allahabad, India)
This paper presents a texture-based approach for distinguishing masses from normal breast tissue in a mammogram. High-probability mass areas are identified on the basis of statistical features obtained from the Gray-Level Co-occurrence Matrix (GLCM) of the mammogram image. The input mammogram is first pre-processed to remove labeling artifacts and enhanced using adaptive histogram equalization. Unwanted details are excluded by block processing, and histogram-based features are extracted. Features based on the GLCM are then computed and analyzed to distinguish a suspicious mass from a non-mass region. The results are promising in terms of correct classification: contrast and energy measures from the GLCM, together with mean, standard deviation and entropy, help to appropriately differentiate malignant masses from normal tissue.
ppt file
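The GLCM features named in the abstract can be computed with scikit-image (graycomatrix/graycoprops; spelled greycomatrix in versions before 0.19). A sketch on a toy patch standing in for a mammogram ROI:

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    # Toy 8-bit patch standing in for a mammogram region of interest.
    patch = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(np.uint8)

    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    features = {
        "contrast": graycoprops(glcm, "contrast").mean(),
        "energy":   graycoprops(glcm, "energy").mean(),
        "mean":     patch.mean(),       # first-order statistics
        "std":      patch.std(),
    }
    # entropy of the normalized GLCM (not provided by graycoprops directly)
    p = glcm[glcm > 0]
    features["entropy"] = -np.sum(p * np.log2(p))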
Adaptive Matrix Design For LDPC Based Image Processing System
Jaypal Jagdish Baviskar (VJTI, Mumbai & Veermata Jijabai Technological Institute, Mumbai, India); Afshan Mulla (Veermata Jijabai Technological Institute, India); Amol Baviskar (Mumbai University & Universal College of Engineering, Vasai, Maharashtra, India); Chirag Warty (Intelligent Communication lab & Director, Quantspire, India)
In the field of communication, coding the data is a crucial step: since the transmission channel is highly susceptible to noise, it is imperative to channel-code the data to be transmitted. This paper presents an improved and faster Low Density Parity Check (LDPC) based algorithm for channel coding grayscale images of n*n dimension. It constructs the encoded image by arranging bits into 8n data vectors and modulating them with BPSK. A novel approach for designing an adaptive matrix as a function of the input image size is described. The decoding algorithm takes a probabilistic approach. Analysis of the algorithm's performance shows improved restoration of images affected by AWGN noise. LDPC codes thus prove their advantage over other coding schemes, such as Turbo and BCH codes, for applications including medical imaging, deep-space communication and multimedia.
pptx file
Image Compression Scheme Based on Zig-Zag 3D-DCT and LDPC Coding
Jaypal Jagdish Baviskar (VJTI, Mumbai & Veermata Jijabai Technological Institute, Mumbai, India); Afshan Mulla (Veermata Jijabai Technological Institute, India); Amol Baviskar (Mumbai University & Universal College of Engineering, Vasai, Maharashtra, India); Chirag Warty (Intelligent Communication lab & Director, Quantspire, India)
A prime necessity in communication systems is efficient use of the available bandwidth, so compression of the data sent over the link has become inevitable. In this paper, an algorithm for compressing hyperspectral space images based on a zig-zag 3D-DCT (Discrete Cosine Transform) technique is proposed. The method arranges 2D gray-scale images into 3-dimensional cubes of 8*8*8 pixels and applies the DCT, followed by quantization and zig-zag scanning. The resulting 1D data vector facilitates better compression using run-length coding. To complete a practical system, a suitable irregular LDPC encoder is implemented to mitigate losses in the communication link. The performance of the algorithm is verified by plotting various quality-measurement graphs and demonstrating its advantage over standard JPEG.
pptx file
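The zig-zag-then-run-length stage is the classic JPEG idea, shown below in 2-D for brevity (the paper applies the same idea in three dimensions on 8*8*8 cubes):

    import numpy as np

    def zigzag(block):
        """Order coefficients of a square block along anti-diagonals,
        alternating direction, so zeros cluster at the end."""
        n = block.shape[0]
        out = []
        for s in range(2 * n - 1):
            diag = [(i, s - i) for i in range(n) if 0 <= s - i < n]
            if s % 2 == 0:
                diag.reverse()
            out.extend(block[i, j] for i, j in diag)
        return np.array(out)

    def run_length(seq):
        """Run-length encode the (typically zero-heavy) scanned vector."""
        runs, count = [], 1
        for prev, cur in zip(seq, seq[1:]):
            if cur == prev:
                count += 1
            else:
                runs.append((prev, count))
                count = 1
        runs.append((seq[-1], count))
        return runs

    q = np.array([[8, 3, 0, 0], [2, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
    print(run_length(zigzag(q)))  # long zero runs compress well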
A Rate Allocation Method for Motion JPEG2000
Shailesh Ramamurthy (Arris India, India)
This paper describes a methodology for rate control in Motion JPEG2000. Rate control for the chosen frame(s) of the sequence is first carried out in the Chosen-Frame-Rate-Allocator (CFRA) phase. The CFRA phase progresses in tandem with the main compression process: an initial prediction of rate-distortion (R-D) slopes for the coding data of all code-blocks is followed by a greedy approach to rate allocation within the chosen frame(s). In the next phase, the adaptation phase, the Motion JPEG2000 rate allocator uses an elegant adaptation of the Lagrangian multiplier across frames to allocate rate for the remaining frames, i.e., those not involved in the CFRA phase. The methodology and the resulting computational savings are detailed in the paper.
Contrast Limited Adaptive Histogram Equalization Based Enhancement for Real Time Video System
Garima Yadav (Govt Women Engineering College, India); Saurabh Maheshwari (Government Women Engineering College Ajmer & Student Member IEEE, India); Anjali Agarwal (Govt. Women Engineering College, Ajmer, India)
Contrast limited adaptive histogram equalization (CLAHE) is a contrast-enhancement method used to improve the visibility of foggy images and video. In this paper we use CLAHE to improve video quality in a real-time system. Adaptive histogram equalization (AHE) differs from ordinary histogram equalization in that it computes several histograms, each corresponding to a different part of the image, and uses them to redistribute the lightness values; CLAHE additionally takes a 'distribution' parameter that defines the shape of the histogram and produces better-quality results than AHE. In this algorithm the Rayleigh distribution parameter is used, which creates a bell-shaped histogram. AHE has two drawbacks: it works only on homogeneous fog, whereas CLAHE applies to both homogeneous and heterogeneous fog, in single images and in video; and its cumulation function applies only to gray-level images, whereas CLAHE handles both color and gray-level images.
pptx file
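Per-frame CLAHE is directly available in OpenCV, as sketched below; note that OpenCV's implementation uses a uniform target distribution, whereas the paper uses a Rayleigh-shaped one (available, for example, via MATLAB's adapthisteq). The file name is a placeholder.

    import cv2

    # clipLimit bounds the contrast amplification; tileGridSize sets the
    # local regions. OpenCV's CLAHE works on single-channel images, so
    # color frames are enhanced on the lightness channel only.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    cap = cv2.VideoCapture("foggy.mp4")            # placeholder file name
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)
        lab[:, :, 0] = clahe.apply(lab[:, :, 0])   # enhance L channel
        enhanced = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
        cv2.imshow("CLAHE", enhanced)
        if cv2.waitKey(1) == 27:                   # Esc to quit
            break
    cap.release()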
Novel Method for Image Splicing Detection
Saba Mushtaq (NIT Srinagar, India); Ajaz Mir (National Institute of Technology, India)
This paper presents an image-splicing detection method based on texture features of the spliced image. Splicing, a very common image-forgery operation, merges two or more images into a composite that differs significantly from the original. The proposed approach calculates grey-level run-length matrix (GLRLM) texture features for forged and original images from the CASIA and DB2 databases. The statistical features extracted from the GLRLM are used to detect tampering, with a support vector machine used for classification. Results show that the proposed algorithm is very effective in detecting splicing forgery.
An Integrated Approach to Content Based Image Retrieval
Roshi Choudhary, Nikita Raina, Neeshu Chaudhary and Rashmi Chauhan (Graphic Era University, India); R H Goudar (Visvesvaraya Technological University, Belagavi, India)
Over the past few years, content-based image retrieval has received wide attention. Content Based Image Retrieval (CBIR) is a technique for retrieving images similar to a query image from a large database; in the context of image retrieval, CBIR is closer to human semantics. It has applications in domains such as medical imaging, crime prevention, weather forecasting, surveillance, historical research and remote sensing. Here, content refers to the visual information of images, such as color, texture and shape; image content is richer in information for efficient retrieval than text-based image retrieval. In this paper, we propose a content-based image retrieval technique that extracts both color and texture features. Color moments (CM) are computed on the color image, and the local binary pattern (LBP) is computed on the grayscale image; the two are combined into a single feature vector. Similarity matching is then performed using the Euclidean distance between the feature vectors of the query image and the database images. LBP is mainly used for face recognition, but here we apply it to natural images. This combined approach provides an accurate, efficient and less complex retrieval system.
pptx file
Maximally Flat Compensated-Comb Decimation Filter with Filter Sharpening Technique
Lila Haresh V. (Guru Gobind Singh Indraprastha University, India); Chakrapani Vinitha (Guru Gobind Singh Indraprastha University, India)
This paper describes a generalized scheme for designing a wideband comb-based decimation filter in an efficient multistage structure using a maximally flat second-order compensator and the filter-sharpening technique. The resulting structure provides wideband compensation in the passband without degrading the attenuation in the alias bands of the comb filter. We consider a multistage comb-based decimation filter in which each stage is compensated by a maximally flat second-order compensation filter; the last stage is realized with the sharpening technique and operates at the lower rate resulting from the decimation factors of all previous stages. A common maximally flat second-order compensator is implemented. Polyphase decomposition is applied to the non-recursive form of the comb filters to reduce power consumption, and the result is compared with a higher-order compensation filter.
zip file
K-Mean Algorithm for Image Segmentation Using Neutrosophy
Nadeem Akhtar, Nishi Agarwal and Armita Burjwal (Aligarh Muslim University, India)
Image segmentation plays a crucial role in major applications such as image processing, recognition tasks, object detection and medical imaging, and the method used determines the quality of the resulting segments: high-quality segmentation requires a method that produces accurate and relevant results. This paper introduces a new approach that combines two techniques, K-means clustering and neutrosophic logic, to obtain efficient results by removing the uncertainty of pixels. A neutrosophic domain is defined that characterizes an image through three membership sets: truth, falsity and indeterminacy. The indeterminacy of each pixel is compared against a threshold; if it exceeds the threshold, meaning the pixel may belong to more than one cluster, the pixel's intensity is changed according to its truth value. The K-means clustering algorithm is then applied to the modified pixels to obtain hard clusters. Experimental results verify that the segments obtained are more accurate, thereby improving segmentation quality.
pptx file
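A minimal Python sketch of the idea, with one common choice of membership definitions (the authors' exact truth/indeterminacy formulas are not given in the abstract):

    import numpy as np
    from sklearn.cluster import KMeans

    def neutrosophic_kmeans(img, k=3, threshold=0.2, win=5):
        """Map pixels to truth/indeterminacy values, smooth indeterminate
        pixels toward their local mean, then run K-means on the modified
        intensities (illustrative membership definitions)."""
        img = img.astype(float)
        pad = win // 2
        padded = np.pad(img, pad, mode="reflect")
        # local mean as the 'truth' estimate of each pixel
        local = np.zeros_like(img)
        for i in range(win):
            for j in range(win):
                local += padded[i:i + img.shape[0], j:j + img.shape[1]]
        local /= win * win
        # indeterminacy: normalized deviation from the local mean
        indet = np.abs(img - local)
        indet /= indet.max() + 1e-9
        modified = np.where(indet > threshold, local, img)  # de-noise uncertain pixels
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(modified.reshape(-1, 1))
        return labels.reshape(img.shape)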
Study of Wrist Pulse Signals Using a Bi-Modal Gaussian Model
D Rangaprakash and D Narayana Dutt (Indian Institute of Science, India)
Wrist pulse signals contain important information about a person's health, so diagnosis based on pulse signals has assumed great importance. In this paper we demonstrate the efficacy of a two-term Gaussian model for extracting information from pulse signals. Wrist pulse signals were recorded from several subjects before and after exercise. Parameters extracted from the recorded signals using the model were compared with a paired t-test, which shows that they differ significantly between the two groups. Further, a recursive cluster elimination based support vector machine is used to classify the groups, achieving an average classification accuracy of 99.46% along with identification of the top classifiers. The parameters of the Gaussian model thus change across groups, and the model is effective in distinguishing the changes arising from the two recording conditions. The study has potential applications in healthcare.
pptx file
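A sketch of fitting a two-term Gaussian model with SciPy on a synthetic pulse period; the parameter values are invented for illustration, and the six fitted parameters are the features that would be compared across groups.

    import numpy as np
    from scipy.optimize import curve_fit

    def two_term_gaussian(t, a1, m1, s1, a2, m2, s2):
        """Two-term Gaussian model of one pulse period: a main peak
        plus a secondary peak."""
        return (a1 * np.exp(-((t - m1) / s1) ** 2)
                + a2 * np.exp(-((t - m2) / s2) ** 2))

    # Synthetic stand-in for one normalized wrist-pulse period.
    t = np.linspace(0, 1, 200)
    pulse = two_term_gaussian(t, 1.0, 0.3, 0.08, 0.4, 0.65, 0.12)
    pulse += np.random.default_rng(0).normal(0, 0.01, t.size)  # measurement noise

    p0 = [1, 0.3, 0.1, 0.5, 0.6, 0.1]           # initial guess
    params, _ = curve_fit(two_term_gaussian, t, pulse, p0=p0)
    # params holds the six fitted features (amplitudes, centers, widths)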
Morphological Gradient Based Approach for Text Localization in Video/Scene Images
B H Shekar (Mangalore University, India); Smitha ML (KVG College of Engineering, Sullia, India)
In this work, we present an approach for detecting the text present in videos and scene images based on morphological gradient information. The system extracts gradient information using morphological operations and binarizes the result. The binarized image contains some non-text regions, which are removed by morphological opening so that small components with less than 4-pixel connectivity are eliminated, producing another binary image. Finally, we employ connected-component analysis and morphological dilation to determine the text regions and localize text blocks. Experimental results on publicly available standard datasets show that the proposed method accurately detects and localizes text of various sizes, fonts and colors in videos and scene images.
ppt file
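A hedged OpenCV sketch broadly following the abstract's pipeline (the file name and kernel sizes are placeholders):

    import cv2
    import numpy as np

    # Morphological gradient (dilation minus erosion) highlights the strong
    # local transitions characteristic of text strokes; thresholding and
    # opening then suppress small non-text components.
    gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)       # placeholder file
    kernel = np.ones((3, 3), np.uint8)
    grad = cv2.morphologyEx(gray, cv2.MORPH_GRADIENT, kernel)
    _, binary = cv2.threshold(grad, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)  # drop tiny blobs
    dilated = cv2.dilate(opened, np.ones((3, 15), np.uint8))   # merge characters
    contours, _ = cv2.findContours(dilated, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours]            # candidate text blocks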

S50: Pattern Recognition, Signal and Image Processing-IV

Room: 105 Block E First Floor
Chairs: Ajinkya S. Deshmukh (Uurmi System Pvt. Ltd., India), Karibasappa K G (BVB College of Engineering and Technology, India)
A Neural Network Approach to Edge Detection using Adaptive Neuro - Fuzzy Inference System
Shamama Anwar (Birla Institute of Technology, India); Sugandh Raj (BIT, Mesra, India)
This paper highlights the importance of edge detection in action recognition and presents an edge detection method based on Artificial Neural Network. To implement this concept the Adaptive Neuro-Fuzzy Inference System (ANFIS) has been used. The ANFIS is first designed, trained and checked for average error tolerance. The system is then tested with a few sample images whose results are discussed at the end. A comparison between the traditional edge detectors and the ANFIS method is also provided.
ppt file
A Fused Feature Approach on Content Based Image Retrieval Based on Fuzzy Rule-Set
Nikita Raina, Neeshu Chaudhary, Roshi Choudhary and Rashmi Chauhan (Graphic Era University, India); R H Goudar (Visvesvaraya Technological University, Belagavi, India)
Research in content-based image retrieval today is a lively discipline, expanding in breadth. Discovering new and innovative ways of locating a desired image in an expanding collection has been a major area of interest for many professional fields, leading to the increasing use of Content Based Image Retrieval (CBIR), which retrieves images based on automatically derived low-level features such as color, shape and texture. CBIR draws many of its methods from image processing and computer vision and can be regarded as a subset of that field. Using this technique, the entire database can be searched to find the most closely matching image. In this paper, the color and texture features of an image are used to retrieve all similar images from the image database: the Block Color Histogram serves as the color feature, and the Gray-Level Co-occurrence Matrix (GLCM) and Gabor wavelets serve as texture features. The proposed approach is efficient because it uses prominent and distinct image features for effective retrieval, applying fuzzy heuristics in keeping with the human visual system.
pptx file
A Hybrid Approach for Color Based Image Edge Detection
Chinu Jethi (Guru Nanak Dev University, Amritsar); Amit Chhabra (Guru Nanak Dev University, Amritsar, India)
Edge detection is an elementary step in various image-processing applications. The main problems with existing edge-detection algorithms are poor edge localization, weak noise removal, inability to detect edges in images with complex backgrounds, and failure to properly detect color edges. In this paper, a sequential hybrid approach is proposed to overcome these limitations. The operations performed by an edge-detection algorithm can be computationally expensive and take considerable execution time, so this work also presents a hybrid color-based edge-detection technique that uses data parallelism. The sequential and parallel versions are compared using different parallel metrics, and the experimental results show that the parallel strategy achieves a performance gain of 68% over the sequential approach.
pptx file
Regression Tree Algorithm for Classification of Fused Multispectral and Panchromatic Image
Shingare Pratibha (Pune University & College of Engineering Pune, India); Priya Hemane (College of Engineering Pune, India); Duhita Dandekar (College of Engineering, Pune, India)
In this paper, a satellite image is classified to detect the vegetation, water, soil, built-up area, etc. present in the captured region, using a regression tree algorithm. The regression tree selects the threshold that yields the least mean square error for each class; the thresholds are applied to NDVI, NDWI, SAVI and NDBI to detect vegetation, water, soil and built-up area, respectively. Landsat 7 ETM+ multispectral images are used for classification, but they have low spatial resolution, so they are fused with a high-resolution panchromatic image. Image fusion is performed using the IHS transform, Brovey transform, PCA method, HPF method and wavelet transform, and the fused image is given to the regression tree for classification. With this method, the various land-cover areas can be detected more effectively than from the original satellite image. The classified image is compared with reference data to check its accuracy; the image fused with the HPF method and classified with the regression tree gives the most effective results.
ppt file
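As an example of the band indices involved, NDVI and a threshold-based vegetation mask in Python (the 0.3 threshold is a typical illustrative value; the paper learns thresholds with a regression tree):

    import numpy as np

    def ndvi(nir, red):
        """Normalized Difference Vegetation Index from NIR and red bands;
        NDWI, SAVI and NDBI follow the same band-ratio pattern with
        different band pairs."""
        nir, red = nir.astype(float), red.astype(float)
        return (nir - red) / (nir + red + 1e-9)

    # toy 2x2 reflectance patches
    nir = np.array([[0.5, 0.6], [0.2, 0.1]])
    red = np.array([[0.1, 0.1], [0.2, 0.3]])
    vegetation_mask = ndvi(nir, red) > 0.3   # illustrative threshold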
Pitch Contour Modelling and Modification for Expressive Marathi Speech Synthesis
Rohit Sanjay Deo (University of Pune & S. K. N. College of Engineering, Pune, India); Pallavi Deshpande (BVDU College of Engineering, Pune, India)
In this paper, we measure and analyze features of the speech signal, such as fundamental frequency, jitter and shimmer, and build their statistical models for Marathi; these models can then be used to modify the prosody of neutral speech. Jitter and shimmer measure cycle-to-cycle variations of fundamental frequency and amplitude, respectively; they characterize emotion and vary in value as emotion varies. The emotion or target model considered here is the interrogative form. A pitch target model is developed to model and modify the prosody of Marathi words. The study compares the existing pitch contour of words whose prosody is to be modified with the target pitch contour and analyzes them statistically. Finally, Gaussian normalization is employed to modify the prosody using the analyzed data. The results of subjective experiments satisfy native listeners.
Irregular Pixel Imaging
Sherin Sugathan (Siemens Healthcare Pvt. Ltd., India); Alex Pappachen James (Nazarbayev University, Kazakhstan)
Pixels form the basic building block of a digital image, and the shape of a pixel plays a significant role in deciding the accuracy and precision of the image. In the conventional square-pixel representation, a decrease in resolution results in a loss of image quality. This paper puts forward methods to improve image quality by using variable-shaped pixels: low-resolution images can be brought to better viewing accuracy by using irregularly shaped pixels instead of traditional square pixels. Filtering techniques are applied along with the irregular-pixel approach so that noise is reduced and the overall quality of the image is enhanced.
pdf file
Statistical Analysis of Image Processing Techniques for Object Counting
Sandeep Konam (Rajiv Gandhi University of Knowledge Technologies, R. K. Valley, India); Nageswara Rao Narni (Rajiv Gandhi University of Knowledge Technologies, India)
Automation of object counting in digital images has received significant attention in the last 20 years, with objects of interest ranging from cells, bacteria, trees, fruits, pollen and insects to people. These applications highlight the importance of shape identification and object counting. We developed an algorithm and methodology for detecting mathematically well-defined shapes and calculated the probability of shapes crossing equally spaced lines. Simulations for detecting and counting regular shapes such as lines and circles were performed in a random environment, and the simulation results were compared with empirical probability calculations. The results are promising, converging to the empirical calculations as the number of shapes increases.
pdf file
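The kind of line-crossing probability involved can be checked with a small Monte Carlo simulation; for a circle of diameter d placed at random among parallel lines spaced s apart (with d <= s), the exact crossing probability is d/s, which the simulation converges to.

    import numpy as np

    def crossing_probability(n_trials=100_000, spacing=1.0, diameter=0.5, seed=0):
        """Monte Carlo estimate of the probability that a randomly placed
        circle crosses one of a family of equally spaced parallel lines."""
        rng = np.random.default_rng(seed)
        centers = rng.uniform(0, spacing, n_trials)  # position within one strip
        r = diameter / 2
        crossings = (centers < r) | (centers > spacing - r)
        return crossings.mean()

    print(crossing_probability())  # ~0.5 = diameter / spacing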
An Approach of Efficient and Resistive Digital Watermarking Using SVD
Shaifali Bhatnagar (Mtech JUET, India); Shishir Kumar (Jaypee University of Engineering and Technology, India); Ashish Gupta (Jaypee University of Engineering and Technology, Guna, India)
Validating images taken by digital cameras has become a great concern as digital photography gains popularity. Because digital content can be easily duplicated and disseminated without the owner's consent, publishers, artists and photographers are unwilling to distribute pictures over the Internet. To protect confidential images, we propose an efficient digital watermarking technique. In the proposed technique, the image is first converted from the RGB to the YCbCr domain, and the watermark is embedded in the Y component using the discrete wavelet transform and Singular Value Decomposition (SVD). The results show that our algorithm is resistant to various attacks.
pptx file
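A hedged sketch of a typical DWT+SVD embedding step on the luminance channel; the wavelet, the embedding strength alpha and the rule for combining singular values are assumptions for illustration, not the authors' exact design.

    import numpy as np
    import pywt

    def embed(y_channel, watermark, alpha=0.05):
        """One-level DWT on the Y channel, SVD of the approximation band,
        watermark singular values added to the host's (scaled by the
        assumed strength alpha), then inverse transforms."""
        ll, (lh, hl, hh) = pywt.dwt2(y_channel.astype(float), "haar")
        u, s, vt = np.linalg.svd(ll, full_matrices=False)
        _, sw, _ = np.linalg.svd(watermark.astype(float), full_matrices=False)
        s_marked = s + alpha * sw[: s.size]
        ll_marked = u @ np.diag(s_marked) @ vt
        return pywt.idwt2((ll_marked, (lh, hl, hh)), "haar")

    host = np.random.default_rng(0).integers(0, 256, (128, 128))  # toy Y channel
    mark = np.random.default_rng(1).integers(0, 2, (64, 64)) * 255
    watermarked = embed(host, mark)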
Virtual Fault Simulation for Diagnosis of Shaft Misalignment of Rotating Machine
Dipti Prakash Behera (IIT Kharagpur & Indian Institute of Technology, Khragpur, India); Rashmi Behera (JPA IIT Kharagpur, India); Vallayil N.A Naikan (IIT Kharagpur, India)
This paper presents an innovative and efficient way of e-learning, e-experimenting and e-assessment for rotating machinery fault simulation. A model-based experiment is proposed for detecting and diagnosing the misalignment faults generally observed in rotating machinery. Coupling misalignment is a condition in which the shafts of the driver and driven machines are not on the same centre line; non-coaxial misalignment may be parallel, angular, or both, in either the horizontal or the vertical direction. Misalignment is temperature dependent: all materials expand as temperature increases, and metal is no exception. Motors warm up by several degrees, and the driven machine may warm up or cool down from ambient temperature depending on the fluid it handles. The experiment presented here explains how parallel misalignment in a system can be monitored through vibration spectral analysis, and also how the other two types of misalignment are detected.
ppt file
Shape Representation and Classification Through Height Functions and Local Binary Pattern - A Decision Level Fusion Approach
B H Shekar (Mangalore University, India); Bharathi Pilar (University College Mangalore)
In this paper, we propose a combined classifier model based on height functions (HF) and the Local Binary Pattern (LBP) to classify shapes accurately. Height functions are insensitive to geometric transformations and nonlinear deformations, while LBP captures region information; we integrate the two techniques for accurate shape classification. Dynamic Programming (DP) for HF and the Earth Mover's Distance (EMD) metric for LBP are employed to obtain similarity values, which are fused to classify a given query shape by minimum similarity value. Experiments are conducted on the publicly available shape datasets MPEG-7, Kimia-99, Kimia-216, Myth and Tools-2D, and results are reported using the bull's-eye score and precision-recall metrics. A comparative study with well-known approaches demonstrates the retrieval accuracy of the proposed approach, which yields significant improvements over baseline shape-matching algorithms.
ppt file

S51: Mobile Computing and Wireless Communications-III

Room: 110 Block E First Floor
Chair: K. Mani Anandkumar (Anna University, Chennai, India)
Brownfield Design Approach Towards Minimization of Handoff Cost in Next Generation Wireless Cellular Networks with Dual- Homed RNCS
Bedadipta Bain and Madhubanti Maitra (Jadavpur University, India)
The next-generation cellular architectural hierarchy requires that a pre-fixed group of Node Bs be connected to a Radio Network Controller (RNC) and that a pre-designed set of such RNCs be assigned to a Serving GPRS Support Node (SGSN); the set of RNCs is also wired to one Mobile Switching Center (MSC). In a single-homed network, the operating cost comprises the cabling or link cost and the cost of supporting handoff whenever a mobile user (MU) visits a different RNC (horizontal/simple handoff) or a different SGSN or MSC (vertical/complex handoff). In contrast, during the deployment phase, provision can be made for a Node B to connect to any of the RNCs, and for any RNC to be assigned to any of the SGSNs/MSCs if needed. This concept of multi-homing allows nodes to be assigned adaptively to switches, particularly under constrained situations. In this work, we present a framework for multi-homing of cells to switches and present evolutionary optimization techniques for minimizing the total cost of resource management. We propose a novel meta-heuristic approach that evolves into a complete state-space-based search tool, Best Contributors Search (BCS). Computational results indicate that the proposed BCS algorithm outperforms other well-known heuristics such as the matrix-based technique, Optimal Dual Home (ODH), Tabu Search (TS) and Ant Colony Optimization (ACO) based methods cited so far.
pptx file
Reconfigurable Concurrent Dual-Band Low Noise Amplifier for Noninvasive Vital Sign Detection Applications
Amarjit Kumar (Indian Institute of Technology Roorkee, Roorkee, India); Nagendra Prasad Pathak (Indian Institute of Technology, Roorkee, India)
A concurrent dual-band reconfigurable low noise amplifier (LNA) for noninvasive detection of human vital signs is reported in this paper. The reconfigurable output matching network used in this LNA (PHEMT ATF-36163) consists of two shunt stubs terminated by varactor diodes (SMV1232-079LF), while the matching network at the input side of the LNA has been designed for fixed dual-band operation. The measured characteristics of the fabricated prototype show that the gain of the LNA varies from 3 dB to 12.2 dB in the 2.3-2.5 GHz band and from 9.5 dB to 12.9 dB in the 4.2-4.6 GHz band as the varactor voltage is varied from 0 to 10 V. The simulated noise figure of the amplifier is approximately 0.5 dB and 2.5 dB at the centre frequencies of the corresponding bands.
pdf file
Performance Analysis of Full Duplex Relaying in Multicell Environment
Gaurav Sharma (National Institute of Technology, India); Prabhat Kumar Sharma (Visvesvaraya National Institute of Technology, India); Parul Garg (Netaji Subhas Institute of Technology, New Delhi, India)
This paper investigates the performance of a full duplex relaying (FDR) cooperative system in a multicell environment, where FDR is prone to co-channel interferers due to higher frequency reuse. Owing to the possibility of imperfect mitigation at the relay, we model the echo-interference channel between the transmitting and receiving antennas of the relay as a Nakagami-m fading channel. For the decode and forward (DF) strategy at the relay, we analyze the error performance and derive an exact closed-form expression for the average bit error rate (BER). Further, an exact closed-form expression is obtained for the outage probability of the system.
pdf file
Packet Size Optimization in Wireless Sensor Network Using Cross-Layer Design Approach
Shaktisinh R Chudasama (GCET, Gujarat Technological University); Samir D. Trapasiya (GCET, Gujarat Technological University, India)
In this paper, our aim is to obtain the optimum data packet size for wireless communication. Because each source has limited energy, we try to reduce energy consumption through packet size optimization. Variable packet sizes raise several issues, such as hardware limitations, additional overhead and resource management cost, so we use fixed packet size optimization. In the context of energy-constrained sensor nodes, the energy consumed by starting transients can be quite large: the speed-up and start-up energies exceed the energy consumed by data aggregation, and the more start-ups are required to send data from sender to receiver, the more total energy is needed. Hence, packet size affects energy consumption. Using binary BCH codes, it is possible to significantly improve the energy efficiency. When ARQ and FEC are compared, ARQ outperforms the FEC codes for higher values of the signal to noise ratio (SNR) threshold, but for lower SNR threshold values the energy consumption can be decreased using FEC codes. Longer packets consume less energy per useful bit provided that adequate link quality is guaranteed; under insufficient link quality, a small packet size must be chosen to increase reliability. In this paper, routing is based on several parameters, namely the signal to noise ratio, the distance between source and sink, and the leftover energy level of the nodes; considering leftover energy is a step towards increasing the network lifetime. Finally, our goal is to optimize the packet length for decreased energy consumption per bit and increased network lifetime. The packet optimum is determined from an objective function combining packet throughput and an energy function.
pptx file
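The trade-off the abstract describes can be made concrete with a toy model (below, in Python; all energy constants are illustrative, and the BCH/ARQ machinery and routing parameters are omitted): start-up energy is amortised over more bits as the packet grows, while the probability of delivering the packet intact shrinks, so the energy per useful bit has a link-quality-dependent minimum.

import numpy as np

def energy_per_useful_bit(payload_bits, header_bits, e_bit, e_startup, ber):
    """Toy model: transmit energy per bit plus amortised start-up energy,
    divided by the expected number of successfully delivered payload bits.
    All parameters are illustrative assumptions."""
    L = payload_bits + header_bits
    p_success = (1.0 - ber) ** L          # packet delivered intact
    tx_energy = L * e_bit + e_startup     # energy spent per attempt
    return tx_energy / (payload_bits * p_success)

# Sweep payload sizes to find the optimum for a given link quality.
payloads = np.arange(64, 4096, 64)
costs = [energy_per_useful_bit(n, 64, 1e-9, 1e-6, 1e-4) for n in payloads]
best = payloads[int(np.argmin(costs))]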
Wideband Printed Dipole Antenna with Embedded Loops and Coupling Patches for Digital TV Signal Reception
Deepak C. Karia, Madhuri Bhujbal and Aditya Desai (University of Mumbai, India)
A planar, coaxially fed wideband printed dipole antenna is designed to satisfy DTV signal reception needs. The antenna consists of two asymmetric loops separated by a step-shaped feed gap. These loops contain two inner loops which are connected to the external loops through rectangular coupling patches. Two resonant modes are excited by the length of the antenna and the length of the step feed gap, and the operating bandwidth is enhanced by the addition of the inner loop and coupling patch. Various parameters of the proposed antenna, such as bandwidth, return loss, antenna gain and radiation efficiency, are simulated and discussed. The simulated radiation pattern is omnidirectional, matching the conventional dipole antenna. The proposed antenna has a gain varying from -8 to -10 dB and an operating bandwidth of 470-925 MHz with return loss better than 7.35 dB. The proposed antenna is therefore well suited to digital television signal receiving applications.
ppt file
Full Composite Fractal Antenna with Dual Band Used for Wireless Applications
Ruchika Choudhary (Govt. Engineering College Ajmer, India); Sanjeev Yadav (Govt. Women Engineering College Ajmer, India); Pushpanjali Jain (Govt. Engineering College Ajmer, India); Mahendra Mohan Sharma (Malaviya National Institute of Technology & Principal Govt Engineering College Ajmer INDIA, India)
In this paper, a full composite fractal antenna, based on a modified Sierpinski fractal geometry with a 50-Ω microstrip line, is presented for dual-band wireless applications. The return loss and radiation characteristics of the design indicate that the Sierpinski fractal antenna with a circular slot offers dual-band coverage from 3.8 GHz to 4.4 GHz and from 4.8 GHz to 5.4 GHz, covering 5.2 GHz for WLAN as well as X-band wireless applications in the 3.8-14 GHz frequency range. The proposed antenna is fed by a 50-Ω microstrip line and designed on a low cost FR4 substrate of dimensions 96 (L) × 72 (W) × 1.5 (t) mm³ with εr = 4.9 and tan δ = 0.025. The antenna shows acceptable gain with nearly omni-directional radiation patterns in the frequency band.
pptx file
An Improved Least Square Channel Estimation Technique for OFDM Systems in Sparse Underwater Acoustic Channel
Sai Priyanjali K and Babu A v (National Institute of Technology Calicut, India)
Underwater acoustic (UWA) communication channels are known to be sparse in nature because of their very large delay spread and limited number of significant multipath components. In this paper, we propose an improved least square (LS) based technique for sparse channel estimation applicable to orthogonal frequency division multiplexing (OFDM) systems for UWA communications. Through computer simulations, we demonstrate that the proposed improved LS-based sparse channel estimation algorithm outperforms the conventional LS estimation technique by selecting significant channel taps adaptively, based on a predetermined threshold. The mean square error (MSE) and bit error rate (BER) performance of the proposed improved LS estimator are observed to be significantly better than those of the conventional LS estimator. Specifically, the results reveal an improvement of approximately 4 dB in SNR at a BER of 10^-2 for the proposed improved LS estimation technique as compared to the conventional LS estimator.
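A minimal sketch of the thresholding idea follows, assuming pilot observations Y over known pilot symbols X; the exact threshold rule used in the paper is not reproduced here, so the fraction-of-strongest-tap criterion is an assumption.

import numpy as np

def thresholded_ls_estimate(Y, X, threshold_ratio=0.1):
    """Conventional LS estimate on pilot subcarriers (Y/X), converted to
    the time domain; taps whose magnitude falls below a fraction of the
    strongest tap are treated as noise and zeroed, exploiting the
    sparsity of the underwater acoustic channel."""
    H_ls = Y / X                          # frequency-domain LS estimate
    h = np.fft.ifft(H_ls)                 # time-domain channel taps
    thr = threshold_ratio * np.abs(h).max()
    h_sparse = np.where(np.abs(h) >= thr, h, 0.0)
    return np.fft.fft(h_sparse)           # refined channel estimate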
A Novel MGF-Based Approach to Analyze SIM-based Relayed FSO Systems
Mona Aggarwal (Northcap University, Gurgaon, India); Parul Garg (Netaji Subhas Institute of Technology, New Delhi, India); Parul Puri (Jaypee Institute of Information Technology, India)
In this paper, we analyze a subcarrier intensity modulation (SIM) based relayed free space optical (FSO) system assuming independent but not necessarily identically distributed (i.n.i.d.) gamma-gamma turbulence channels. The system employs a channel-state-information-assisted (CSI-assisted) amplify and forward (AF) relaying protocol. We derive the moment generating function (MGF) of the end-to-end signal to noise ratio (SNR) of the system and, using this MGF, derive accurate closed-form expressions for the capacity under different adaptive transmission schemes such as optimal rate adaptation (ORA) and channel inversion with fixed rate (CIFR). Further, we derive a closed-form expression for the average symbol error rate (SER) of M-ary phase shift keying (MPSK) modulation schemes in terms of Meijer's G-function.
Improving Handoff Performance of Micro-Mobility Protocol in NEMO and a Comparison with SINEMO
Palash Kundu (Jadavpur University, India)
The Network Mobility (NEMO) basic support protocol (NEMO BSP) is standardized by the IETF to provide seamless and uninterrupted services to mobile hosts in NEMO. Seamless IP-diversity-based NEMO (SINEMO) outperforms NEMO BSP in terms of handoff latency and related packet loss by utilizing the advanced loss recovery mechanism and multi-homing feature of the stream control transmission protocol (SCTP). To support micro-mobility in NEMO, HMIPv6 can be used with NEMO BSP, which together reduce handoff latency and related packet loss significantly. For further improvement of the handoff performance of NEMO BSP, FHMIPv6, the combined extension of FMIPv6 and HMIPv6, could be utilized in NEMO. In this paper, the handoff performance of NEMO BSP with the micro-mobility support of HMIPv6 (M-NEMO) and FHMIPv6 (FM-NEMO) is analytically compared with SINEMO in terms of handoff latency, handoff blocking probability and packet loss during handoff. The numerical results show that FM-NEMO outperforms both M-NEMO and SINEMO on these metrics.
pdf file, ppt file
Performance Analysis of IR-UWB TR Receiver Using Cooperative Dual Hop AF Strategy
Ranjay Hazra (IIT Roorkee, India); Anshul Tyagi (Indian Institute of Technology Roorkee, India)
Non-coherent receivers are attractive for IR-UWB systems and are preferred over their coherent counterparts because of their implementation simplicity and because they do not require channel state information (CSI). UWB systems have a low power spectral density (PSD), which prevents them from achieving wide coverage and high data rates despite providing adequate system performance. Hence, cooperative diversity is introduced, which helps expand the coverage area and improve the quality of service (QoS) and BER performance. This paper analytically evaluates the BER performance of an IR-UWB TR receiver using a dual-hop cooperative strategy with amplify and forward (AF) relaying and compares it with simulation results. Simulations clearly show that as the number of frames increases from 1 to 2, the BER performance degrades.
pptx file

S52: Mobile Computing and Wireless Communications-IV

Room: 110 Block E First Floor
Chair: Manju A (GCET Greater Noida, India)
Quad-Band Unequal Power Divider with Coupled Line Section Using Stepped-Impedance Transformers
Sandeep Kumar (University College of Engineering RTU kota Rajasthan, India); Mithilesh Kumar (Rajasthan Technical University, Kota, Rajasthan-INDIA, India)
A novel quad-band unequal power divider with a coupled-line section using stepped-impedance transformers is presented in this paper. The coupled-line section is used to achieve the necessary impedance transformation at four selected frequencies; the topology differs from the conventional Wilkinson power divider in using coupled lines instead of transmission lines. The structure is designed on an RT5880 substrate with a dielectric constant of 3.2 and simulated using CST Microwave Studio. The simulated return loss is greater than 10 dB at the center frequencies of 9.8 GHz, 11.8 GHz, 19 GHz and 20.5 GHz. The average insertion loss is good, and the group delays within the four bands are 0.6 ns at 9.8 GHz, 0.54 ns at 11.8 GHz, 0.47 ns at 19 GHz and 0.38 ns at 20.5 GHz. In addition, the isolation between the output ports is greater than 10 dB over all four bands.
pptx file
Re-configurable Optimized Area CTC Codec for Wireless Applications
Mansi Rastogi (Gujarat Technical University, India); Rajesh Mehra (National Instititute for Technical Teachers' Training and Research, India)
Wireless communication is undergoing astounding day-to-day growth, and its quality must be enhanced at reduced cost. A coding system is required which can provide high data rates with error-free communication and reduced area utilization. Among the various error correcting codes, turbo codes, known as Parallel Concatenated Convolutional Codes (PCCC), provide performance improvement with miniaturization of the communication system. In this paper, an area-efficient convolutional turbo codec of constraint length 3 is proposed. To reduce area consumption, the proposed turbo decoder uses a single SISO (Soft Input Soft Output) decoder architecture. The SISO decoder uses the two-step SOVA (Soft Output Viterbi Algorithm) for decoding. The proposed codec design has been synthesized on a Xilinx Virtex-4 (xc4vlx25-ff676-10) FPGA, and the performance of the proposed turbo codec is compared across FPGAs in terms of the number of slices, slice flip-flops and LUTs. The synthesis results show a 7% improvement in the number of slices and slice flip-flops utilized by the proposed encoder and approximately a 3% improvement in the number of slice flip-flops utilized by the proposed decoder. Simulink models for the proposed CTC encoder and decoder are generated accordingly.
rar file
Design of Orthogonal Waveform for MIMO Radar Using Modified Ant Colony Optimization Algorithm
Roja Reddy B (R V College of Engineering, Bangalore); Uttarakumari M (R V College of Engineering, Bangalore, India)
A Modified Ant Colony Optimization (M_ACO) algorithm is proposed to design orthogonal polyphase waveforms for MIMO radar with good autocorrelation and cross-correlation properties in less computational time. M_ACO combines Ant Colony Optimization (ACO) with the Hamming Scan algorithm and takes a graph-based approach to find the best optimized sequence. Simulation of the orthogonally optimized polyphase code set sequences generated using M_ACO shows better results than other optimization algorithms known in the literature.
pptx file
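The objective such a search minimises can be sketched as follows (Python; the equal weighting of autocorrelation sidelobes and cross-correlations, and the use of peak rather than integrated sidelobe level, are assumptions).

import numpy as np

def correlation_cost(codes):
    """Peak autocorrelation sidelobe plus peak cross-correlation for a
    set of polyphase codes, e.g. codes = np.exp(1j * phases) with
    phases of shape (num_codes, code_length). numpy.correlate
    conjugates its second argument, so these are true correlations."""
    m, n = codes.shape
    cost = 0.0
    for i in range(m):
        ac = np.correlate(codes[i], codes[i], mode='full')
        cost += np.abs(np.delete(ac, n - 1)).max()   # drop zero-lag peak
        for j in range(i + 1, m):
            cc = np.correlate(codes[i], codes[j], mode='full')
            cost += np.abs(cc).max()
    return cost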
A PROMETHEE Approach for Network Selection in Heterogeneous Wireless Environment
Chava Anupama (JNTUK, India)
In a heterogeneous wireless network environment, the always-best-connected concept requires the selection of an optimal access network, since selecting a non-optimal network can result in undesirable effects such as higher costs and poor service experience. In this paper, an attempt is made to solve the network selection problem using two multiple attribute decision making (MADM) methods, namely the Preference Ranking Organization Method for Enrichment Evaluations (PROMETHEE) and the Analytic Hierarchy Process (AHP). The AHP method is used to determine the weights of the criteria, and the PROMETHEE method is used to rank the networks. Four traffic classes, namely conversational, streaming, interactive and background, are included to illustrate the method. The performance of PROMETHEE is validated by comparison with AHP in terms of consistency, ranking abnormality, robustness and accuracy. Simulation results show that PROMETHEE is slightly preferable to AHP for network selection.
ppt file
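A compact PROMETHEE II sketch is shown below (Python). The "usual" 0/1 preference function is an assumption, as the paper does not state its preference functions; the weights would come from AHP as described above. Networks are then ranked by decreasing net flow.

import numpy as np

def promethee_ii(scores, weights, maximize):
    """Minimal PROMETHEE II with the 'usual' (0/1) preference function.
    scores: alternatives x criteria array of network attributes.
    weights: criteria weight array, e.g. from AHP as in the paper.
    maximize: per-criterion flags, True when larger is better.
    Returns net outranking flows; higher flow = better network."""
    s = np.where(maximize, scores, -scores)        # orient all criteria
    m = len(s)
    pi = np.zeros((m, m))                          # aggregated preference
    for a in range(m):
        for b in range(m):
            if a != b:
                pi[a, b] = weights @ (s[a] > s[b])
    phi_plus = pi.sum(axis=1) / (m - 1)
    phi_minus = pi.sum(axis=0) / (m - 1)
    return phi_plus - phi_minus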
Joint Channel Estimation and Data Detection for SFBC MIMO OFDM Wireless Communication System
Gajanan Patil (Army Institute of Technology & Sinhgad College of Engineering, India); Vishwanath Kokate (Sinhgad College of Engineering, Pune, India)
This paper presents joint channel estimation and data detection (JCEDD) for a multiple input multiple output (MIMO) orthogonal frequency division multiplexing (OFDM) system. An initial estimate of the channel is obtained using semi-blind channel estimation (SBCE), specifically the whitening rotation (WR) based orthogonal pilot maximum likelihood (OPML) method. The estimate is further refined by extracting information from the received data symbols. The performance of the proposed estimator is studied under various channel models. The simulation study shows that this approach clearly outperforms the TBCE and OPML SBCE methods, but at the cost of higher computational complexity.
ppt file
Simulated Annealing Based Solution for Limited Spectrum Availability in Composite Wireless Network
Mainak Sengupta (Jadavpur University, India); Ayan Paul (BSNL, India); Madhubanti Maitra (Jadavpur University, India)
The requirement of multiple services by customers has compelled wireless service providers (WSPs) to maintain multiple radio access networks (RANs) of different radio access technologies (RATs). Further, the dynamic spectrum allocation (DSA) approach offers WSPs a great opportunity to utilize their available spectrum more efficiently. As the growth in demand for wireless services is expected to continue unabated, a WSP may also face a scenario in which the spectrum demands from the RANs exceed the spectrum available to the WSP. In this work, we address this limited spectrum availability scenario in a composite wireless network (CWN), considering realistic constraints such as the different propagation characteristics of channels in different spectrum regions and the discrete channelizations of the RANs. The problem is posed as a bankruptcy game and the Shapley value solution of the game is computed. For fairness of spectrum allocation, maximizing equality of distribution (MED) is taken as the WSP's objective. Subsequently, we propose a simulated annealing (SA) based spectrum allocation solution that maximizes this objective. Our simulation results show that the SA-based solution outperforms existing solutions such as the Shapley value and max-min fairness by a wide margin as far as the MED objective is concerned.
ppt file, pptx file
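The SA machinery itself is generic; a minimal maximising loop of the kind the paper applies is sketched below (the MED objective and the spectrum-reallocation move set are problem-specific and only assumed here).

import math, random

def simulated_annealing(init, neighbor, objective,
                        t0=1.0, alpha=0.95, iters=5000):
    """Generic SA loop maximising `objective`. `neighbor` proposes a
    perturbed allocation; worse moves are accepted with probability
    exp(delta/t), allowing escapes from local optima."""
    state, best = init, init
    t = t0
    for _ in range(iters):
        cand = neighbor(state)
        delta = objective(cand) - objective(state)
        if delta >= 0 or random.random() < math.exp(delta / t):
            state = cand
            if objective(state) > objective(best):
                best = state
        t *= alpha                      # geometric cooling schedule
    return best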
Performance of New Dynamic Benefit-Weighted Scheduling Scheme in DiffServ Networks
Reema Sharma (VTU, The Oxford College Of Engineering, India); Navin Kumar (Amrita University & School of Engineering, India); Srinivas Talabattula (Indian Institute of Science, India)
In this paper, we design a new dynamic packet scheduling scheme suitable for differentiated services (DiffServ) networks. The proposed dynamic benefit-weighted scheduling (DBWS) scheme uses a dynamic weight computation loosely based on the weighted round robin (WRR) policy: it predicts the weight required by the expedited forwarding (EF) service for the current time slot t based on (i) the weight allocated to it at time t-1 and (ii) the average increase in the queue length of the EF buffer. This prediction provides smooth bandwidth allocation to all services by avoiding overbooking of resources for the EF service while still providing it guaranteed service. The performance is analyzed for various scenarios under high, medium and low traffic conditions. The results show that packet loss, end-to-end delay and jitter are all reduced, thereby meeting the quality of service (QoS) requirements of the network.
pptx file
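The prediction rule can be sketched as below (Python); the proportionality constant, the clamping bounds and the exact form of the queue-growth term are assumptions, since the abstract gives the two criteria but not the constants.

def predict_ef_weight(prev_weight, queue_len_history, scale=0.01,
                      w_min=0.1, w_max=0.8):
    """DBWS-style update: the EF weight for slot t is the weight used
    at t-1 plus a term proportional to the average growth of the EF
    queue over recent slots."""
    growth = [b - a for a, b in zip(queue_len_history, queue_len_history[1:])]
    avg_growth = sum(growth) / max(len(growth), 1)
    w = prev_weight + scale * avg_growth
    return min(max(w, w_min), w_max)    # keep other classes serviceable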
On the Construction of Frame Length Specific LDPC Codes for Next Generation Low Power Communication
Ashish Goswami (National Institute of Technology, Hamirpur, India); Rakesh Sharma (Indian Institute of Technology Roorkee, India)
In this paper, a new method is proposed for constructing frame-length-specific LDPC codes which consume low power in the encoding and decoding processes. A seed matrix is used and extended to form parity check matrices with frame lengths divisible by 32 and 64; the method can also be used to construct LDPC codes with frame lengths divisible by even higher powers of 2. The memory requirements of the encoding and decoding processes for the proposed codes are discussed. For a typical configuration, the memory required to store the parity check matrix is 1.04% of that of traditional encoding and 33.33% of that of traditional decoding, so the codes consume low power in both processes. The different configurations of the proposed codes are simulated, and the codes are found to give desirable error performance over the AWGN channel.
pdf file
Data Authentication and Integrity Verification Techniques for Trusted/Untrusted Cloud Servers
Satheesh K S V A Kavuri (Dhanekula Institute Of Engineering & Technology, India); Gangadhara Rao Kancherla and Basaveswara Rao Bobba (Acharya Nagarjuna University, India)
Third party cloud storage and access permissions play a vital role in security analysis and user access control. User access control and data verification are important technologies for providing security and controlling unauthorized users, yet third party cloud servers are often built without proper security measures and user control mechanisms. A user can access data such as documents, media or other types of files using a third-party-generated authentication key and secret information. Traditional cloud security mechanisms are independent of data integrity verification for authorized data users, and third party cloud servers are vulnerable to different types of message integrity attacks. Traditional message integrity algorithms depend on the file size, hash size and security parameters. To overcome these security issues in commercial cloud servers, an improved hash-based message integrity verification process is proposed in this paper and tested on an attribute-based encryption process. The proposed cloud-based hash algorithm generates a 512-bit hash value for each file on the third party cloud servers, and only authorized users can access the required files using their identity along with the message integrity value. Experimental results show that the proposed cloud-based hash algorithm outperforms existing models as far as file size, time and resistance to attacks are concerned.
ppt file
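The verification flow can be illustrated with a standard 512-bit digest (Python's hashlib below; SHA-512 merely stands in for the paper's own hash algorithm, which is not reproduced here).

import hashlib, hmac

def file_digest(path):
    """512-bit digest of a stored file, computed in streaming chunks so
    arbitrarily large files do not need to fit in memory."""
    h = hashlib.sha512()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(8192), b''):
            h.update(chunk)
    return h.hexdigest()

def verify(path, stored_digest):
    # Constant-time comparison avoids leaking match length to an attacker.
    return hmac.compare_digest(file_digest(path), stored_digest)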

S53: Computer Architecture and VLSI-III

Room: 007-A Block E Ground Floor
Chair: Saurabh Gautam (Cadence Design Systems, India)
An Improved Distributed Iterative Water-filling Spectrum Management Algorithm for Near-Far Problem in VDSL Systems
Sunil Sharma and Om Parkash Sahu (NIT Kurukshetra, India)
In digital subscriber line (DSL) systems, crosstalk created by electromagnetic interference among twisted pairs degrades system performance. Very high bit rate DSL (VDSL) utilizes the higher bandwidth of the copper cable for data transmission, and in upstream transmission a 'near-far' problem occurs: the far-end crosstalk (FEXT) produced by the near-end user degrades the data rate achieved by the far-end user. Several dynamic spectrum management (DSM) algorithms have been proposed in the literature to remove this problem. In this paper, a new distributed DSM algorithm is proposed in which power is reduced only on those subcarriers of the near-end user that interfere with the far-end user. This power back-off strategy is implemented through power spectral density (PSD) masks on the interference-creating subcarriers of the near-end user. Simulation results show that the proposed low-complexity algorithm improves the data rate and approaches the rate of the highly complex optimal spectrum balancing (OSB) algorithm.
ppt file
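The per-user building block of such distributed DSM schemes is water-filling under a PSD mask; a bisection-based sketch follows (Python; the selective placement of masks on interfering subcarriers is the paper's contribution and is not modelled here).

import numpy as np

def waterfill(gains, noise, p_total, mask, tol=1e-9):
    """Water-filling over subcarriers with a per-tone PSD mask.
    gains/noise: per-tone channel gain and noise power arrays.
    mask: per-tone power cap. The water level is found by bisection."""
    inv_cnr = noise / gains                     # inverse channel-to-noise
    lo, hi = 0.0, inv_cnr.max() + p_total
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)                    # candidate water level
        p = np.clip(mu - inv_cnr, 0.0, mask)    # mask caps each tone
        if p.sum() > p_total:
            hi = mu
        else:
            lo = mu
    return np.clip(lo - inv_cnr, 0.0, mask)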
Design of Power Efficient SPI Interface
Dwaraka N Oruganti (K V G College of Engineering, India); Siva Yellampalli (VTU Extension Centre & UTL Technologies LTd, India)
This paper discusses the design of an SPI interface based on the specifications in the SPI Block Guide V03.06 by Motorola. The present design incorporates an additional power-down mode, the stop mode, for power optimization. The standard design was further modified by adding a power-reduction technique, clock gating. Clock gating is applied to the shift register, as it is simpler to synchronize the shift register with the rest of the design than any other module. Verilog is used for coding, and ISim (Xilinx) is used to verify the design's performance.
ppt file
Comparative Analysis of 8 X 8 Bit Vedic and Booth Multiplier
Sasha Garg (IIIT Delhi, India); Swati Garg and Vidhi Sachdeva (ITM University, India)
Speed and power consumption are among the most important parameters for judging the performance of a computational method. In this paper, we compare two algorithms for 8-bit multiplication, namely the Vedic multiplication algorithm and the Booth algorithm, bringing to the fore the differences in compilation speed and chip area consumption of the two methodologies. The programming language used is Verilog, and synthesis has been done on Xilinx 14.5.
pptx file
Directory Based Cache Coherence Modeller in Multiprocessors: Medium Insight
Harsh Arora (V. I. T University & Ex. Sr. Mgmt/Engineering -R&D Professional : Mentor Graphics Inc, Cadence Inc & Motorola Inc, India); Rijubrata Mukherjee, Abhijit Bej and Hillol Adak (V. I. T University, India)
In a multiprocessor environment, the cache coherency problem arises when there is data inconsistency between the private caches and the main memory, and a scalable cache coherence protocol is key to the design of efficient multiprocessor systems. The directory-based approach is used for large-scale distributed networks and is seen as a scalable alternative for CMP design, but as the number of on-chip cores increases, directory-based protocols do not scale beyond a certain number of cores. We investigate several conventional and NUCA-based directory protocols, such as the full-map directory protocol, the sparse directory protocol and the duplicate-tag-based directory protocol, and analyze some novel cache coherence protocols designed for many-core processors. Finally, we suggest a design for a scalable directory-based coherence protocol with optimal performance.
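For orientation, the full-map baseline the survey starts from can be modelled as one sharer set and a dirty bit per block (Python sketch below; real protocols add transient states, and the per-block storage growing with core count is exactly the scalability problem discussed above).

class FullMapDirectory:
    """Toy full-map directory: a sharer set and a dirty bit per block."""
    def __init__(self, n_blocks):
        self.sharers = [set() for _ in range(n_blocks)]
        self.dirty = [False] * n_blocks

    def read(self, block, core):
        writeback = self.dirty[block]    # a dirty owner must flush first
        self.dirty[block] = False
        self.sharers[block].add(core)
        return writeback

    def write(self, block, core):
        invalidate = self.sharers[block] - {core}   # cores to invalidate
        self.sharers[block] = {core}
        self.dirty[block] = True
        return invalidate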
Analysis of Power Distribution Network for Some Cryptocores
Moumita Chakraborty and Krishnendu Guha (University of Calcutta, India); Amlan Chakrabarti (University of Calcutta); Debasri Saha (University of Calcutta, India)
With the progress of VLSI technology, an increase in circuit density is inevitable: in present integrated circuits, electrical power is distributed to a very large number of on-chip components over a network of on-chip conductors. The systematic arrangement of this power distribution network (PDN) is commonly termed the power grid. The challenge of efficient PDN design escalates especially in embedded system hardware design, which has a very low power budget and needs high reliability. In many present-day computer-aided design (CAD) tools, PDN modeling and analysis are not performed due to the lack of proper libraries, and designers estimate the PDN on a case-by-case basis. Our research aims to create suitable models for PDN extraction so that it can be addressed by CAD tools. We consider cryptocores for PDN analysis, as they are complex and used in a wide range of applications. Our experimental results show 0.03% and 0.142% increases in power dissipation in DES and AES respectively after inclusion of the PDN circuitry. PDN analysis for custom application cores is not available in related research works, and hence our work can serve as a state-of-the-art benchmark for PDN analysis.
ppt file

S54: Data Management, Exploration and Mining-II

Room: 007-B Block E Ground Floor
Chair: Ganesh Deka (MIR Labs, India)
Differential Private Random Forest
Abhijit Patil (Manipal Institute Of Technology Manipal, India); Sanjay Singh (Manipal Institute of Technology, India)
Organizations, be they private or public, often collect personal information about the individuals who are their customers or clients. This personal information is private and sensitive and has to be secured from data mining algorithms that an adversary may apply to gain access to it. In this paper, we consider the problem of securing such private and sensitive information when used in a random forest classifier, in the framework of differential privacy, by incorporating the concept of differential privacy into the classical random forest algorithm. Experimental results show that quality functions such as information gain, the max operator and the Gini index give almost equal accuracy regardless of their sensitivity to the noise. Moreover, the accuracies of the classical random forest and the differentially private random forest are almost equal for different dataset sizes. The proposed algorithm works for datasets with categorical as well as continuous attributes.
pdf file
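One standard way to realise differential privacy in tree ensembles is the Laplace mechanism on leaf counts; the sketch below illustrates that mechanism (whether the paper perturbs counts, quality functions or both is not restated here, so treat this as one plausible instantiation).

import numpy as np

def private_counts(counts, epsilon):
    """Laplace mechanism on per-class leaf counts. Counting queries
    have sensitivity 1 (one individual changes one count by at most 1),
    so the noise scale is 1/epsilon per query."""
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon, size=len(counts))
    return np.maximum(counts + noise, 0.0)  # clamp negative noisy counts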
Serial Multimethod Combined Mining
Arti Shankar Deshpande (G.H.Raisoni College of Engineering, Nagpur)
Combined mining is an approach to combining various mining techniques to obtain more understandable and useful patterns from complex data. Different classical data mining techniques have their own advantages and disadvantages, so a single technique cannot be applied to all kinds of business data. Serial Multimethod Combined Mining (SMCM) is an approach in which different mining techniques are used to extract patterns such that the resulting patterns or rules are actionable; actionable patterns are descriptive and assist in finalizing business decisions. In SMCM, the different mining methods are applied in a predetermined sequence, and the resultant patterns of the previous method form part of the input for the next method. SMCM exploits the advantages of different classical mining techniques to generate combined patterns, but it needs domain knowledge of the business data for the selection of methods. SMCM is demonstrated on credit card data by combining clustering and association techniques, and experimental results are reported.
Dynamic Colocation Algorithm for Hadoop
Ganesh Babu (National Institute of Technology, Calicut, India); Shabeera T P and S D Madhu Kumar (National Institute of Technology Calicut, India)
Hadoop is a widely accepted platform for developing large-scale data intensive applications and is an open source implementation of Google's MapReduce framework. The current data placement policy of Hadoop distributes data among DataNodes randomly, for simplicity and load balance. This simple data placement works well for Hadoop applications that access data from a single file, but if an application needs data from different files simultaneously, performance normally degrades. Identifying related files and placing them on the same DataNode or on adjacent DataNodes reduces network overhead and query span. We propose a Dynamic Colocation Algorithm which decreases the average number of machines involved in processing a query by colocating datasets that are frequently accessed together, thereby reducing network overhead. Our technique checks the relations between datasets dynamically and rearranges the datasets according to those relations. Our experimental results show a significant reduction in the execution time of MapReduce programs after colocation.
pdf file
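The core of any such colocation scheme is measuring which datasets are accessed together; a minimal sketch follows (Python; the query-log format, the threshold and the actual placement step are assumptions).

from collections import Counter
from itertools import combinations

def coaccess_groups(query_log, threshold=3):
    """Count how often pairs of datasets are read by the same query and
    return pairs whose co-access count exceeds a threshold, so they can
    be placed on the same or adjacent DataNodes."""
    pair_counts = Counter()
    for datasets in query_log:               # each entry: set of files
        for a, b in combinations(sorted(datasets), 2):
            pair_counts[(a, b)] += 1
    return [pair for pair, n in pair_counts.items() if n >= threshold]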
A Density Based Clustering with Artificial Immunity Inspired Preprocessing
Swarna Kamal Paul (Tata Consultancy Services & Jadavpur University, India); Parama Bhaumik (Jadavpur University, India)
In this paper, we propose an algorithm which can identify clusters of varied shapes in a wide variety of input datasets with a high degree of accuracy in the presence of noise. The initial data processing module adopts a novel artificial immune system approach to reduce data redundancy while preserving the original data patterns. The clustering module pursues a density-based approach to identify clusters in the compressed dataset produced by the preprocessing module. We introduce several new concepts, such as selective antigenic binding, the Local Reachability Factor and the Global Reachability Factor, to effectively recognize clusters of varied shape, varied density and low inter-cluster separation at acceptable computational cost. We experimentally evaluated our algorithm on a wide variety of real and synthetic datasets and obtained a higher cluster success rate than DBSCAN on all datasets.
pptx file
Polarimetric Decomposition and Statistical Analysis of Chandrayaan-1 Mini-SAR Data for Study of Lunar Craters
Meghavi Prashnani (School Of Electronics DAVV, India); Ravi Shankar Chekuri (IIT Kharagpur, India)
Chandrayaan-1 is India's first scientific mission to the moon, launched with the prime objective of expanding scientific knowledge about the origin and evolution of the moon. The mission carried various remote sensing instruments into space, among them Mini-SAR, which operated for nine months with the prime goal of mapping the permanently dark areas near the lunar polar regions. Mini-SAR data have already been analyzed by researchers for identifying water-ice deposits; however, studies comparing the behavior of lunar craters near the polar regions with those near the equatorial regions are scarce, if not non-existent. In the present work, the authors compare the behavior of lunar craters near the polar and equatorial regions using various child parameters extracted from the Stokes parameters.
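The child parameters typically derived from the Stokes vector (S1..S4) in Mini-SAR studies include the degree of polarization, the circular polarization ratio (CPR) and the relative phase; the abstract does not list which were used, so the sketch below shows the usual candidates from the hybrid-polarimetry literature.

import numpy as np

def stokes_child_parameters(S1, S2, S3, S4):
    """Common child parameters derived from Stokes-vector images."""
    m = np.sqrt(S2**2 + S3**2 + S4**2) / S1   # degree of polarization
    cpr = (S1 - S4) / (S1 + S4)               # circular polarization ratio
    delta = np.degrees(np.arctan2(S4, S3))    # relative phase
    return m, cpr, delta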
Metadata Based Recommender Systems
Paritosh Mittal (Indraprastha Institute of Information Technology, India); Aishwarya Jain (IIIT-Delhi, India); Angshul Majumdar (Indraprastha Institute Of Information Technology-Delhi & University of British Columbia, India)
To build a recommendation system, an eCommerce portal gathers users' ratings of various items in order to determine their preferences regarding its merchandise. The portal also collects metadata when a user signs up and becomes part of the system, giving it access to information such as the user's age, gender, occupation, location, etc. To date, almost all prior studies have used this metadata only for alleviating the cold-start problem, not for improving the recommendations themselves. For the first time in this work, we propose a simple neighbourhood selection technique that gives importance to metadata groups in order to improve the recommendations.
pptx file
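The idea can be sketched as metadata-boosted neighbour selection (Python below); the cosine measure, the multiplicative boost and the single-attribute grouping are assumptions, not the paper's exact weighting.

import numpy as np

def metadata_neighbours(ratings, meta, user, k=10, boost=1.2):
    """Pick the k most similar users: cosine similarity on co-rated
    items, boosted when two users share a metadata group (age band,
    gender, occupation, ...)."""
    def cosine(u, v):
        mask = (u > 0) & (v > 0)              # co-rated items only
        if not mask.any():
            return 0.0
        return float(u[mask] @ v[mask] /
                     (np.linalg.norm(u[mask]) * np.linalg.norm(v[mask])))

    sims = []
    for other in range(len(ratings)):
        if other == user:
            continue
        s = cosine(ratings[user], ratings[other])
        if meta[user] == meta[other]:         # same metadata group
            s *= boost
        sims.append((s, other))
    return [u for _, u in sorted(sims, reverse=True)[:k]]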
Ancient Indian Document Analysis Using Cognitive Memory Networks
Neethu Kumar and Dinesh Kumar (Enview R&D Labs, India); Swathikiran Sudhakaran (Fondazione Bruno Kessler, Italy); Alex Pappachen James (Nazarbayev University, Kazakhstan)
In this work, we propose a character segmentation system for ancient palm leaf manuscripts written in Arya Ezhuthu (a script popularized for writing Malayalam from the 17th century), aimed at the automatic recognition of handwritten text from digital images of palm leaves dating back to the 10th century BC. This can enable context-based searching in large volumes of digital documents. The goal of document image understanding here is to meaningfully extract and classify individual characters from the palm leaf manuscript. We propose an automatic approach to detecting and identifying the characters which can improve the readability of the document: text line segmentation is applied and the characters are then extracted.
rar file

S55: Artificial Intelligence and Machine Learning-II

Room: 110 Block E First Floor
Chair: Rajan Anand Malik (JIMS Engineering College, Greater Noida, India)
Detecting Up-Calls of Right Whales
Soumya Sen Gupta (Indian Institute of Technology, Delhi); Sai Rajeshwar (Indian Institute of Technology, Delhi, India)
The purpose of this study was to develop a machine learning technique to distinguish the up-calls of North Atlantic right whales from all other noises, such as the calls of other sea creatures, so that ships could be warned of the whales' presence and avoid direct collisions. What made the study difficult was the non-stationary component of the signals along with a very low signal to noise ratio. Noise was reduced using a thresholding technique based on Stein's Unbiased Risk Estimate. To reduce the non-stationary content, the trend and seasonality components of the signals were examined and removed when necessary, in accordance with classical decomposition theory. To find the best features for detecting whale calls, wavelet packet decomposition was applied with Daubechies 2 (db2) as the mother wavelet; wavelets were chosen because they provide better time-frequency localization than the Fourier transform. This decomposed the signals into separate filter banks whose energy contents were used as features, and a backward sequential feature selection approach then identified the best subset of features for classification. Two classification algorithms, Support Vector Machines and Naive Bayes, were used to classify the signals.
pdf file
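The feature extraction stage maps naturally onto PyWavelets; a sketch under assumed parameters (decomposition level, normalisation) follows, producing the per-band energies that feed the SVM/Naive Bayes classifiers.

import numpy as np
import pywt  # PyWavelets

def wavelet_packet_energies(signal, level=4):
    """Energy of each terminal wavelet-packet node, using db2 as in the
    abstract; the resulting vector serves as the feature set."""
    wp = pywt.WaveletPacket(data=signal, wavelet='db2', maxlevel=level)
    nodes = wp.get_level(level, order='natural')
    energies = np.array([np.sum(np.square(n.data)) for n in nodes])
    return energies / energies.sum()          # relative band energies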
Analyzing Software Change in Open Source Projects Using Artificial Immune System Algorithms
Ruchika Malhotra and Megha Khanna (Delhi Technological University, India)
Software change prediction models built from the change histories of a software system are valuable for the early identification of change-prone classes, and classifying these classes is vital for the competent use of an organization's limited resources. This paper validates Artificial Immune System (AIS) algorithms for the development of change prediction models using six open source data sets, and compares the performance of AIS algorithms with other machine learning and statistical algorithms. The results indicate that the developed models are effective at predicting change-prone classes in future versions of the software; however, the AIS algorithms do not perform better than the machine learning and other statistical algorithms. The study provides conclusive results about the capabilities of AIS algorithms and uses a statistical test to report whether there are significant differences in the performance of the different algorithms.
pptx file
Towards a Generic Framework for Short Term Firm-Specific Stock Forecasting