3rd ICOIACT 2020 (IEEE)

Program

Time Room A Room B Room C Room D Room E

Monday, November 23

01:00 pm-03:00 pm Practice: Free Practice + Simulation

Tuesday, November 24

06:30 am-07:00 am Log In: Log In to ZOOM Meeting
07:00 am-09:00 am 1A: Parallel Session 1-A 1B: Parallel Session 1-B 1C: Parallel Session 1-C 1D: Parallel Session 1-D 1E: Parallel Session 1-E
09:00 am-12:00 pm Opening Ceremony + Plenary Speakers
12:00 pm-01:00 pm Break: Break Time
01:00 pm-03:00 pm 2A: Parallel Session 2-A 2B: Parallel Session 2-B 2C: Parallel Session 2-C 2D: Parallel Session 2-D 2E: Parallel Session 2-E
03:00 pm-03:30 pm Break: Break Time
03:30 pm-04:30 pm 3A: Parallel Session 3-A 3B: Parallel Session 3-B 3C: Parallel Session 3-C 3D: Parallel Session 3-D 3E: Parallel Session 3-E
05:00 pm-05:30 pm Awarding + Closing Ceremony

Monday, November 23

Monday, November 23 1:00 - 3:00

Practice: Free Practice + Simulation

Rooms: Room A, Room B, Room C, Room D, Room E
Chair: Rifda Faticha Alfa Aziza (Universitas Amikom Yogyakarta, Indonesia)

Meet with the committee team.

Prepare for conference day.

Simulation and connection check for the Zoom meeting.

Contact center: open Q&A about the conference.

Etc.

Tuesday, November 24

Tuesday, November 24 6:30 - 7:00

Log In: Log In to ZOOM Meeting

Rooms: Room A, Room B, Room C, Room D, Room E
Chair: Sumarni Adi (Universitas Amikom Yogyakarta, Indonesia)

All participants log in to the Zoom meeting room.

Tuesday, November 24 7:00 - 9:00

1A: Parallel Session 1-A

Room A
Chair: Akhmad Dahlan (Universitas Amikom Yogyakarta, Indonesia)
Clustering Spatial Temporal Distribution of Fishing Vessel Based On VMS Data Using K-Means
Sunarmo Sunarmo (Institut Teknologi Sepuluh Nopember Surabaya, Indonesia); Achmad Affandi (Institut Teknologi Sepuluh Nopember, Indonesia); Surya Sumpeno (Institute Teknologi Sepuluh Nopember, Indonesia)
Management of sustainable marine resources is a national and global problem, and fisheries management poses complex problems that call for research with a more comprehensive approach. The Government of Indonesia, through the Ministry of Marine Affairs and Fisheries, has established the Vessel Monitoring System (VMS). VMS data contain the position, movement, and activity of the fishing vessels used in this research. Data mining and machine learning techniques are applied, and this study consists of three steps: i) finding the optimum number of clusters with the Elbow Method, ii) clustering with the K-Means algorithm using the optimum k-value, and iii) analyzing the distribution of VMS data spatially and temporally. Overall, the optimum number of clusters obtained is 7, with a cluster-member compactness of 90.7%; spatially, the distribution of VMS data in Fisheries Management Area WPPNRI-711 is uneven, and temporally it is highly volatile. The results of this study can provide information about the intensity and location of fishing activity and can help prevent overfishing.
pp. 1-6
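As a sketch of the elbow step described in the abstract above, the following pure-Python fragment computes the within-cluster sum of squares (WCSS) for increasing k on toy 2-D points (the data and helper names are illustrative, not the paper's VMS records):

```python
def kmeans(points, k, iters=50):
    # Deterministic init: spread initial centers across the sorted data.
    pts = sorted(points)
    centers = [pts[i * len(pts) // k] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pts:
            i = min(range(k), key=lambda c: (p[0] - centers[c][0]) ** 2
                                            + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        centers = [(sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
                   if cl else centers[i] for i, cl in enumerate(clusters)]
    return centers, clusters

def wcss(centers, clusters):
    # Within-cluster sum of squares: the quantity the elbow method plots against k.
    return sum((p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2
               for c, cl in zip(centers, clusters) for p in cl)

# Two well-separated blobs: WCSS drops sharply from k=1 to k=2, then flattens.
data = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.3), (-0.1, 0.2), (0.3, 0.1),
        (10.0, 10.1), (10.2, 10.0), (10.1, 10.3), (9.9, 10.2), (10.3, 10.1)]
curve = [wcss(*kmeans(data, k)) for k in (1, 2, 3)]
```

The k at which the WCSS curve stops dropping sharply (here k = 2) is the elbow; the paper reports k = 7 for its VMS data.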
Classification of Graduates Student on Entrance Selection Public Higher Education Through Report Card Grade Path Using Support Vector Machine Method
Rachmawati Findiana (Institut Teknologi Sepuluh Nopember Surabaya, Indonesia)
Entrance selection to Public Higher Education through report card grades is one way of accepting new students: it is free of registration fees and requires no written examination. To date, in Mojokerto, students' participation in the selection process is still handled manually, so the results have not been optimal. Students with good report cards are not necessarily accepted and, conversely, students with average report cards can be admitted to a Public Higher Education institution through this pathway. Therefore, a classification system is needed to predict student admission through this path and obtain optimal results. This study classifies student outcomes at SNMPTN, SPAN PTKIN, and SNMPN, as well as not accepted/not passed, using the average report card grades of semesters 1 through 5 as parameters and the Support Vector Machine (SVM) method. Testing the classification on data from SMAN Puri, SMAN Sooko, and MAN 1 Mojosari in 2018 and 2019 yielded good accuracy: 82% for SMAN Puri, 81% for SMAN Sooko, and 90% for MAN 1 Mojosari. The results show that the SVM method is accurate enough to be used as a classifier to predict student admission to Public Higher Education through the report card grade path.
pp. 7-11
Implementation of Data Cleansing Null Method for Data Quality Management Dashboard using Pentaho Data Integration
Haidar Sulistyo, Tien Fabrianti Kusumasari and Ekky Novriza Alam (Telkom University, Indonesia)
Data is a collection of facts or information gathered from various sources; when the data is dirty, it degrades the quality of decision-making in an organization. Data cleansing ensures that data is correct, usable, and consistent: data may be incomplete, inaccurate, or in the wrong format, and needs to be corrected or deleted. Data cleansing can improve data quality significantly and is required to produce useful, high-quality data that benefits its recipients. The availability of reliable data is crucial for an organization to make competent, valid, and trustworthy decisions. Null or blank fields are one of the main data quality management problems in organizations, especially in Indonesian government agencies: brand registration number permits contain many blank fields, including data needed for the next processing step. To address the volume of blank data, this research discusses the design and implementation of a data cleansing null method using Pentaho Data Integration (PDI). The result is implemented in a data quality management (DQM) dashboard using the Laravel framework with MySQL as the DBMS.
pp. 12-16
Ensemble Model Approach For Imbalanced Class Handling on Dataset
Yoga Pristyanto (Universitas Amikom Yogyakarta, Indonesia); Anggit Ferdita Nugraha (Universitas AMIKOM Yogyakarta, Indonesia); Irfan Pratama (Universitas Mercubuana Yogyakarta, Indonesia); Akhmad Dahlan (Universitas Amikom Yogyakarta, Indonesia)
In machine learning, the distribution of classes in a dataset matters for producing a good model, yet class imbalance is often ignored by researchers in the field. Imbalance reduces model performance because, in theory, a single classifier is weak under imbalanced conditions: most single classifiers tend to learn the patterns of the majority class, so their performance cannot be maximal. Therefore, the problem needs to be handled. This study proposes an algorithm-level approach using Random Forest and Stacking; the basic idea of an algorithm-level approach is to leave the composition and patterns of the dataset itself unchanged. Tests on 5 datasets with different imbalance ratios show that Random Forest (a Bagging Tree) and Stacking with Naïve Bayes and Decision Tree C4.5 can outperform single classifiers such as SVM, Naïve Bayes, and Decision Tree C4.5. The proposed method can therefore be a solution for handling class imbalance across datasets with different imbalance ratios.
pp. 17-21
Effect of Stemming Nazief & Adriani on the Ratcliff/Obershelp algorithm in identifying level of similarity between slang and formal words
Wahyu Hidayat, Ema Utami and Anggit Dwi Hartanto (Universitas Amikom Yogyakarta, Indonesia)
Slang words are widely used every day by social media users, especially in Indonesia. Such language generally contains abbreviations or unusual words with no obvious connection to the word they stand for, which often leads to ambiguity in analyses such as sentiment analysis and Information Retrieval (IR). Recently, slang dictionaries have appeared that map the slang words commonly used by internet users to their formal Indonesian equivalents. Still, many studies, especially in text mining, ignore or fail to filter slang words, which likely affects their results. The Nazief & Adriani algorithm transforms words into their base forms using Indonesian morphology by removing prefixes and suffixes. The Ratcliff/Obershelp algorithm determines the similarity between two strings by matching them. The purpose of this study is to compare slang words with their formal Indonesian counterparts and measure their degree of similarity, to find out whether the formal base word can indirectly represent the slang word. Using datasets from Kaggle and GitHub, the results show that text preprocessing and Nazief & Adriani stemming affect the similarity of slang and formal words, with the most frequent scores in the range 0.8-0.89 (80%-89.99%) across three tests with the two datasets. They also raise the number of exact matches (similarity of 1, i.e. 100%): in the Kaggle dataset from 40 to 927 items, and in the GitHub dataset from none to 409 items.
pp. 22-27
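Python's standard library already implements Ratcliff/Obershelp: `difflib.SequenceMatcher.ratio()` is based on exactly this gestalt pattern-matching algorithm, so similarity scores like those discussed above can be reproduced directly (the slang/formal pair below is an illustrative example, not taken from the paper's datasets):

```python
from difflib import SequenceMatcher

def similarity(slang, formal):
    """Ratcliff/Obershelp similarity: 2*M / (len(a) + len(b)),
    where M is the total length of the matched blocks."""
    return SequenceMatcher(None, slang, formal).ratio()

# Illustrative pair: slang "gimana" vs. formal "bagaimana" (Indonesian "how").
score = similarity("gimana", "bagaimana")   # 0.8
```

A score of 1.0 means an exact match, the case the paper counts when reporting 100% similarity.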
Poverty Level Prediction Based on E-Commerce Data Using K-Nearest Neighbor and Information-Theoretical-Based Feature Selection
Tiara Fatehana Aulia, Dedy Rahman Wijaya, Elis Hernawati and Wahyu Hidayat (Telkom University, Indonesia)
The Central Statistics Agency (BPS) is a government agency that focuses on household economic and social needs. Every two years, BPS conducts Susenas (the National Socio-Economic Survey) to measure poverty levels in Indonesia, and every year BPS is tasked with providing information about community welfare in socio-economic terms. Given today's rapid development, many methods can be used to determine poverty levels; one of them exploits the rapid growth of e-commerce in Indonesia. In this study, we propose a method to predict the poverty level based on an e-commerce dataset using K-Nearest Neighbor and Information-Theoretical-Based Feature Selection. Our method is expected to complement the BPS Census and Susenas in predicting poverty levels in an area. Our test results show that the method can predict the poverty level, although there is room for improvement in terms of accuracy.
pp. 28-33
An Automated Interview Grading System in Talent Recruitment using SVM
Muhammad Yusuf and Kemas Lhaksmana (Telkom University, Indonesia)
The interview is one of the most important stages of talent recruitment for finding the candidates who best fit the corporate culture. However, an interview usually involves third-party psychology experts and professionals to conduct both the interview and the analysis, and thus can be fairly costly for the company. To this end, companies seek alternatives that reduce manual tasks and human effort in talent recruitment by introducing automation with machine learning. In this paper, we investigate machine learning methods to grade the corporate-culture fitness of job applicants by analyzing the interview verbatim. To classify the verbatim, we compare SVM, which has proven to be a very effective text classifier in general, with naive Bayes and KNN. In this study, SVM consistently demonstrates higher performance than naive Bayes and KNN across datasets and schemes, achieving an average accuracy of 86%, against 81% for naive Bayes and 79% for KNN.
pp. 34-38
Development of Youtube Sentiment Analysis Application using K-Nearest Neighbors (Nokia Case Study)
Irene Irawaty, Rachmadita Andreswari and Dita Pramesti (Telkom University, Indonesia)
Nokia is a company that still uses YouTube as a platform to advertise and market its products. Nokia's phone business fell in 2013 due to the company's unwillingness to follow the Android operating system trend at the time, but the company continues to recover and launch increasingly sophisticated products. To observe and summarize public opinion of Nokia's products, this research develops a web-based application, YouTube Sentiment Analysis, built with Python and JavaScript on the Flask framework. The sentiment analysis model uses the K-Nearest Neighbors algorithm with k = 5 and achieves an accuracy of 88.6%. The application carries out the whole pipeline: collecting comments with Selenium, labeling them automatically with VaderSentiment, preprocessing with NLTK, weighting words with TF-IDF, classifying with K-Nearest Neighbors, evaluating with a confusion matrix, and visualizing the classification and evaluation results as pie charts, bar charts, and a confusion matrix.
pp. 39-44
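The confusion-matrix evaluation step mentioned in the abstract above can be sketched in a few lines of plain Python (the labels below are toy data, not Nokia comments):

```python
from collections import Counter

def confusion_matrix(actual, predicted, labels):
    # matrix[i][j] = count of items whose true class is labels[i]
    # and whose predicted class is labels[j].
    counts = Counter(zip(actual, predicted))
    return [[counts[(a, p)] for p in labels] for a in labels]

def accuracy(actual, predicted):
    return sum(a == p for a, p in zip(actual, predicted)) / len(actual)

actual    = ["pos", "pos", "neg", "neg", "pos", "neg"]
predicted = ["pos", "neg", "neg", "neg", "pos", "pos"]
cm  = confusion_matrix(actual, predicted, ["pos", "neg"])
acc = accuracy(actual, predicted)
```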

1B: Parallel Session 1-B

Room B
Chair: Aditya Hasymi (Universitas AMIKOM Yogyakarta, Indonesia)
Decision Support System for Selection of Staples Food and Food Commodity Price Prediction Post-COVID-19 Using Simple Additive Weighting and Multiple Linear Regression Methods
Wahyu Hidayat and Mursyid Ardiansyah (Universitas Amikom Yogyakarta, Indonesia); Kusrini Kusrini (AMIKOM Yogyakarta University, Indonesia)
In the pandemic following the outbreak of COVID-19, economic statistics changed significantly because economic activity became unstable compared to before. The prices of staple foods were also affected: meetings between buyers and traders, usually held in traditional and modern markets, were hampered by government restrictions. This reduced market transactions, making foodstuff prices volatile. The Multiple Linear Regression (MLR) algorithm can handle predictions on seasonal datasets, so MLR is implemented here to predict food prices, especially in the modern market. Based on the predicted prices, a decision support system then ranks alternative staple foods. Since the available foodstuffs differ in their nutrients, experts are needed to weight the nutrition of each foodstuff. The Simple Additive Weighting (SAW) method can perform such weighting and rank alternatives, so SAW is applied to rank staple foods by nutritional weight and price. In error-level testing of the MLR model, the price prediction for "Rice" has the smallest error of all foodstuffs, with MSE 21261.04, MAE 145.79, RMSE 145.812, and MAPE 0.81, while the best R2 value, 0.576, is found for "Garlic". Testing of the SAW implementation gives the same results for manual calculations and the system's calculations, confirming the system's accuracy.
pp. 45-50
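A minimal sketch of the SAW ranking step described above, assuming benefit criteria are normalized by the column maximum and cost criteria (such as the predicted price) by the column minimum; the foods, scores, and weights are illustrative only:

```python
def saw_rank(alternatives, weights, is_benefit):
    # Normalize each criterion column: x / max(col) for benefit criteria,
    # min(col) / x for cost criteria, then take the weighted sum.
    cols = list(zip(*alternatives.values()))
    scores = {}
    for name, row in alternatives.items():
        scores[name] = sum(
            w * (x / max(col) if benefit else min(col) / x)
            for x, col, w, benefit in zip(row, cols, weights, is_benefit)
        )
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy staple-food data: (nutrition score, predicted price); weights are illustrative.
foods = {"rice": (80, 10000), "garlic": (60, 25000), "egg": (90, 22000)}
ranking = saw_rank(foods, weights=[0.6, 0.4], is_benefit=[True, False])
```

The first element of `ranking` is the top-ranked alternative under the assumed weights.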
Accuracy Enhancement of Correlated Naive Bayes Method by Using Correlation Feature Selection (CFS) for Health Data Classification
Hairani Hairani and Muhammad Innuddin (Universitas Bumigora, Indonesia); Majid Rahardi (Universitas Amikom Yogyakarta, Indonesia)
The main problem with health datasets is that they have many data attributes and irrelevant features, so classification methods take more computation time. The purpose of this study is to implement the CFS feature selection method to improve the accuracy of the Correlated Naive Bayes method. The study proceeds in stages: collecting the Pima Indian Diabetes dataset, pre-processing (especially data transformation), CFS feature selection, classification, and performance evaluation based on accuracy. Based on test results using 10-fold cross-validation, the best accuracy, about 69.4%, is obtained by the combination of the Correlated Naive Bayes and CFS methods. Thus, CFS feature selection can increase the accuracy of the Correlated Naive Bayes method by 2.25% over classification without feature selection.
pp. 51-55
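The CFS heuristic used above scores a feature subset S by Merit(S) = k·r̄cf / sqrt(k + k(k-1)·r̄ff), where r̄cf is the mean feature-class correlation and r̄ff the mean feature-feature inter-correlation. A pure-Python sketch with Pearson correlation (toy data; the helper names are ours):

```python
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def cfs_merit(features, target):
    # Merit = k * mean|r_cf| / sqrt(k + k*(k-1) * mean|r_ff|)
    k = len(features)
    r_cf = sum(abs(pearson(f, target)) for f in features) / k
    pairs = [(i, j) for i in range(k) for j in range(i + 1, k)]
    r_ff = (sum(abs(pearson(features[i], features[j])) for i, j in pairs) / len(pairs)
            if pairs else 0.0)
    return k * r_cf / math.sqrt(k + k * (k - 1) * r_ff)

target = [1, 2, 3, 4, 5]
strong = [2, 4, 6, 8, 10]   # perfectly correlated with the class
noisy  = [5, 1, 4, 2, 3]    # weakly (and negatively) correlated
```

Adding the weakly correlated feature lowers the merit, which is how CFS discards irrelevant or redundant attributes.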
Data Analysis for Corruption Indications on Procurement of Goods and Services
Agus Purwanto and Andi Wahju Rahardjo Emanuel (Universitas Atma Jaya Yogyakarta, Indonesia)
Corruption occurs in many developing countries and is very difficult to detect because of weak legal awareness, lack of good governance, and integrity. In Indonesia, there are seven types of corruption cases handled by the Corruption Eradication Commission (KPK). One of the corruption cases which is detrimental to the state occurs in public procurement, such as the procurement of goods/services. This case is the second most corrupt crime after bribery in Indonesia. In this research, we try to identify potential corruption from auction data on goods/services procurement in government. By using Big Data technology, it is expected that the process can be carried out immediately to assist the KPK in identifying potential corruption in goods/services procurement auctions.
pp. 56-60
KIP Recipient Decision Making For Students Affected by Covid_19 Using Fuzzy MADM Method
Noor Abdul Haris, Muhammad Nidhom and Arif Setia Sandi, Ariyanto (University Amikom Yogyakarta, Indonesia); Hari Asgar (STMIK Amikom Yogyakarta, Indonesia); Kusrini Kusrini (AMIKOM Yogyakarta University, Indonesia)
The impact of the COVID-19 pandemic and large-scale social restrictions forced the economic activities of the Indonesian population to cease, which in turn affected the funding of education. Many people therefore expect help from the government to cover shortfalls in daily spending, one channel being the Indonesia Smart Card (KIP). Under these conditions, aid must be distributed on target, so it is important to develop a decision-making system to help select KIP recipients among the students who are genuinely eligible. The purpose of this study is to ensure that KIP goes to the students who really need it according to the specified criteria. Decision making proceeds in three stages: first, the C4.5 method decides whether students graduate or not; second, the Fuzzy MADM method selects the students who receive KIP; and third, the candidates are ranked against the specified total quota. The initial selection uses the C4.5 method with the variables GPA (V1), distance from home to campus (V2), length of study (V3), work (V4), family (V5), and tuition payment bills (V6); the calculation yields 5 rules that determine whether a student is normal (graduating) or a dropout. Students who pass the initial selection are processed with the Fuzzy MADM method to decide who is truly eligible for KIP, and those who pass are then ranked and drawn against the quota. Of 456 student records accepted from 1024 registrants, 300 students were selected after ranking as KIP target recipients. Accuracy testing of these 300 recipients found that 284 were actually entitled, giving an accuracy of 98% for students eligible for KIP.
pp. 61-65
Optimization Convolutional Neural Network For classification Diabetic Retinopathy Severity
Tinuk Agustin and Andi Sunyoto (Universitas Amikom Yogyakarta, Indonesia)
Computer-aided Diabetic Retinopathy (DR) screening can help doctors reach a fast and accurate diagnosis; with early detection and proper handling, the vision of DM patients can hopefully be saved. Deep learning Convolutional Neural Network methods are widely used in medical image analysis, producing results that are comparable to, or even exceed, human performance. In this study, we propose an ordinary CNN and show how to optimize the network to overcome problems such as overfitting in classifying normal retinas versus NPDR. With our proposed technique, we obtain 95% accuracy, 95% sensitivity, and 96% specificity.
pp. 66-71
Identification of Potato Leaf Disease Using the Convolutional Neural Network (CNN) Algorithm
Abdul Jalil Rozaqi (University of Amikom Yogyakarta, Indonesia); Andi Sunyoto (Universitas Amikom Yogyakarta, Indonesia)
Potatoes' carbohydrate content makes them a leading food for humans, so potato agriculture is essential. Potato farming, however, faces several obstacles, including diseases that attack the leaves and, if left unchecked, lead to poor production or even crop failure. Late blight and early blight are the diseases most often found on potato leaves. Each disease has its own symptoms, so farmers can take precautions when they see them, but manual identification is slow, and slow handling of leaf disease increases maintenance costs. Digital image processing can overcome this, so this research proposes a suitable method for detecting diseases on potato leaves. Classification is carried out over three classes (healthy leaves, early blight, and late blight) using the Convolutional Neural Network (CNN) algorithm. The results are considered good: at the 10th epoch with a batch size of 20, the model achieves 95% training accuracy and 94% validation accuracy.
pp. 72-76
Optimizing Single Exponential Smoothing Method by Using Genetics Algorithm for Object Selling Prediction
Harliana Harliana (University Muhadi Setiabudi, Indonesia); Hartatik Hartatik and Akhdan Krisna Aditama (Universitas Amikom Yogyakarta, Indonesia)
Uncertainty in product purchases and sales can become an obstacle, so a prediction is needed to anticipate stock running out. This study compares the MAPE of ordinary single exponential smoothing against a version optimized with a genetic algorithm, through a test scenario. The test results show that optimization with a genetic algorithm yields a considerable improvement in the optimal value.
pp. 77-82
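Single exponential smoothing forecasts s_t = α·x_{t-1} + (1-α)·s_{t-1}, and the paper tunes α by minimizing MAPE with a genetic algorithm. The sketch below substitutes a coarse grid search for the GA (toy sales figures, not the paper's data):

```python
def ses_forecasts(series, alpha):
    # s[t] forecasts series[t]; initialize with the first observation.
    s = [series[0]]
    for x in series[:-1]:
        s.append(alpha * x + (1 - alpha) * s[-1])
    return s

def mape(series, forecasts):
    # Mean absolute percentage error, the fitness measure the paper optimizes.
    return 100 * sum(abs((a - f) / a) for a, f in zip(series, forecasts)) / len(series)

sales = [120, 135, 128, 150, 149, 160, 158, 172]
# Stand-in for the genetic algorithm: exhaustive search over a coarse alpha grid.
best_alpha = min((a / 100 for a in range(1, 100)),
                 key=lambda a: mape(sales, ses_forecasts(sales, a)))
```

A GA would explore the same α search space with selection, crossover, and mutation instead of enumerating it.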
Implementation of Data Augmentation to Improving Performance CNN Method for Detecting Diabetic Retinopathy
Tinuk Agustin (Universitas Amikom Yogyakarta, Indonesia); Hanif Fatta (Universitas AMIKOM Yogyakarta, Indonesia); Ema Utami (Universitas Amikom Yogyakarta, Indonesia)
Diabetic retinopathy (DR) is a progressive disease and the most common cause of blindness in adults worldwide; early detection of DR is crucial to saving the vision of DM patients. Advances in computing and artificial intelligence methods effectively support automatic screening of DR in its early stages and provide objective, accurate results. This research detects diabetic retinopathy using a Convolutional Neural Network as the classification method and CLAHE for image enhancement, and compares data augmentation techniques suitable for retinal fundus images. The study finds that the random zoom augmentation technique combined with image enhancement provides the best classification accuracy of 98%, with sensitivity of 100% and specificity of 95%.
pp. 83-88

1C: Parallel Session 1-C

Room C
Chair: Rhisa Aidilla (Universitas AMIKOM Yogyakarta, Indonesia)
The Comparison of Distance Measurement for Optimizing KNN Collaborative Filtering Recommender System
Zauvik Rizaldi Maruf and Arif Dwi Laksito (Universitas Amikom Yogyakarta, Indonesia)
Optimizing a recommender system depends on the kind of algorithm used. Related research shows that computing similarities is a critical component of a Collaborative Filtering recommender system. This paper uses predictive KNN to perform Collaborative Filtering with several widespread distance measurements: Euclidean, Manhattan, and the generalization of both, Minkowski. In the experiment performed in this paper, Minkowski has the best performance but is two times slower than Euclidean and Manhattan, since it searches for the best p (p = 1 or p = 2) to use. Euclidean's performance approaches Minkowski's, so Euclidean is worth considering.
pp. 89-93
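The three measurements compared above are special cases of one formula, as this minimal sketch shows (the vectors are illustrative):

```python
def minkowski(a, b, p):
    # p = 1 gives Manhattan distance, p = 2 gives Euclidean distance.
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1 / p)

def manhattan(a, b):
    return minkowski(a, b, 1)

def euclidean(a, b):
    return minkowski(a, b, 2)

u, v = (1.0, 2.0, 3.0), (4.0, 6.0, 3.0)
d2 = euclidean(u, v)   # 5.0
d1 = manhattan(u, v)   # 7.0
```

Trying both p values per query, as the paper describes, is what makes the Minkowski variant roughly twice as slow.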
Classification of DISC Personality Type Based on Twitter Data Using Machine Learning Techniques
Hermawan Setiawan and Achmad Abdul Wafi (Sekolah Tinggi Sandi Negara, Indonesia)
In education, knowing students' personalities can help education providers track their development. If a person's personality is known, we can identify the characteristics, thought patterns, feelings, and behaviors that make them unique. However, conventional personality assessment requires resources such as interpreters, space, and a usually lengthy amount of time. A person's personality relates to and affects several linguistic aspects, including word choice and word placement. Twitter is an internet-based social networking service whose users exercise these linguistic aspects when sending and reading short messages. This study addresses the problems of conventional personality assessment, specifically for the DISC personality type, by using Twitter data to build a predictive model that applies the naïve Bayes classifier to classify the DISC personality types of Twitter users. We used 9,044 tweets from 70 Twitter accounts as training data; before building and evaluating the model, the tweets pass through a series of preprocessing stages. The model was evaluated by comparing its classifications with an expert's, yielding a prediction accuracy of 76.19%.
pp. 94-98
A Review of Expert System for Identification Various Risk in Pregnancy
Galih Malela Damaraji, Adhistya Erna Permanasari and Indriana Hidayah (Universitas Gadjah Mada, Indonesia)
The rapid growth of information technology allows us to create new technologies that can later be applied to other fields. One such development is the expert system, which can be applied to the medical world, especially to the identification of risks during pregnancy. If such risks can be detected earlier, preventive steps can help expectant mothers and health workers. This Systematic Literature Review addresses the different approaches within the expert system framework that can be used to identify early symptoms in pregnant women, drawing on a variety of available literature sources. A total of 17 papers published between 1985 and 2019 were reviewed. From this study, it can be inferred that premature birth and hypertension are the most frequently observed threats, and that rule-based systems and artificial neural networks are the most commonly used approaches.
pp. 99-104
Improved Classification of Coronavirus Disease (COVID-19) based on Combination of Texture Features using CT Scan and X-ray Images
Luqy Nailur Rohmah and Alhadi Bustamam (Universitas Indonesia, Indonesia)
The novel coronavirus (COVID-19) has infected more than 20 million people worldwide and has become a global pandemic, making initial screening necessary to control the spread of the disease. Computed Tomography (CT) scans and X-ray images play an essential role in diagnosing the lung condition of patients with COVID-19 symptoms, so a machine learning method is needed to aid early detection of COVID-19 from these images. In this research, we propose a machine learning model that classifies COVID-19 based on texture features. In particular, three texture features are chosen as feature extractors: the Grey Level Co-occurrence Matrix (GLCM), Local Binary Patterns (LBP), and the Histogram of Oriented Gradients (HOG). To improve classification accuracy and computational efficiency, we combine these features with principal component analysis for feature reduction. We evaluate each feature set individually and in combination, and for the final step perform classification with the Support Vector Machine (SVM) algorithm. The proposed method was evaluated on a publicly available COVID-19 dataset of 1100 CT scans and 1100 X-ray images. The results show that combining GLCM, LBP, and HOG features provides accuracy up to 97% on CT images and 99% on X-ray images.
pp. 105-109
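A sketch of the feature-combination and PCA-reduction step described above, using NumPy and random stand-in vectors instead of real GLCM/LBP/HOG descriptors (the array shapes and names are illustrative):

```python
import numpy as np

def pca_reduce(features, n_components):
    # Center the feature matrix, then project onto the top principal
    # directions obtained from the SVD of the centered data.
    X = features - features.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T

rng = np.random.default_rng(0)
glcm = rng.normal(size=(8, 6))    # stand-ins for per-image GLCM, LBP,
lbp  = rng.normal(size=(8, 10))   # and HOG feature vectors
hog  = rng.normal(size=(8, 20))
combined = np.hstack([glcm, lbp, hog])   # one 36-dim feature vector per image
reduced  = pca_reduce(combined, n_components=5)
```

The reduced vectors would then be fed to the SVM classifier.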
Feature Selection Optimization for Identification of Citation Sentences in Scientific Journals
Raynaldi Fatih Amanullah (Amikom University, Indonesia); Ema Utami (Universitas Amikom Yogyakarta, Indonesia); Suwanto Raharjo (Institut Sains & Teknologi AKPRIND Yogyakarta, Indonesia)
Scientific journal publication is an easy way to disseminate findings and research, but when writing a study, researchers often forget to cite the sources of the theoretical foundations and prior results they use as references. In some cases this can be considered a form of plagiarism and can damage the integrity of an academy, so citation writing is very important in research. In this study, the researchers detect citation and non-citation sentences in the 2018 CL-SciSumm dataset using classifiers such as SVM, aided by feature selection to reduce unneeded dimensions. The features used include TF-IDF, IDF, WordNet, and Word2Vec similarity. Classification shows that the combination of TF-IDF and SVM is the best among the tested features, as evidenced by accuracy reaching 90.23%, recall reaching 92.59%, and f-measure reaching 89.28%.
pp. 110-114
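A minimal pure-Python TF-IDF, the feature the paper finds most effective; the token lists are toy sentences, not CL-SciSumm data:

```python
import math
from collections import Counter

def tfidf(docs):
    # docs: list of token lists; returns one {term: weight} dict per document.
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))   # document frequency
    out = []
    for d in docs:
        tf = Counter(d)
        out.append({t: (c / len(d)) * math.log(n / df[t]) for t, c in tf.items()})
    return out

docs = [["the", "method", "is", "cited"],
        ["the", "method", "works"],
        ["the", "results", "are", "reported"]]
weights = tfidf(docs)
```

A term appearing in every document ("the") gets zero weight, while rare terms ("cited") get high weight, which is what makes TF-IDF useful for separating citation from non-citation sentences.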
Evaluation of Real Performance on Brain Tumor Diagnosis Based on Machine Learning techniques - A Systematic Review
Mohammad omid Khairandish (Chandigarh University, India); Meenakshi Sharma (Galgotias University, India); Kusrini Kusrini (AMIKOM Yogyakarta University, Indonesia)
Objective: The purpose of this research is to investigate the real performance of brain tumor diagnosis and treatment using machine learning algorithms, to establish the actual state of studies on improving human life, and, based on this systematic review, to introduce criteria for choosing an algorithm that can detect brain tumors effectively and accurately. Methods: Different research sources were systematically searched for relevant articles published between October 2012 and December 2019, considering algorithm type, dataset, and accuracy, and addressing the four primary image-processing tasks (pre-processing, segmentation, feature extraction, classification), and finally analyzing the diagnostic model proposed in each work. Results: Eight studies were included and examined along four elements (accuracy, algorithm type, proposed model, and performance). Reported accuracy ranges from a maximum of 98.5% (85% training, 93.03% testing) to a minimum of 80.05%. By frequency of use, the algorithm types range from CNN and KNN through C-means to RF. Regarding performance, many different brain tumor detection techniques have been implemented with high accuracy, but to achieve better results and save human lives in a more reliable way, improvement must continue. Conclusion: Although the quality of the included studies was moderate, current evidence indicates that machine-learning-based brain tumor detection has been implemented from different perspectives and clearly attempts to solve real cases, but more flexible criteria are still required to develop techniques that detect brain tumors more accurately.
Decision Support System for Covid19 Affected Family Cash Aid Recipients Using the Naïve Bayes Algorithm and the Weight Product Method
Muhammad Ibnu Sa'ad II (AMIKOM Yogyakarta & FKIP Universitas Mulawarman, Indonesia); Kusrini Kusrini (AMIKOM Yogyakarta University, Indonesia); Supriatin Supriatin and Dony Bryan (AMIKOM Yogyakarta, Indonesia)
The purpose of this study was to predict the recipients of cash assistance and to evaluate Naïve Bayes in predicting recipients of cash assistance from families affected by Covid19. This study uses the Naïve Bayes algorithm; the data used are logical data, which are then processed and calculated. The variables used are age, income, and college status, with a label using two classes, namely Cannot and Can. From the results of this study, it can be concluded that the recipients of cash assistance in Village X can be predicted with Naïve Bayes using a training split of 10%, and testing shows a Naïve Bayes predictive accuracy of 67%. The Weighted Product method uses the variables age, income, education, working status, and family status, with the same two alternatives, Cannot and Can. The Weighted Product calculation yields a vector S ranking value of 2.24 and a vector V of 0.66, which states that families affected by Covid19 are entitled to receive (Get) cash assistance.
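As a rough illustration of the Weighted Product ranking step described above, the sketch below computes vector S and vector V for two alternatives. The criterion scores and weights are invented for illustration and are not the paper's data.

```python
# Weighted Product (WP) sketch: rank alternatives by S_i = prod(x_ij ** w_j)
# with normalized weights, then V_i = S_i / sum(S). Data here is hypothetical.

def weighted_product(alternatives, weights):
    # Normalize weights so they sum to 1 (a WP requirement).
    total = sum(weights)
    w = [wi / total for wi in weights]
    # Vector S: product of each criterion value raised to its weight.
    s = [1.0] * len(alternatives)
    for i, row in enumerate(alternatives):
        for x, wj in zip(row, w):
            s[i] *= x ** wj
    # Vector V: relative preference of each alternative.
    total_s = sum(s)
    v = [si / total_s for si in s]
    return s, v

# Hypothetical criterion scores (age, income, education, working status,
# family status) for two alternatives, "Can" and "Cannot".
scores = [[3, 4, 2, 3, 4],   # Can
          [2, 2, 3, 2, 1]]   # Cannot
weights = [3, 4, 2, 3, 2]    # hypothetical criterion importance

s, v = weighted_product(scores, weights)
print(s, v)  # the alternative with the larger V is preferred
```

The alternative whose V component is largest is the recommended decision; with benefit-only criteria as here, all exponents are positive, while cost criteria would use negative exponents.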
pp. 115-120
An Implementation of C4.5 Classification Algorithm to Analyze Student's Performance
Latifaestrelita Indi Pramesti Aji and Andi Sunyoto (Universitas Amikom Yogyakarta, Indonesia)
Due to the massive amount of information in educational databases, predicting student performance has become more difficult. Thus, a comprehensive literature review on forecasting student performance utilizing data mining techniques, in particular the C4.5 algorithm, is recommended to enhance student achievement. The main intention of this research is to provide a description of the data mining techniques used to predict student results. This paper also shows how the classification algorithm may be used to identify the most significant characteristics in a student's information. In essence, student performance and progress can be improved more effectively by utilizing educational data mining methods, particularly the C4.5 classification algorithm. Based on the results obtained on the dataset provided, the C4.5 algorithm achieved an accuracy of 71.9%. This could offer advantages to students, lecturers, and academic institutions.
pp. 121-126

1D: Parallel Session 1-D

Room D
Chair: Tonny Hidayat (Universitas Amikom Yogyakarta, Indonesia)
Improving ARIMA Forecasting Accuracy Using Decomposed Signal on pH and Turbidity at SCADA Based Water Treatment
Junaidi Junaidi, Joko Buliali and Ahmad Saikhu (Institut Teknologi Sepuluh Nopember, Indonesia)
In industrial plants, accurate forecasting is critical for decision making. Autoregressive Integrated Moving Average (ARIMA) is a statistical analysis model widely used in time series forecasting. A suitable forecasting methodology must accurately predict future values; in the testing or validation process, the model should closely follow the pattern of the actual signal. Most studies on ARIMA use the directly observed signal in modeling and forecasting. A drawback of this method is that the predicted signal shows a straight line in some cases. In this paper, we propose a customized forecasting methodology. First, the observed signal is decomposed into trend, seasonal, and residual components. Then the decomposed components are modeled and forecasted independently. Finally, the forecasted components are recomposed to obtain the forecast of the observed signal. In this study's experiments, the proposed method reduces the MSE of the turbidity forecast by 90.021% compared with the direct forecasting method, while the MSE reduction of the pH forecast reaches 97.062%. The average MSE reduction reaches 42.597%.
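The decompose-forecast-recompose pipeline above can be sketched in miniature. The paper models each component with ARIMA; to stay self-contained, the toy below forecasts each component naively (trend extended linearly, seasonal pattern repeated, residual taken as zero), and the signal is synthetic, not the SCADA data.

```python
# Toy sketch of decompose -> forecast each component -> recompose.
# Trend: centered moving average. Seasonal: mean detrended value per
# cycle position. This is a stand-in for the paper's per-component ARIMA.

def decompose(series, period):
    n = len(series)
    half = period // 2
    # Centered moving average as the trend estimate.
    trend = [sum(series[i - half:i + half + 1]) / (2 * half + 1)
             for i in range(half, n - half)]
    # Seasonal component: average detrended value per position in the cycle.
    detr = [series[i + half] - trend[i] for i in range(len(trend))]
    seasonal = [0.0] * period
    for p in range(period):
        vals = [detr[i] for i in range(len(detr)) if (i + half) % period == p]
        seasonal[p] = sum(vals) / len(vals)
    resid = [detr[i] - seasonal[(i + half) % period] for i in range(len(detr))]
    return trend, seasonal, resid

def forecast(series, period, steps):
    trend, seasonal, _ = decompose(series, period)
    slope = trend[-1] - trend[-2]               # extend trend linearly
    start = len(series)
    out = []
    for h in range(1, steps + 1):
        t = trend[-1] + slope * h               # trend forecast
        s = seasonal[(start + h - 1) % period]  # repeat seasonal pattern
        out.append(t + s)                       # recompose (residual ~ 0)
    return out

# Synthetic signal: rising trend plus a period-4 seasonal pattern.
data = [i * 0.5 + [2, 0, -2, 0][i % 4] for i in range(24)]
print(forecast(data, period=4, steps=4))
```

The point of the design is that each simple component is easier to model than the raw signal, which is why the recomposed forecast tracks the oscillating pattern instead of flattening into a straight line.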
pp. 127-132
Low Complexity Named-Entity Recognition for Indonesian Language using BiLSTM-CNNs
Meredita Susanty, Sahrul Sukardi, Ade Irawan and Randi Putra (Universitas Pertamina, Indonesia)
Named-Entity Recognition (NER) is a fundamental task in Natural Language Processing (NLP) and information extraction, used to extract information such as the names of people, organizations, and places. NER has been used in many fields of work, one of which is chatbot development; NLP and machine learning approaches enable a smarter chatbot with better personal analysis of users. This research builds a NER model for the Indonesian language using a Bidirectional Long Short-Term Memory (BiLSTM) and Convolutional Neural Networks (CNNs) model architecture. Unlike former research, this model only uses word-level embedding in the CNNs layer to keep the model simple. The Named Entities (NEs) used in this study are limited to the names of persons, organizations, locations, quantities, and times, using the BILOU labelling format. The performance of the model is measured using the micro-averaged f1 score. The BiLSTM-CNNs + pre-trained word2vec embedding model provides good performance compared to other models, with an f1 score of 71.37%.
pp. 133-138
Face Anti-Spoofing Using CNN Classifier & Face Liveness Detection
Raden Budiarto Hadiprakoso (Poltek Siber dan Sandi Negara, Indonesia)
Biometrics with facial recognition is now widely used. A face identification system should not only identify faces but also detect spoofing attempts with printed faces or digital presentations. A straightforward spoofing prevention approach is to examine face liveness, such as eye blinking and lip movement. Nevertheless, this approach is helpless when dealing with video-based replay attacks. For this reason, this paper proposes a method combining face liveness detection and a CNN classifier. The anti-spoofing method is designed with two modules: a blinking-eye module that evaluates eye openness and lip movement, and a CNN classifier module. The dataset for training our CNN classifier can come from a variety of publicly available sources. We combined these two modules sequentially and implemented them in a simple facial recognition application on the Android platform. The test results show that the modules can recognize various kinds of facial spoof attacks, such as those using posters, masks, or smartphones.
pp. 139-143
Implementation of Stacking Ensemble Learning for Classification of COVID-19 using Image Dataset CT Scan and Lung X-Ray
Annisa Utama Berliana (University of Indonesia, Indonesia); Alhadi Bustamam (Universitas Indonesia, Indonesia)
Novel Coronavirus Disease (COVID-19) is a disease caused by SARS-CoV-2 and has become a global pandemic. COVID-19 was first discovered in Wuhan, China, and has spread to various countries, which, until now, still have not found a proper way to deal with it. Various studies related to COVID-19 have been carried out, including initial screening to control the disease's spread. X-ray images and Computed Tomography (CT) can be utilized for initial screening in diagnosing the lung conditions of patients with COVID-19 symptoms. Machine learning has been at the forefront of many fields, such as analyzing X-ray and CT images, and shows outstanding performance compared to other methods. In this paper, we present ensemble learning with stacking to analyze X-ray and CT images for classifying COVID-19, with features previously extracted using the Gabor feature. The ensemble learning model is built with two levels of learning, namely the base-learners and the meta-learner. The base-learners used to build the model are Support Vector Classification (SVC), Random Forest (RF), and K-Nearest Neighbors (KNN), and the meta-learner is Support Vector Classification (SVC). The proposed method is evaluated on a publicly available COVID-19 data set, including 1140 chest X-ray images and 2400 CT images. The results show that the stacking ensemble of SVC, RF, and KNN can provide accuracy above 97% for CT images and 99% for chest X-ray images.
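The two-level stacking architecture described above (SVC, RF, and KNN base-learners with an SVC meta-learner) can be sketched with scikit-learn's `StackingClassifier`. The synthetic dataset below stands in for the image features; none of the parameters are taken from the paper.

```python
# Stacking ensemble sketch: three base-learners whose predictions feed an
# SVC meta-learner. The toy dataset stands in for extracted image features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base_learners = [
    ("svc", SVC(probability=True, random_state=0)),
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
]
stack = StackingClassifier(estimators=base_learners, final_estimator=SVC())
stack.fit(X_tr, y_tr)
print("test accuracy:", stack.score(X_te, y_te))
```

The meta-learner sees only the base-learners' cross-validated predictions, which is what distinguishes stacking from simple voting.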
pp. 144-148
Implementation of ANN for Optimization MPPT Using Zeta Converter
Dewi Rahmatika Yunitasari, Epyk Sunarno and Indra Ferdiansyah (Politeknik Elektronika Negeri Surabaya, Indonesia); Putu Agus Mahadi Putra (Politeknik Elektronika Negeri Surabaya-Indonesia, Indonesia); Lucky Pradigta Setiya Raharja (Politeknik Elektronika Negeri Surabaya, Indonesia)
Fossil fuels such as coal, gas, and petroleum are non-renewable energy sources available in limited amounts. This encourages the development of renewable energy as an alternative source for electricity generation. One renewable energy source is the solar cell, or photovoltaic (PV), which utilizes solar energy. The problem with the use of PV today is that efficiency is still low at a high cost. One effort to optimize photovoltaic power is to use photovoltaic control devices applying the Maximum Power Point Tracking (MPPT) technique, with an optimization method that imitates human neural networks in processing several conditions and providing solutions from existing reference data: the Artificial Neural Network (ANN). With this method, the ANN can reach the maximum power of the photovoltaic when there is a change in sunlight intensity, and it is equipped with a zeta converter to increase or decrease the photovoltaic voltage to supply DC loads. The simulation results show that MPPT-ANN can track power up to 199.95 W for a photovoltaic panel with a capacity of 200 Wp in 0.056 seconds, with no power oscillation once the MPPT has reached the MPP of the photovoltaic.
pp. 149-154
Lead Forecasting using LSTM based Deep Learning Architecture for Sentiment Analysis
Rajesh Puravankara and Narendra babu C (MS Ramaiah University of Applied Sciences, India)
Marketers adopt various strategies to improve the efficiency of lead management by understanding more about their prospective customers. Advancements in data mining and machine learning techniques have helped industry adopt these techniques for business benefit, but the use of modern deep learning technology in the marketing industry is limited. This paper proposes a methodology for identifying the probability of conversion of a lead by employing both structured and unstructured data analysis and modelling. A binary classification model is built using logistic regression, an ensemble method, and a feed-forward neural network. The unstructured user-comment data is represented using a long short-term memory (LSTM) network, which has the ability to capture context information effectively. This method provides a comprehensive approach that considers both structured and unstructured data so as to utilize the data effectively, know the customer better, and thereby improve the ability to predict lead conversion. The results reveal that the LSTM-based forecasting has better performance compared to the other models considered.
pp. 155-160
Deep Learning based System Identification of Quadcopter Unmanned Aerial Vehicle
Brilian Putra Amiruddin (Institut Teknologi Sepuluh Nopember, Indonesia); Eka Iskandar (Institut Teknologi Sepuluh Nopember Surabaya, Indonesia); Ali Fatoni and Ari Santoso (Institut Teknologi Sepuluh Nopember, Indonesia)
Acquiring a system's model is a significant stride in controller design and development. Nevertheless, it is sometimes arduous to estimate a real-world system model such as a quadcopter UAV, which has non-linear, difficult-to-measure, and complex characteristics. This makes system identification play a crucial role in quadcopter modeling. With the ease of obtaining experimental flight data directly from the quadcopter, modeling the quadcopter using system identification is now practical. In line with that, the development of machine learning algorithms, specifically in the deep learning field, has driven new perspectives on the system identification approach. In this paper, several deep learning architectures were applied to identify the quadcopter UAV system. Overall, the results show that the CNN-LSTM model was the top-performing architecture, with average tested MSE and MAE equal to 0.0002 and 0.0030.
pp. 161-165
Survey of Performance Measurement Indicators for Lossless Compression Technique based on the Objectives
Tonny Hidayat (Universitas Amikom Yogyakarta, Indonesia); Mohd Hafiz Zakaria and Naim Che Pee (Universiti Teknikal Malaysia Melaka, Malaysia)
The explosive growth of data in the digital world has led to the need for efficient techniques for storing and transmitting data. Data compression reduces the size of a file or data set; with a smaller size, transmission time is reduced when data is sent, and less storage media space is occupied. Since the data compression concept results in effective utilization of the available storage area and communication bandwidth, many approaches have been developed in several aspects based on objectives and needs. This survey analyzes compression results based on the goal, which affects the main parameters that determine success, to address current requirements in terms of data quality, coding schemes, type of data, and performance measurement. Finally, this paper provides insights into various open issues and research directions, explores promising areas for future development, and clarifies the indicators that are the main norms for conducting research in the scope of lossless compression.
pp. 166-171

1E: Parallel Session 1-E

Room E
Chair: Gardyas Adninda (Universitas AMIKOM Yogyakarta, Indonesia)
Pre-processing Task for Classifying Satire in Indonesian News Headline
Mas Siti Imrona (Universitas Gadjah Mada, Indonesia); Widy Widyawan (Gadjah Mada University, Indonesia); Lukito Edi Nugroho (Universitas Gadjah Mada, Indonesia)
One of the challenges in natural language processing is detecting satirical sentences. In news headlines, satire is commonly used to criticize the government. This study aims to determine the impact of preprocessing on the classification of satirical sentences in Indonesian news headlines. Feature extraction was conducted using Term Frequency-Inverse Document Frequency (TF-IDF), employing a machine learning method, Naive Bayes, to classify satire in Indonesian news headlines. In this study, six preprocessing combinations were used, built from breaking down text into tokens (tokenizing), changing words into their base form (stemming), removing stopwords, and removing punctuation. The results showed that preprocessing affected the accuracy of satirical sentence classification on the Indonesian news headlines dataset. The highest accuracy of 90.38% was obtained by a combination of standardized preprocessing steps, namely tokenizing and lowercasing.
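The preprocessing steps named above can be sketched as a small configurable pipeline. The stopword list is a tiny invented sample, and stemming is omitted to keep the sketch stdlib-only; it is an illustration, not the paper's pipeline.

```python
# Toy headline pre-processing: tokenizing, lowercasing, punctuation removal,
# and stopword removal. STOPWORDS is a hypothetical sample; a real Indonesian
# pipeline would also apply stemming (e.g. with a stemming library).
import string

STOPWORDS = {"yang", "di", "ke", "dan"}  # hypothetical sample

def preprocess(headline, lowercase=True, drop_stopwords=True, drop_punct=True):
    tokens = headline.split()                       # tokenizing
    if lowercase:
        tokens = [t.lower() for t in tokens]        # lowercasing
    if drop_punct:
        tokens = [t.strip(string.punctuation) for t in tokens]
        tokens = [t for t in tokens if t]
    if drop_stopwords:
        tokens = [t for t in tokens if t not in STOPWORDS]
    return tokens

print(preprocess("Pemerintah yang Sigap, Rakyat Tenang!"))
```

Toggling the keyword flags on and off is exactly how preprocessing combinations like those in the study can be compared against one classifier.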
pp. 172-175
Multilingual Named Entity Recognition Model for Indonesian Health Insurance Question Answering System
Budi Sulistiyo Jati (Universitas Gadjah Mada, Indonesia); Widy Widyawan (Gadjah Mada University, Indonesia); Muhammad Nur Rizal (Universitas Gadjah Mada, Australia)
Named Entity Recognition (NER) is the task of extracting information to find and classify entities from unstructured text into predetermined categories. In this study, NER is used to find entities for locations, organizations, financial tasks, administrative tasks, and healthcare facilities in a dataset of chats and public service complaints about Indonesian national health insurance. The methods used are the Bidirectional Encoder Representations from Transformers (BERT) Multilingual Cased and BERT Multilingual Uncased models. Pre-processing conducted in this research consists of tokenization, formalization, and tag distribution analysis; the text is then converted into BERT input features consisting of token ids, a segment mask, and an attention mask. Based on the experimental results, the BERT Multilingual Uncased model achieves a total average F1 score of 83.52 and the BERT Multilingual Cased model achieves a total average F1 score of 85.41. The experimental results prove that BERT Multilingual can be applied to an Indonesian dataset, and also show that the cased model can achieve a better F1 score.
pp. 176-180
Implementing Term Frequency-Inverse Term Frequency in Indonesian Fraud Crime Through Tweets
Hesty Ulfatriyani, Hanung Adi Nugroho and Indah Soesanti (Universitas Gadjah Mada, Indonesia)
Crime analysis is a methodical approach to identifying and analyzing patterns and trends in crime. Using crime analysis and text mining, we can analyze the modus operandi of crimes to help reduce offenses. However, many fraud victims rarely report their problems to the police and tend to share their fraud case stories on their social media. Therefore, a suitable dataset is needed to further analyze fraud cases. TF-IDF is a weighting approach used to determine the importance of a word. Thus, this study turns raw data into data that can be processed for analysis. The data used are 39,964 entries from the social media platform Twitter that contain the Indonesian keyword "penipuan" (fraud). This study uses text preprocessing techniques to clean the data of information that is not useful for the analysis process. This research produces the frequencies of the terms that appear and visualizes them.
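The TF-IDF weighting mentioned above can be computed directly from its definition, tf-idf(t, d) = tf(t, d) × log(N / df(t)). The three example tweets below are invented; real pipelines typically use a library implementation with smoothing.

```python
# Minimal TF-IDF over a toy corpus of tweets. tf is the term's relative
# frequency in the document; idf = log(N / df) down-weights terms that
# appear in many documents. Example tweets are hypothetical.
import math

def tfidf(corpus):
    docs = [tweet.lower().split() for tweet in corpus]
    n = len(docs)
    df = {}
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    weights = []
    for doc in docs:
        w = {}
        for term in doc:
            tf = doc.count(term) / len(doc)
            w[term] = tf * math.log(n / df[term])
        weights.append(w)
    return weights

tweets = ["awas penipuan online", "penipuan berkedok undian", "jual pulsa murah"]
w = tfidf(tweets)
print(w[0])
```

Note that a keyword shared by many tweets (like "penipuan" in a corpus collected on that keyword) receives a low weight, which is precisely why TF-IDF surfaces the distinctive modus-operandi terms instead.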
pp. 181-186
Performance Analysis of Implementation Model Architecture Reference and Master Data Management using Open Source Platform
Immanuela Christiantari Perdana, Tien Fabrianti Kusumasari and Ekky Novriza Alam (Telkom University, Indonesia)
Data has become the most critical asset in a company. Of course, data that is used as a reference in corporate decision making must be accurate, relevant, and consistent. Data stored in a company is spread across the company's databases, and to manage this scattered data, Master Data Management (MDM) is needed to maintain data consistency. The selection of an appropriate MDM implementation model will support the successful implementation of MDM. This paper aims to improve and measure the performance of the MDM management process that has been built. Application performance in this study was measured using the performance testing method, determining the performance of the existing and the improved MDM applications to find out which performs more effectively. The benefit of this research, learning about the effective performance of MDM applications, will be of value to the company.
pp. 187-191
Analysis and Design of Master Data Monitoring Application using Open Source Tools: A Case Study at Government Agency
Muhammad Ariq Naufal, Tien Fabrianti Kusumasari and Ekky Novriza Alam (Telkom University, Indonesia)
Master data management is a process to integrate several data sources into one source of good, consistent, and evenly standardized data. Several companies have succeeded in creating a good-quality data source for their data; however, some companies have yet to build such a system. A government agency in Indonesia faces difficulties in managing its data sources. The problems lie in data duplication, lack of control over data references, and the lack of an application to monitor all data sources. This paper sets out to fix the MDM monitoring dashboard that was created previously. The monitoring dashboard is created with the PureShare method. This paper designs a monitoring dashboard that can monitor the process and quality of master data. The benefit of this research is obtaining good-quality, consistent master data that can be used across the organization and whose authenticity can be guaranteed.
pp. 192-196
Optimization of Decision Tree Algorithm in Text Classification of Job Applicants Using Particle Swarm Optimization
Wikan Kuncara Jati and Kemas Lhaksmana (Telkom University, Indonesia)
A job interview is one of the stages that must be passed by job applicants before getting a job. However, a manual interview process incurs a large cost and selection time, so a system is needed that can recommend which applicants are qualified. Currently there are many studies of text classification using Naive Bayes, K-Nearest Neighbor, Support Vector Machine, and deep learning methods. Therefore, an algorithm that is used less frequently, the Decision Tree, is tested in this study, and the swarm intelligence method Particle Swarm Optimization is implemented to improve its performance. Thus, this research focuses on testing and comparing the ordinary Decision Tree method and a Decision Tree optimized with Particle Swarm Optimization. From the test results, the accuracy of the optimized model increased by 7.1%, and the highest accuracy achieved was 74.3%.
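The optimization loop above can be sketched with a minimal Particle Swarm Optimization. In the paper the objective would be the Decision Tree's validation error as a function of its hyper-parameters; here a simple quadratic stands in so the sketch stays self-contained, and the PSO constants are conventional defaults, not values from the paper.

```python
# Minimal 1-D Particle Swarm Optimization. Each particle is pulled toward
# its own best position (cognitive term) and the swarm's best (social term).
import random

def pso(objective, lo, hi, n_particles=20, iters=60,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                      # best position seen per particle
    pbest_val = [objective(p) for p in pos]
    gbest = min(pbest, key=objective)   # best position in the swarm
    for _ in range(iters):
        for i in range(n_particles):
            # Velocity update: inertia + cognitive pull + social pull.
            vel[i] = (w * vel[i]
                      + c1 * rng.random() * (pbest[i] - pos[i])
                      + c2 * rng.random() * (gbest - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i], val
                if val < objective(gbest):
                    gbest = pos[i]
    return gbest

# Stand-in objective with its minimum at 7 (imagine "best tree depth").
best = pso(lambda d: (d - 7) ** 2, lo=1, hi=20)
print(best)
```

For real hyper-parameter tuning, the lambda would be replaced by a function that trains a Decision Tree with the candidate settings and returns its cross-validated error.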
pp. 197-201
Impact of Feature Extraction and Feature Selection using Naïve Bayes on Indonesian Personality Trait
Ahmad Fikri Iskandar, Ema Utami and Agung Budi Prasetio (Universitas Amikom Yogyakarta, Indonesia)
Personality is a characteristic that makes every human being different from one another. Measuring personality using a conventional approach takes a long time, depending on the content of the questionnaire given and the focus of the researcher. Twitter is one of today's trending social media platforms and consists of a large amount of unstructured text data. The behavior displayed in the content of users' tweets on Twitter is influenced by their personalities, making it easy for researchers to use tweets as data. Naïve Bayes assumes all features in the data are independent; this model works based on the probabilities of variables appearing in a data feature. Three scenarios were carried out in this work. The best average accuracies, obtained in the third scenario, are 74.57%, 79.83%, 77.17%, and 66.89%. Several factors contributed to increased accuracy, namely feature extraction, feature selection, and data splitting.
pp. 202-207
E-Service Quality Assessment of Mobile-based Smart Regency with M-S-QUAL Approach
Aang Kisnu Darmawan, Dr (Madura Islamic University & Faculty of Engineering, Indonesia); Daniel Siahaan (Institut teknologi Sepuluh Nopember, Indonesia); Tony Dwi Susanto (ITS, Indonesia); Hoiriyah Hoiriyah, Busro Umam and Anwari Anwari (Madura Islamic University, Indonesia)
Today, almost every country struggles to implement city management with the concept of smart cities. Several previous studies have produced models to measure the quality of information system services, and some have measured service quality and experience quality in the context of smart city applications. However, little work explores the service quality of the smart district model, which is quite different from the common smart city concept. This study aims to evaluate the service quality of a mobile-based smart regency (district) application. The model and approach used is the Mobile Service Quality (M-S-QUAL) model, a development and adaptation of the earlier SERVQUAL, a popular model for measuring the quality of information system services. Data were collected from one hundred and seventy respondents using the online survey method, and data processing used SmartPLS v.3.2.8 software. The research findings indicate that the variables proposed in the M-S-QUAL model have positive and essential relationships for measuring the quality of smart district services. This research contributes to determining the effective use of smart regency mobile applications and recommends that policymakers pay more attention to the important issues that influence the performance of smart district mobile service quality.
pp. 208-213

Tuesday, November 24 9:00 - 12:00

Opening Ceremony + Plenary Speakers

Rooms: Room A, Room B, Room C, Room D, Room E

Plenary

Tuesday, November 24 12:00 - 1:00

Break: Break Time

Rooms: Room A, Room B, Room C, Room D, Room E

Break Time

Tuesday, November 24 1:00 - 3:00

2A: Parallel Session 2-A

Room A
Chair: Tonny Hidayat (Universitas Amikom Yogyakarta, Indonesia)
Decision Making Framework Based On Sentiment Analysis in Twitter Using SAW and Machine Learning Approach
Erna Daniati (Indonesia & Universitas Nusantara PGRI Kediri, Indonesia); Hastari Utama (Universitas Amikom Yogyakarta, Indonesia)
One type of social media that is often used is Twitter. Social media has developed so fast that, as of July 2018, Twitter had 326 million users producing 500 million tweets every day. Users can send, change, and read short messages called tweets. Tweets can contain facts or opinions, so they are very useful to analyze. The results of such analysis can take the form of stock market predictions, election predictions, reactions to events or news, and subjectivity measurement. Analyzing tweets in this way is a sentiment analysis activity. However, the results of sentiment analysis are cumulative percentages of tweet polarity and only provide an overview for decision making, so the intuitive aspect still plays a role in acting on the sentiment analysis results obtained. Therefore, more specific modeling of sentiment analysis results is needed. In decision making, the results of sentiment analysis sit in the Intelligence phase, also called problem discovery; to proceed to the Design phase and through to Choice, a Decision Support System (DSS) is necessary. This study proposes a decision-making framework based on the results of sentiment analysis of a tweet dataset. The sentiment analysis is built using a machine learning approach. Furthermore, the results of this study indicate that the SAW method can accept the tweet-count polarity as input and produce alternative decision weights.
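The SAW (Simple Additive Weighting) step mentioned above normalizes each criterion and takes a weighted sum. The sketch below uses hypothetical tweet-polarity counts as criteria; the alternatives, weights, and benefit/cost labels are invented for illustration.

```python
# Simple Additive Weighting (SAW) sketch: normalize each criterion
# (benefit: x / column max, cost: column min / x), then take the
# weighted sum per alternative. All data here is hypothetical.

def saw(alternatives, weights, benefit):
    cols = list(zip(*alternatives))
    scores = []
    for row in alternatives:
        nr = []
        for j, x in enumerate(row):
            if benefit[j]:
                nr.append(x / max(cols[j]))   # benefit criterion
            else:
                nr.append(min(cols[j]) / x)   # cost criterion
        scores.append(sum(w * v for w, v in zip(weights, nr)))
    return scores

# Hypothetical alternatives scored on (positive tweets, negative tweets);
# positive count is a benefit criterion, negative count is a cost criterion.
counts = [[120, 30], [90, 10], [60, 60]]
ranks = saw(counts, weights=[0.6, 0.4], benefit=[True, False])
print(ranks)  # the highest value is the recommended alternative
```

In the framework described above, such polarity counts from the sentiment analysis stage would feed directly into this ranking to move from Intelligence through Design to Choice.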
pp. 214-218
Product Segmentation Based on Sales Transaction Data Using Agglomerative Hierarchical Clustering and FMC Model (Case Study: XYZ Company)
Crisnandra Rahmita Mardiantien (Telkom University, Indonesia); Imelda Atastina (School of Electrical Engineering and Informatics, Bandung Institute of Technology & Telkom University, Indonesia); Ibnu Asror (Telkom University, Indonesia)
The availability of large amounts of data and ever-growing data dimensions is a challenge for companies seeking to create business opportunities by utilizing the data. Large volumes of data lead companies to search for information within the data so that it can be used to grow their business. Knowledge or information can be extracted from data using one of the techniques in data mining, namely cluster analysis, which allows companies to get information about the clusters of objects in the data they own. In this research, cluster analysis of medicinal products was conducted on XYZ Company transaction data using the FMC (Frequency, Monetary, and Customer Variety) business approach model and the Agglomerative Hierarchical Clustering algorithm. The results showed that in the XYZ Company transaction data there are eight product clusters that can provide information to XYZ Company. Around 60.5% of products in 2018 and 78.8% of products in 2019 belong to clusters with a low FMC score. Therefore, by segmenting products, XYZ Company can identify products that require more attention and determine the right marketing strategy for each product.
pp. 219-224
Measuring Software Size and Effort Estimation on Islamic Banking Application
Renny Sari Dewi, Yogantara Dharmawan and Siti Aisah (Universitas Internasional Semen Indonesia, Indonesia)
To maintain customer trust, Islamic banking continues to improve its services, which rely on the use of information technology (software and its supporting peripherals). In the meantime, research on software in Islamic banking is still limited. Thus, the authors are interested in calculating the size of, and the effort of working on, the core application of an Islamic bank, called the Islamic Banking Application (IBA). Two methods are used in the study to measure the size of the IBA: size can be seen from the point of view of functional processes and from non-functional aspects. For the functional one, we propose the Use Case Points (UCP) method, which has at least 4 steps that must be carried out to measure software size. On the other hand, for the non-functional measurement, we used the original Function Point Analysis (FPA) concepts. As a result, this research measures the software size at 2,109.40 function points using the FPA technique, while the UCP method yields 3,024.54 use case points. To convert the size measurement into an effort estimate, the authors use previous research with a productivity rate of 8.2 person-hours per point. Thus, the predicted effort to develop the IBA is 17,297.08 person-hours (based on the FPA technique) and 24,801.23 person-hours (by the UCP method).
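The effort figures quoted above follow from a single multiplication of size by the productivity rate, which can be checked directly:

```python
# Size-to-effort conversion as described: effort (person-hours) =
# size (points) x productivity rate. The rate of 8.2 comes from the
# prior research cited in the abstract.

PRODUCTIVITY_RATE = 8.2  # person-hours per point

def effort(size_points):
    return round(size_points * PRODUCTIVITY_RATE, 2)

print(effort(2109.40))  # FPA size  -> 17297.08 person-hours
print(effort(3024.54))  # UCP size  -> 24801.23 person-hours
```

Both reported effort values are reproduced exactly by this multiplication, confirming the abstract's arithmetic.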
pp. 225-229
Exploring Factors Influencing Smart Sustainable City Adoption using E-Government Services Effectiveness Evaluation Framework (E-GEEF)
Aang Kisnu Darmawan, Dr (Madura Islamic University & Faculty of Engineering, Indonesia); Daniel Siahaan (Institut teknologi Sepuluh Nopember, Indonesia); Tony Dwi Susanto (ITS, Indonesia); Hoiriyah Hoiriyah, Busro Umam and Bakir Bakir (Madura Islamic University, Indonesia)
Smart Sustainable City is an ICT-based integrated governance concept to efficiently manage city resources, expected to be an answer to problems in increasingly complex and multidimensional city management. However, in the application of smart cities, several critical issues must be resolved immediately, including human resources that are still insufficiently competent, inefficient governance and ICT policies, lack of government dedication, and low participation of stakeholders and society. This paper explores the main factors influencing the progress of smart city adoption in Indonesia. The approach adopted is the E-Government Services Effectiveness Evaluation Framework (E-GEEF), a highly comprehensive model for measuring information technology adoption. Data collection using purposive and stratified random sampling was conducted by collecting questionnaires from 288 service users and smart city stakeholders on Madura Island. Data analysis, supported by AMOS 23 software, was conducted using typical structural equation modeling. The results of this study show that the building elements of the E-GEEF model have significant relationships that are beneficial to the adoption of the Smart Sustainable City. This research contributes a model of the relationships between critical factors that affect technology adoption in a Smart Sustainable City, and recommends that policymakers and local authorities pay more attention to issues that influence successfully implementing a Smart Sustainable City.
pp. 230-235
A Project-level Investigation of Software Commit Comments and Code Quality
Dan Chen and Sally Elizabeth Goldin (School of Engineering, Thailand)
Requiring useful and meaningful commit comments is assumed to contribute to an effective software development process, but to the best of our knowledge, no research has demonstrated an empirical link between commit comment quality and code quality. To fill this gap, in this work, we acquired 120 open source projects from GitHub, examining the relationships between the commit comment quality and code quality over a 6-month period. We first calculated a set of comment quality metrics and a set of code quality metrics. Two of these metrics, comment expressiveness and code thrashing frequency, were newly developed as part of this research. Then we evaluated the relationship between these two sets of metrics. We found strong evidence that the new thrashing frequency metric is a valid measure of code quality. There were significant correlations between the new expressiveness metric and other comment quality metrics. Considering our main hypothesis, we found some significant relationships between the code quality metrics and the comment quality metrics, but the results suggest that relationships between comments and code are more complex than suggested by our original hypothesis.
pp. 236-241
Assessing User Experience of a Secure Mobile Exam Application using UEQ+
Bayu Setiaji and Mardhiya Hayaty (Universitas AMIKOM Yogyakarta, Indonesia); Arief Setyanto (Universitas AMIKOM Yogyakarta, Indonesia); Krisnawati Krisnawati (University of AMIKOM Yogyakarta, Indonesia); Harry Budi Santoso (Universitas Indonesia, Indonesia)
Online examination gained popularity during the COVID-19 pandemic due to the massive adoption of online learning. In an online examination, human proctoring is hardly possible, so an alternative way to keep academic integrity is needed. Cameras, microphones, and gyro sensors are now standard embedded sensors in smartphones, and this research utilizes those sensors to detect possible cheating in online examinations. A mobile application with two main functions was developed: recording user activity during the exam as a background process and carrying out the examination in the foreground. Consequently, users may feel uncomfortable with the activity-recording requirement. This research investigates many aspects of user convenience during examinations under varied user locations, available bandwidth, and devices. A User Experience Questionnaire Plus (UEQ+) was used to understand how users felt while engaging with the application. Our analysis shows no strong correlation between key performance indicator achievement and smartphone specifications. According to the questionnaire analysis, users rated the secure exam application at 1.61 points on a scale of -3 to 3, indicating that users in general had a positive experience with the mobile exam application.
pp. 242-247
Android Malware Detection Using Hybrid-Based Analysis & Deep Neural Network
Raden Budiarto Hadiprakoso (Poltek Siber dan Sandi Negara, Indonesia)
Currently, the Android operating system on smartphone devices is growing rapidly. The comfort people feel in using smartphones for activities such as communication and playing games drives the popularity of smartphone use. However, the Android platform is now a target for cybercrime and security threats such as malicious software, or malware. Identifying this malware is very important for maintaining user security and privacy, but because the malware identification process is increasingly complicated, deep learning is needed for malware classification. This study compiles static and dynamic analysis features from benign and malicious applications. The features extracted from the APK consist of the API call sequence, system commands, manifest permissions, and intents. We then process this data using a deep neural network. We also concentrated on maximizing performance by tuning several configurations to find the best combination of hyper-parameters and reach the highest statistical metric values. Experimental results show that our model reached 99.08% accuracy, 98.14% recall, and 99.54% precision.
pp. 248-252

2B: Parallel Session 2-B

Room B
Chair: Bety Wulan Sari (Universitas AMIKOM Yogyakarta, Indonesia)
Smart Kost: a Proposed New Normal Boarding House Controlling and Monitoring System in Industry 4.0 Era
Alfredo Gormantara (Universitas Atma Jaya Makassar, Indonesia); Julius Galih Prima Negara and Suyoto Suyoto (Universitas Atma Jaya Yogyakarta, Indonesia)
Industry 4.0 is the term for automation and data transparency using the latest technology. This concept can be applied to several aspects of life besides the manufacturing industry. The boarding house is a type of business that has thrived in D.I. Yogyakarta, becoming a place of residence for students and workers in urban areas. As a business, boarding houses need a good management system to provide the best service and run efficiently, yet at present they are still managed traditionally. With the development of IoT (Internet of Things) technology, several of these problems and challenges can be solved. Five types of sensors are used in this research; each sensor sends data, and the information is processed in a software application. This research uses a mobile application integrated with the sensors to monitor and control boarding rooms, and it can be used to face new-normal challenges with the latest technology: boarding house owners can be well prepared for contactless processes, energy efficiency, and open information and data. The proposed system is predicted to save electricity costs for each bedroom by approximately 38% and also to reduce the electricity bills paid by managers.
pp. 253-257
Performance Analysis on x86 Architecture Microprocessor for Lightweight Encryption
Nyoman Karna, Shafira Febriani and Ramdhan Nugraha (Telkom University, Indonesia); Dong Seong Kim (Kumoh National Institute of Technology, Korea (South))
Encryption is a process of replacing information with an unreadable code using a specific algorithm, so that only people who have the key and the algorithm can recover the original information. Encryption, and the subsequent decryption process, requires high computational power, including transposition, substitution, and iteration, which affects speed and other aspects of performance, especially on a generic processor. This research tries to find the optimum machine instruction composition for lightweight encryption in order to establish a design for an Application-Specific Instruction Set Processor (ASIP). The lightweight encryption algorithms used in this research are the Data Encryption Standard (DES) and the Advanced Encryption Standard (AES), implemented on the x86 instruction set architecture; the machine instruction composition and computational speed of the two programs are compared. For machine instruction composition, the results show that AES requires fewer instructions, with 1537 instructions in the separate I/O data scenario and 1487 instructions in the data overwrite scenario. For both algorithms, the most widely used machine instruction type is data transfer. From a performance point of view, AES is faster than DES, with an average computational time of 0.0544653 seconds in the separate I/O data scenario and 0.0520902 seconds in the data overwrite scenario.
pp. 258-261
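As a toy illustration of the instruction-composition analysis this abstract describes (not the authors' tooling), one can tally mnemonic categories from a textual disassembly listing; the mnemonic-to-category mapping below is an assumed simplification:

```python
from collections import Counter

# Hypothetical grouping of x86 mnemonics into categories;
# the paper's own taxonomy may differ.
CATEGORY = {
    "mov": "data_transfer", "push": "data_transfer", "pop": "data_transfer",
    "xchg": "data_transfer",
    "add": "arithmetic", "sub": "arithmetic", "imul": "arithmetic",
    "xor": "logical", "and": "logical", "or": "logical",
    "jmp": "control", "je": "control", "call": "control", "ret": "control",
}

def instruction_mix(disassembly: str) -> Counter:
    """Count instruction categories in a textual disassembly listing."""
    counts = Counter()
    for line in disassembly.strip().splitlines():
        mnemonic = line.split()[0].lower()
        counts[CATEGORY.get(mnemonic, "other")] += 1
    return counts

listing = """
mov eax, [esi]
xor eax, ebx
mov [edi], eax
add esi, 4
jmp loop_top
"""
print(instruction_mix(listing))  # data transfer dominates, as in the paper
```

Run over a full disassembly of an encryption routine, such a tally would yield the per-category composition the paper reports.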
Terahertz Imaging Simulation on A Rectangular Metal Object with Bowtie Antenna Coupled Bolometer Sensor
Hendry Steven Marbun and Catur Apriono (Universitas Indonesia, Indonesia)
Security technologies play a big role in the development of air transportation, preventing plane accidents, including acts of terrorism. Currently, large airports worldwide use X-ray backscatter and millimeter-wave technology for metal detection on the human body. However, X-ray backscatter technology has an ionizing impact, and millimeter-wave technology yields low-accuracy results. The use of THz waves, which occupy the electromagnetic spectrum between X-rays and millimeter waves, has become an attractive alternative for imaging applications. This paper discusses an imaging simulation of a metal object irradiated with THz waves using CST Microwave Studio. The considered radiation detection sensor is a bowtie antenna coupled with a bolometer structure whose main resonant frequency is 1 THz. The bolometer has a resistivity of 8.75×10⁻⁶ Ωm. The power absorbed in each bolometer becomes one unit value in the imaging result. Simulations with varied radiated frequencies show that greater radiation incident on the metal object produces a lower diffraction effect. Further studies on a wider frequency band of the detecting antenna are necessary to observe the antenna resonance and its effectiveness in absorbing irradiated power.
pp. 262-266
Simulation of Fiber Optic Chemical Sensor for Monitoring of pH Level
Harisa Rahmah, Budi Mulyanti, Roer Eka Pawinanto and Arjuni Budi Pantjawati (Universitas Pendidikan Indonesia, Indonesia); Lilik Hasanah (Universitas Pendidikan Indonesia, Malaysia); Wawan Purnama (Universitas Pendidikan Indonesia, Indonesia)
Water is a natural resource that is very beneficial to aquatic life in fishponds. One of the key factors for fishpond water is the pH level, which greatly affects the sustainability of fish life. This study aims to design fiber-optic-based sensors (FOSs) that can monitor pH levels in fishpond water. The wavelengths of potential light sources for the sensor, the refractive index of the cladding, and the effect of changing pH levels on sensor sensitivity are also studied. The method used is simulation with the Lumerical MODE Solutions software. The parameters used in the study include the light source wavelength, the diameter and length of the cladding, and pH levels in the range of 6.5-8.5. The results show that light sources with wavelengths of 1550 nm and 1310 nm are suitable as ideal indicators for the FOS. The optimum cladding refractive index is a value close to the core's refractive index. In addition, the electric field increases as the pH value increases. The resulting FOS has a sensitivity of 7.18638×10^(-3) Vm^(-1)/au at a pH level of 7.5.
pp. 267-271
Design System Body Temperature and Blood Pressure Monitoring Based on Internet of Things
Alamsyah Alamsyah (Tadulako University, Indonesia)
The current development of technology aims to facilitate all human activities and work practices. One field that requires fast and efficient information services is the health sector. Health checks are very important for determining a person's physical condition and for early prevention, yet equipment in hospitals and health centers is still operated conventionally, using cable media to send patient data. One example of conventional equipment used by medical personnel to take body temperature and blood pressure is the thermometer and sphygmomanometer. This certainly requires more time to process patient data because it does not work in real time. To improve health services, a monitoring system for body temperature and blood pressure based on the Internet of Things (IoT) is proposed using the Arduino Uno module. The purpose of this study is to make it easier for medical personnel to monitor a patient's health condition in real time, reduce the burden on medical personnel, and reduce errors in the data collection process. The research proceeded from a literature study through hardware and software design, testing, data collection, and analysis of the collected data. The accuracy rates obtained are 98.87% for the diastolic and systolic blood pressure sensors and 99.50% for the body temperature sensor.
pp. 272-275
Quadruped Robot Control Based on Adaptive Neuro-Fuzzy Inference System With V-REP Simulator
Sigit Wasista (Institut Teknologi Sepuluh Nopember, Indonesia); Handayani Tjandrasa (Sepuluh Nopember Institute of Technology, Indonesia); Waskitho Wibisono (Institut Teknologi Sepuluh Nopember, Indonesia)
This study aims to design a controller for a new medium-sized quadruped robot called Kancil, which has four legs with two degrees of freedom each (4 × 2-DOF) and an overall weight of 5 kg. ANFIS is used here as a balance regulator for the movement of the robot's legs, and CPG-VDP as a periodic drive system. The MIMO ANFIS structure is designed with two inputs and four outputs, which are used to control the shoulder movements of the quadruped robot to maintain body balance so that it does not fall. Gyro sensor tilt input ranges from -45 degrees to 45 degrees and is learned by the ANFIS engine. The ANFIS output is then converted into a leg path and simulated using the V-REP simulator software. In the test results, the robot can pass through obstacles while walking down 30-degree and 45-degree slopes in a balanced state without falling.
pp. 276-281
Fishing and Military Ship Recognition using Parameters of Convolutional Neural Network
Adinda Maharani Dwi Yuan Syah (University of Indonesia, Indonesia); Meirista Wulandari and Dadang Gunawan (Universitas Indonesia, Indonesia)
Indonesia has a maritime boundary that is vulnerable to illegal activities, which cause heavy losses to Indonesia's income. Therefore, monitoring every object passing through the maritime boundary is important, and detecting ships crossing the ocean is one way to do so. Nowadays, many systems are being developed to detect and recognize ships automatically, especially fishing ships and military ships. The recognition adopts a technology called the Convolutional Neural Network (CNN), an image-based deep learning algorithm. CNN has many parameters that can be optimized to improve the recognition system. This study investigated parameters such as the pooling layer, batch normalization, and dropout. The best results on the fishing ship and military ship dataset were 99.99% training accuracy and 90% validation accuracy, obtained using a pooling layer of the max pooling type. Max pooling is more efficient for object recognition than average pooling, the dropout function can increase training accuracy, and batch normalization can increase validation accuracy.
pp. 282-286
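The max-versus-average pooling comparison this abstract investigates can be sketched in plain Python (an illustrative toy over a small feature map, not the authors' CNN code):

```python
def pool2d(image, size=2, mode="max"):
    """Non-overlapping pooling over a 2D list; stride equals window size."""
    h, w = len(image), len(image[0])
    out = []
    for i in range(0, h - size + 1, size):
        row = []
        for j in range(0, w - size + 1, size):
            window = [image[i + di][j + dj]
                      for di in range(size) for dj in range(size)]
            # Max pooling keeps the strongest activation in each window;
            # average pooling blends all activations together.
            row.append(max(window) if mode == "max" else sum(window) / len(window))
        out.append(row)
    return out

feature_map = [
    [1, 3, 2, 0],
    [4, 2, 1, 1],
    [0, 1, 5, 6],
    [2, 2, 7, 8],
]
print(pool2d(feature_map, mode="max"))  # [[4, 2], [2, 8]]
print(pool2d(feature_map, mode="avg"))  # [[2.5, 1.0], [1.25, 6.5]]
```

Max pooling's preference for the strongest response in each window is one intuition behind the paper's finding that it suits object recognition better than averaging.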
Blinking Eyes Detection using Convolutional Neural Network on Video Data
Firman Matiinu Sigit (Institut Teknologi Sepuluh Nopember, Indonesia); Eko Mulyanto Yuniarno (Institut Teknologi Sepuluh November, Indonesia); Reza Fuad Rachmadi and Ahmad Zaini (Institut Teknologi Sepuluh Nopember, Indonesia)
Drowsiness affects the behaviour of several parts of the human body, such as the eyes, mouth, and brain. When people are drowsy, their eyes blink more frequently, their eyes stay closed longer than normal, the distance between the upper and lower eyelids is shorter than normal, their mouths open a little more than usual, and their brains produce very low-frequency signals. These behavioural changes make it possible to detect drowsy conditions even when the person is unaware of them. The decreasing distance between the upper and lower eyelids during drowsiness is the basic idea of this research: we want to build a machine learning system that detects blinking eyes from images in real time. We collected images of faces and eyes to build datasets and separated each dataset into two label categories based on our target classifications, "opened eyes" and "closed eyes". There are three datasets in this research, containing 6000, 8000, and 10000 images of faces and eyes respectively, each collected from one sample person (the author). The 10000-image dataset differs slightly from the other two in its closed-eyes category: it contains 4000 perfectly closed-eye images and 1000 half-closed-eye images under the closed-eyes label. Each dataset was used to train a convolutional neural network (CNN), giving three pretrained CNNs.
These pretrained CNNs were tested on real-time blinking-eye detection with 11 different subjects: the author and 10 other people. From this test, we conclude that detection succeeds most often when the CNNs are tested on the author's face placed exactly at the center of the camera frame, with a success rate of 0.95 (19 out of every 20 detections).
pp. 287-292

2C: Parallel Session 2-C

Room C
Chair: Andi Sunyoto (Universitas Amikom Yogyakarta, Indonesia)
Multiclass Classification of Modulation Formats in the presence of Rayleigh and Rician Channel Noise using Deep Learning Methods
Rahim Khan, Yang Qiang, Ahsan Bin Tufail and Alam Noor (Harbin Institute of Technology, China)
Wireless communication technologies have completely revolutionized the communication landscape. In the last four decades, the transition from 1G to 5G communication systems has opened up many new possibilities. Automated modulation format recognition is a major step, with wide applications in intelligent communication systems, towards accurate signal detection on the receiver side. Deep learning techniques are in widespread use in image recognition, object detection, speech recognition, and reinforcement learning. In this study, we compared and contrasted 2D and 3D Convolutional Neural Networks (CNNs) for the task of automatic recognition of modulation formats. We used random images of cats, modulated using 16- and 64-Quadrature Amplitude Modulation (QAM) formats, passed through Rayleigh and Rician channel models, and classified using both 2D and 3D CNN architectures. We used 5- and 10-fold cross-validation procedures to train multiclass (4-class) classifiers. We found the performance of the 3D CNN architecture trained with 10-fold cross-validation to be the best in terms of the reported metrics, and the performance of the 2D CNN architecture trained with 5-fold cross-validation to be the worst. In general, we found the 3D architectures to perform better than their 2D counterparts.
pp. 293-297
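The 5- and 10-fold cross-validation procedures mentioned in the abstract above can be sketched as a simple index-splitting routine (a generic illustration, not the authors' training pipeline):

```python
def kfold_indices(n_samples, k):
    """Yield (train, validation) index lists for k-fold cross-validation.
    Each sample lands in exactly one validation fold."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    start = 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n_samples))
        yield train, val
        start += size

# 10 samples, 5 folds: five (train, validation) splits of sizes (8, 2).
folds = list(kfold_indices(10, 5))
for train, val in folds:
    print(val)  # each validation fold holds two distinct indices
```

A model is trained once per fold and the reported metric is the average over the k validation folds, which is how the 5-fold and 10-fold results in the paper would be obtained.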
Detection Of CT-Scan Lungs COVID-19 Image Using Convolutional Neural Network And Contrast Limited Adaptive Histogram Equalization
Ronaldus Morgan James and Andi Sunyoto (Universitas Amikom Yogyakarta, Indonesia)
Coronavirus is a virus that can cause disease in humans and animals; in humans it can cause respiratory infections. The spread of this virus is very wide, reaching almost the entire world, including Indonesia. On July 16, 2020, the total number of coronavirus cases in Indonesia reached 81,668 people. COVID-19 detection is a major task for medical professionals today because the virus spreads very fast. Because of the soaring number of COVID-19 patients and limited test kits, the diagnosis process takes quite a long time, so a system for fast, automatic coronavirus detection is needed as an alternative. This study intends to help medical practitioners detect COVID-19-infected lungs in CT-scan images. The methods used are Contrast Limited Adaptive Histogram Equalization (CLAHE) to improve the quality of the COVID-19 lung CT-scan images and a Convolutional Neural Network (CNN) for the image classification process. The dataset consists of 698 JPG images. This study uses three convolutional layers, three max-pooling layers, and two fully connected layers, resulting in an accuracy of 83.28%.
pp. 298-303
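CLAHE builds on plain histogram equalization, adding local tiling and contrast clipping; a minimal sketch of the underlying global equalization step (an illustration, not the paper's CLAHE implementation) is:

```python
def equalize(gray, levels=256):
    """Global histogram equalization on a flat list of pixel intensities.
    CLAHE extends this idea by equalizing local tiles and clipping the
    histogram to limit contrast amplification."""
    n = len(gray)
    hist = [0] * levels
    for p in gray:
        hist[p] += 1
    # Cumulative distribution function of the intensity histogram.
    cdf, running = [], 0
    for count in hist:
        running += count
        cdf.append(running)
    cdf_min = next(c for c in cdf if c > 0)
    # Map each pixel so the CDF becomes approximately linear.
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in gray]

# A low-contrast strip of pixels spreads across the full intensity range.
pixels = [100, 100, 101, 101, 102, 102, 103, 103]
print(equalize(pixels))  # [0, 0, 85, 85, 170, 170, 255, 255]
```

In practice one would use a library routine (e.g. OpenCV's CLAHE) on the CT-scan images; the sketch only shows why equalization improves contrast before classification.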
Performance Comparison of Mushroom Types Classification using K-Nearest Neighbor and Decision Tree
Nadya Chitayae and Andi Sunyoto (Universitas Amikom Yogyakarta, Indonesia)
There are currently an estimated 1.5 million species of mushrooms in the world, and among them are two types: edible and poisonous mushrooms. Many people get food poisoning because they do not know that a mushroom is poisonous, and several countries have reported such poisoning cases. However, identifying edible and poisonous mushrooms is not easy because of the large number of mushrooms and their similar characteristics. Mushroom types can be identified using data mining, specifically classification, which can help find essential patterns in millions or even billions of data records. The methods used here to classify mushroom types are K-Nearest Neighbor and Decision Tree. The performance of the two methods was compared to determine which classifies mushroom types better. Experimental analysis conducted on the UCI Mushroom dataset provides evidence of the proposed methods' effectiveness and identifies the most appropriate method for mushroom classification. The results indicate that the Decision Tree method performs better, with an accuracy of 0.9193 (91.93%), a precision of 0.9227, a recall of 0.9193, and an F1-score of 0.9210.
pp. 304-309
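The K-Nearest Neighbor vote at the heart of this comparison can be sketched in a few lines (toy numeric features, not the real UCI Mushroom attributes):

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.
    `train` is a list of (feature_vector, label) pairs."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical numeric encoding of two mushroom traits.
train = [
    ((1.0, 0.2), "edible"), ((0.9, 0.1), "edible"), ((1.1, 0.3), "edible"),
    ((0.1, 0.9), "poisonous"), ((0.2, 1.0), "poisonous"),
]
print(knn_predict(train, (1.0, 0.25)))  # edible
```

A Decision Tree instead learns explicit attribute-threshold rules, which is why its decisions are easier to inspect; the paper finds it also scores higher on this dataset.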
Accent Recognition by Native Language using Mel-Frequency Cepstral Coefficients and K Nearest Neighbor
Dwi Sari Widyowaty and Andi Sunyoto (Universitas Amikom Yogyakarta, Indonesia)
English is used for communication almost everywhere in the world, and English accents appear in various regions. Every country that communicates in English has a different accent, for example England, France, Spain, Saudi Arabia, Korea, and China. These accent differences are influenced by environment, culture, and birthplace. Accent recognition is important: by recognizing a speaker's accent, the speaker's origin can be known. In the future, recognizing accented English will be pivotal, both in learning and in Automatic Speech Recognition (ASR) systems, which makes accent recognition an interesting research topic. This paper aims to contribute methods for accent recognition. It recognizes the native language of speakers of English, French, Spanish, Arabic, Korean, and Mandarin using MFCC feature extraction and the KNN method. This technique reaches 57% accuracy with the best K = 3, an improvement in accuracy compared to previous studies.
pp. 310-315
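MFCC extraction rests on the standard Hz-to-mel mapping, with filter-bank centers spaced evenly on the mel scale; a brief sketch of that mapping (the generic formula, not the authors' extraction code):

```python
import math

def hz_to_mel(f_hz):
    """Standard mel-scale mapping used when building MFCC filter banks."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(mel):
    """Inverse mapping from mel back to Hz."""
    return 700.0 * (10.0 ** (mel / 2595.0) - 1.0)

# Mel filter-bank center frequencies: evenly spaced in mel, so they
# grow denser at low frequencies where hearing is more discriminating.
low, high, n_filters = hz_to_mel(0.0), hz_to_mel(8000.0), 10
centers = [mel_to_hz(low + i * (high - low) / (n_filters + 1))
           for i in range(1, n_filters + 1)]
print([round(c) for c in centers])
```

The MFCCs themselves are then the discrete cosine transform of the log energies in these triangular mel filters, and each frame's coefficient vector becomes a feature for the KNN classifier.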
Combination of Machine Learning and Decision Support System for Determining the Priority of Covid-19 Patients
Muhamad Kurniawan and Arham Rahim (AMIKOM, Indonesia); Kusrini Kusrini (AMIKOM Yogyakarta University, Indonesia)
Coronavirus Disease 2019 (COVID-19) is caused by a new type of coronavirus that has become a pandemic in various countries. The large number of people exposed to COVID-19 at the same time makes it difficult for hospitals to accommodate all patients, so they must decide which patients should get treatment first. In this study, we designed a decision-making system to determine which patients are prioritized for treatment. In addition, we integrate AI to quickly detect from X-ray images whether someone is exposed to COVID-19. The X-ray image detection model we use is based on VGG16. It obtained an F1-score of 99.4% on the COVID-19 versus non-COVID-19 dataset, 96.5% on the normal, COVID-19, and pneumonia dataset, and 98.7% on the COVID-19 and pneumonia dataset. Our model also beats other pretrained models such as DenseNet, ResNet, SqueezeNet, and Inception by a margin of around 4%. This study also shows that manual calculations and the decision support system produce the same results.
pp. 316-321
The Using of Gaussian Pyramid Decomposition, Compact Watershed Segmentation Masking and DBSCAN in Copy-Move Forgery Detection with SIFT
Firstyani Imannisa Rahma and Ema Utami (Universitas Amikom Yogyakarta, Indonesia); Hanif Fatta (Universitas AMIKOM Yogyakarta, Indonesia)
Cases of image manipulation are rising along with the growth of digital imaging. One of the more dangerous types of image manipulation is copy-move forgery, in which parts of an image are covered with pieces copied from the same image. In copy-move forgery detection, SIFT extraction is one of the methods used by many researchers; this keypoint extraction method is usually combined with other methods to improve its accuracy. This paper describes our experiments combining Gaussian Pyramid Decomposition, Compact Watershed segmentation masking, and DBSCAN clustering with SIFT for copy-move forgery detection. This combination achieves average precision above 79% and detects forgeries in shorter average times.
pp. 322-327
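DBSCAN, one component of the pipeline above, can be sketched generically over 2D keypoint coordinates (a minimal density-based clustering, not the paper's full SIFT-plus-masking pipeline):

```python
import math

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: returns a cluster label per point (-1 means noise)."""
    labels = [None] * len(points)
    cluster = -1

    def neighbors(i):
        return [j for j, q in enumerate(points)
                if math.dist(points[i], q) <= eps]

    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1  # provisionally noise; may become a border point
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # border point rescued from noise
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_neighbors = neighbors(j)
            if len(j_neighbors) >= min_pts:  # expand only from core points
                queue.extend(j_neighbors)
    return labels

# Two tight groups of matched keypoints plus one stray point.
pts = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5), (5, 5.1), (10, 10)]
print(dbscan(pts, eps=0.5, min_pts=2))  # [0, 0, 0, 1, 1, 1, -1]
```

In a copy-move detector, clustering matched SIFT keypoints this way groups the duplicated regions while leaving isolated spurious matches as noise.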
Measurement and Classification of Retinal Blood Vessel Tortuosity in Digital Fundus Images
Rezty Amalia Aras (Institut Teknologi dan Bisnis Kalla, Indonesia)
Analysis and detection of structural changes in retinal blood vessels is most important for diagnosing and detecting retinal diseases. Retinal blood vessels are normally straight or gently curved, but they tend to dilate and twist with age or with a number of retinal diseases. Tortuosity is a qualitative parameter used by ophthalmologists to describe how tortuous blood vessels are, graded as mild, moderate, severe, or extreme, so the result of the analysis remains subjective. Establishing the relationship between tortuosity and vascular pathology requires quantitative measurement of tortuosity. This research developed a computer-aided diagnosis (CAD) system to detect retinal blood vessels, measure their tortuosity, and classify them. A morphological reconstruction method is proposed to detect the retinal blood vessels. Tortuosity was then calculated using the relative length variation method, chosen because it has the best correlation to grading, 0.892, and the values were used to classify the retinal images. Two classifications were conducted using K-Nearest Neighbour: the first between normal and tortuous classes, and the second between moderate and severe tortuosity classes. The evaluation of retinal blood vessel detection obtained an accuracy of 96.2%. Classification between normal and tortuous classes obtained a best KNN accuracy of 93%, and classification between moderate and severe tortuosity classes obtained 100%. The proposed method can assist ophthalmologists in detecting blood vessels and calculating their tortuosity to diagnose retinal diseases.
pp. 328-333
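One common quantitative tortuosity index divides a vessel's arc length by its chord length; a sketch over a vessel centerline polyline (the paper's relative length variation measure may be defined differently):

```python
import math

def tortuosity_index(polyline):
    """Arc length over chord length, minus 1: 0 for a straight vessel,
    larger for a more twisted one. A common tortuosity measure; the
    paper's 'relative length variation' may differ in detail."""
    arc = sum(math.dist(polyline[i], polyline[i + 1])
              for i in range(len(polyline) - 1))
    chord = math.dist(polyline[0], polyline[-1])
    return arc / chord - 1.0

straight = [(0, 0), (1, 0), (2, 0), (3, 0)]
wavy = [(0, 0), (1, 1), (2, -1), (3, 0)]
print(tortuosity_index(straight))  # 0.0
print(tortuosity_index(wavy))      # > 0
```

Values like these, computed per detected vessel, would be the features fed to the KNN classifier to separate normal from tortuous and moderate from severe cases.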
Features Extraction Performance to Differentiate of Spinal Curvature Types using Gray Level Co-occurrence Matrix Algorithm
Yessi Jusman, Julnila Husna Lubis and Anna Nur Nazilah Chamim (Universitas Muhammadiyah Yogyakarta, Indonesia); Siti Nurul Aqmariah Mohd Kanafiah (Universiti Malaysia Perlis, Malaysia)
With the development of technology, spinal abnormalities can be detected from digital X-ray images, helping experts (as a second opinion) perform spinal abnormality diagnostics in adequate time and with more accurate results. This research aims to analyze the use of image processing techniques to extract features from two types of spinal images, normal and abnormal (i.e., scoliosis), by applying the Gray Level Co-occurrence Matrix (GLCM) algorithm with a Support Vector Machine (SVM) as the classification method. The image data used in this study were 40 images divided into 4 data sets for analysis. The analysis uses three distance parameters, namely 50, 75, and 100 pixels, and three quantization values, 8, 16, and 32. The highest accuracy obtained on one specific data set is 100%, while the highest average accuracy across distance and quantization values is 90%. The GLCM algorithm can differentiate abnormality in spinal imagery.
pp. 334-338
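The GLCM computation follows the classic Haralick definition; a minimal sketch for one pixel offset (the paper's 50-, 75-, and 100-pixel distances simply change the offset, and its quantization levels change the matrix size):

```python
def glcm(image, dx, dy, levels):
    """Gray Level Co-occurrence Matrix for one (dx, dy) offset,
    normalized to joint probabilities p(i, j)."""
    m = [[0.0] * levels for _ in range(levels)]
    h, w = len(image), len(image[0])
    total = 0
    for i in range(h):
        for j in range(w):
            ni, nj = i + dy, j + dx
            if 0 <= ni < h and 0 <= nj < w:
                m[image[i][j]][image[ni][nj]] += 1
                total += 1
    return [[c / total for c in row] for row in m]

def contrast(p):
    """Haralick contrast feature: sum of (i - j)^2 * p(i, j)."""
    return sum((i - j) ** 2 * p[i][j]
               for i in range(len(p)) for j in range(len(p)))

img = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 2, 2, 2],
    [2, 2, 3, 3],
]
p = glcm(img, dx=1, dy=0, levels=4)  # horizontal neighbor pairs
print(round(contrast(p), 3))
```

Texture features such as contrast, energy, and homogeneity computed from these matrices would then form the feature vectors for the SVM classifier.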

2D: Parallel Session 2-D

Room D
Chair: Aditya Hasymi (Universitas AMIKOM Yogyakarta, Indonesia)
Optimized Deep Transfer Learning for Covid-19 Screening using Chest X-Ray Image
Stephany Octaviani Ngesthi and Iwan Setyawan (Satya Wacana Christian University, Indonesia)
The COVID-19 pandemic continues to spread at an alarming rate. One of the methods used to screen potentially COVID-19-infected patients is analysis of chest X-ray images. However, the sheer number of patients may overwhelm the radiologists who have to perform such analysis, so automatic screening to flag COVID-19-positive cases as early as possible is desirable. In this study, transfer learning with a pre-trained deep residual network model was implemented to perform binary classification between COVID-19-infected persons and normal (i.e., non-infected) persons. We also automatically crop the input images to focus on the lung area to optimize learning performance. Our experiments show that this approach yields better performance, achieving an accuracy of 99.35% compared to 98.08% without automatic cropping. The performance of the proposed system when classifying images from a dataset completely different from the one used in training is also satisfactory: the system produces only 1 false negative (out of a dataset of 66 images) with automatic cropping, compared to 3 false negatives and 1 false positive without cropping. These results show that the pre-trained model with automatic cropping gives superior performance and is suitable for automated COVID-19 screening based on chest X-ray images.
pp. 339-344
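The automatic cropping step can be approximated generically by cropping to the bounding box of above-threshold pixels (a crude, hypothetical stand-in for the paper's lung-area cropping, shown only to illustrate the idea):

```python
def autocrop(gray, threshold):
    """Crop a 2D intensity grid to the bounding box of pixels above
    `threshold` - a crude stand-in for a lung-region cropping step."""
    rows = [i for i, row in enumerate(gray) if any(v > threshold for v in row)]
    cols = [j for j in range(len(gray[0]))
            if any(row[j] > threshold for row in gray)]
    if not rows or not cols:
        return gray  # nothing above threshold: leave the image unchanged
    return [row[cols[0]:cols[-1] + 1] for row in gray[rows[0]:rows[-1] + 1]]

# A small bright region inside a dark border is cropped to its bounding box.
img = [
    [0, 0, 0, 0, 0],
    [0, 9, 8, 0, 0],
    [0, 7, 9, 0, 0],
    [0, 0, 0, 0, 0],
]
print(autocrop(img, threshold=5))  # [[9, 8], [7, 9]]
```

Discarding the border regions this way lets the network spend its capacity on the anatomy of interest, which is the intuition behind the accuracy gain the paper reports.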
Deep Neural Network of Earthquake Signal Identification using Stridenet
Hajar Nimpuno Adi and I Wayan Mustika (Universitas Gadjah Mada, Indonesia); Sigit Basuki Wibowo (Gadjah Mada University, Indonesia)
Earthquakes are one of the main causes of destruction worldwide. A seismogram is a time-function recording of ground motion, and from this twenty-four-hour record of data we must distinguish earthquake signals from noise. Artificial intelligence can be applied to discriminate between seismic events and noise. In this paper, we studied seismograms to identify earthquakes. Our main objective is to identify earthquake signals and earthquake noise with a combination of CNN, LSTM, and fully connected layers using established datasets. The average validation accuracy of the model at saturation was 98.08%, and the validation loss was 14.76%.
pp. 345-350
Roof Materials Identification Based on Pleiades Spectral Responses Using Specular Correction
Ayom Widipaminto (Indonesian National Institute of Aeronautics and Space, Indonesia); Yohanes Fridolin Hestrio and Yuvita Dian Safitri (National Institute of Aeronautics and Space, Indonesia); Donna Monica (Indonesian National Institute of Aeronautics and Space, Indonesia); Rokhmatuloh Rokhmatuloh (University of Indonesia, Indonesia); Djoko Triyono (Universitas Indonesia, Indonesia); Erna Sri Adiningsih (National Institute of Aeronautics and Space, Indonesia)
An important step in assessing city development is identifying roof materials. One identification method is spectroscopy, the study of electromagnetic radiation. The analysis we developed uses a specular-effect filter to obtain better radiance values, which in turn yield better reflectance values for more accurate identification of roof materials. By estimating the shading factor and specular effect, the radiance value captured by the sensor can be corrected, so that differences in reflectance values caused by differing sensor and light-source observation conditions are removed. We classify material types based on their spectral responses using the Support Vector Machine (SVM) method as the basis for roof material identification with specular correction. In this study, we performed a comparative analysis of Top of Atmosphere (TOA) and specular-corrected TOA (S-TOA) reflectance. The comparison shows that S-TOA accuracy is better, at 95.78%, than TOA accuracy, at only 93.87%. The Kappa coefficient also increased, to 0.9416 for S-TOA compared to 0.9146 for TOA. This study proves that specular-effect correction can improve the quality of roof material identification in urban areas.
pp. 351-354
Speech Age-Gender Classification Using Long Short-Term Memory
Galih Nitisara and Suyanto Suyanto (Telkom University, Indonesia); Kurniawan Nur Ramadhani (Universitas Telkom, Indonesia)
Recognizing a person's age and gender reliably from media has significant advantages. For example, perpetrators recorded on a CCTV camera can be recognized more easily, and someone lying about their age on social media or in a job application can be detected. However, detecting a person's exact age is still difficult because of media quality and individual characteristics that can be deceptive. In machine learning, neural-based methods are commonly used for classification and recognition. However, age and gender classification still produces unsatisfactory results, and age-gender classification from speech is rarely discussed. Hence, the right approach is needed to build a good age-gender classification model. One solution is the Recurrent Neural Network (RNN), which is designed for sequential data such as speech. In this paper, a speech age-gender classification model is developed using a popular RNN variant, the Long Short-Term Memory (LSTM). The experimental results show that the proposed model suffers from overfitting, so the accuracy on the testing set is lower than on the training set. Regularization reduces the gap between training and testing accuracies but does not increase them. Data augmentation slightly alleviates the overfitting problem.
pp. 355-358
Improved Residual Neural Network for Breast Cancer Classification
Reynold Erwandi and Suyanto Suyanto (Telkom University, Indonesia)
Breast cancer is one of the most dangerous types of cancer, especially for women. In 2015, it became the deadliest cancer after lung cancer in America. Some studies found that both self-detection and prevention are important factors in dealing with this cancer. The process of diagnosing breast cancer traditionally takes a long time. Moreover, pathologists are not 100% sure of the results of their diagnosis. Therefore, in this research, a computer-aided system is developed to help doctors classify cell types based on histopathological images. A new model based on convolutional neural networks with an improved Residual Neural Network (ResNet) architecture is proposed to distinguish histopathological images into several classes of breast cancers. Testing on the BreakHis dataset shows that the best performance of the proposed method gives average accuracies of 99.3% and 94.6% for binary and eight-class classifications, respectively. These results are comparable to state-of-the-art results in recent studies.
pp. 359-363
Discretizing WOA to Optimize a Long Short-Term Memory
Rizki Achmad Riyanto and Suyanto Suyanto (Telkom University, Indonesia)
Data can be said to have sequential properties if earlier data influence later data. Unlike non-sequential data, randomizing the order of sequential data changes the data itself. A common neural network model generally cannot distinguish sequential data from non-sequential data. Thus, recurrent models are designed specifically for data whose sequential properties must be considered, by learning the relationship between each data point and the previous ones. To create a recurrent model, some parameters should be carefully designed, one of which is the architecture of the model. In this paper, a discretized whale optimization algorithm (WOA), which determines the number of hidden-layer neurons and the dropout rate of an architecture, is proposed to optimize the long short-term memory (LSTM). Evaluations on the Large Movie Review dataset show that the proposed discrete WOA gives a significant absolute improvement of the LSTM mean accuracy of up to 1.50% (from 91.23% to 92.73%).
pp. 364-367
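A minimal sketch of the discretization idea from the abstract above: continuous whale positions are snapped to a grid of candidate hidden-unit counts and dropout rates. The candidate grids and the stand-in fitness function are assumptions for illustration; in the paper, fitness would be the validation accuracy of a trained LSTM.

```python
import math
import random

random.seed(7)

HIDDEN = [32, 64, 128, 256, 512]      # candidate hidden-unit counts (assumed)
DROPOUT = [0.1, 0.2, 0.3, 0.4, 0.5]   # candidate dropout rates (assumed)

def snap(pos):
    """Discretize a continuous 2-D position to the nearest candidate pair."""
    h = min(HIDDEN, key=lambda v: abs(v - pos[0]))
    d = min(DROPOUT, key=lambda v: abs(v - pos[1]))
    return h, d

def fitness(pos):
    """Stand-in for LSTM validation accuracy (a real run would train a model);
    by construction it peaks at the pair (128, 0.3)."""
    h, d = snap(pos)
    return -(abs(h - 128) / 512 + abs(d - 0.3))

def woa(n_whales=10, iters=30):
    whales = [[random.uniform(32, 512), random.uniform(0.1, 0.5)]
              for _ in range(n_whales)]
    best = max(whales, key=fitness)[:]
    for t in range(iters):
        a = 2 - 2 * t / iters                      # a decreases linearly 2 -> 0
        for w in whales:
            l = random.uniform(-1, 1)
            A = 2 * a * random.random() - a
            C = 2 * random.random()
            for i in range(2):
                if random.random() < 0.5:          # encircling / exploring move
                    w[i] = best[i] - A * abs(C * best[i] - w[i])
                else:                              # spiral bubble-net move
                    w[i] = (abs(best[i] - w[i]) * math.exp(l)
                            * math.cos(2 * math.pi * l) + best[i])
            if fitness(w) > fitness(best):
                best = w[:]
    return snap(best)

print(woa())
```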
Detection of COVID-19 on Chest X-Ray Images using Inverted Residuals Structure-Based Convolutional Neural Networks
Tita Karlita (Electronic Engineering Polytechnic Institute of Surabaya, Indonesia); Eko Mulyanto Yuniarno (Institut Teknologi Sepuluh November, Indonesia); I Ketut Eddy Purnama (Institut Teknologi Sepuluh Nopember, Indonesia); Mauridhi Hery Purnomo (Institut of Technology Sepuluh Nopember, Indonesia)
China officially reported the existence of the COVID-19 coronavirus to the World Health Organization (WHO) on December 31, 2019. Since then, it has spread and infected millions of people around the world. COVID-19 is a highly infectious disease and may lead to acute respiratory distress or multiple organ failure in severe cases. Recent studies have shown that chest X-rays of patients suffering from COVID-19 present characteristics specific to this virus. This paper presents a method to detect the presence of COVID-19 on chest X-ray images based on the inverted residuals structure implemented in MobileNetV2 as a base model. We also explore the performance of a fully connected layer with dropout and of a global average pooling layer as the top layers of the base model to classify each image as COVID-19 or non-COVID-19. Our proposed method achieved COVID-19 detection with a best accuracy of 0.81, with precision, recall, and F1-score of 0.81, 0.75, and 0.77, respectively.
pp. 368-372
Classification of Diabetic Retinopathy through Deep Feature Extraction and Classic Machine Learning Approach
Radifa Hilya Paradisa (Universitas Indonesia, Indonesia); Devvi Sarwinda (Universitas Indonesia & Faculty of Mathematics and Natural Sciences, Indonesia); Alhadi Bustamam (Universitas Indonesia, Indonesia); Terry Argyadiva (Faculty of Mathematics and Natural Sciences, Universitas Indonesia, Indonesia)
Diabetic Retinopathy (DR) is a complication of diabetes and the leading cause of vision loss in working-age adults. An ophthalmologist can diagnose DR by examining color fundus images. However, the fundus image analysis process takes a long time, and automatic detection of DR is a challenging task. One deep learning approach, the Convolutional Neural Network (CNN), is efficient in image classification tasks. In this research, a CNN architecture, ResNet-50, is used for feature extraction and classification. The ResNet-50 feature output at the feature extraction stage is also used as input for machine learning classifiers such as Support Vector Machine (SVM), Random Forest (RF), k-Nearest Neighbor (k-NN), and Extreme Gradient Boosting (XGBoost). The model uses fundus images from the DIARETDB1 dataset. Data augmentation and preprocessing are proposed in this study to help the model recognize images. The performance of each classifier is evaluated based on accuracy, sensitivity, and specificity. The SVM classifier achieved 99% accuracy and sensitivity with the 80:20 dataset composition. The k-NN classifier obtained the highest specificity, 100%, for the same dataset split.
pp. 373-377
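The second stage of the pipeline above, feeding extracted feature vectors to a classic classifier, can be illustrated with a small pure-Python k-NN; the toy three-dimensional "features" and labels are assumptions standing in for real ResNet-50 outputs.

```python
import math
from collections import Counter

def knn_predict(train, labels, x, k=3):
    """Classify feature vector x by majority vote among its k nearest neighbours."""
    order = sorted(range(len(train)), key=lambda i: math.dist(train[i], x))
    votes = Counter(labels[i] for i in order[:k])
    return votes.most_common(1)[0][0]

# Toy 3-D "deep features" standing in for ResNet-50 outputs (assumed values).
features = [[0.1, 0.2, 0.1], [0.2, 0.1, 0.0], [0.9, 0.8, 0.7], [0.8, 0.9, 0.9]]
labels = ["normal", "normal", "DR", "DR"]

print(knn_predict(features, labels, [0.85, 0.8, 0.8]))  # closest to the DR cluster
```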

2E: Parallel Session 2-E

Room E
Chair: Kumara Ari Yuana, Yun (Universitas Gadjah Mada & Universitas Amikom Yogyakarta, Indonesia)
High Accuracy Conversational AI Chatbot Using Deep Recurrent Neural Networks Based on BiLSTM Model
Prasnurzaki Anki (University of Indonesia, Indonesia)
In the modern world, chatbot programs can store data collected through a question-and-answer system and then be applied, in a Python program, to optimize results based on highly rated questions asked in a service center. A chatbot can be implemented in Python with various models; in this program, the BiLSTM model is applied. The output of the chatbot program with the BiLSTM model is an accuracy value, together with dataset responses that match the information the user enters in the chatbot's input dialog box. Model selection is based on the data, which affects program performance; a model's ability to yield high accuracy is the main factor in choosing it, and on these considerations the BiLSTM model was selected for the program. The next step is to determine the method used in the program: here, the greedy method is applied with the BiLSTM model so that data processing runs faster and the value of the selected model increases. Supporting components such as the seq2seq model verify whether data processing matches the expected criteria, and a program evaluation method verifies whether the program output matches the data expected by the user.
Based on the application of the BiLSTM model to the chatbot, it can be concluded from all program test results, covering a variety of parameter pairs, that Parameter Pair 1 (size_layer 512, num_layers 2, embedded_size 256, learning_rate 0.001, batch_size 32, epoch 20) from File 3 is the best parameter pair, giving a BiLSTM chatbot with an average accuracy of 0.995217.
pp. 378-383
Complexity Based Multilevel Signal Analysis for Epileptic Seizure Detection
Inung Wijayanto (Telkom University & Universitas Gadjah Mada, Indonesia); Rudy Hartanto (Gadjah Mada University & Electrical Engineering and Information Technology Departmen, Faculty of Engineering Gadjah Mada University, Indonesia); Hanung Adi Nugroho (Universitas Gadjah Mada, Indonesia)
A person with epilepsy is characterized by repeated seizures occurring more than 24 hours apart. There are several ways to detect seizures; one is to look for differences between signal patterns in electroencephalogram (EEG) recordings. Neurologists examine and diagnose EEG signals manually, which is difficult and time-consuming. Therefore, a Computer-Aided Diagnosis (CAD) system could help the neurologist diagnose the existence of seizures in EEG signals. The EEG signal is produced by a complex biological system. This study proposes a multilevel wavelet complexity analysis of long-term EEG recordings. The process starts with a channel selection step to reduce the number of processed channels. EEG signals from the selected channels are then decomposed using five-level wavelet packet decomposition (WPD), producing 32 wavelet coefficients. The signals are then segmented using a ten-minute non-overlapping window. Two types of complexity measurements (entropy and fractal dimension) are applied to each wavelet-coefficient segment. A support vector machine (SVM) is used to classify the feature set into seizure and normal conditions. The system is evaluated on 987.85 h of EEG recordings from the CHB-MIT dataset. The highest accuracy of 96.8% was achieved using multilevel wavelet fractal dimension analysis.
pp. 384-389
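The two complexity measurements named in the abstract above can be sketched as follows. Shannon entropy and the Katz fractal dimension are common choices, though the paper does not specify its exact variants, and the synthetic signals here are stand-ins for wavelet-coefficient segments.

```python
import math
from collections import Counter

def shannon_entropy(segment, bins=8):
    """Shannon entropy of an amplitude-quantized signal segment, in bits."""
    lo, hi = min(segment), max(segment)
    width = (hi - lo) / bins or 1.0          # guard against a constant segment
    counts = Counter(min(int((v - lo) / width), bins - 1) for v in segment)
    n = len(segment)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def katz_fd(segment):
    """Katz fractal dimension: log10(n) / (log10(n) + log10(d / L))."""
    n = len(segment) - 1
    L = sum(math.hypot(1, segment[i + 1] - segment[i]) for i in range(n))
    d = max(math.hypot(i, segment[i] - segment[0]) for i in range(1, n + 1))
    return math.log10(n) / (math.log10(n) + math.log10(d / L))

# Synthetic stand-ins: a smooth sine vs. a jittery "seizure-like" burst.
smooth = [math.sin(i / 10) for i in range(200)]
jagged = [math.sin(i / 10) + 0.5 * (-1) ** i for i in range(200)]
print(shannon_entropy(smooth), katz_fd(smooth))
print(shannon_entropy(jagged), katz_fd(jagged))
```

The jittery signal traces a longer path for the same extent, so its fractal dimension is higher, which is what makes such features separable by an SVM.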
A Comparative Study of Deepfake Video Detection Method
Kurniawan Nur Ramadhani (Universitas Telkom, Indonesia); Rinaldi Munir (Institut Teknologi Bandung, Indonesia)
Deepfake technology allows humans to manipulate images and videos using deep learning. The results are very difficult to distinguish with the naked eye. Many algorithms have been built to detect deepfake content in images and videos. There are several approaches to deepfake detection, including visual feature-based, local feature-based, deep feature-based, and temporal feature-based approaches. The main challenge in developing deepfake detection algorithms is the variety of existing deepfake models for both images and videos. Another challenge is that deepfake technology is still evolving, making deepfake images and videos look more realistic and harder to detect.
pp. 390-395
Vehicle Type Classification in Surveillance Image based on Deep Learning Method
Edmund Ucok Armin (Gadjah Mada University, Indonesia); Agus Bejo (Universitas Gadjah Mada, Indonesia); Risanuri Hidayat (Gadjah Mada University (UGM), Indonesia)
Vehicle type classification is an important part of intelligent traffic. With the development of classification research, especially in deep learning, many Convolutional Neural Network (CNN) architectures have been created. The task is challenging because increasing the accuracy of CNN architectures in classifying vehicle types contributes directly to intelligent traffic systems. Our proposed method improves the existing ResNet-50 architecture by replacing the Global Average Pooling (GAP) function with a flatten layer and adding a hidden layer before the softmax activation. We set the number of filters in the residual blocks so that fewer parameters are used than in ResNet-50. Our research focuses on a front-view vehicle image dataset from surveillance cameras for training and testing. The experimental results show that our proposed method outperforms ResNet-50, VGG16, and the CNNs of previous studies in vehicle type classification, yielding an accuracy of 96.26%.
pp. 396-400
Degradation Classification on Ancient Document Image Based on Deep Neural Networks
Khairun Saddami (Universitas Syiah Kuala, Indonesia); Khairul Munadi (Syiah Kuala University, Faculty of Engineering, Indonesia); Fitri Arnia (Syiah Kuala University, Indonesia)
In this paper, we study degradation classification on ancient document images using three pre-trained models of benchmark CNN architectures, i.e., Resnet101, Mobilenet V2, and Shufflenet. We use the Document Image Binarization Contest (DIBCO), Persian Heritage Image Binarization Dataset (PHIBD), and private Jawi datasets for experimental purposes. We grouped the degradations into four categories: bleed-through/show-through/ink-bleed, faint text and low contrast, smear-spot-stain, and uniform degradation. In training, we set the optimizer to ADAM, the initial learning rate to 10^-4, and used three epoch values: 5, 25, and 50 training epochs. To test the model, we conducted two testing stages: (1) unblind testing and (2) blind testing. The results show that Shufflenet with 25 training epochs achieved 100% and 85% accuracy in unblind and blind testing, respectively, and had the fastest computation. We conclude that Shufflenet can be chosen for classifying degradations based on its accuracy and computational time.
pp. 401-406
A comparative study of two meta-heuristic algorithms for MRI and CT images registration
Hedifa Dida, Charif Fella and Abderrazak Benchabane (Kasdi Merbah University, Algeria)
Modern meta-heuristic algorithms have achieved great success in the field of medical image registration. Because these algorithms differ in performance, we present a comparative study of the grey wolf optimizer (GWO) and particle swarm optimization (PSO) algorithms for registration of a section of the human brain imaged with Magnetic Resonance (MR) and Computed Tomography (CT). The simulation results show that the grey wolf optimizer achieves high-precision and robust registration compared with particle swarm optimization.
pp. 407-411
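As an illustration of metaheuristic registration, here is a minimal grey wolf optimizer searching for a 1-D translation that aligns two synthetic intensity profiles. This is a stand-in for full MR/CT image registration: the Gaussian profiles, the sum-of-squared-differences metric, and the search bounds are all assumptions for the sketch.

```python
import math
import random

random.seed(3)

# Two synthetic 1-D intensity profiles; `mov` is `ref` shifted by +12 samples.
ref = [math.exp(-((x - 30) ** 2) / 200) for x in range(100)]
mov = [math.exp(-((x - 42) ** 2) / 200) for x in range(100)]

def ssd(shift):
    """Sum of squared differences after shifting the moving profile."""
    s = int(round(shift))
    return sum((ref[x] - (mov[x + s] if 0 <= x + s < 100 else 0.0)) ** 2
               for x in range(100))

def gwo(n_wolves=15, iters=60, lo=-30.0, hi=30.0):
    wolves = [random.uniform(lo, hi) for _ in range(n_wolves)]
    for t in range(iters):
        alpha, beta, delta = sorted(wolves, key=ssd)[:3]   # three leaders
        a = 2 - 2 * t / iters                              # a decreases 2 -> 0
        for i, w in enumerate(wolves):
            pos = 0.0
            for leader in (alpha, beta, delta):
                A = 2 * a * random.random() - a
                C = 2 * random.random()
                pos += leader - A * abs(C * leader - w)    # move toward leader
            wolves[i] = max(lo, min(hi, pos / 3))
    return round(sorted(wolves, key=ssd)[0])

best = gwo()
print(best)
```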
Applying Neural Network on Wheelchair Control System using Eye Blink and Movement Command
Ilham Ari Elbaith Zaeni, Arizal Wijanarko and Ahsan Walad (Universitas Negeri Malang, Indonesia); Qi-Sheng An (Southern Taiwan University of Science and Technology, Taiwan); Anik Nur Handayani (Universitas Negeri Malang, Indonesia)
Some people with quadriplegia, who cannot move their body from the hands to the feet, need an assistive device to support their mobility. Such a device, in the form of an electric wheelchair, can be developed using eye-activity signals. An Artificial Neural Network (ANN) is proposed for a wheelchair control system based on the user's eye-movement commands. The system consists of eye-movement electrodes, data collection, signal filtering and pre-processing, and the decision model. Four commands are involved in this study: glance left, glance right, blink, and double blink, corresponding to turn left, turn right, stop, and go forward, respectively. Hold-out validation was used, splitting the data into an 80% training set and a 20% testing set. The test results show that the Mean Absolute Percentage Error (MAPE) of the decision model is 1.55%. This is a good result, and the model can be implemented in the system.
pp. 412-416
Analysis of Gender Identification in Bahasa Indonesia using Supervised Machine Learning Algorithm
Evawaty Tanuar (Bina Nusantara University, Indonesia)
Gender classification or identification is an interesting research area in speech or voice signal processing. It is a promising research area, and there is still room for improvement, especially in the localization context. There is little research on gender identification in Bahasa Indonesia; most existing studies are in English, and some are in Chinese, Korean, Arabic, or French. This paper uses primary data, self-collected in Bahasa Indonesia, to identify gender using supervised machine learning algorithms. MFCC is used as the feature extraction algorithm to produce input for machine learning. After comparing several algorithms (Artificial Neural Network, SVM, and K-Nearest Neighbors (KNN)), ANN shows more promising results than the others. There are 2,735 primary data samples used in this research. The results will be used in future experiments on the impact of gender classification on voice recognition in Bahasa Indonesia.
pp. 417-420

Tuesday, November 24 3:00 - 3:30

Break: Break Time

Rooms: Room A, Room B, Room C, Room D, Room E

Break Time

Tuesday, November 24 3:30 - 4:30

3A: Parallel Session 3-A

Room A
Chair: Widiyana Riasasi (Universitas AMIKOM Yogyakarta, Indonesia)
Secure Microservices Deployment for Fog Computing Services in a Remote Office
Favian Dewanta (Telkom University, Indonesia)
Microservices deployment remains insecure because it relies on users' knowledge of security aspects in particular fog computing networks. As a consequence, users need to carefully assess the vulnerability of the microservices' deployment. In addition, users have to ensure that transactions between microservices and the fog computing server are verified and protected against potential attacks. This paper proposes secure microservices deployment for fog computing service environments by establishing a trusted and authenticated communication channel before engaging in any transactions among the entities. The proposed method is lightweight because it employs only a one-way hash function and the XOR operation. Performance evaluation shows that the method is secure against replay, offline guessing, impersonation, and ephemeral secret leakage attacks. Moreover, the proposed scheme is more lightweight in terms of communication and computational cost than the JPAKE algorithm.
pp. 421-426
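The flavor of a hash-and-XOR handshake can be sketched as follows. This is an illustrative construction, not the paper's actual protocol: the pre-shared key setup, the masking labels, and the session-key derivation are all assumptions.

```python
import hashlib
import secrets

def h(*parts: bytes) -> bytes:
    """One-way hash over concatenated byte strings."""
    return hashlib.sha256(b"".join(parts)).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Pre-shared long-term secret between user device and fog node (assumed setup).
shared_key = secrets.token_bytes(32)

# User side: mask a fresh session nonce so only the key holder can recover it.
nonce_u = secrets.token_bytes(32)
masked = xor(nonce_u, h(shared_key, b"mask"))
proof_u = h(shared_key, nonce_u)                  # binds the nonce to the key

# Fog-node side: unmask the nonce and verify the proof.
nonce_recovered = xor(masked, h(shared_key, b"mask"))
assert h(shared_key, nonce_recovered) == proof_u  # authentication check

# Both sides derive the same session key from the shared key + fresh nonce.
session_user = h(shared_key, nonce_u, b"session")
session_fog = h(shared_key, nonce_recovered, b"session")
print(session_user == session_fog)
```

Only hashing and XOR are used, which is what keeps such schemes cheap compared with password-authenticated key exchanges like JPAKE.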
Mobility Awareness in Cellular Networks to Support Service Continuity in Vehicular Users
Nandish P. Kuruvatti (University of Kaiserslautern, Germany); Sachinkumar Bavikatti Mallikarjun and Sai Charan Kusumapani (Technical University of Kaiserslautern, Germany); Hans D. Schotten (University of Kaiserslautern, Germany)
Mobile communication is a ubiquitously used technology that has evolved through various generations and is currently on the verge of its fifth generation (5G). In recent years, Intelligent Transportation Systems (ITS) and supplementary vehicular use cases (e.g., autonomous driving) have been considered widely within the scope of cellular networks. These use cases generally demand reliable, low-latency services from the cellular network. Mobile Edge Clouds (MEC) in 5G networks are often used to satisfy such service demands of vehicular users. However, the cellular handovers (HO) of vehicular users prompt frequent service migration among the MECs. Handovers and service migration increase a user's service interruption. In this paper, we use machine learning (ML) based mobility awareness to predict a user's future service migration and HO sites. This enables smooth service migration by allowing non-state data transfer ahead of the user's handover. Further, it provides sufficient time to establish successful Coordinated Multipoint (CoMP) transmissions, which reduce service interruption due to HO. Simulation results show that the proposed framework provides timely assistance for service migration and significantly reduces the service interruption time.
pp. 427-431
Near Distance Digital Data Transmission of A Low-Cost Wireless Communication Optical System
Patar Parlindungan Sianturi and Catur Apriono (Universitas Indonesia, Indonesia)
Demand for mobile data transfer has driven developing technologies, including optical wireless communications such as visible light communication (VLC). The high demand has contributed to much higher power consumption, and thus an increase in carbon emission as well. VLC is an alternative solution for green communication because of its potential to provide both lighting and data transfer simultaneously. In this paper, we study a low-cost VLC system, consisting of two ends, to understand its performance for digital data transmission. The transmitting end consists of a microcontroller and an LED; the receiving end consists of a photodiode and a microcontroller. This research considers OOK modulation, a darkroom to avoid noise from other light sources, and three variables: wavelength spectrum, clock rate, and distance. Observation of the bit error rate (BER) shows that the white LED has a smaller average BER than the red, green, and blue LEDs: 0.377, 0.412, 0.387, and 0.387, respectively. From this research, we gain insight into the received power and component characteristics necessary for developing effective VLC systems.
pp. 432-436
Deceiving Smart Lock Trusted Place in Android Smartphones with Location Spoofing
Muhammad Yusuf Setiadji (National Crypto Institute of Indonesia, Indonesia); Bayu Aji (Politeknik Siber dan Sandi Negara, Indonesia); Amiruddin Amiruddin (Sekolah Tinggi Sandi Negara & Badan Siber dan Sandi Negara, Indonesia)
Convenience often comes at the price of security. Our research results strengthen the opinion that we should reconsider sacrificing security for convenience. One such convenience, readily available on almost any Android smartphone, is the Smart Lock Trusted Place feature. By conditioning the smartphone to disable GPS satellite signals and creating a Wi-Fi hotspot with a Wireless Positioning System, we are able to deceive the device into believing it is in the designated trusted place, thus unlocking the phone.
pp. 437-441

3B: Parallel Session 3-B

Room B
Chair: Dhani Ariatmanto (Universitas Amikom Yogyakarta, Indonesia)
Implementation Analysis of Available Bandwidth Estimation for Multimedia Service on VANET Network using A-STAR Routing Protocol
Ida Nurcahyani (Universitas Islam Indonesia, Indonesia)
VANET technology enables communication between vehicles by utilizing ad-hoc wireless networks. Because of the dynamic topology in VANETs, topology-based routing protocols such as A-STAR are suitable for finding the most efficient route. Available Bandwidth Estimation (ABE) is an important QoS component for improving service quality in VANETs. This study analyzes the effect of ABE implementation on the average throughput and end-to-end delay of VANET networks for multimedia services. The simulation varied the number and speed of nodes before and after ABE was implemented, and was conducted on the Padalarang toll road in Bandung, Indonesia. The results show a significant increase in QoS. The average delay was reduced to four times lower than without ABE, and ABE implementation increased the average throughput by up to 13 times. The simulation results also show that adding nodes affects the obtained QoS, whereas increasing node speed in the VANET network does not significantly change the QoS.
pp. 442-446
Analysis of Modulation Performance of Underwater Visible Light Communication with Variable Wavelength
Arya Maulana Ibrahimy, Budi Ikhwan Fadilah and Brian Pamukti (Telkom University, Indonesia)
This paper evaluates the performance of Underwater Visible Light Communication (UVLC) with various modulations and wavelengths. The first scenario analyzes the Signal-to-Noise Ratio (SNR) of the UVLC at 450, 480, and 500 nm wavelengths. The second scenario compares the Bit Error Rate (BER) performance of several modulations: On-Off Keying Non-Return-to-Zero (OOK-NRZ), On-Off Keying Return-to-Zero (OOK-RZ), 8 Pulse Position Modulation (8-PPM), and 8 Pulse Amplitude Modulation (8-PAM). As in the first scenario, the BER comparison uses the 450, 480, and 500 nm wavelengths. In the first scenario, the 500 nm wavelength gives the best SNR of 13.1147. In the second scenario, the combination of 8-PPM with the 500 nm wavelength gives the best result of 1.8922 × 10^-10, which is below the Optical Wireless Communication (OWC) BER target of 10^-9.
pp. 447-451
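The kind of BER comparison described above can be illustrated with a small Monte Carlo simulation of OOK-NRZ over an additive white Gaussian noise channel, an assumption standing in for the paper's underwater channel model.

```python
import math
import random

random.seed(1)

def ook_ber(snr_db: float, n_bits: int = 20000) -> float:
    """Empirical bit-error rate of OOK-NRZ over an AWGN channel."""
    amp = 1.0                            # received optical "on" level
    snr = 10 ** (snr_db / 10)            # linear SNR
    sigma = amp / math.sqrt(snr)         # noise standard deviation for that SNR
    errors = 0
    for _ in range(n_bits):
        bit = random.getrandbits(1)
        rx = bit * amp + random.gauss(0, sigma)
        errors += (rx > amp / 2) != bit  # threshold detector at half amplitude
    return errors / n_bits

print(ook_ber(6), ook_ber(12))
```

As expected, the estimated BER drops sharply as the SNR improves, which is why the wavelength giving the highest SNR also gives the lowest BER.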
Resource Allocation with Random Orientation Using the Greedy Algorithm Method for Visible Light Communication
Raga Filydevilia Putra (Telkom University, Indonesia); Nachwan Mufti Adriansyah (Universitas Telkom, Indonesia); Brian Pamukti (Telkom University, Indonesia)
Visible Light Communication (VLC) is a communication technology with a large data transmission capacity. A resource allocation process is needed to improve system quality in its implementation. This research focuses on the process of allocating time slots to User Equipment (UE) by greedy-algorithm scheduling. The UEs are spread randomly in a 5x5x4 meter room, in numbers from 6 to 24, and each UE's orientation is changed gradually among 0°, 15°, and 30°. The test results show that the average total system channel capacity across UE variations increases by 0.034% when the system uses greedy scheduling, and the power consumption is 2.19 times more efficient. Changing the receiver's orientation to 30° yields an average total channel capacity of 1444.096 Mbps, with the highest at 0° at 1503.478 Mbps across UE variations; the fairness value of the system is affected by the number of available UEs. The highest fairness value is 0.833 with 6 UEs, while the lowest is 0.208 with 24 UEs in the system. This proves that adding UEs can increase the total channel capacity but reduces the fairness of the system.
pp. 452-456
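A greedy time-slot scheduler together with Jain's fairness index, the standard index behind fairness values such as 0.833, can be sketched as follows. The per-UE capacities and the particular greedy criterion are assumptions for illustration, not the paper's exact scheduler.

```python
def greedy_schedule(capacities, n_slots):
    """Assign each time slot to the UE with the highest channel capacity,
    discounted by slots already received (a simple greedy trade-off)."""
    got = [0] * len(capacities)
    throughput = [0.0] * len(capacities)
    for _ in range(n_slots):
        ue = max(range(len(capacities)),
                 key=lambda i: capacities[i] / (1 + got[i]))
        got[ue] += 1
        throughput[ue] += capacities[ue]
    return throughput

def jain_fairness(x):
    """Jain's fairness index: 1.0 means a perfectly even allocation."""
    return sum(x) ** 2 / (len(x) * sum(v * v for v in x))

caps = [120.0, 90.0, 60.0, 30.0]   # per-UE channel capacity in Mbps (assumed)
tp = greedy_schedule(caps, 12)
print(tp, round(jain_fairness(tp), 3))
```

Because the greedy rule favors high-capacity UEs, total throughput rises while the fairness index falls, mirroring the trade-off reported in the abstract.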
Android Assets Protection Using RSA and AES Cryptography to Prevent App Piracy
Afrig Aminuddin (Universitas Amikom Yogyakarta, Indonesia)
Android is the major operating system for mobile devices. The presence of the Google Play Store creates an ecosystem between app developers and users. As the ecosystem grows, pirated apps have started to appear. This is possible because an Android application can easily be extracted and decompiled to reveal its source code and asset files. This research proposes a methodology to protect application assets from piracy using cryptographic algorithms. The assets are encrypted at compile time using the Gradle build system provided by Android Studio, while decryption is performed on the Android device at application run time. The proposed scheme pairs the asymmetric RSA algorithm with the symmetric AES algorithm. This research shows that RSA-AES gives the best security in protecting the assets of Android applications. The performance of the algorithm was also evaluated: encryption and decryption speeds reach 106.82 MB/s and 44.42 MB/s, respectively.
pp. 457-461
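The hybrid pattern described above, wrapping a symmetric content key with RSA, can be sketched with toy parameters. The textbook-size RSA key and the SHA-256 keystream cipher standing in for AES are deliberately insecure simplifications chosen to keep the sketch dependency-free; a real implementation would use 2048-bit RSA and genuine AES.

```python
import hashlib

# Toy RSA key (tiny textbook primes; real deployments use 2048-bit keys).
p, q, e = 61, 53, 17
n = p * q                              # public modulus, 3233
d = pow(e, -1, (p - 1) * (q - 1))      # private exponent via modular inverse

def keystream_cipher(key: int, data: bytes) -> bytes:
    """SHA-256 keystream XOR cipher standing in for AES (assumption: the
    paper uses real AES; this toy version just keeps the sketch runnable)."""
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(f"{key}:{counter}".encode()).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

# Build side: encrypt the asset, then wrap the symmetric key with RSA.
asset = b"asset file contents"
sym_key = 1234                          # must stay below the toy modulus n
cipher_asset = keystream_cipher(sym_key, asset)
wrapped_key = pow(sym_key, e, n)        # RSA "encrypt" of the symmetric key

# Run-time side: unwrap with the private key and decrypt the asset.
recovered_key = pow(wrapped_key, d, n)
print(keystream_cipher(recovered_key, cipher_asset))
```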

3C: Parallel Session 3-C

Room C
Chair: Aditya Hasymi (Universitas AMIKOM Yogyakarta, Indonesia)
Modified LSB on Audio Steganography using WAV Format
Rini Indrayani (Universitas Amikom Yogyakarta, Indonesia)
Steganography is a technique for securing data by hiding it. Various techniques and cover media have been tested for steganography. One classic method still used today is the Least Significant Bit (LSB) method. As a classical method, it has been tested with many technical modifications and on various types of cover media. The various LSB methods always show changes in quality. Therefore, this study conducted tests to try to increase steganographic capacity and to measure the quality degradation that occurs with LSB and some modified LSB methods. The evaluations include calculating the PSNR value, drawing the visual spectrogram, and calculating the BER value. The PSNR evaluation shows that the larger the cover medium, the smaller the risk of quality degradation, and the larger the secret message, the greater the generated noise. The BER evaluation shows that every technique depends on the match between the cover-media bits and the secret-message bits. Evaluation of the maximum steganographic capacity shows that the larger the cover medium, the greater the secret-message capacity it can hold.
pp. 462-466
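Classic LSB embedding on 16-bit audio samples can be sketched in a few lines. The integer list here stands in for PCM samples read from a WAV file, and the modified LSB variants studied in the paper would change only the bit-placement rule.

```python
def embed_lsb(samples, message: bytes):
    """Write message bits into the least significant bit of each sample."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(samples):
        raise ValueError("cover audio too short for this message")
    stego = samples[:]
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit   # clear the LSB, then set it
    return stego

def extract_lsb(samples, n_bytes: int) -> bytes:
    """Read n_bytes back out of the sample LSBs."""
    out = bytearray()
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte = (byte << 1) | (samples[b * 8 + i] & 1)
        out.append(byte)
    return bytes(out)

cover = list(range(-100, 100))          # stand-in for WAV PCM samples
stego = embed_lsb(cover, b"secret")
print(extract_lsb(stego, 6))
```

Each sample changes by at most one quantization level, which is why LSB embedding in large cover files yields high PSNR, consistent with the evaluation above.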
Vulnerability Analysis Using The Interactive Application Security Testing (IAST) Approach For Government X Website Applications
Lytio Enggar Erlangga (Politeknik Siber dan Sandi Negara, Indonesia); Hermawan Setiawan (Sekolah Tinggi Sandi Negara, Indonesia); Ido Baskoro (Politeknik Siber dan Sandi Negara, Indonesia)
Securing information and communication technology (ICT) is one of the tasks of government agency X. Government ICT security can be achieved by applying the principle of Security by Design. The Open Web Application Security Project (OWASP) publishes a list of the potential vulnerability risks most common in web applications. Security tests can be carried out by performing a vulnerability assessment: a series of measures to identify and analyze possible security gaps in the system of an organization or company. The steps in the vulnerability assessment phase are target discovery, scanning, result analysis, and reporting. The Interactive Application Security Testing (IAST) approach is used for security testing with a vulnerability assessment. The vulnerability analysis system developed with the IAST approach uses Jenkins, the ZAP API, and SonarQube. The results of the vulnerability analysis are grouped based on the OWASP Top Ten 2017. Using the IAST approach, a total of 249 vulnerability risks were identified.
pp. 467-471
Design and Characterization of Rectangular Array Microstrip Antenna for Cubesat S-Band Transmitter
Sherin Benyamin, Heroe Wijanto, Edwar Edwar, Vinsensius Sigit Widhi Prabowo, Haris Prananditya and Shindi Marlina Oktaviani (Telkom University, Indonesia)
Automatic Dependent Surveillance-Broadcast (ADS-B) is an air traffic surveillance technology that automatically and periodically broadcasts onboard aircraft flight information, such as identity numbers, positions, speeds, and destinations, during all phases of flight to avoid collisions. In the future, the radar system will be supplemented or even replaced by ADS-B ground stations. Therefore, the Nano-Satellite Laboratory of Telkom University is developing a satellite called Tel-USat, for which an ADS-B receiver is one of the missions. This work focuses on the design and characterization of the antenna that sends all the collected ADS-B data to the ground. The antenna is designed using an FR-4 substrate with two rectangular patches, a linear array, a T-junction power divider, and proximity-coupled feeding. The measurement results are a return loss of -18.5 dB at 2.4 GHz, a VSWR of 1.2, an antenna bandwidth of 163 MHz, and a gain of 6.08 dB.
pp. 472-477
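The return loss and VSWR figures reported above are linked through the reflection coefficient, |Γ| = 10^(-RL/20) and VSWR = (1 + |Γ|)/(1 − |Γ|). A minimal sketch (not part of the paper) converting the reported return loss to VSWR:

```python
import math

def vswr_from_return_loss(rl_db: float) -> float:
    """Convert return loss magnitude (dB) to VSWR via the reflection coefficient."""
    gamma = 10 ** (-abs(rl_db) / 20)  # |Gamma| from return loss
    return (1 + gamma) / (1 - gamma)

# Return loss of 18.5 dB at 2.4 GHz (reported as -18.5 dB above)
print(round(vswr_from_return_loss(18.5), 2))  # about 1.27, close to the reported 1.2
```

The small gap between the computed 1.27 and the reported 1.2 is expected, since return loss and VSWR are usually read at slightly different measurement points.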
Dual Polarized Antenna Decoupling of Planar Antenna for 5G-NR Band N77
Muhsin Muhsin (Institut Teknologi Telkom Surabaya, Indonesia)
Multiple-Input Multiple-Output (MIMO) antennas are a key technology for the Internet of Things (IoT) and future wireless communications. The main challenge is to provide low correlation between antennas for the best diversity. This paper proposes a dual-polarized antenna technique to reduce the coupling of MIMO antennas in 5G-NR Band N77. The antenna is a microstrip design on a Rogers RT-5880 substrate, modified with a half-ground structure to reach the higher bandwidth required by 5G-NR Band N77. The designed antenna has 4 elements in a dual-cross-polarized formation and achieves coupling below -17 dB. This low coupling provides low correlation: the obtained envelope correlation coefficient (ECC) is below 0.01.
pp. 478-482
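The ECC figure quoted above is commonly computed from S-parameters using the standard lossless-antenna approximation for a two-port MIMO pair. A sketch with illustrative S-parameter magnitudes (matching ports near -20 dB, coupling below -17 dB; these are not the paper's measured values):

```python
def ecc_from_sparams(s11: complex, s12: complex, s21: complex, s22: complex) -> float:
    """Envelope correlation coefficient from S-parameters (lossless-antenna approximation)."""
    num = abs(s11.conjugate() * s12 + s21.conjugate() * s22) ** 2
    den = (1 - abs(s11) ** 2 - abs(s21) ** 2) * (1 - abs(s22) ** 2 - abs(s12) ** 2)
    return num / den

# Illustrative values: well-matched ports with coupling around -18 dB
ecc = ecc_from_sparams(0.1 + 0j, 0.12 + 0j, 0.12 + 0j, 0.1 + 0j)
print(ecc < 0.01)  # low coupling yields ECC below 0.01
```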

3D: Parallel Session 3-D

Room D
Chair: Rhisa Aidilla (Universitas AMIKOM Yogyakarta, Indonesia)
Enhancing Trust Model of Information Vehicular Ad-Hoc Networks Through Blockchain Consensus Algorithm
Muhammad Sulkhan Nurfatih and Mohd. Yazid Idris (Universiti Teknologi Malaysia, Malaysia); Deris Stiawan (University of Sriwijaya, Indonesia); Eko Arip Winanto (Universiti Teknologi Malaysia, Malaysia)
Vehicular Ad-hoc Networks (VANETs) enable smart transport in which each vehicle can process data and self-organize. However, there are security issues such as communication breakdowns between vehicles and the trustworthiness of information. The trust model therefore becomes an essential element in overcoming these problems. Various trust models have been suggested in the literature, including models that utilize a consensus algorithm in the blockchain. This paper proposes an extension of the blockchain consensus algorithm based on a Proof of Event (PoE) and Proof of Location (PoL) strategy, called Proof of Event and Location (PoEL). Finally, it presents initial experimental results on the effectiveness of our blockchain consensus algorithm extension. The simulation results show that the proposed extension increases block creation by 2% to 17% when the simulation is run with different numbers of nodes.
pp. 483-488
Normalized Data Technique Performance for Covid-19 Social Assistance Decision Making
Edy Budiman, Joan Angelina Widians and Masna Wati (Universitas Mulawarman, Indonesia)
The student internet data assistance program is an effort by educational institutions to support online learning from home during the Covid-19 pandemic. A series of tests is applied to optimize decision making on the social assistance program's performance. This study aims to evaluate the performance of the students' internet data assistance program using a confusion matrix approach, in particular the performance of simple, linear, and vector normalized data analysis methods. The representative method for simple normalization is SAW, for linear normalization VIKOR, and for vector normalization MOORA. The study found differences in performance when ranking potential social assistance recipients, as well as differences in the confusion matrix performance values for accuracy, precision, recall, and error rate across the methods.
pp. 489-494
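The three normalization schemes compared above can be sketched for a single benefit criterion. The exact formulas vary between SAW, VIKOR, and MOORA implementations; the versions below are common textbook forms, and the criterion values are hypothetical:

```python
import math

def simple_norm(xs):
    """SAW-style simple normalization: divide each value by the column maximum."""
    m = max(xs)
    return [x / m for x in xs]

def linear_norm(xs):
    """Linear (min-max) scaling of the kind used in VIKOR-style methods."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def vector_norm(xs):
    """MOORA-style vector normalization: divide by the Euclidean norm of the column."""
    n = math.sqrt(sum(x * x for x in xs))
    return [x / n for x in xs]

scores = [30.0, 45.0, 60.0]  # hypothetical criterion values for three candidates
print(simple_norm(scores))
print(linear_norm(scores))
print(vector_norm(scores))
```

Because each scheme stretches the column differently, weighted sums over several criteria can rank candidates differently, which is the source of the ranking differences the study reports.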
A Preliminary Study of Meteotsunami Using Fuzzy Logic Algorithm over Sunda Strait, Indonesia
Nur Arifin Akbar (Universitas Amikom Yogyakarta & Idenitive Mashable Prototyping, Indonesia); Ema Utami (Universitas Amikom Yogyakarta, Indonesia); Wahyu Sasongko Putro (Institut Teknologi Sumatera (ITERA), Indonesia); Zadrach Ledoufij Dupe (Institut Teknologi Bandung, Indonesia); Andi Cahyadi (Meteorological Climatological and Geophysical Agency (BMKG), Indonesia); Hendra Achiari (Institut Teknologi Bandung, Indonesia)
The natural hazards posed by the ancient Krakatoa volcano are very dangerous to human life around the Sunda Strait, Indonesia. On 22 December 2018, the eruption of Krakatoa, together with minor effects from tropical cyclone Kenanga, triggered a meteotsunami with tidal waves of 3 to 15 meters over the Sunda Strait, Indonesia. Many people died during the meteotsunami event. This study therefore aims to analyze meteorological parameters during the meteotsunami over a one-month observation period (1-31 December 2018) using a fuzzy logic algorithm. The results show three cluster parameters proposed to obtain a meteotsunami model based on the fuzzy logic algorithm with the highest correlation (R-squared) value. Based on the correlation analysis, we choose the best-fit meteotsunami model from the meteorological parameters. Finally, the fitted meteorological parameters can be used to estimate a meteotsunami model, especially over the Sunda Strait, Indonesia, in the near future.
pp. 495-499
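Fuzzy logic approaches of the kind described above map each meteorological parameter onto membership grades in linguistic sets. A minimal sketch of a triangular membership function; the fuzzy set and its breakpoints below are hypothetical, not the paper's actual model:

```python
def tri_membership(x: float, a: float, b: float, c: float) -> float:
    """Triangular fuzzy membership: 0 at a and c, rising to 1 at the peak b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)   # rising edge
    return (c - x) / (c - b)       # falling edge

# Hypothetical fuzzy set "high wind speed" with breakpoints 10, 20, 30 m/s
print(tri_membership(15.0, 10.0, 20.0, 30.0))  # halfway up the rising edge: 0.5
```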
Increasing Residential Capacity in Gigabit-capable Passive Optical Network using High Splitting Ratio
Nurul Putri (Akademi Telkom Sandhy Putra Jakarta, Indonesia); Yus Natali (Akademi Teknik Telekomunikasi Sandhy Putra Jakarta & Universitas Indonesia, Indonesia); Catur Apriono (Universitas Indonesia, Indonesia)
Nowadays, multimedia technology has led to the active development of many types of broadband services, such as the delivery of data, voice, and video (multiple-play) services. Passive Optical Networks (PONs) can provide these services cost-effectively. Fiber to the Home (FTTH) with G-PON, an extension of PON, is generally based on tree network topologies that use passive optical splitters. Splitting can be applied at one or two levels in Optical Distribution Networks (ODNs), with passive splitter configurations of 1:2, 1:4, 1:8, 1:16, 1:32, and 1:64. This paper discusses FTTH deployment using a high-splitting-ratio method that applies 1:8 passive splitters at each of two levels in the ODNs of residential areas. This high splitting ratio can increase user capacity for future scaling. The proposed method is simulated using the Optisystem software. The power link budget and bit error rate (BER) are considered the G-PON eligibility standards in this paper. The received power and BER meet the eligibility standard set by ITU-T G.984.2: greater than -28 dBm, and no worse than 10^-12 for broadband services, respectively. The results show that the proposed configuration can be implemented in rapidly growing residential areas because every G-PON port can serve 64 users. It adds 2560 users to every G-PON network, double the increase of the general splitting methods of 1:4 and 1:8.
pp. 500-504
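A power link budget of the kind checked against ITU-T G.984.2 can be sketched as below. The loss figures (0.35 dB/km fiber attenuation, 0.5 dB splitter excess loss, 0.5 dB connector loss) are typical illustrative values, not the paper's simulated parameters:

```python
import math

def received_power_dbm(tx_dbm, fiber_km, splitter_ratios, connector_loss_db=0.5):
    """Received power after fiber attenuation and cascaded passive splitting."""
    fiber_loss = 0.35 * fiber_km                  # dB/km, typical at 1310 nm
    split_loss = sum(10 * math.log10(n) + 0.5     # ideal 1:n split loss + excess loss
                     for n in splitter_ratios)
    return tx_dbm - fiber_loss - split_loss - connector_loss_db

# Two-level 1:8 splitting (64 users per port), 5 km of fiber, +3 dBm transmit power
p_rx = received_power_dbm(3.0, 5.0, [8, 8])
print(p_rx > -28.0)  # stays above the -28 dBm floor from ITU-T G.984.2
```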

3E: Parallel Session 3-E

Room E
Chair: Gardyas Adninda (Universitas AMIKOM Yogyakarta, Indonesia)
Design and Performance Evaluation of Visible Light Communication AFE using IC LM741
Muhammad Hamka Ibrahim (Universitas Sebelas Maret, Indonesia); Feri Adriyanto (Sebelas Maret University, Indonesia); Agus Ramelan (Universitas Sebelas Maret, Indonesia); Hari Maghfiroh (Sebelas Maret University, Indonesia); Miftahuddin Irfani (Universitas Sebelas Maret, Indonesia)
The analog front-end (AFE) of Visible Light Communication (VLC) has much in common with other conventional analog front-end communication systems. The filter section, analog gain controller, and DC offset controller are modified to condition the signal before it enters digital components, so that it can be read properly after passing through communication channels with heavy interference or noise. With a wavelength of 380-700 nm and a frequency of 430-770 THz, visible light dissipates easily, so the signal requires special conditioning. The analog front-end consists of a band-pass filter, DC offset cancellation, and an automatic gain controller. The advantage of this research lies in the use of the LM741, which is widely available and easy to implement. A VLC transceiver with an AFE using the LM741 has been implemented. The results show that the AFE's performance is consistent with the behavior of VLC with respect to light-source distance, angle, and dimming. However, due to the limitations of the LM741, it only supports clock frequencies up to 48 kHz.
pp. 505-508
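The band-pass filter stage mentioned above is typically dimensioned from the first-order RC corner relation f_c = 1/(2*pi*R*C). A quick check with illustrative component values (these are not the paper's; the actual 48 kHz ceiling comes from the LM741's bandwidth, not this filter):

```python
import math

def rc_cutoff_hz(r_ohm: float, c_farad: float) -> float:
    """First-order RC filter corner frequency, f_c = 1 / (2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * r_ohm * c_farad)

# Illustrative values: 1.6 kOhm with 10 nF gives a corner near 10 kHz
print(round(rc_cutoff_hz(1.6e3, 10e-9)))
```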
PoAS: Enhanced Consensus Algorithm for Collaborative Blockchain Intrusion Detection System
Eko Arip Winanto (Universiti Teknologi Malaysia, Malaysia); Mohd. Yazid Idris (Universiti Teknologi Malaysia, Malaysia); Deris Stiawan (University of Sriwijaya, Indonesia); Muhammad Sulkhan Nurfatih (Universiti Teknologi Malaysia, Malaysia); Sharipuddin Sharipuddin (STIKOM Dinamika Bangsa, Indonesia)
A signature-based Collaborative Intrusion Detection System (CIDS) depends heavily on the reliability of nodes to provide IDS attack signatures: each node in the network is responsible for providing new attack signatures to be shared with the other nodes. This paper highlights a key problem in CIDS: maintaining trust among the nodes while sharing attack signatures. Recently, researchers have found that blockchain has great potential to solve this problem. A consensus algorithm in blockchain can increase trust among the nodes and allows data to be inserted from a single source of truth. This paper aims to design an extension of a hybrid PoW-PoS chain-based consensus algorithm that fulfills this requirement. The extension is named Proof of Attack Signature (PoAS). In the evaluation, the results demonstrate that the PoAS consensus algorithm successfully builds trusted attack signatures and enhances the robustness of CIDS compared to a network without PoAS.
pp. 509-514
GCRFP Cache Algorithm Simulation using User Space Filesystem
Wahyu Suadi (Institut Teknologi Sepuluh Nopember, Indonesia); Supeno Djanali (Sepuluh Nopember Institute of Technology, Indonesia); Waskitho Wibisono (Institut Teknologi Sepuluh Nopember, Indonesia)
The SSD (Solid State Drive) is a medium that can change storage systems. Since it is smaller, it acts as a cache for larger and slower media, and there has been research on the SSD as a cache for Hard Disk Drive (HDD) media. Many SSD cache algorithms have been developed; GCRFP is a recent SSD cache algorithm using a ghost-cache mechanism, as in LARC. Its development used traces to assess the algorithm's performance on different workloads. To further study its performance behavior, this paper implements another simulation model using a userspace filesystem, built with FUSE (Filesystem in Userspace) on Linux. This model enables simulation using benchmark applications. The results show that GCRFP gives performance comparable to LARC; given its more complex logic and code, on these workloads GCRFP gives small to no benefit over LARC. Additionally, GCRFP and LARC do not always provide the best hit ratio, but they consistently bring good results in write ratio.
pp. 515-519
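The ghost-cache mechanism shared by LARC and GCRFP admits a block into the SSD cache only after its key has already been seen once in a key-only ghost list, filtering out one-touch blocks. A minimal sketch of that admission idea (not either paper's actual algorithm):

```python
from collections import OrderedDict

class GhostCache:
    """Admit a block to the cache only on its second access (LARC-style ghost cache)."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.cache = OrderedDict()   # key -> block (the simulated SSD cache)
        self.ghost = OrderedDict()   # keys only, seen once but not yet admitted
        self.hits = self.accesses = 0

    def access(self, key):
        self.accesses += 1
        if key in self.cache:
            self.cache.move_to_end(key)          # refresh LRU position
            self.hits += 1
        elif key in self.ghost:                  # second access: promote into the cache
            del self.ghost[key]
            self.cache[key] = key
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)   # evict the LRU block
        else:                                    # first access: remember the key only
            self.ghost[key] = None
            if len(self.ghost) > self.capacity:
                self.ghost.popitem(last=False)

c = GhostCache(capacity=2)
for k in "ababab":   # 'a' and 'b' are admitted on their second access, hit on the third
    c.access(k)
print(c.hits, c.accesses)
```

Because cold blocks never reach the cache, the ghost list reduces SSD writes, which is why such schemes tend to do well on write ratio even when the hit ratio is not the best.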

Tuesday, November 24 5:00 - 5:30

Awarding + Closing Ceremony

Rooms: Room A, Room B, Room C, Room D, Room E

Closing Ceremony Awarding Best Paper