Program for 2018 International Conference on Platform Technology and Service (PlatCon)
Sunday, January 28
Sunday, January 28 11:00 - 12:00
Local Arrangement Meeting
Sunday, January 28 13:30 - 15:30
ICTPS & ICRP Steering Meeting
Sunday, January 28 16:00 - 18:00
Conference Committees' Meeting (Steering Committee / Organizing Committee / Program Committee)
Monday, January 29
Monday, January 29 9:00 - 9:30
Monday, January 29 9:30 - 10:50
1-A: Computing Platform
- A Study on Dynamic Role-Based User Service Authority Control and Real-Time Service Configuration
- Recently, most software is provided as a cloud service or web application to meet the needs of diverse users, specialized research fields, and individually dependent systems. Unlike in the past, there is a growing need for converged software services customized for each user, offering an optional service environment in which services can be combined according to the user's purpose and needs. In particular, a service on a cloud system is provided when an accessing user receives service access authority for a certain period, and the corresponding right is retrieved when the deadline expires; various methods of controlling access to the system have been devised for this purpose. In this paper, we define dynamic roles for users accessing web services and manage user authorities according to each role, so that appropriate service resources can be provided to users according to their rights and session information. We also study how to provide users with an optional cloud software environment through the real-time combination and distribution of the services to which each user's dynamically assigned role grants access.
- Data Analysis Platform Using Open Source Based Deep Learning Engine
- In this paper, we propose a platform for analyzing data using an open source deep learning engine. As the demand for deep learning technology grows, research is needed on a deep learning platform that can process and analyze various kinds of learning data. We use TensorFlow and TensorFlow Serving to provide a deep learning platform for analyzing various data. To process large-volume learning data, the platform preprocesses the data into sizes and types it can handle, and distributes the learning data appropriately according to the processing capability of each node. In the future, we will implement a deep learning platform based on this design, develop a deep learning model using actual large-volume learning data, and optimize the platform's functions.
- Detecting Flood Vulnerable Areas in Social Media Stream Using Association Rule Mining
- In this study, we identify flood-vulnerable areas by applying association rule mining to social media streams. The following processes are involved: (1) data collection; (2) data cleaning; (3) representing the training data; (4) determining the associations between words; and (5) using the association values as a guide to identify vulnerable areas. As a testbed, we focused on tweets from Metro Manila, particularly tweets from August 2015; we chose tweets because they are publicly available. This study will aid government agencies, specifically those focused on disaster management and flood-related projects. The paper demonstrates the possibility of detecting locations in Metro Manila, which in turn raises the possibility of tracing flood-vulnerable areas. As future work, since entity extraction is currently done manually, automating it could be very helpful to other researchers.
- Analyzing Overhead from Security and Administrative Functions in Virtual Environment
- The paper analyzes the performance of an administrative component that helps the hypervisor manage the resources of guest operating systems under fluctuating workloads. The additional administrative component provides an extra layer of security to the guest operating systems and the system as a whole. In this study, the administrative component was implemented using a Xen-hypervisor-based para-virtualization technique and was assigned additional roles and responsibilities that reduce the hypervisor's workload. The study measured the resource utilization of the administrative component when excessive input/output load passes through the system; performance was measured in terms of bandwidth and CPU utilization. Based on the analysis of the administrative component's performance, recommendations are provided with the goal of improving system availability, including detecting the performance saturation point that indicates when load-balancing procedures should be started for the administrative component in the virtualized environment.
1-B: Networking Platform
- Resource-Oriented Architecture for Smart Home Operations Management Platforms
- To date, Smart Home technologies are still not widely deployed in most people's living spaces. The main reason is that operations management mechanisms for the Smart Home, such as remote deployment, monitoring, and maintenance, are not well studied, and only a few attempts have so far been made in this direction. CWMP, proposed by the Broadband Forum, is a promising standard for realizing a Smart Home operations management platform. Previously, we investigated real-world operations management issues of Smart Home services, namely new installation, module purchase and download, service start, service update, service diagnosis, failure recovery, usage statistics, and billing. After examining CWMP in detail, several issues were identified, namely poor performance and scalability, poor domain model design, and an inappropriate web callback architecture. The objective of this paper is, therefore, to address these issues by suggesting a set of RESTful ways to refactor the CWMP-based operations management platform. The overall approach is based on the RESTful architectural style. The proposed architecture has been realized as an operations management platform prototype, and validations and experiments were performed to verify its effectiveness.
- Command and Control Platform Using Private Blockchain
- Each country's military develops its own information technology to protect its defense information, while also striving to exploit information stolen from other countries' militaries. Because it holds the most sensitive information, a military's Command and Control (C2) system has become a prime target of cyber attacks. Therefore, this paper applies blockchain technology to develop a more secure C2 framework. The paper shows a schema for a C2 framework using a private blockchain and explains why blockchain is needed for the framework. We propose a blockchain schema for the C2 framework, reconstructed to solve the confidentiality and authority issues of military data, and provide information about the block header and the consensus algorithm used to maintain the blockchain network without a cryptocurrency.
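The hash-linked block structure the abstract refers to can be sketched as follows; the field names and the SHA-256 choice are illustrative assumptions, not the paper's actual schema:

```python
import hashlib
import json

def make_block(index, prev_hash, payload):
    """Build a block whose header commits to the previous block's hash."""
    header = {
        "index": index,
        "prev_hash": prev_hash,
        "payload_hash": hashlib.sha256(payload.encode()).hexdigest(),
    }
    block_hash = hashlib.sha256(
        json.dumps(header, sort_keys=True).encode()).hexdigest()
    return {"header": header, "payload": payload, "hash": block_hash}

def verify_chain(chain):
    """A chain is valid if each header's prev_hash matches the prior hash."""
    return all(cur["header"]["prev_hash"] == prev["hash"]
               for prev, cur in zip(chain, chain[1:]))

genesis = make_block(0, "0" * 64, "genesis")
b1 = make_block(1, genesis["hash"], "order: hold position")
chain = [genesis, b1]
```

Because each header embeds the previous block's hash, tampering with any stored record invalidates every later link, which is the property that makes the ledger suitable for sensitive C2 data.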
- Detecting Money Laundering by Analyzing Cryptocurrency Transaction Graph
- As the cryptocurrency market grows, various cryptocurrency-based crimes have come to light. Among them, this paper focuses on money laundering, which can damage the national economy, and presents a methodology to detect it. Using the ability of the blockchain to query cryptocurrency transactions, the paper first builds a graph of the transactions of a specific cryptocurrency. Then a mixer pattern, one of the characteristics of money laundering, is used to find wallet addresses in the graph that are suspected of participating in money laundering. To find the mixer pattern in the transaction graph, the paper uses a subgraph isomorphism algorithm.
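As a rough illustration of the mixer-pattern idea, the sketch below flags wallet addresses whose transaction fan-in and fan-out both reach a threshold; this is a simplified stand-in for the full subgraph isomorphism matching the paper uses, and the edge data and threshold are invented for the example:

```python
from collections import defaultdict

def find_mixer_candidates(edges, k=3):
    """Flag addresses that receive from >= k wallets and pay out to
    >= k wallets, a star-shaped proxy for a mixer subgraph."""
    indeg = defaultdict(int)
    outdeg = defaultdict(int)
    for src, dst in edges:
        outdeg[src] += 1
        indeg[dst] += 1
    return sorted(a for a in set(indeg) | set(outdeg)
                  if indeg[a] >= k and outdeg[a] >= k)

# toy transaction graph: w9 receives from three wallets, pays out to three
edges = [("a", "w9"), ("b", "w9"), ("c", "w9"),
         ("w9", "x"), ("w9", "y"), ("w9", "z"),
         ("a", "b")]
```

A full isomorphism check would additionally constrain amounts and timing on the matched edges; the degree test above only narrows the candidate set.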
1-C: Human & Media Platform
- A Case Study for the Approach of Converting a Mobile RPG to a Multiplayer VR Game
- In recent years, multiplayer VR content in which multiple users cooperate or confront each other in the same physical space has been developed and put into service. This type of content must be presented from a first-person view. In addition, the positions of the users' HMDs in the real space where they are playing must be reflected in the virtual space in real time. In this paper, we propose points to consider when creating such VR content, and we describe a case in which a mobile RPG already in service was converted directly into a multiplayer VR game.
- Movie Recommendation Using Metadata Based Word2Vec Algorithm
- Nowadays, recommending preferable items from among a huge number of items is essential in online markets. Many content platforms, such as YouTube and Amazon, use recommendation techniques to recommend items, and various techniques have been studied to recommend desirable items to each user. In this paper, we propose a method for effectively recommending preferable movies to each user by combining community users' movie ratings and movie metadata with deep learning technology. The proposed method shows a 0.165 improvement in Recall@100 compared with the baseline method.
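The Recall@100 figure quoted above can be computed as follows; this is the standard definition of the metric, not the paper's code:

```python
def recall_at_k(recommended, relevant, k=100):
    """Fraction of a user's relevant items that appear in the
    top-k recommendations (Recall@k)."""
    if not relevant:
        return 0.0
    hits = len(set(recommended[:k]) & set(relevant))
    return hits / len(relevant)
```

Per-user scores are typically averaged over all test users to obtain the reported figure, so a 0.165 gain means the top-100 lists cover 16.5 percentage points more of the held-out relevant movies on average.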
- Design of Lighting Control System Considering Lighting Uniformity and Discomfort Glare for Indoor Space
- In modern society, the indoor lighting environment has an uneven illuminance distribution because artificial lighting has fixed optical properties such as illuminance and color temperature, and light also enters from outside. Eye fatigue accumulates in an uneven illumination environment, and because the background brightness changes with the occupant's position, discomfort glare occurs when high-luminance light strikes eyes adapted to a different brightness. To create a pleasant indoor lighting environment, it is necessary to develop a lighting system that can maintain lighting uniformity and reduce glare. In this paper, we propose a lighting control system that considers discomfort glare while maintaining the lighting uniformity of an indoor space.
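The lighting uniformity the abstract aims to maintain is commonly quantified as the ratio of minimum to average illuminance over a grid of measurement points; a minimal sketch (the metric choice is a standard convention, not taken from the paper):

```python
def uniformity_ratio(illuminance_lux):
    """Lighting uniformity U0 = Emin / Eavg over measured grid points;
    values closer to 1.0 mean more even illumination."""
    avg = sum(illuminance_lux) / len(illuminance_lux)
    return min(illuminance_lux) / avg
```

A controller can dim or boost individual luminaires until this ratio stays above a target while keeping peak luminance below the glare limit.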
- Analysis of Bio-Signal Data of Stroke Patients and Normal Elderly People for Real-Time Monitoring
- We measured vital signs and motion data from 50 stroke patients and 50 healthy elderly people. This work is part of a study comparing the data patterns of elderly people by measuring the daily-life data, motion data, body pressure, EEG, ECG, EMG, and GSR data of stroke patients. We designed scenarios (walking, moving objects, sitting, etc.) to obtain natural daily data from the stroke patients. In this paper, we focus on the ECG data, which differ significantly between stroke patients and the healthy elderly. This work is part of an effort to develop a technology and system that can proactively detect stroke.
Monday, January 29 10:50 - 11:10
Monday, January 29 11:10 - 12:30
2-A: Convergence Platform
- A Design of IoT Based Contextual Adaptation Management System
- We propose a method to intelligently provide benefits to people based on data collected in an IoT environment through contextual adaptation. This method makes it possible to connect and abstract the results of human learning and machine learning, and to provide more dynamic and flexible services by constructing a system that can learn again from the abstracted results.
- Situational Awareness for Cyber Threat Intelligence Using LSTM Based RNN
- Since the Paris terrorist attacks of November 2015, terrorist groups such as ISIS have been mounting physical and cyber terrorism in parallel. To cope with such cyber threats, cyber threat intelligence (CTI) is in demand. In the United States, the Department of Homeland Security has developed the cyber threat information languages STIX (Structured Threat Information eXpression) and TAXII (Trusted Automated eXchange of Indicator Information) to prevent cyber threats. To utilize such CTI, cyber situational awareness (CSA) is needed that can recognize environmental factors in cyberspace and cope with future threats. This study applies Factor Analysis of Information Risk (FAIR), a risk measurement method, to CSA. The paper also proposes a CSA system that matches the elements of FAIR to the objects of STIX through an LSTM-based RNN.
- Deep Residual Convolutional Network for Natural Image Denoising and Brightness Enhancement
- In low-light shooting environments, the camera sensor loses substantial detail and produces fuzzy edges. A deep low-light residual convolutional network (LRCNN) is proposed in this paper, which utilizes sparse coding features to recover the true signal and adaptively adjusts image exposure in the low-light state. The residual connections in LRCNN help preserve more of the potential detail in the original picture and accelerate the training of the network. Many existing image enhancement algorithms can address only one aspect of image degradation; we designed a neural network system that deals with several image processing problems at the same time. The experimental results show that our system effectively restores images affected by darkness and noise, while avoiding an artificial appearance in the generated image patches.
2-B: Networking Platform
- BER of Alamouti STBC MC-CDMA with Imperfect Channel State Information
- In this paper, we study the effect of channel estimation errors on the multiple-input multiple-output (MIMO) multi-carrier code division multiple access (MC-CDMA) system. The performance of the Alamouti Space-Time Block Code (STBC) combined with MC-CDMA is derived using the moment generating function (MGF) method, and a closed-form bit error rate (BER) expression is presented. Without loss of generality, a scheme with two transmit antennas and one receive antenna is considered, which can be straightforwardly extended to more general cases. In the case of imperfect channel estimation, the simulation results demonstrate that the BER performance of the MIMO MC-CDMA system depends greatly on the errors in the channel estimation.
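A Monte-Carlo sketch of the 2x1 Alamouti BPSK scheme under imperfect channel estimation is shown below; the SNR normalization and Gaussian estimation-error model are simplifying assumptions, and the paper's analytical MGF-based derivation is not reproduced here:

```python
import math
import random

def rayleigh(rng):
    # complex Gaussian CN(0, 1) flat-fading coefficient
    return complex(rng.gauss(0, math.sqrt(0.5)), rng.gauss(0, math.sqrt(0.5)))

def alamouti_ber(snr_db, est_err_std=0.0, nbits=20000, seed=1):
    """Monte-Carlo BER of a 2x1 Alamouti BPSK link over Rayleigh fading;
    est_err_std > 0 perturbs the receiver's channel estimates to model
    imperfect channel state information."""
    rng = random.Random(seed)
    n0 = 10 ** (-snr_db / 10)      # noise power for unit-energy symbols
    sigma = math.sqrt(n0 / 2)
    errors = 0
    for _ in range(nbits // 2):
        s1 = rng.choice((-1, 1))
        s2 = rng.choice((-1, 1))
        h1, h2 = rayleigh(rng), rayleigh(rng)
        n = lambda: complex(rng.gauss(0, sigma), rng.gauss(0, sigma))
        # two symbol periods of the Alamouti code (BPSK, so s* = s)
        r1 = h1 * s1 + h2 * s2 + n()
        r2 = -h1 * s2 + h2 * s1 + n()
        # receiver-side (possibly imperfect) channel estimates
        g1 = h1 + complex(rng.gauss(0, est_err_std), rng.gauss(0, est_err_std))
        g2 = h2 + complex(rng.gauss(0, est_err_std), rng.gauss(0, est_err_std))
        # Alamouti combining and hard BPSK decisions
        y1 = g1.conjugate() * r1 + g2 * r2.conjugate()
        y2 = g2.conjugate() * r1 - g1 * r2.conjugate()
        errors += (y1.real < 0) != (s1 < 0)
        errors += (y2.real < 0) != (s2 < 0)
    return errors / nbits
```

With perfect estimates the combiner yields (|h1|^2 + |h2|^2)·s plus noise, which is the two-branch diversity gain; mismatched estimates leave residual interference between the two symbols, which is what degrades the BER.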
- A Study on Generating Sensory Effects of MPEG-V Motion for Realistic Content
- MPEG-V provides a standard for representing motion effects. This paper presents how to use object tracking and motion vector information from realistic content, such as 4D films and virtual reality, as motion effects, and how to express the resulting patterns as meaningful motion sensory effects. The experiment describes the process of analyzing the location and orientation information extracted from each frame of actual content and recognizing it as a pattern.
- Reference Security Architecture for Body Area Networks in Healthcare Applications
- Body Area Networks (BAN) and Wireless Body Area Networks (WBAN) are used in the healthcare industry to improve medical outcomes by monitoring and treating patients as they go about their everyday lives. A BAN facilitates data collection from the human body via small wearable or implantable sensors. This technology has improved the quality of medical services and lowered some associated costs. BAN has a wide range of applications, such as monitoring patients' medical conditions and enhancing their response to treatment plans, but security and privacy are among the major concerns in BAN-based healthcare systems, as patients' data must be kept secure from adverse events and attackers during transmission and in storage. This paper reviews BAN communication standards, security threats and vulnerabilities of BAN-based systems, and existing security and privacy mechanisms. Based on the review, the paper proposes a reference security architecture focused on developing a secure foundation for the BAN layer, called Tier 1. The reference security architecture incorporates the IEEE 802.15.6 (WBAN) standard, which provides a security baseline, and will assist BAN manufacturers and auditors in developing and assuring secure BANs.
- Development of a 9-Axis Sensor-Based Motion Extraction Program for Generating a Motion Accuracy Determination Factor for Personal Training
- More and more people are exercising steadily for health maintenance and improvement. In particular, interest is high in personal training (PT), which has no spatial restrictions and can maximize exercise effects in the short term. However, it is important to maintain accurate exercise postures to prevent injuries and improve the effect of PT exercise. This study designs and implements a motion extraction program in which motion data are acquired and analyzed through multiple 9-axis motion sensors attached to the user's body, supporting an accuracy judgement for each exercise. The proposed program collects acceleration, gyroscope, and magnetometer data for the x, y, and z axes from 9-axis motion sensors attached to various parts of the body. It then selects specific intervals of the preparatory, intermediate, and finishing postures of movement-intensive exercises, and calculates quaternion-based Euler angles. For exercises in which maintaining an angle is important, Euler angles are extracted from the major body parts and calculated as exercise factors that can be used to judge PT exercise accuracy.
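The quaternion-to-Euler-angle step mentioned above can be sketched as follows, using the common aerospace (Z-Y-X) convention; the convention choice is an assumption, as the abstract does not specify one:

```python
import math

def quaternion_to_euler(w, x, y, z):
    """Convert a unit quaternion (w, x, y, z) to (roll, pitch, yaw)
    in radians under the Z-Y-X rotation convention."""
    roll = math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    # clamp guards against tiny numerical overshoot outside [-1, 1]
    pitch = math.asin(max(-1.0, min(1.0, 2 * (w * y - z * x))))
    yaw = math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    return roll, pitch, yaw
```

For example, a sensor rotated 90 degrees about its vertical axis yields yaw = pi/2 with zero roll and pitch, which is the kind of joint-angle reading the accuracy judgement compares against a reference posture.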
2-C: CRET 2018
- The Impact of ICT-based Education on Students' Learning Performance
- This study explores the effectiveness of ICT-based learning in developing EFL university students' learning behaviors and learning performance. In the field of EFL education, many studies have attempted to improve the effectiveness and performance of English lexical learning, because lexical knowledge is one of the vital components in enhancing the communicative competence of EFL learners. For this study, classroom tasks were set up that integrated the concept of smart learning for student-directed lexical learning in English. The study found that ICT-based learning activities helped students become better engaged in their own learning process. The specific features of the ICT integration also stimulated learners' motivation: students in this class were transformed from passive learners into active learners who willingly engaged in their learning. This suggests that ICT-based education can be a more effective medium for a student-directed EFL learning environment than traditional text-based learning materials, in that new technological tactics can arouse students' motivation and lead to better learning performance.
- Students' Growth in a Business English Course: Insights for English for Specific Purposes
- The purpose of this study is to report on students' development as future employees in business settings and to learn about their experiences in a business English course. A qualitative case study research design guided the study. Eight students' experiences were collected through video-taped classroom interactions during the fall semester of 2017 (45 class sessions in total), interview transcripts, writing samples, presentation materials, and the researcher's observation notes. Students developed their knowledge of business English by engaging with both new content and linguistic knowledge in business studies. Extensive practice with improvised and extended speaking about what they were learning interacted positively with their growth as future employees in business contexts. Core concepts needed to be explicitly addressed to support students' learning, and once new knowledge had been acquired, students showed strong motivation to investigate business concepts further. Educational implications are discussed for future implementation of English for Specific Purposes (ESP) education.
- A Study on the Use of Creative Digital Writing Activity
- The purpose of this paper was to examine the concrete implementation of a creative digital writing activity in university literary education. The research reflects the changes in the attitudes of university students participating in writing and reading activities in the era of the digital revolution. Participants used URL links to write their own articles, and the researcher analyzed how the creative digital texts produced by the students made sense. While engaging in the activity, students directly experienced the relationship between the whole and its parts, as well as the changes in meaning produced by the new narrative structure. In addition, students experienced the linkage between collective intelligence and narrative structure, and had the experience of combining digital communication with writing activities.
- Differences in Reading Processes Between Gifted English Language Learners and General Middle School English Language Learners During English Text Reading
- The study examined the differences in the reading processes that gifted English language learners (GELLs) and general middle school English language learners (GMELs) used while reading English texts. An English reading test was conducted with the GELLs and GMELs, followed by in-depth interviews to determine how each participant had read the texts; 5 GELLs and 5 GMELs participated in the interviews. Three differences emerged from these data. First, the GELLs had no difficulty understanding sentences containing unknown words, while the GMELs tended to struggle with such sentences. Second, the GELLs could process sentences rapidly because they had sufficient knowledge of sentence structure; in contrast, the GMELs tended to have difficulties because they lacked such knowledge. Finally, the GELLs tended to process sentences automatically, while the GMELs could not. Some pedagogical implications are provided based on these findings.
- On Science Education in the Era of the 4th Industrial Revolution
- The purpose of this study is to investigate future learners' competences and the direction of science education in preparation for the era of the 4th Industrial Revolution. Through outdoor science learning, learners can cultivate human nature and intuition and gain insights from nature. Heuristics, the cognitive shortcuts learned through outdoor learning, enable us to cultivate competences and sensibilities of human nature that artificial intelligence does not possess. Science education should be activated and expanded by building a science education platform containing learning materials for various outdoor science learning activities.
Monday, January 29 12:30 - 14:00
Monday, January 29 14:00 - 15:10
Opening Remark & Keynote Speech 1 (Terrace Ballroom-A)
Gestalt and the corresponding Gestalt laws of vision are apparent phenomena of visual perception that still lack a general understanding, despite more than 100 years having passed since their first mention in the psychological literature. In this contribution, we want to promote Gestalt as a challenge to the naturally and biologically inspired computation community. Browsing the bulk of existing research literature on the Gestalt theme, with only a few notable exceptions (such as the Helmholtz principle), there is not much indication of a comprehensive approach to the understanding of Gestalt, of explanations of the means for its application, or of advances in providing models that reflect the complex interplay of Gestalt laws in a verifiable manner. That said, Gestalt currently raises more questions than answers, and it may slowly become obvious that Gestalt is more than just a source of inspiration for new algorithms, or a stimulus for modifying existing ones. It is also becoming clearer that the open issue is not merely a lack of a "holistic view" in present science, as is often stated. Any further progress in this regard may require a more rigorous departure from existing computational paradigms and concepts than expected.
In this talk, the state of research on Gestalt in the engineering sciences, especially image processing and pattern analysis, will be critically reviewed, and its strong and weak points will be evaluated. Moreover, newly emerging computational paradigms and models will be assessed for what they might contribute to the understanding of Gestalt. Among these paradigms and models are Neural Darwinism, which relates evolutionary concepts to the processing of the brain, and the recently proposed Cogency Confabulation, which relates learning to the maximization of a priori probability and is accompanied by a novel neural network architecture. Applications of these neural approaches to Gestalt, for example in the subjective evaluation of video quality, will be demonstrated.
Monday, January 29 15:10 - 15:30
Monday, January 29 15:30 - 16:50
3-A: Convergence Platform
- Design of a Band-Type Device to Measure UV Index and UVB Irradiance in Everyday Life
- UV (ultraviolet) radiation is known to negatively affect the human body: UVA promotes skin aging, and UVB increases the prevalence of skin cancer. UVB, however, also serves a useful function by supporting vitamin D production in the body. People therefore need to be exposed to UV through appropriate outdoor activities and to obtain information about the UV environment around them. In this regard, this study devises a band-type device to measure UV irradiance in everyday life. The proposed device consists of a band worn on the user's wrist, an Arduino-based microcontroller, and a UV sensor measuring UV irradiance. The device is expected to encourage appropriate outdoor activity in UV environments that do not harm the body by offering precise information on the UV index and UVB irradiance around the user.
- Analysis of Association Between Students' Mathematics Test Results Using Association Rule Mining
- With the development of computers, the amount of data is rapidly increasing, as larger amounts of data can be processed than in the past. Data mining technology aims to find meaningful patterns in large amounts of data; its methods include classification, clustering, and association analysis. In this study, we use the Apriori algorithm, one of the association rule mining methods, to analyze the relationship between students' mathematics scores and their problem-solving patterns using mathematics test data. Through the association rule search, we were able to analyze, for data in a specific score range, the correlation of the solution patterns for specific problems.
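A minimal Apriori-style pass over such test data might look as follows; the transaction encoding (problems each student answered correctly) and the thresholds are invented for illustration:

```python
from collections import Counter
from itertools import combinations

def association_rules(transactions, min_support=0.5, min_conf=0.7):
    """Tiny Apriori-style pass: find frequent single items, build
    candidate pairs only from them, then emit rules A -> B whose
    confidence support(A,B)/support(A) clears the threshold."""
    n = len(transactions)
    item_cnt = Counter(i for t in transactions for i in set(t))
    frequent = {i for i, c in item_cnt.items() if c / n >= min_support}
    pair_cnt = Counter()
    for t in transactions:
        for a, b in combinations(sorted(set(t) & frequent), 2):
            pair_cnt[(a, b)] += 1
    rules = []
    for (a, b), c in pair_cnt.items():
        if c / n >= min_support:
            for x, y in ((a, b), (b, a)):
                conf = c / item_cnt[x]
                if conf >= min_conf:
                    rules.append((x, y, round(conf, 3)))
    return sorted(rules)

# toy data: sets of problems each student solved correctly
data = [{"p1", "p2"}, {"p1", "p2", "p3"}, {"p1", "p3"}, {"p2", "p1"}]
```

A rule such as ("p3", "p1", 1.0) reads: every student who solved p3 also solved p1, which is the kind of score-pattern correlation the abstract describes.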
- A Framework on Semantic Thing Retrieval Method in IoT and IoE Environment
- The concepts of the Internet of Things (IoT) and the Internet of Everything (IoE) were proposed to realize intelligent and autonomous thing-to-thing communication using Web standard technology. The large-scale data generated in IoT and IoE environments are widely considered to contain highly valuable and useful information. Increasing requirements to generate new value through data analysis and autonomous mash-up have resulted in active studies on integrating semantic web technologies into the IoT and IoE. In particular, to utilize IoE data in real environments, connectivity and mutual cooperation technology suitable for various services are required. In this paper, we propose a semantic thing retrieval system for the IoT and IoE that can perform efficient retrieval operations over diverse thing metadata and thing state information.
- Balancing Method in Cyber Situational Awareness: Human Psychological Change by Cyber Attack
- Recently, cyber attacks in cyberspace have been increasing along with Internet usage. Attacks such as ransomware, which cause damage in the real world for monetary profit, are delivered through sites that are easily accessible to people or through heavily used services, and they have proven capable of causing real harm. This study measures risk in a way that reflects changes in human psychology, based on recent ransomware attacks in Korea. To protect against the threat of personal information leakage implied by this risk, we suggest ways to balance cyber situational awareness using human psychological change.
3-B: FSP 2018
- Threat Analysis of Wi-Fi Connected Dashboard Camera
- Recently, dashboard cameras (dashcams) have gained the ability to connect to smartphones over Wi-Fi for ease of use. However, they have a severe vulnerability: they lack an authentication process. We analyzed threats across the dashcam's overall functions using a Data Flow Diagram and STRIDE analysis, and found that the dashcam's vulnerabilities are closely related to its Wi-Fi function and the lack of authentication. We compiled sets of possible attacks and presented them as attack trees; attack tree analysis was used to categorize and prioritize the possible attack paths. This research identifies vulnerabilities of dashcams and emphasizes that information security standards for dashcams must be established and applied by manufacturers.
- Reminder as 'Watch-Out': The Role of Privacy Salience at the Point of Interaction with Biometric Systems
- While biometric personal recognition methods are automated and easy to use, users of biometric systems are concerned about privacy. For users who want to protect their biometric information, practitioners need to act in the public interest to better prevent leakage of biometric information at the point where the user interacts with the biometric system. This study explores how users perceive the consequences of registering biometric information, drawing on the Theory of Planned Behavior and attitudes. Sensitivity to the potential consequences of the biometric system was confirmed through the behavior and responses observed after subjects used the system, and the sensitivity of the experimental group was found to be higher than that of the control group. This study therefore suggests that a simple presentation of information during system interaction can lead to more protective behavior regarding privacy.
- Permission Management Method for Applications Before and After an Update in an Android-Based IoT Platform Environment
- The Android-based IoT (Internet of Things) platform, like standard Android, provides an environment that makes it easy to use Google's infrastructure services, including development tools and APIs, and helps control the sensors of IoT devices. Applications running on the Android-based IoT platform are often UI-free and use their registered permissions without the user's consent. When permissions are registered indiscriminately during application updates, it is difficult to check them, let alone respond to their misuse. This paper analyzes the versions of an application running on the Android-based IoT platform before and after an update, together with the collected permission lists. It identifies the permissions that are the same before and after the update, as well as those deleted or newly added by the update, and thereby responds to security threats that can arise from permissions not needed for the IoT device to perform its functions.
- The Framework of 3P-Based Secure eHealth-Information System
- As eHealth information is computerized and managed, sensitive eHealth information has become a target of personal information leaks. Accordingly, it is necessary to improve the security of the eHealth information system by analyzing the security threats along each information flow. In this paper, we refer to the patient, hospital, pharmacy, and management organizations as the 3P, and based on this we study a framework for a secure eHealth information system.
3-C: CRET 2018
- Teachers' Perceptions of the Achievement Assessment System in Secondary Schools
- The purpose of this study is to investigate teachers' perceptions of the achievement assessment system as a tool for evaluating the academic capacity of learners. The core of the 2015 revised curriculum, to be implemented from 2018, is the presupposition of changes in class. In order to survey teachers' perceptions of the achievement evaluation system, a total of 104 middle and high school teachers were surveyed. Teachers' understanding of the achievement assessment system was relatively good (mean 3.54 on a 5-point Likert scale). The most important concern in the process of applying the achievement evaluation system to college admissions was the difficulty of comparison among schools, as well as inconveniences caused by the types of high school. As the achievement evaluation system weakens discriminatory power as preliminary selection data, there are concerns about the reliability of schools and the fairness of evaluation. Therefore, universities should find ways to evaluate the academic capacities of learners by deliberating on how to use achievement evaluation data in admission selection.
- A Case Study on the Discussion Project in the Kindergarten Through Digital Methods
- The purpose of this case study was to present the meaningful process and results of the dilemma discussions about the tree in Sky Kindergarten. Applications, SNS, and Internet search methods were provided to Korean kindergartners in two different classes and to preschoolers in Puerto Rico, Thailand, and India for six months in the form of an intercultural exchange. The findings showed that dilemma discussion through digital methods could be helpful for enhancing children's creative problem-solving ability. The present study has educational implications, such as supporting children's social competency with dynamic cultural experiences through various digital methods.
- Effectiveness of Graphic Organizers in L2 Instruction: A Meta-analysis
- This study investigated the overall effects of using graphic organizers on Korean EFL students' L2 competence. A meta-analysis of 34 research findings in 31 articles was implemented to synthesize the results of these studies by calculating mean effect sizes. This study reviewed and analyzed the results of previous studies in terms of school type, treatment period, and language skills. The results revealed that using graphic organizers in teaching English had beneficial effects in general (d = .618). There was a statistically significant difference between the effect sizes for reading (d = .618) and writing ability (d = .618).
- A Study of the 3 ONs Through Analysis of Lesson Plans Made by Pre-Service English Teachers
- The purpose of this study was to research the 3 ONs (Hands-on, Minds-on, and Hearts-on). To study these three elements objectively, we analyzed 19 lesson plans made by pre-service English teachers in a teachers' college. In the literature review, we shared what the 5 ONs mean. To indicate the main foci and purposes of classroom activities, educators in Britain referred to parts of the human body: Ears-on, Eyes-on, Hands-on, and Minds-on. Nowadays some scholars have proposed new analogies, Hearts-on and Acts-on. In this research we study how 3 of these ONs are implemented in English education. To do this we examined 19 English lesson plans and analyzed them in terms of Hands-on, Minds-on, and Hearts-on. Through the analysis, we found that we need to focus on Minds-on and Hearts-on activities rather than Hands-on: many pre-service teachers tended to focus on Hands-on activities rather than other aspects in their English classes. We can assume that the activities implemented by English teachers in schools are not different from the pre-service teachers' cases, because pre-service teachers style their activities on those of English teachers in schools. Therefore, we propose a new paradigm based on Minds-on and Hearts-on activities in order to contribute to students' intellectual, moral and esthetic development, in line with the goals of Competency and Big Idea comprehension in the 2015 revised curriculum. In further studies, we hope English classes taught by teachers in schools will be analyzed based on the 3 ONs we focused on in our study. This work was supported by the Ministry of Education of the Republic of Korea and the National Research Foundation of Korea (NRF-2017S1A5A8021897).
Monday, January 29 16:50 - 17:10
Monday, January 29 17:10 - 18:30
4-A: Human & Media Platform
- Health Monitoring System for Elderly Drivers Using IoT Platform
- IoT (Internet of Things) is considered one of the most innovative technologies in smart healthcare monitoring, able to display real-time physiological parameters on computer and mobile platforms. Driving has become an integral part of our lifestyle, and stress and health abnormalities sometimes arise while driving, especially for elderly drivers. Among health problems, stroke is one of the deadliest diseases, and real-time health monitoring is desired to detect stroke onset during regular activities. The aim of our study is to develop a health monitoring system for elderly drivers using an air cushion seat and IoT devices in order to detect health abnormalities, such as stroke onset, during driving. We have also built a prototype of the health monitoring system using a quad-chamber air cushion system and IoT devices. This system can display ECG, EEG, heart rate, seat pressure balance data, face/eye tracking, etc. using IoT sensors, generate alerts, and send messages to relatives and emergency services if any health abnormality occurs during driving, in order to provide emergency assistance.
- An Efficient Video Series Decision Algorithm Based on Actor-Face Recognition
- It is an important task to check whether or not a video uploaded to the Internet violates copyright. One of the most common approaches is to compare an uploaded video with sample data provided by the original owner. A disadvantage of this approach is that a large amount of sample data must be stored in the database in advance, and the task becomes very challenging when an uploaded video is distorted. We propose a method of generating a signature set that characterizes a video series based on face recognition data, and suggest an efficient method to identify which series in the set, if any, a given video belongs to. We demonstrate that the proposed method can effectively determine whether or not a video is in a series set even when the video data available as input is relatively small compared with the previous method.
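The abstract above does not specify the matching rule, so as an illustrative sketch (with made-up actor IDs and a Jaccard-similarity rule standing in for the paper's signature comparison), series identification from recognized faces might look like this:

```python
# Hypothetical sketch: identify which series an uploaded video belongs to by
# comparing its set of recognized actor-face IDs against per-series signatures.
# Actor IDs, signatures, and the threshold are illustrative assumptions, not
# the paper's actual method.

def jaccard(a, b):
    """Similarity between two sets of recognized actor IDs."""
    return len(a & b) / len(a | b) if a | b else 0.0

def identify_series(video_actors, signatures, threshold=0.5):
    """Return the best-matching series name, or None if below threshold."""
    best, best_score = None, 0.0
    for series, sig in signatures.items():
        score = jaccard(video_actors, sig)
        if score > best_score:
            best, best_score = series, score
    return best if best_score >= threshold else None

signatures = {
    "series_A": {"actor1", "actor2", "actor3"},
    "series_B": {"actor4", "actor5"},
}
print(identify_series({"actor1", "actor3"}, signatures))  # series_A
```

A set-based signature is attractive here because it is robust to frame-level distortion: the video only needs enough recognizable faces, not exact pixel matches.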
- Robot Soccer Using Deep Q Network
- Reinforcement Learning (RL) is one promising way to develop intelligent agents in the field of Artificial Intelligence. This paper proposes an RL algorithm called Deep Q Network (DQN) and presents applications of this algorithm to the decision-making problems posed in the RoboCup. Four scenarios were defined to develop decision-making for a Small Size League (SSL) robot in various situations using the proposed algorithm. Furthermore, a Convolutional Neural Network model was used as a function approximator in each application. The experimental results showed that the proposed algorithm effectively trained the RL agent to acquire good decision making, and the agent showed good performance under the specified experimental conditions.
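At the heart of DQN is the Bellman update, which the paper's CNN approximates. A minimal tabular Q-learning sketch on a toy 1-D "reach the goal" environment (an illustrative stand-in, not the RoboCup setup) shows that update:

```python
import random

# Minimal tabular Q-learning sketch of the Bellman update that DQN
# approximates with a neural network. The 1-D "move toward the goal"
# environment is a toy stand-in for the RoboCup decision-making scenarios.

def train(n_states=5, actions=(-1, 1), episodes=500,
          alpha=0.5, gamma=0.9, eps=0.3, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:  # goal state
            a = (rng.choice(actions) if rng.random() < eps
                 else max(actions, key=lambda a: Q[(s, a)]))
            s2 = min(max(s + a, 0), n_states - 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            target = r + gamma * max(Q[(s2, b)] for b in actions)
            Q[(s, a)] += alpha * (target - Q[(s, a)])  # Bellman update
            s = s2
    return Q

Q = train()
# The learned greedy policy at each non-goal state:
print([max((-1, 1), key=lambda a: Q[(s, a)]) for s in range(4)])
```

DQN replaces the dictionary `Q` with a network trained on the same target `r + gamma * max_a Q(s', a)`, plus an experience replay buffer and a periodically frozen target network for stability.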
- Online Disaster Preparedness Application for Kids
- The Philippines has recently been experiencing frequent earthquakes; according to Phivolcs, the biggest aftershock, at magnitude 6.5, was experienced in August 2017. Education plays an important role during disasters, since most children spend six hours of the day inside school premises. Local government and other private organizations conduct generalized earthquake drills to prepare individuals in the event of a disaster. At the same time, priority has been given to equipping older school children and their teachers with the necessary information and skills to shield themselves from the consequences of sudden-onset natural hazards; for young children, the disaster risk reduction discourse has been almost completely silent. Today's students are regarded as digital natives, or digitally inclined individuals, and as such basic educational institutions play a very important role in bridging the gap between technology and user. We propose an online disaster preparedness application specifically for elementary school students, to impart to them, through videos, games, and lectures, the concepts and drills of disaster preparedness. To identify the effectiveness of the proposed methods, the researchers used a quantitative approach. Initial findings show that the proposed application and methods are effective, and they are therefore recommended for inclusion in classroom activities and the curriculum.
4-B: FSP 2018
- Efficiency-based Comparison on Malware Detection Techniques
- Malicious software has long been one of the biggest troubles for computer users. Different types of malware attack computer systems, causing serious damage to data and to system performance. Researchers have come up with different techniques for detecting malware. This paper reviews the different malware detection techniques and compares the efficiency of each of them.
- A Review of Cyber Security Controls from an ICS Perspective
- As cyber attacks on national critical infrastructure have increased, research is being actively conducted to protect industrial control systems (ICS). Compared with existing IT systems, research has mainly examined the special attributes of ICS, and based on this, many security requirements for ICSs have been derived. However, there is insufficient research on security controls for understanding and improving the security status of a target ICS. To the best of our knowledge, only comprehensive guides to applying existing security controls to ICSs have been presented. Therefore, this study suggests implications for current security control items from the ICS point of view. For this purpose, viewpoints for categorizing and analyzing information security control items were established, and based on these, an integrated matrix of ICS security controls was derived. Through the ICS security control matrix, the information security control items currently recommended are analyzed from the viewpoint of the ICS, and the present status and implications are derived. The findings can be used as an indicator for effectively applying appropriate controls to the locations where security is required. They are also expected to be a useful reference for improving and supplementing the security control items currently in use.
- Risk Management to Cryptocurrency Exchange and Investors
- Investment and interest in cryptocurrency are rapidly growing. The price of a single bitcoin, in particular, exceeded 10,000 dollars as of November 2017, and we do not know how long the uptrend will continue. Although blockchain technology is more open and security-oriented than conventional currency issuing methods, it is relatively ineffective in terms of the distribution and management of cryptocurrency. The most common ways to obtain cryptocurrency are trading through exchanges and mining, in which novices sometimes invest without sufficient knowledge. Therefore, this paper analyzes the potential vulnerabilities of cryptocurrency exchanges and individual user wallets. Moreover, this paper suggests policy-level risk management methods based on international standards such as those from NIST and ISO, covering blockchain weaknesses, countermeasures to deal with them, the management vulnerabilities of investors, server management plans, and personal action tips.
- Fast Implementation of Simeck Family Block Ciphers Using AVX2
- In CHES 2015, the Simeck lightweight block cipher family was proposed, which has an architecture similar to SIMON and SPECK. Previous work on implementing the Simeck family of lightweight block ciphers has focused on embedded device environments. In this paper, we propose fast implementation methods for the Simeck family using Intel AVX2 on the Intel x86-64 platform. The proposed Simeck32/64 and Simeck64/128 implementations require 3.797 cycles/byte and 6.734 cycles/byte, respectively. The proposed methods are 15.67 and 7.42 times faster than previous work.
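For reference, Simeck's Feistel round uses rotations by 5 and 1 bits: f(x) = (x AND (x <<< 5)) XOR (x <<< 1). A plain-Python sketch of that round for the 16-bit words of Simeck32/64 (key schedule omitted; the round keys below are illustrative, and the check is only that decryption inverts encryption, not an official test vector):

```python
# Sketch of the Simeck Feistel round (16-bit words, i.e. Simeck32/64), with
# round keys supplied by the caller -- the key schedule is omitted for
# brevity, so the keys below are arbitrary illustrative values.

MASK = 0xFFFF  # 16-bit words for Simeck32

def rol(x, r):
    return ((x << r) | (x >> (16 - r))) & MASK

def f(x):
    return (x & rol(x, 5)) ^ rol(x, 1)

def encrypt(x, y, round_keys):
    for k in round_keys:
        x, y = (y ^ f(x) ^ k) & MASK, x
    return x, y

def decrypt(x, y, round_keys):
    for k in reversed(round_keys):
        x, y = y, (x ^ f(y) ^ k) & MASK
    return x, y

keys = [0x0100, 0x0908, 0x1110, 0x1918] * 8  # 32 illustrative round keys
ct = encrypt(0x6565, 0x6877, keys)
print(decrypt(*ct, keys) == (0x6565, 0x6877))  # True
```

The AVX2 implementations in the paper exploit the fact that this round is pure AND/XOR/rotate arithmetic, so many independent blocks can be processed in parallel across 256-bit vector lanes.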
4-C: Convergence Platform
- MPC Based Ramp-Rate Control Strategy with Battery Energy Storage Systems for Wind Farms
- This paper tackles the problem of short-term wind fluctuation under the imposition of strict ramp-rate limits. A control strategy is proposed to reduce the output ramp-rate of a wind farm to a required level using model predictive control (MPC). The system consists of a wind farm coupled with a battery energy storage system (BESS) whose sole function is ramp-rate control (RRC), optimizing the wind power output as well as the BESS output energy while observing the physical constraints. A variation of up to 10% of the maximum plant capacity per minute is observed in the implementation of RRC, as required by the grid code for acceptable penetration of wind farms into the power system.
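The RRC idea can be sketched with a simplified greedy limiter (the paper uses MPC over a prediction horizon; this one-step version and its numbers are illustrative assumptions only): the BESS absorbs or supplies whatever power keeps the combined output within the ramp limit.

```python
# Simplified greedy ramp-rate limiter: the BESS charges or discharges to
# keep the per-step change of the combined plant output within ramp_limit.
# Numbers are illustrative, not from the paper; a real MPC controller would
# also optimize over a horizon subject to battery state-of-charge limits.

def ramp_rate_control(wind_power, ramp_limit):
    """Clip per-step output changes to +/- ramp_limit using a battery."""
    out = [wind_power[0]]
    battery = []  # positive = BESS discharging, negative = charging
    for p in wind_power[1:]:
        delta = p - out[-1]
        delta = max(-ramp_limit, min(ramp_limit, delta))
        out.append(out[-1] + delta)
        battery.append(out[-1] - p)
    return out, battery

wind = [10.0, 18.0, 6.0, 7.0]  # MW per minute, gusty profile
out, bess = ramp_rate_control(wind, ramp_limit=2.0)  # 10% of a 20 MW plant
print(out)  # [10.0, 12.0, 10.0, 8.0]
```

The advantage of MPC over this greedy rule is that predicted wind lets the controller pre-charge or pre-discharge the battery, smoothing ramps while respecting state-of-charge constraints.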
- Ultra-Low Power Communication for Infrastructure Monitoring During a Disaster
- To monitor public infrastructure systems such as power grid during and after a disaster or emergency, radio communication requiring neither an external power source nor a pre-deployed connection point is needed. Ambient backscatter communication meets such requirements thanks to its ultra-low power consumption and ad hoc connectivity, thus being suitable as a backup method for infrastructure monitoring in such situations. In this paper, we describe the detailed methods to realize ambient backscatter communication particularly using Wi-Fi signals as carrier signals and demonstrate an experimental testbed system to discuss its feasibility in practice.
- Application of EtherCAT in Microgrid Communication Network: A Case Study
- Microgrids can operate either connected to the utility grid or disconnected, respectively called grid-connected and islanding mode. If an emergency occurs, a rapid transition to islanding mode is executed to evaluate the cause of problem and to prevent further damage. However, seamless transition control is not realizable because of speed constraints introduced by the communication protocols being used today. This paper reviews the current technologies applied in a microgrid communication system and presents a new study to solve this problem using EtherCAT, a real-time Ethernet protocol. The advantages of this solution with respect to the state-of-the-art are summarized. We also conducted a performance analysis using minimum cycle time as the performance index to verify the feasibility of EtherCAT in a microgrid application. Potential future work is suggested related to integration of EtherCAT in an actual microgrid system for practical field experiments.
- Discrete-time Control Design for Three-phase Grid-connected Inverter Using Full State Observer
- LCL filters are increasingly preferred over L filters in grid-connected inverters due to their smaller physical size and better harmonic attenuation characteristics. However, additional control loops are often required to control the inverter-side current as well as the capacitor voltage, and these additional loops complicate the controller design process. To overcome this limitation, a discrete-time control design for a three-phase grid-connected inverter using a full state observer is presented in this paper. The controller design is accomplished based on the state-space model of the inverter system. Furthermore, to reduce the steady-state error in the output currents, a discrete-time integral state feedback controller is employed. Generally, all state variables should be available to implement a state feedback controller. To reduce the number of sensors in a practical system, an observer which uses the measured grid-side currents, grid voltages, and control inputs is constructed in the discrete-time domain to estimate the inverter-side currents and capacitor voltages. As a result of using both the feedback controller and the observer, the proposed control scheme provides better control performance in a systematic design approach. Simulation results are given to demonstrate the feasibility and performance of the proposed control scheme.
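The observer structure described above follows the standard discrete-time Luenberger form x̂[k+1] = A x̂[k] + B u[k] + L (y[k] − C x̂[k]). A minimal numeric sketch (the double-integrator model and gain L below are illustrative, not the inverter model from the paper) shows the estimate converging to the unmeasured state:

```python
import numpy as np

# Minimal discrete-time Luenberger observer sketch:
#   x_hat[k+1] = A x_hat + B u + L (y - C x_hat)
# The double-integrator model and gain L are illustrative assumptions;
# L places both eigenvalues of (A - L C) at 0.5, so the estimation
# error e[k+1] = (A - L C) e[k] decays geometrically.

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
L = np.array([[1.0], [2.5]])  # eigenvalues of A - L C are 0.5, 0.5

x = np.array([[1.0], [-0.5]])   # true state (second entry is unmeasured)
x_hat = np.zeros((2, 1))        # observer starts from zero

for k in range(60):
    u = np.array([[0.1]])       # known control input
    y = C @ x                   # measured output
    x_hat = A @ x_hat + B @ u + L @ (y - C @ x_hat)
    x = A @ x + B @ u

err = np.linalg.norm(x - x_hat)
print(err < 1e-6)  # estimation error has converged toward zero
```

In the paper's setting the same structure lets the controller feed back inverter-side currents and capacitor voltages without dedicated sensors for them.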
Monday, January 29 19:00 - 20:30
Reception Party & Service Award Ceremony
Tuesday, January 30
Tuesday, January 30 9:00 - 10:00
Tuesday, January 30 10:00 - 11:20
5-A: Computing Platform
- Context-driven Mobile Learning Using Fog Computing
- There has been rapid development in mobile, cloud computing, and sensor technologies, which enables learners to learn more effectively, flexibly, and efficiently from anywhere. Using mobile devices for learning exploits the features of mobile computing, which can be used to develop customizable systems that support context-aware learning. Mobile learning refers to the use of mobile devices for the purpose of learning while on the move. In this paper, we propose to apply fog computing to mobile learning in order to achieve efficient context-aware learning. Using fog computing reduces the latency and the time complexity of using a mobile learning application.
- A Representation Method of Graph-based Internet of Media (IoM) Objects
- A new concept, the Internet of Media (IoM), is introduced, which enables diverse communication rather than uniform, one-way delivery by using user and media information through interconnections between media. This paper considers a representation method for expressing IoM objects based on graph theory. An IoM object comprises contexts from a novel, which is a kind of media, and keyword information obtained by analyzing the novel. The coordinates of the keyword set become the initial coordinates of the IoM object in the word embedding space, and the relationship between every pair of IoM objects is calculated. If a relationship is higher than the threshold value, those IoM objects are connected to each other in the graph. The graph-based IoM objects are finally obtained after reducing the redundancy of the word embedding space through manifold learning.
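As an illustrative sketch of the pairwise-relationship step (the toy vectors, the mean-of-keywords coordinate, and the cosine-similarity measure are assumptions, since the abstract does not specify the relationship function):

```python
import numpy as np

# Illustrative sketch of connecting IoM objects in an embedding space: each
# object's coordinate is the mean of its keyword vectors, and two objects are
# linked when their cosine similarity exceeds a threshold. Vectors are toy
# data, not real word embeddings.

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def build_graph(objects, threshold=0.8):
    names = list(objects)
    coords = {n: np.mean(objects[n], axis=0) for n in names}  # object coordinate
    edges = set()
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if cosine(coords[a], coords[b]) > threshold:
                edges.add((a, b))
    return edges

objects = {
    "novel_1": [np.array([1.0, 0.1]), np.array([0.9, 0.2])],
    "novel_2": [np.array([1.0, 0.0])],
    "novel_3": [np.array([0.0, 1.0])],
}
print(build_graph(objects))  # {('novel_1', 'novel_2')}
```

The manifold-learning step mentioned in the abstract would then reduce the embedding dimensionality before or after this thresholded graph construction.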
- Automated Security Configuration Checklist for Apple iOS Devices Using SCAP V1.2
- Security content automation includes configuring large numbers of systems, installing patches securely, verifying security-related configuration settings, complying with security policies and regulatory requirements, and being able to respond quickly when new threats are discovered. Although humans are important in information security management, humans sometimes introduce errors and inconsistencies in an organization due to the manual nature of their tasks. The Security Content Automation Protocol (SCAP) was developed by the U.S. NIST to automate information security management tasks such as vulnerability and patch management, and to achieve continuous monitoring of security configurations in an organization. In this paper, SCAP is employed to develop an automated security configuration checklist for verifying Apple iOS device configurations against a defined security baseline to enforce policy compliance in an enterprise.
- Web-based Nominal Group Technique Decision Making Tool Using Blockchain
- A common interest of modern managers is finding effective group decision-making methods when members with diverse backgrounds and perspectives have to solve problems together. The Nominal Group Technique is one such method for group decision making. However, the Nominal Group Technique cannot easily ensure anonymity in a face-to-face environment, and there is no clear evidence to support the judgment of results after voting. In addition, it is difficult to trust the data because the integrity of the data generated by the voting results is not ensured. In this paper, we propose a web-based Nominal Group Technique using blockchain. The web-based Nominal Group Technique can construct an agent-based model and make requests to the blockchain network when data management is needed to guarantee the integrity of the data. In addition, the anonymity of all members is ensured through a web chat function, and the voting results can be visualized during group decision making using the Priority Graph Tool, which is implemented on the web.
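The integrity guarantee rests on the hash-chaining that underlies any blockchain: altering a recorded vote invalidates every later hash. A minimal single-process sketch (an in-memory chain, not the distributed network the paper assumes) shows this tamper-evidence:

```python
import hashlib
import json

# Minimal hash-chain sketch of making vote records tamper-evident. This is a
# single in-memory chain for illustration, not a distributed blockchain
# network with consensus as in the paper.

def make_block(prev_hash, vote):
    payload = json.dumps({"prev": prev_hash, "vote": vote}, sort_keys=True)
    return {"prev": prev_hash, "vote": vote,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def verify(chain):
    """Recompute every hash and check each block links to its predecessor."""
    prev = "0" * 64
    for block in chain:
        payload = json.dumps({"prev": block["prev"], "vote": block["vote"]},
                             sort_keys=True)
        if block["prev"] != prev or \
           block["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = block["hash"]
    return True

chain = []
for vote in ["idea-3", "idea-1", "idea-3"]:
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append(make_block(prev, vote))

print(verify(chain))         # True
chain[1]["vote"] = "idea-2"  # tamper with a recorded vote
print(verify(chain))         # False
```

In the paper's design the agent would submit such records to the blockchain network, so no single web server can silently rewrite a voting result.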
5-B: CIA 2018
- Enhanced Intelligent Character Recognition (ICR) Approach Using Diagonal Feature Extraction and Euler Number as Classifier with Modified One-Pixel Width Character Segmentation Algorithm
- In this technological age, handwriting is still an essential aspect of people's lives and of how they relate to each other. This study was created to identify the most suitable set of algorithms for recognizing cursive handwritten text and to determine how effective they would be. The proponents created a system that accepts a handwritten text image as input, runs it through processing stages, and outputs text based on the features extracted per character using Diagonal Feature Extraction, with classification using the Euler Number and the Modified One-Pixel Width Character Segmentation Algorithm. A total of 100 handwritten text images were used in evaluating the system. The system achieved a character recognition rate of 88.7838% and a word recognition rate of 50.4348%.
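The Euler number used as a classifier here is the count of connected components minus holes, which distinguishes, for example, a 'b' (one hole, E = 0) from a 'c' (no hole, E = 1). A sketch using Gray's bit-quad counting (one standard way to compute it; the abstract does not say which method the authors used):

```python
import numpy as np

# Euler number of a binary image via Gray's bit-quad counting, using the
# 4-connectivity formula  E = (Q1 - Q3 + 2*Qd) / 4  over all 2x2 windows of
# the zero-padded image. Q1/Q3 count windows with one/three foreground
# pixels; Qd counts diagonal pairs.

def euler_number(img):
    p = np.pad(np.asarray(img, dtype=int), 1)
    q1 = q3 = qd = 0
    for i in range(p.shape[0] - 1):
        for j in range(p.shape[1] - 1):
            quad = p[i:i + 2, j:j + 2]
            s = quad.sum()
            if s == 1:
                q1 += 1
            elif s == 3:
                q3 += 1
            elif s == 2 and quad[0, 0] == quad[1, 1]:
                qd += 1  # the two foreground pixels sit on a diagonal
    return (q1 - q3 + 2 * qd) // 4

solid = [[1, 1], [1, 1]]                  # one component, no hole
ring = [[1, 1, 1], [1, 0, 1], [1, 1, 1]]  # one component, one hole
print(euler_number(solid), euler_number(ring))  # 1 0
```

Because the Euler number is invariant to scaling and moderate deformation, it is a cheap but coarse feature, which is consistent with the system's stronger character-level than word-level recognition rate.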
- Demonstration of Outdoor AR Game Using SLAM-Based Camera Tracking
- This paper presents an outdoor mobile augmented reality (AR) game which uses a camera tracking technology based on simultaneous localization and mapping (SLAM). In the SLAM-based camera tracking, image feature points are extracted and described by ORB, a fast and rotation-invariant binary descriptor under a BSD license, and optical flow is used to accurately track the displacement of feature points between consecutive images. In order to prevent the number of feature points from being continuously reduced by the use of optical flow, effective feature points are added during image feature matching. For robustness to environmental changes, weight-based feature matching is applied. A 3D point cloud is generated from the feature points using multiple view geometry. An AR game space is built in Unity using the 3D point cloud, and virtual objects are placed in it. With this SLAM-based camera tracking, an AR game is demonstrated in an outdoor environment.
- 3D Watermarking Secret Direction Scheme for Volumetric DICOM Images
- This research presents a new method for watermarking multi-frame DICOM medical images viewed as three-dimensional volumes, using a secret direction. Lines are sliced in the XY direction using the Bresenham line algorithm, and the order of the lines, one per time frame, is randomized using the Halton sequence to obtain the secret direction. The lines are collected into one frame, and the watermark is then embedded using the Discrete Cosine Transform (DCT). The experimental results show that the secret-direction 3D watermarking scheme can ensure the integrity of volumetric DICOM images efficiently.
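The DCT embedding step can be sketched in a toy form (a single bit forced into one mid-frequency coefficient of an 8x8 block; the block size, coefficient position, and strength are illustrative assumptions, and a real DICOM scheme would also handle quantization, capacity, and imperceptibility):

```python
import numpy as np

# Toy sketch of DCT-domain watermark embedding: one bit is written into a
# mid-frequency coefficient of an 8x8 block by forcing its sign, then read
# back after the inverse transform. Positions and strength are illustrative.

def dct_matrix(n=8):
    k, m = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c  # orthonormal: the inverse DCT is c.T

def embed_bit(block, bit, pos=(3, 4), strength=20.0):
    c = dct_matrix()
    coeffs = c @ block @ c.T          # forward 2-D DCT
    coeffs[pos] = strength if bit else -strength  # force coefficient sign
    return c.T @ coeffs @ c           # inverse 2-D DCT

def extract_bit(block, pos=(3, 4)):
    c = dct_matrix()
    return int((c @ block @ c.T)[pos] > 0)

rng = np.random.default_rng(0)
block = rng.uniform(0, 255, (8, 8))
print(extract_bit(embed_bit(block, 1)), extract_bit(embed_bit(block, 0)))  # 1 0
```

Embedding in mid-frequency DCT coefficients is the usual compromise: low frequencies carry too much visible image energy, while high frequencies are fragile to compression.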
- A Correlation Analysis Between VR Human Factors and Sickness Symptoms for Cybersickness Prediction
- Recently, virtual reality (VR) content has been popularized due to developments in the performance of VR devices such as HMDs (head-mounted displays). VR content provides experiences through the virtual environment that are impossible in the real world. However, the virtual environment can induce sickness symptoms (cybersickness) when experienced for a long time. Previous studies have focused on warning about or blocking VR content before sickness symptoms occur; however, there have been few evaluations of objective factors such as VR human factors. Therefore, this paper performs a correlation analysis between VR human factors and sickness symptoms for cybersickness prediction.
- A Study on Non-Photorealistic Rendering for Real-Time Mobile AR
- Augmented reality (AR) is a technology that provides information by combining a real image with virtual graphic content. Early AR applications used realistic rendering to overlay the virtual graphic content. However, research applying non-photorealistic rendering (NPR), such as cartoon rendering, to AR has recently been conducted as various AR applications require it. NPR-based AR is not only useful for artistic content but also has the effect of reducing the disparity in visual realism between the real image and the augmented 3D model. Recent AR applications are developed in mobile environments due to the popularization of smartphones. The mobile environment has relatively low performance compared to a PC, so implementing NPR-based AR applications, which require high performance, is difficult. Therefore, we study non-photorealistic rendering for real-time mobile AR. To apply NPR to mobile AR, we propose a method using contour highlighting and color quantization. The proposed method is implemented using the OpenGL Shading Language (GLSL) on the Android platform.
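The color-quantization half of the method can be sketched simply: snap each channel to the center of one of a few equal bins (the bin count is an illustrative assumption; in the paper this logic would run per-fragment in GLSL, with numpy standing in for the shader here):

```python
import numpy as np

# Minimal color-quantization sketch of the kind used for cartoon-style NPR:
# each channel of a [0,1] color is snapped to the center of one of `levels`
# equal bins, producing the flat color bands typical of cartoon shading.

def quantize(rgb, levels=4):
    """Map [0,1] colors to `levels` discrete values per channel."""
    rgb = np.clip(np.asarray(rgb, dtype=float), 0.0, 1.0)
    bins = np.minimum((rgb * levels).astype(int), levels - 1)
    return (bins + 0.5) / levels  # bin centers

img = np.array([[[0.10, 0.52, 0.97]]])  # 1x1 test "image"
print(quantize(img))  # [[[0.125 0.625 0.875]]]
```

This per-pixel snap is cheap enough for mobile GPUs, which is why it pairs well with contour highlighting for real-time cartoon-style AR.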
5-C: SESIS 2018
Tuesday, January 30 11:20 - 13:30
Coffee Break & Lunch
Tuesday, January 30 13:30 - 14:20
Keynote Speech 2 (Terrace Ballroom-A)
We currently live in an information society. As information and communication technologies develop at an ever growing pace, the number of people who use these technologies on a day to day basis is also increasing. This development has brought substantial benefits but also introduced novel threats and vulnerabilities related to the leakage of confidential information, the theft of identities and the unauthorized modification of data. This shows the need for the development of a reliable and trustworthy information infrastructure. This requires trustworthy systems, and an essential building block for such systems is cryptography. Cryptography is the practice and study of techniques for secure communication in the presence of third parties called adversaries. Various aspects in information security such as data confidentiality, data integrity, authentication, and non-repudiation are central to modern cryptography.
In this talk, some of the major topics in modern cryptography are discussed, including lightweight cryptography, cryptography-based ransomware, and the recently demonstrated practical attack on the standardized hash function SHA-1.
Tuesday, January 30 14:20 - 14:40
Tuesday, January 30 14:40 - 16:00
6-A: Convergence Platform
- Lightweight Mutual Authentication and Key Agreement in IoT Networks and Wireless Sensor Networks
- Recently, as the age of the Internet of Things approaches, more and more devices communicate data with each other by incorporating sensors and communication functions into various objects. If IoT devices are miniaturized, they can be regarded as sensors having only sensing ability and low-performance communication ability. Low-performance sensors have difficulty using high-quality communication, and the wireless security used in expensive wireless communication devices cannot be applied. Therefore, this paper proposes an authentication and key agreement scheme that can be applied in sensor networks using communication at speeds below 1 kbps and with limited performance.
- Abnormal Payment Transaction Detection Scheme Based on Scalable Architecture and Redis Cluster
- Log-file-based data analysis methods in closed, fault-tolerant operating systems have shown several problems. First, it is not easy to change the direction of data analysis once the analysis process is in operation. Second, in an independent closed system, due to the limited resource policy, the goal of real-time data analysis cannot be achieved. Finally, such systems cannot utilize new technologies and open-source tools such as in-memory databases and Python. Due to these problems, existing methods have difficulty in detecting abnormal payment transactions in real time. To solve these problems, this paper proposes an abnormal payment transaction detection scheme based on a scalable network architecture and a Redis cluster, which can perform rapid transaction data collection and real-time analysis. Moreover, our proposed scheme can be used for data analysis through the reproduction of data using in-memory storage, which solves the aforementioned problem of unidirectional analysis by performing parallel processing on the distributed Redis repository.
- Application of C5.0 Algorithm to Flu Prediction Using Twitter Data
- Since one's health is an important consideration, data from Twitter, one of the most popular social media platforms, used by millions of people, is beneficial for predicting certain diseases. The researchers created a system intended to improve the precision rate of the existing system by Santos and Matos, using the C5.0 algorithm instead of the Naive Bayes algorithm for classifying tweets as flu-related or not. For testing, a total of 1000 tweets, limited to the Philippines, were gathered to evaluate the system; both English and Tagalog tweets are included in the dataset. The researchers found that the proposed system achieved a rate of 62.40% in terms of precision and 66% in terms of accuracy. It was concluded that the C5.0 algorithm is less precise but more accurate than the Naïve Bayes algorithm.
- Methodology of Future Promising Technologies in Measurement Field
- To secure the initiative in the 4th Industrial Revolution and to create new value, it is necessary to develop and anticipate future promising technologies that can solve the needs of the market and create new industries. Promising measurement technologies were selected from future technology candidates using AHP, fuzzy measures, DEA, Delphi, and other prioritization methodologies. Candidate technologies are filtered according to the composition of the measurement expert panel and the evaluation criteria. Using AHP, we select promising candidate technologies through weighting, duplication tolerance, and alternative analysis. The promising future technologies identified are expected to have a ripple effect across technology, industry, and society.
- Facial Landmark Extraction Scheme Based on Semantic Segmentation
- Facial landmarks are a set of features in the human face that can be distinguished with the naked eye; typical landmarks include the eyes, eyebrows, nose, and mouth. They play an important role in human-related image analysis: for example, they can be used to determine whether a person is present in an image, to identify who the person is, or to recognize the orientation of a face when photographing. Methods for detecting facial landmarks fall into two groups. One is based on traditional image processing techniques such as Haar cascades and edge detection; the other is based on machine learning, where landmarks are detected by training on facial features. However, such techniques have shown low accuracy, especially under exceptional conditions such as low luminance or overlapping faces. To overcome this problem, we propose a new facial landmark extraction scheme using deep learning and semantic segmentation, and demonstrate that even with a small dataset our scheme achieves excellent facial landmark extraction performance.
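Once a segmentation network has produced a per-pixel class map, a landmark point can be recovered as the centroid of each class region. The small sketch below illustrates only that post-processing step (the class ids and the toy map are invented; the paper's network and classes may differ).

```python
def landmark_centroids(seg_map, classes):
    """Given a 2-D per-pixel class map, return the centroid (row, col)
    of each requested class region -- one landmark point per facial part."""
    sums = {c: [0, 0, 0] for c in classes}  # row_sum, col_sum, pixel count
    for r, row in enumerate(seg_map):
        for c, label in enumerate(row):
            if label in sums:
                s = sums[label]
                s[0] += r; s[1] += c; s[2] += 1
    return {cls: (s[0] / s[2], s[1] / s[2]) for cls, s in sums.items() if s[2]}

# Toy 4x6 class map: 0 = background, 1 = left eye, 2 = right eye.
seg = [
    [0, 1, 1, 0, 2, 2],
    [0, 1, 1, 0, 2, 2],
    [0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
]
points = landmark_centroids(seg, classes={1, 2})
```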
6-B: SCA 2018
- Research on Optimizing KNL Hybrid Memory Using a Job Scheduler
- Scientific, engineering, and data-analysis problems involving very large-scale calculations require efficient high performance computers and a batch scheduler that can manage them. In this paper, we describe a technique for using Slurm, a widely employed batch scheduler for high performance computers, with Knights Landing (KNL), Intel's cutting-edge manycore processor. We used numactl commands within Slurm to allocate KNL's hybrid memory, that is, local memory (DDR) and high-bandwidth memory (MCDRAM). Finally, we performed experiments with our self-developed MPI benchmarks, showing performance improvements from splitting one benchmark across two nodes and assigning only the memory-intensive benchmark to MCDRAM. The two-node split outperformed the single-node configuration; in particular, when only the memory-intensive benchmark was assigned to MCDRAM, there was a 7.8% performance improvement over assigning both the CPU- and memory-intensive benchmarks to MCDRAM. The improvement arises because the memory-intensive benchmark can use twice as much MCDRAM in the two-node configuration.
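A hedged sketch of what such an allocation can look like in a Slurm job script (every value here is illustrative, not taken from the paper; on a KNL node booted in flat mode, MCDRAM typically appears as a separate NUMA node, often node 1, so `numactl --membind` can pin a benchmark's allocations to it):

```
#!/bin/bash
#SBATCH --job-name=knl-membind
#SBATCH --nodes=2               # run the benchmark divided across two nodes
#SBATCH --ntasks-per-node=64

# Bind the memory-intensive benchmark's allocations to MCDRAM (NUMA node 1
# in flat mode); a CPU-intensive benchmark would be left in DDR (node 0).
srun numactl --membind=1 ./memory_intensive_benchmark
```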
- A Proposal of Efficiency Benchmarking Model for Supercomputer Performance Measurement
- The current supercomputer Top500 list ranks the world's supercomputers based on theoretical peak performance, obtained by summing the theoretical performance of the CPUs, and measured performance, the result of executing a benchmark program. This ranking is closely tied to the performance of the central processing units and accelerators among a supercomputer's components, which means it does not fully reflect the characteristics of the supercomputer's various other components. In this paper, we propose a new benchmarking model for supercomputer performance measurement using methods from economics rather than computer science and engineering. This efficiency benchmarking model considers the several factors that make up a supercomputer.
- A Study on Users' Perception of HPC Environment and Policy
- With the emergence of issues such as the Fourth Industrial Revolution and AI, interest in HPC is also increasing. Advanced countries recognize HPC resources as a key factor in improving scientific and industrial competitiveness and are promoting various policies. The Korean government has also laid the foundation for HPC policy and has been pushing a variety of support measures since enacting the HPC Act (2011) and establishing the first basic plan (2013). To increase the efficiency of these policies, it is important to grasp the real situation. In this paper, we analyze users' perception of the HPC environment and policy from various aspects, using data from the 'National High Performance Computer Survey' conducted by KISTI. In particular, we examine differences in the policies that industrial, academic, and educational users consider important for activating the HPC ecosystem.
- Trend of Software Research for Low Power Towards Exascale Computing
- Low-power operation is a top priority for exascale HPC, and various improvements, such as reducing clock frequencies and lowering the cost of data movement, are being considered to save power while balancing energy consumption and performance. Exascale systems are expected to be built on new architectures different from previous hardware architectures, but this study focuses on software research trends for low power in achieving an exascale system.
6-C: SCA 2018
- Context/Social Aware Based Hybrid Reality CAx(CAD/CAE/CAM) Services
- CAx is considered a new environment to which the Internet of Things (IoT) can be applied. Realizing such a CAx environment requires seamless integration among humans, CAx contents, physical objects, and user interactions. However, this is currently unrealistic and expensive, given the need to construct different types of CAx contents and to support users across the various aspects of user roles and experiences. This paper proposes a context/social aware hybrid reality CAx service. The user experience is provided by integrating egocentric virtual reality with exocentric augmented reality. Furthermore, to make a user-oriented CAx environment smarter and easier to use, a context/social analysis has been carried out.
- Modeling and Simulation Cloud Platform for Multi-tenancy Support
- Modeling and Simulation (M&S) is an emerging technology that can replace physical experiments with virtual production (modeling) and engineering analysis (simulation) using high performance computers. HEMOS-Cloud (High performance Engineering MOdeling and Simulation) was developed as a private SaaS platform to provide a variety of M&S software to manufacturing enterprises. Because M&S deals with product-related data, users are sensitive to security issues, so it is necessary to construct closed M&S cloud services per user group or software package (multi-tenancy). HEMOS-Cloud supports the creation of a dedicated M&S cloud service using metadata such as domain address, user group, and menu configuration. In this paper, we describe the design and implementation of HEMOS-Cloud's multi-tenancy support.
- Design for Efficient Heterogeneous Cluster Systems in Integrated OS Environment
- To overcome resource limitations when using High Performance Computing (HPC) for large computations, we exploit underutilized personal desktops by organizing them into a heterogeneous cluster environment. An open-source scheduler, Son of Grid Engine (SGE), is provided in the integrated computing environment to manage and control the extra computing resources. We researched an SGE-based integrated OS environment spanning Windows and Linux desktops. In this environment, machines are classified into two groups according to their degree of utilization: the Full Synchronization group always shares the data space with the master node, while the Partial Synchronization group is synchronized with the slave nodes when a job is allocated and with the master node when the job is completed. We designed an efficient heterogeneous cluster system that takes the integrated OS environment into account, in which the resource manager efficiently allocates and controls the affiliated computing resources.
- Validation of OpenFOAM for Cavitating Flow Around a Sharp-Edged Circular Orifice
- This study verifies the accuracy of OpenFOAM for multiphase flow through the cavitation problem, a representative example of multiphase flow. OpenFOAM is widely used as open-source computational fluid dynamics (CFD) software. Cavitating flow around a sharp-edged circular orifice is simulated, and the result is compared with experimental results. The analysis is performed with the volume of fluid (VOF) method using the incompressible two-phase flow solver named 'interPhaseChangeFoam'. The OpenFOAM analysis yields about 0.5% error relative to the experimental result.
Tuesday, January 30 16:00 - 16:20
Tuesday, January 30 16:20 - 17:40
7-A: Human & Media Platform
- The Goals and State of Webble World 3.1 - Latest Version of the Meme Media Platform
- This paper proposes the use of the most recent Webble World implementation and Webble technology, as it has evolved during years of research on meme media theories and IntelligentPad development, as a key tool for software development and prototyping, application and knowledge federation, research collaboration, and social creative interaction, and as a web-based tool and entertainment portal, with the aim of making life easier, quicker, and more enjoyable. With Webble World 3.1, software development is much more in the hands of end users, since most applications can be developed without any code and with very little prior knowledge. The path from idea to finished product is greatly shortened by the use of decoupled meme media building objects, Webbles, provided by independent users and developers from all over the world, together with the system's built-in development framework, which guides the application creator, whether a basic user or a skilled programmer, toward the end goal: a web application, simple or complex, that solves problems, inspires joy, or simplifies life, and that is easily published and shared with the world. Webble World 3.1 also inspires and supports users to learn from and evolve with each other by letting them study in detail, and build upon, all previously published Webbles. Webble World 3.1 is an open, free, interactive sharing platform for digital human knowledge and web-based software, available online to anyone, anywhere.
- Electricity Price Forecasting for Nord Pool Data
- In many countries, power markets were deregulated to create more efficient markets; as a result, electricity can now be purchased and sold across areas and countries more easily. For participants in the electricity market, it is beneficial to forecast future prices in order to manage risks, optimize profits, and plan ahead. A number of methods have been applied to this problem; however, forecast accuracy remains insufficient because the market spot price of electricity exhibits seasonality, spikes, and high volatility, and different approaches perform differently in different countries (markets). In this paper, we discuss our experiments with electricity spot price data from Lithuania's price zone in the Nord Pool power market. Day-ahead forecasts are made using Seasonal Naïve, Exponential Smoothing, and Artificial Neural Network methods.
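The simplest of the three methods, Seasonal Naïve, can be sketched directly: each hour of the day-ahead forecast repeats the price observed one seasonal period earlier. The weekly period (168 hours) and the synthetic price series below are illustrative assumptions, not the paper's data.

```python
def seasonal_naive_forecast(prices, season=168, horizon=24):
    """Forecast the next `horizon` points by repeating the values observed
    one seasonal period earlier (e.g. 168 hours = one week for hourly data)."""
    if len(prices) < season:
        raise ValueError("need at least one full season of history")
    return [prices[len(prices) - season + h] for h in range(horizon)]

# Usage: two weeks of synthetic hourly prices with a weekly pattern.
history = [30 + (h % 168) * 0.1 for h in range(336)]
forecast = seasonal_naive_forecast(history)
```

Despite its simplicity, Seasonal Naïve is a standard baseline that more elaborate models (exponential smoothing, neural networks) must beat to justify their complexity.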
- Food Recommendation Based on Personal's Mental Health Index and SNS Analysis
- Recommendation systems aim to improve user satisfaction by aiding users' decision-making. Food recommendation systems, however, have been met with low satisfaction because they do not take account of factors beyond basic user profile information. In this study, we therefore present a system that draws on an individual's Mental Health Index (MHI) to recommend foods that were chosen by previous users with similar mental health characteristics such as stress, depression, and fatigue.
7-B: FSP 2018
- A Study on a Cyber Threat Intelligence Analysis (CTI) Platform for the Proactive Detection of Cyber Attacks Based on Automated Analysis
- This paper proposes an automated cyberattack analysis platform designed to analyze and respond to cyberattacks, which are becoming ever more intelligent and advanced. ICT information generated during previous cyberattacks is collected to analyze cyberattacks automatically, and the relationships among the collected information, the level of re-exploitation, and similar ICT information across cyberattacks are analyzed automatically. When the values currently being monitored are entered into the platform, the most similar past cyberattacks and the current phase of the attack are presented to the analyst. In addition, by providing response and analysis guidelines for potential future attack inflows, a system could be developed that blocks attacks before damage is caused.
- Design of a Cyber Threat Information Collection System for Cyber Attack Correlation
- Nowadays, the number of cyber threats is increasing continuously, and attack techniques are becoming increasingly advanced and intelligent. One important thing that should be noted with regard to this situation is the marked increase in similar cyber incidents which use the same IP, domain, and malicious code for one cyberattack. Therefore, it is essential to understand the correlation between cyberattacks that occur due to the re-use of the same attack infrastructure (IP, domain, malicious code, etc.) for different cyberattacks, in order to detect and respond promptly to similar cyberattacks. To understand the correlation between cyberattacks, it is necessary to collect the related data concerning the procedures and techniques of cyberattacks. This paper proposes the design details of the cyber threat information collection system according to such needs. The proposed system performs the function of collecting the attack infrastructure data (IoCs) exploited for the cyberattack from various open data sources (OSINT, Open Source INTelligence), and uses the collected data as an input value to collect more data recursively. The relationship of the collected data can also be collected, saved, and managed, so that the data can be used to analyze the collection of cyberattacks. The proposed system uses a virtualization structure and distributed processing technology to collect data stably from various collection channels.
- Managing Cyber Threat Intelligence in A Graph Database
- Joint efforts to cope with ever-increasing breach incidents have established standard formats and protocols and given birth to many consultative groups. In addition, channels have appeared that distribute Cyber Threat Intelligence information free of charge, and studies on utilizing those channels have spread. As the market for professionally shared information expands, the shared information must also be managed in various ways to produce better results. This paper proposes a management structure and method, based on the standardized format and on the meaning and standards of externally shared Cyber Threat Intelligence, for loading OSINT information collected from various channels into a graph database. It also proposes a method for supporting the detection performed by existing security equipment with the information saved in the graph database, along with an effective analysis method. Lastly, it discusses the advantages that can be expected when cyber threat information collected from outside sources is saved in the graph database developed here.
- Method of Quantification of Cyber Threat Based on Indicator of Compromise
- As large numbers of new and varied attacks occur in Korea, it is difficult to analyze and respond to them with limited security experts and existing equipment. This paper proposes a method of analyzing the threat posed by an Indicator of Compromise (IoC) used in a cyber incident and calculating it as a quantitative value, in order to set analysis priorities for the cyber incidents that occur in large quantities. With this method, large quantities of cyber incidents can be handled efficiently: the objective quantification of cyber threat makes it possible to quickly determine the response level of an incident and to actively analyze incidents with high threat levels.
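One minimal way to read "quantification of cyber threat" is a weighted score over IoC attributes that then drives triage ordering. The attributes, weights, and data below are illustrative assumptions, not the paper's actual model.

```python
# Illustrative attribute weights (assumed, not taken from the paper).
WEIGHTS = {
    "reuse_count": 0.4,      # how often the IoC reappears across incidents
    "blacklist_hits": 0.35,  # matches in external reputation feeds
    "recency": 0.25,         # how recently the IoC was last observed
}

def threat_score(ioc):
    """Combine normalized attribute values (each in [0, 1]) into a score in [0, 100]."""
    return round(100 * sum(WEIGHTS[k] * ioc.get(k, 0.0) for k in WEIGHTS), 1)

def triage(iocs):
    """Order incidents by descending threat score, highest priority first."""
    return sorted(iocs, key=threat_score, reverse=True)

# Usage: a frequently reused, blacklisted IoC vs. a fresh but unknown one.
high = {"reuse_count": 1.0, "blacklist_hits": 0.8, "recency": 0.5}
low = {"reuse_count": 0.1, "blacklist_hits": 0.0, "recency": 0.9}
queue = triage([low, high])
```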
7-C: SCA 2018
- Survey on OpenSCAP Profile for WLCG Environment in GSDC-KISTI Datacenter
- Maintaining the security of computer nodes through automated security processes is a task that must be performed in large-scale clustered computer systems. To this end, we use the open-source tool OpenSCAP. The security rules to be applied should be optimized for the application and the environment; we survey and apply the security rules appropriate to the Worldwide LHC Computing Grid (WLCG) environment.
- A Study on Performance Measurement and Cluster Construction in "Singularity" Virtual Environment
- The "Singularity" program provides virtualization based on Linux container technology similar to "Docker". It provides an environment in which an application can be used regardless of the version of the Linux OS on the computer, and isolation technology that prevents users from accessing unauthorized resources on the machine. Currently, the CMS group of the WLCG is asking each Tier center to use "Singularity" for its isolation function. It is therefore necessary to study how much performance is lost in the virtualization environment provided by "Singularity" and how to configure it to minimize the penalty. Previous studies showed that "Singularity" suffers notable CPU performance degradation compared to LXR technology. However, version 2.4 has recently been released, and a benchmark test of this version is required. We investigated how it affects CPU and disk I/O performance.
- KISTI CA V3.0: A New Certification Authority for Korean Grid Community
- The Korea Institute of Science and Technology Information (KISTI) has operated a certification authority for grid users since 2007. A Certification Authority or Certificate Authority (CA) is a trusted organization that issues certificates, helping other bodies authenticate the identity of a certificate holder. KISTI CA operates Public Key Infrastructure (PKI) services based on its Certificate Policy and Certificate Practice Statement (CP/CPS). The recent release of the new version of the CP/CPS includes stronger security rules, an enhanced user experience, and more efficient PKI operations. In this paper, we present the key features and current status of the new version of KISTI CA and discuss future plans.
- Applying HTCondor DAGMan in the Local Physics Analysis Farm
- When building an analysis farm with a batch system, a huge single task should be split into several small tasks in order to use the farm efficiently. The ALICE experiment, which currently uses this system, is well suited to such splitting because the analysis data are independent for each event. We envisioned a model that divides the data by event, analyzes the parts as separate jobs, and combines the results into one. To realize it, we used condor_dagman to construct a batch analysis environment, specifying the order of the analysis and merge steps. Because users have various requirements according to their experience, we created and distributed a batch job script reflecting some of them; users can then run batch jobs through a few environment variables without knowledge of the batch system.
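The split-analyze-merge ordering described here maps naturally onto a DAGMan input file. The file below is a generic illustration of that pattern (job names, submit-file names, and the two-way split are assumptions, not taken from the paper):

```
# analysis.dag -- illustrative DAGMan input for a split/analyze/merge workflow
JOB  split     split.sub
JOB  analyze0  analyze.sub
JOB  analyze1  analyze.sub
JOB  merge     merge.sub
VARS analyze0  part="0"
VARS analyze1  part="1"
PARENT split CHILD analyze0 analyze1
PARENT analyze0 analyze1 CHILD merge
```

The workflow is submitted with `condor_submit_dag analysis.dag`; DAGMan then starts the analyze jobs only after the split succeeds, and the merge only after every analyze job finishes.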
- A Study of LISP on Large-scale Data Center
- In recent years, large-scale data centers have required thousands of Internet Protocol addresses, but the available Internet Protocol version 4 (IPv4) addresses are already exhausted. Large-scale data centers are therefore adopting Internet Protocol version 6 (IPv6) addresses today. However, some large-scale data centers have not yet switched to IPv6, and connectivity with existing IPv4 must be guaranteed. Although many IPv4/IPv6 translation technologies are applied, protocol translation may introduce overhead, and additional costs may arise. To solve this issue, our paper proposes a method for applying the Locator/Identifier Separation Protocol (LISP) in large-scale data centers and explores the advantages of LISP in this setting.
Tuesday, January 30 18:30 - 20:00
Wednesday, January 31
Wednesday, January 31 9:00 - 9:30
Wednesday, January 31 9:30 - 10:50
8-A: Interdisciplinary Session
- The Virtual Illumination for Outdoor Augmented Reality
- Much content using augmented reality, one of the fields recently in the spotlight, is emerging. Such content reduces the user's sense of immersion because of the gap between the real world and the 3D virtual content. In this paper, we introduce virtual illumination for outdoor augmented reality that simulates daylight according to time and location. We show that calculating the relative position of the sun and illuminating the virtual objects to generate shadows produces more realistic results.
- Political Polarization Framework Model Using Sentiment Analysis
- There is an exponential increase in the use of social media such as Facebook, Twitter, Flickr, and YouTube to publish multimedia and other textual content. People use social media to share their likes and dislikes and to debate and discuss topics of interest with fellows and friends. These digital conversations help extract the opinions of individuals and can be aggregated into a collective insight into society's view of an important issue or event. Language has become a great challenge in understanding people's opinions, as people use different languages on social networks to state their views. In this research our primary focus is English text: we used English posts and comments to categorize people's opinions in the context of politics. A framework model was developed for collecting data from Facebook, one of the most popular social media platforms; preprocessing and cleaning the data for analysis; and separating the data by language. Our system clustered people based on their comments and posts and successfully detected political polarization. Results show that the proposed method successfully detects polarization when the data are well prepared.
8-B: Interdisciplinary Session
- A Study on Knowledge Unit for High Performance Computing Competency in Computational Science
- In this paper, as basic research on high performance computing competency in computational science, we investigated which of the 89 knowledge units in Computer Science Curricula 2013 (CS2013) are needed for high performance computing education. We examined the validity and reliability of each knowledge unit based on the opinions of experts. As a result, 12 knowledge units were found to have high content validity (>= .8), and the 14 knowledge areas composed of several knowledge units were found to have high reliability (>= .8). Eleven core knowledge units with both high content validity and high reliability emerged as essential for high performance computing education. These results are expected to contribute to the development of curricula and of high performance computing competency in computational science.
- Analysis and Implications of Smart Farm Technology Trends
- Among the technologies related to the Fourth Industrial Revolution, Big Data, the Internet of Things (IoT), and Artificial Intelligence (AI) are attracting attention across industries, and agriculture is no exception. The Korean government is developing a Korean Smart Farm model through cooperation between related government agencies and private companies. However, the complex environment control solution, a core element of the Smart Farm, is less complete than those of other leading countries, and the adoption of big data service models is slower. Korean agriculture faces problems such as a declining and aging farm population, market opening, and abnormal weather, so a stable and efficient smart farm is needed. This paper presents, through a literature review and empirical case studies, the issues that should be resolved before full-scale adoption of the Smart Farm.
8-C: Interdisciplinary Session
- Classification of Daytime and Night Based on Intensity and Chromaticity in RGB Color Image
- Classifying color images as daytime or night is a very important task in image processing based on color images acquired from CCTV, and such classification must be performed before image processing tasks such as weather reporting, shadow removal, and fog detection. In this paper, we propose a method for classifying a color image as daytime or night. We first set ranges of pixel values in the gray-level image (0 to 50, 51 to 100, and above 100) and estimate each range as daytime, evening, and night. In the first step, the image is classified based on its intensity and its chromaticity. If the two classification results agree, the process terminates; otherwise, k-means segmentation is used in a second step to determine the final classification. Experiments conducted to verify the proposed method show that the classification performs well. The execution time through the first step is about 0.31 seconds on average, while the execution time through the second step varies with the resolution of the image.
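The first, intensity-based step can be sketched as simple mean-gray-level banding. The band edges (0-50, 51-100, over 100) follow the abstract; mapping the darkest band to "night" is our assumption, since the abstract's pairing of bands and labels reads ambiguously, and the toy images are invented.

```python
def classify_time_of_day(gray_image):
    """First-step classification from the mean gray level of an image.
    Band edges follow the paper (0-50, 51-100, over 100); treating the
    darkest band as night is an assumption about the intended ordering."""
    flat = [p for row in gray_image for p in row]
    mean = sum(flat) / len(flat)
    if mean <= 50:
        return "night"
    elif mean <= 100:
        return "evening"
    return "daytime"
```

In the full method this estimate is cross-checked against a chromaticity-based estimate, with k-means segmentation breaking ties.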
- Simple Detection of Pigmentation and Skin Cancer in the Face
- Pigmentation is the coloring of the skin, and skin cancer is the uncontrolled growth of abnormal skin cells. In this paper, we propose a simple method for detecting pigmentation and skin cancer using the illuminant-invariant and chromaticity images. The illuminant-invariant image is useful for applications intended to operate on the intrinsic scene, while chromaticity specifies color quality objectively, regardless of luminance. Experimental results show that the proposed method effectively detects the skin conditions of pigmentation and skin cancer.