Program for 2021 International Conference on Platform Technology and Service (PlatCon)
| Time | Udo Room | Blue Room (Online) | Mango Room (Online) |
Sunday, August 22
| 11:00-12:00 | Local Arrangement Meeting |
| 13:30-15:30 | ICRP Steering Meeting |
| 16:00-18:00 | Conference Committees' Meeting (Steering Committee / Organizing Committee / Program Committee) |
Monday, August 23
| 10:00-12:00 | 1-A: Network Platform | 1-B: Computing Platform / PSDT Workshop |
| 13:30-15:00 | 1-C: SCA 2021 | 1-D: Convergence Platform | 1-E: Human & Media Platform |
| 15:30-17:00 | 1-F: Invited Talk |
Tuesday, August 24
| 10:00-12:00 | 2-A: CIA 2021 | 2-B: Computing Platform / FSP 2021 |
| 13:30-15:00 | 2-C: Human & Media Platform | 2-D: SCA 2021 | 2-E: Convergence Platform |
| 15:30-17:00 | 2-F: Invited Talk |
Wednesday, August 25
| 09:30-12:00 | 3-A: Interdisciplinary Session |
Sunday, August 22
Sunday, August 22 11:00 - 12:00
Sunday, August 22 13:30 - 15:30
Sunday, August 22 16:00 - 18:00
Monday, August 23
Monday, August 23 9:30 - 10:00
Monday, August 23 10:00 - 12:00
- Transparent Web of Things Discovery in Constrained Networks Based on mDNS/DNS-SD
- Over the past few years, IoT (Internet of Things) technologies have become pervasive, and many IoT systems and applications have been proposed and deployed. Because IoT devices have restricted computation, storage, and power, they are usually connected by an LLN (Low-power and Lossy Network), where the devices are only accessible via a gateway. As a result, it is tedious to integrate existing IP-based applications with IoT devices, since existing well-developed management protocols are not well suited to devices within the LLN. This paper proposes a framework and a set of schemes that facilitate transparent discovery of, and access to, devices in the LLN from an IP network. With the proposed schemes, a client is able to discover IoT devices in a 6LoWPAN-based LLN using the mDNS/DNS-SD service discovery protocol. The proposed design has been realized as add-on modules of an LLN gateway, and validations and experiments verify the effectiveness of the proposed schemes.
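The abstract above centers on DNS-SD-style discovery of LLN devices through a gateway. As a hypothetical illustration (not the paper's actual design), a gateway-side registry can map DNS-SD service types to device instances so an mDNS/DNS-SD client on the IP network can browse and resolve them; the service type and names below are assumptions:

```python
# Hypothetical sketch of a gateway-side DNS-SD registry: the gateway
# advertises LLN devices as service instances so that a client on the
# IP network can discover them. The service type '_coap._udp.local.'
# and device names are illustrative, not taken from the paper.

class DnsSdRegistry:
    """Maps DNS-SD service types to instances, and instances to endpoints."""

    def __init__(self):
        self._ptr = {}   # service type -> list of full instance names
        self._srv = {}   # full instance name -> (host, port)

    def register(self, instance, service_type, host, port):
        full_name = f"{instance}.{service_type}"
        self._ptr.setdefault(service_type, []).append(full_name)
        self._srv[full_name] = (host, port)

    def browse(self, service_type):
        """Answer a PTR-style query: list instances offering this service."""
        return list(self._ptr.get(service_type, []))

    def resolve(self, full_name):
        """Answer an SRV-style query: host/port for a specific instance."""
        return self._srv.get(full_name)

reg = DnsSdRegistry()
reg.register("sensor-42", "_coap._udp.local.", "fd00::42", 5683)
print(reg.browse("_coap._udp.local."))  # ['sensor-42._coap._udp.local.']
```

A real implementation would answer these queries over multicast DNS; this sketch only shows the browse/resolve split that DNS-SD defines.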
- The Adoption of Design Thinking, Agile Software Development and Co-Creation Concepts: A Case Study of Digital Banking Innovation
- The acceleration of technology, especially the mobile internet, is changing all aspects of human life, including the banking sector. Emerging technologies such as artificial intelligence, blockchain, big data, and cloud computing are changing how banks do business and operate. Banking services have become more personalized and, in turn, are changing customers' lifestyles. Banks are competing to create innovations and breakthroughs that add value, and to build digital ecosystems with fintech and big-tech companies in the era of the sharing economy. This case study explores the process of creating digital innovation in banking institutions, focusing on the adoption of design thinking (DT), agile software development (ASD), and co-creation concepts for building digital banking platforms. The case study involved IT executives from four banks in Indonesia, with data collected through semi-structured interviews. The implication of this research is to accelerate the process of digital banking innovation and produce high-quality digital banking platforms in terms of features and technology.
- Integrated Disaster Information Transmission Platform Based on Connecting Multi-Channel
- Recently, public interest in disaster warning systems has risen as infectious diseases have increased and disaster patterns have become more complicated. In this paper, we propose a Multi-channel-based Disaster Information Transmission Platform (MDITP) that can effectively transmit disaster information by connecting multiple channels. MDITP can spread disaster information to the public within a short time by linking existing, independently operated disaster information systems with broadcasting and communication environments. In addition, it can enhance the public's understanding of disaster information by applying multimedia-based content. In terms of disaster management, the platform will help reduce damage through rapid response.
- Ethical Chatbot Design for Reducing Negative Effects of Biased Data and Unethical Conversations
- AI technology is being introduced into various public and private service domains, transforming existing computing systems or creating new ones. While AI technologies can benefit humans and society, the unexpected consequences (e.g., malfunctions) of AI systems can cause social losses. For this reason, research on ethical design for the development of AI-based systems is becoming important. In this paper, we review general guidelines from existing studies on AI ethics, such as transparency, explainability, predictability, accountability, fairness, privacy, and control, for the ethical design of AI systems. Based on these guidelines, we discuss ethical design for reducing the negative effects of biased data and unethical dialogues in AI-based conversational chatbots.
- Design and Implementation of Real-Time Bio Signals Management System Based on HL7 FHIR for Healthcare Services
- Recently, attention has been focused on services that combine medical technology with ICT technologies such as artificial intelligence, big data, the Internet of Things, and blockchain. Research on healthcare services that collect bio-signal data through wearable IoT sensors and monitor and manage health based on the collected data is also increasing significantly. Healthcare services are bringing important changes in the pandemic era caused by COVID-19, and a system is needed that can efficiently share and exchange information among heterogeneous services to prevent emergencies and support optimal medical care. In this paper, we design and develop a system that collects bio-signals from various wearable sensors and converts and stores them as international-standard data to support such healthcare services. HL7 FHIR, the standard applied in this paper, is a protocol for exchanging real-time collected bio-signal data between medical information systems. We implement an interface module that converts bio-signals such as EEG, ECG, EMG, and PPG, collected in real time from a wearable sensor, into the message structure defined by HL7 FHIR.
- Implementation of Generative Model Based Solver for Mathematical Word Problem with Linear Equations
- Solving math word problems (MWPs) automatically with a computer is an interesting topic. Instead of statistical methods and semantic parsing methods, deep-learning-based generative methods have recently been used to solve MWPs. We experimented with different deep learning generative models that directly translate a math word problem into a linear equation. In this paper, four MWP solvers using the sequence-to-sequence (Seq2Seq) approach with an attention mechanism were implemented: Seq2Seq, BiLSTM Seq2Seq, convolutional Seq2Seq, and transformer models. Performance analysis of the four MWP solvers was then performed on the MaWPS (English) and Math23K (Chinese) MWP datasets. Experiments show that the Seq2Seq and transformer models performed similarly when translating into simple linear equations, but the transformer model performed best when translating into more complex linear equations.
- A Study on the New Saturnin S-Box with Improved Implementation Efficiency
- The block cipher Saturnin uses a 16-bit S-box constructed by combining a 4-bit S-box and an MDS matrix. In this paper, we measure the security and efficiency of the 16-bit S-box when the 4-bit S-box used in the Saturnin S-box construction is replaced with 4-bit S-boxes that can be implemented with fewer operations. We tried to increase the efficiency of the bitslice implementation while maintaining the main security measures of the S-box, such as differential uniformity, linearity, algebraic degree, and number of fixed points. As a result, we succeeded in reducing the number of nonlinear and linear operations required for the 16-bit S-box implementation from 24 to 16 and from 62 to 54, respectively.
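One of the security measures the abstract says must be preserved is differential uniformity. As a small sketch of how that metric is computed for a 4-bit S-box (using the well-known PRESENT S-box as a stand-in, not any of the paper's candidate S-boxes):

```python
# Sketch of measuring differential uniformity, one of the S-box
# security metrics named in the abstract. The PRESENT cipher's 4-bit
# S-box is used as a familiar example; it is not a Saturnin table.

def differential_uniformity(sbox):
    """Max count of x with S(x) ^ S(x^a) == b, over nonzero a and all b."""
    n = len(sbox)
    worst = 0
    for a in range(1, n):
        counts = {}
        for x in range(n):
            b = sbox[x] ^ sbox[x ^ a]
            counts[b] = counts.get(b, 0) + 1
        worst = max(worst, max(counts.values()))
    return worst

# PRESENT's 4-bit S-box, an optimal 4-bit bijection with uniformity 4.
PRESENT_SBOX = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]
print(differential_uniformity(PRESENT_SBOX))  # 4
```

A linear map such as the identity scores the worst possible value of 16, since every input difference maps to a single output difference.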
- Attention-Based PBN for Human Pose Estimation
- Human pose estimation localizes landmark keypoints of the entire human body, including the head, body, hands, and feet, and can be used in various applications such as motion capture and action recognition. Recently, many convolutional neural network-based pose estimation methods have been proposed. In particular, the part-based branching network (PBN) shows good performance by learning features between keypoints that are highly relevant to each other. However, PBN has difficulty identifying the features of relevant keypoints from the common feature maps of whole human body parts. In this paper, we propose a novel attention-based PBN architecture that further improves pose estimation performance by focusing on the features of relevant keypoints. To do this, the basic residual module of the PBN is replaced with the proposed modified SE-ResNet module, making the PBN focus on mutually relevant features among the common feature maps. The module uses the Argmax function to directly obtain the indices of the maximum values for each channel of the feature maps, which allows spatial information to be reflected when calculating channel weights. In the experiments, we demonstrate the effectiveness of the proposed scheme by measuring the percentage of correct keypoints on the MPII dataset.
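The abstract's key idea is a squeeze step that keeps the spatial index of each channel's maximum. A toy illustration of that idea (an assumption about the general mechanism, not the paper's exact SE-ResNet module):

```python
# Hypothetical sketch of an argmax-based squeeze: instead of global
# average pooling, each channel contributes its maximum activation and
# the (row, col) where it occurs, so the descriptor carries spatial
# information. This illustrates the idea only, not the paper's module.

def squeeze_with_argmax(feature_maps):
    """feature_maps: list of channels, each a 2D list of activations."""
    descriptors = []
    for ch in feature_maps:
        flat = [(v, r, c) for r, row in enumerate(ch)
                for c, v in enumerate(row)]
        v, r, c = max(flat)          # argmax over the spatial grid
        descriptors.append((v, r, c))
    return descriptors

maps = [[[0.1, 0.9], [0.2, 0.3]],   # channel 0: peak at (0, 1)
        [[0.5, 0.1], [0.8, 0.2]]]   # channel 1: peak at (1, 0)
print(squeeze_with_argmax(maps))  # [(0.9, 0, 1), (0.8, 1, 0)]
```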
- CEMO: Cloud Edge Architecture Development for a Multi Object Tracking
- Due to the increase in video surveillance, advances in autonomous driving technology, and the development of artificial neural networks, multi-object tracking (MOT) has attracted attention in the computer vision community. Moreover, the importance of multi-input processing and real-time analysis is increasing with the need for fast processing of many videos. Modern multi-object trackers use sequential processing to ingest continuous video frames and derive tracking trajectories for all objects, mainly on a single server. When performing computation-heavy deep learning on a single server, latency inevitably occurs, and this latency is the main reason trackers cannot meet real-time requirements; simply reducing the number of operations to cut latency immediately degrades tracker performance. Cloud-edge computing is well suited to real-time distributed requirements because it can mitigate the data transmission delay of traditional cloud computing and enables effective cooperation between edge devices. In this paper, we propose a new system structure called the Cloud Edge Multi Object (CEMO) tracker for developing deep-learning-based cloud-edge real-time video analysis applications. CEMO tracker is a container-based microservice architecture that divides large application functions into small, independent units, built flexibly on Kubernetes, a container orchestration platform. CEMO tracker, which can efficiently process simultaneous inputs, integrate the results, and present them to users, is expected to solve multi-object tracking problems efficiently through distributed cloud-edge computing.
Monday, August 23 12:00 - 13:30
Monday, August 23 13:30 - 15:00
- Improving the Accuracy of Movie Recommendation via Enhanced Multiple Bias Analysis
- A movie recommendation system is a model that searches for movies that fit a user's preferences based on an analysis of their previous viewing patterns. Bias is one of the variables involved in movie selection and evaluation. Multiple bias analysis (MBA) shows higher accuracy than matrix decomposition models by analyzing the biases in users' movie viewing data. In this paper, new rating frequency weights that improve the accuracy of MBA are proposed. In addition, to verify its suitability for inducing content consumption, performance changes under cold-start conditions were observed. In an experiment conducted using the MovieLens 100K dataset, the accuracy of MBA improved through the proposed method and was higher than that of existing models under cold-start conditions.
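The bias decomposition MBA-style models build on can be sketched as a global mean plus user and item biases; the frequency weight below is a hypothetical stand-in for the paper's proposed rating-frequency weights, not its actual formula:

```python
# Minimal sketch of bias-based rating prediction: global mean plus a
# user bias and an item bias. The frequency weight is a hypothetical
# illustration, not the weighting proposed in the paper.

def predict(ratings, user, item):
    """ratings: list of (user, item, rating) tuples."""
    mu = sum(r for _, _, r in ratings) / len(ratings)
    user_rs = [r for u, _, r in ratings if u == user]
    item_rs = [r for _, i, r in ratings if i == item]
    b_u = (sum(user_rs) / len(user_rs) - mu) if user_rs else 0.0
    b_i = (sum(item_rs) / len(item_rs) - mu) if item_rs else 0.0
    # Hypothetical frequency weight: damp the item bias when the item
    # has few ratings, so rarely rated items fall back toward the mean.
    w_i = len(item_rs) / (len(item_rs) + 5)
    return mu + b_u + w_i * b_i

ratings = [("alice", "matrix", 4), ("alice", "up", 2), ("bob", "matrix", 5)]
print(round(predict(ratings, "bob", "up"), 3))  # 4.722
```

The damping term is one common way such models handle cold-start items: with zero ratings the item bias vanishes entirely.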
- Survey for Graph Embedding for Sequence Based Recommendation
- GNNs are currently popular for analyzing users' preferences for items in e-commerce, and social network users' preferences, for efficient recommendation. However, random-walk-based GNN methods and Markov chain methods create static embeddings at the single point in time when the interaction between users and items is made. Users' preferences in e-commerce services depend on time and the corresponding session. Reflecting the passage of time, analyzing the interactions in each session, and creating dynamic embeddings are crucial for predicting relationships. Therefore, we survey studies that create dynamic embeddings incorporating time-series information for real-time recommendation.
- Real Time Decision Support Model for a Serious Game
- The game industry has a remarkable growth rate, and it now has more viewers than traditional sports such as basketball and baseball. This has shown the possibility of education through games and introduced "serious games". However, players tend to focus on their initial experience without understanding player-to-player interactions. We analyzed player interaction and game information in real-time games and created a decision support system for team goals. We studied a model that defines various attribute information and events in a real-time game and uses them to predict victory. Our model can predict game wins with high accuracy in the middle of a game and guarantees a quick response. Furthermore, our proposed method is more effective than the six algorithms in existing studies.
- Multi-Magnetometer Based Orientation Estimation for Indoor Environment
- Estimating the orientation of mobile devices is an important technology used in many mobile applications. Existing orientation estimation methods estimate orientation by integrating angular velocities collected from a gyroscope, but show low accuracy due to the accumulation of errors over time. To address this, many methods that fuse the magnetometer with inertial sensors have been proposed. However, since the magnetometer is affected by multiple interference sources indoors, it causes large errors in orientation estimation. In this paper, we investigate why magnetometers are difficult to use in indoor environments and the feasibility of improving performance by using multiple magnetometers.
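The drift problem described above, where gyroscope integration accumulates error over time, can be demonstrated with a few lines; the sampling rate and bias value here are illustrative assumptions:

```python
# Sketch of the error-accumulation problem the abstract describes:
# integrating gyroscope angular velocity that carries a small constant
# bias makes the orientation error grow linearly with elapsed time.

def integrate(rates, dt):
    angle = 0.0
    for w in rates:
        angle += w * dt  # simple Euler integration of angular velocity
    return angle

dt, bias = 0.01, 0.002           # assumed 100 Hz sampling, tiny bias (rad/s)
true_rates = [0.0] * 1000        # the device is actually stationary
measured = [w + bias for w in true_rates]

drift = integrate(measured, dt) - integrate(true_rates, dt)
print(round(drift, 6))  # 0.02 -> error = bias * total time (10 s)
```

This linear growth is exactly why fusing an absolute reference such as a magnetometer is attractive, and why indoor magnetic interference is such a problem.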
- Deep Learning-Based IMU Sensor Calibration for Accurate Smartphone Orientation Estimation
- In this work, we investigate the feasibility of using machine learning to improve the accuracy of IMU (Inertial Measurement Unit)-based orientation estimation methods. In particular, we investigate whether performance improvements are possible through orientation prediction with LSTM models for time-series data rather than dead reckoning. This work collects a set of training data and constructs machine learning models using the cameras built into smartphones, without other expensive equipment. In particular, by using a stereo or RGB-D camera setup, we improve the accuracy of the visual odometry used to produce accurate orientation training data.
- Blockchain-Based Process Management for an Eco-Friendly Energy Sharing Platform
- With the 4th industrial revolution, digital transformation has had a great impact on diverse industries. Digital transformation technologies actively introduced and applied in the energy sharing and logistics fields include blockchain, the Internet of Things (IoT), artificial intelligence (AI), big data, cyber security, and e-platforms. Digital transformation is having a disruptive impact across industries, fundamentally changing the energy industry, the logistics industry, and the business environment, and it can change production, distribution, and consumption processes. In particular, transparency in the logistics industry, as well as the sharing of residual energy and the activation of transactions in the energy industry, becomes possible through integrity and openness. This paper studies a platform for a blockchain-based unmanned logistics system and eco-friendly energy management.
- Recent Developments in Paxos-Based Consensus Algorithms
- Paxos is a consensus algorithm that solves state machine replication in a resilient way, mainly applied to log replication among servers in a distributed system. Replicated distributed nodes provide crash-fault tolerance, and situations in which some messages are not delivered must also be considered. Existing Paxos and Paxos-based algorithms can only be used locally because they depend on a centralized leader and do not respond well to write-intensive scenarios across wide area networks. In recent years, however, data storage across wide area networks between data centers has become increasingly important. In this paper, we introduce and analyze developments in Paxos-based consensus algorithms for use in wide area networks.
- A Survey on Consensus Algorithms Among Distributed Nodes
- A service composed of only one node has limits on performance improvement and problems such as single points of failure, so distributed services composed of multiple nodes are required. In a distributed service, a consensus algorithm is essential because the nodes must hold the same value. In this paper, algorithms for consensus among distributed nodes are introduced. We introduce Paxos, a crash-fault-tolerant consensus algorithm that can tolerate faults in which a node goes down; the Practical Byzantine Fault Tolerance (PBFT) algorithm, which can tolerate faults in which malicious nodes forge messages; and studies that improve these algorithms.
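The crash-fault tolerance of Paxos-style algorithms surveyed above rests on a simple property: any two majority quorums of the same node set intersect, so information accepted by one quorum is always visible to the next. A minimal check of that property:

```python
# Sketch of the quorum principle underlying Paxos-style crash-fault
# tolerance: any two majority quorums over the same nodes intersect,
# so a new proposer always reaches at least one node that saw the
# previously accepted value.
from itertools import combinations

def majority_quorums(nodes):
    need = len(nodes) // 2 + 1
    return [set(q) for q in combinations(nodes, need)]

nodes = ["n1", "n2", "n3", "n4", "n5"]
quorums = majority_quorums(nodes)
print(all(a & b for a in quorums for b in quorums))  # True
```

With five nodes, every quorum has three members, and two three-member subsets of a five-element set must share a node; this is why such systems tolerate up to two crashed nodes out of five.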
- A Survey on Distributed Storage Systems and Data Management Techniques
- Due to the growing production of and demand for data worldwide, research on distributed storage systems that efficiently manage large amounts of data is being actively conducted. Local storage systems are inadequate for storing and managing large amounts of data because of scalability constraints and the difficulty of processing large numbers of user requests. Distributed storage systems, on the other hand, minimize the cost of storing and managing data across multiple nodes, and are often used to handle large amounts of data because they can increase scalability and achieve high data throughput using distributed data management techniques. In this paper, we survey and analyze these distributed data management techniques.
Monday, August 23 15:00 - 15:30
Monday, August 23 15:30 - 17:00
In recent years, there has been a tremendous increase in video capturing devices, which has led to large personal and corporate digital video archives. This huge volume of video data has become a source of inspiration for vast numbers of applications such as visual surveillance, multimedia recommender systems, and context-aware advertising. The heterogeneity of video data, high storage and processing costs, and communication requirements demand a system that can efficiently manage and store huge amounts of video data while providing user-friendly access to the stored data. To address this problem, video summarization schemes have been proposed. Video summarization refers to the extraction of keyframes that identify the most important and pertinent content. For instance, gastroenterologists use wireless capsule endoscopy video technology to diagnose their patients; during the capsule endoscopy process, video data are produced in huge amounts, but only a limited portion is actually useful for diagnosis. In this talk, we will explore two different aspects of video summarization: visual surveillance and medical imaging.
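Keyframe extraction, as described in the talk abstract, is often driven by how much a frame differs from the material already kept. A toy sketch of that idea, with frames modeled as flat grayscale pixel lists and a threshold chosen arbitrarily:

```python
# Toy sketch of keyframe extraction by frame differencing: a frame is
# kept as a keyframe when it differs enough from the last keyframe.
# Frames are flat grayscale pixel lists; real systems would use richer
# features (color histograms, deep features) than raw pixel difference.

def keyframes(frames, threshold):
    if not frames:
        return []
    keep = [0]                      # always keep the first frame
    for i, frame in enumerate(frames[1:], start=1):
        last = frames[keep[-1]]
        diff = sum(abs(a - b) for a, b in zip(frame, last)) / len(frame)
        if diff > threshold:
            keep.append(i)
    return keep

frames = [[0, 0, 0], [1, 0, 0], [9, 9, 9], [9, 9, 8]]
print(keyframes(frames, threshold=2.0))  # [0, 2]
```

In the capsule endoscopy setting this is exactly the point: nearly identical consecutive frames are dropped, leaving a short summary for the physician.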
Tuesday, August 24
Tuesday, August 24 9:30 - 10:00
Tuesday, August 24 10:00 - 12:00
- MitM Tool Analysis for TLS Forensics
- Most major Internet services now use TLS-based encrypted communication. TLS uses digital certificates so that the client and server can verify that each other can be trusted; confidentiality is maintained using symmetric-key cryptography, and integrity is verified through message authentication. However, even when encrypted communication over TLS is used, security issues such as MitM attacks may occur. In this paper, we analyze MitM attack methods and tools. The TLS encrypted communication process and representative MitM attack methods such as SSL Strip and SSL Split were analyzed, and Bettercap, mitmproxy, and Fiddler were analyzed as MitM attack tools. Even against protocols with strong security, such as HSTS, MitM attacks could be performed using the SSL Strip attack. In encrypted communication, additional authentication is required beyond a certificate.
- An Empirical Performance Evaluation of Key-Value Store on Multiple SSDs
- Cloud computing, as a service-on-demand architecture, has grown in importance over the past few years. The performance of key-value stores and storage in cloud computing is a key factor in providing high-quality cloud services. In particular, multiple solid-state drives (SSDs) can provide higher performance, fault tolerance, and storage capacity in cloud computing environments. In this paper, we perform an empirical evaluation of the performance of a key-value store (MongoDB) on recent NVMe SSDs (Intel NVMe SSDs), analyzing performance with a key-value benchmark (YCSB). We anticipate that the experimental results and performance analysis have implications for various key-value store and storage systems.
- A Study on Design of Cloud-Based Intelligent Document Processing Platform
- The work environment of companies and organizations is transforming from a conventional on-premise environment to a digital work environment using cloud technology. Moreover, COVID-19 is accelerating the shift to the digital work environment as the social and political demands of the "untact" era rise and non-face-to-face telecommuting/remote work increases. Work collaboration and document sharing require certain platform dependencies, and the scattered management of many documents in various formats has limited information usage, reducing the productivity and efficiency of work. In this paper, we review the technical factors necessary for the intelligent handling of document tasks in the rapidly changing digital work environment and conduct design research on a cloud-based intelligent document processing platform.
- Research Challenges on Transport Layer for Delay-Sensitive Application
- Low latency and reliability are essential to enable next-generation services attracting attention, such as virtual reality, augmented reality, the Internet of Things, and remote control. Although 5G mobile communication network research, a representative low-latency network effort, aims to achieve a delay of less than 1 ms between the base station and the terminal, there is a limit to reducing delay without improving the protocol performance of the upper layers. In this paper, we describe the research challenges for existing transport layer protocols in supporting delay-sensitive applications as follows: (1) TCP parameter optimization, (2) transmission policy selection, (3) subflow scheduling, and (4) random loss detection.
- Estimating the 3D Indoor Scene Structure Using Point Cloud
- In this paper, we propose a technique for estimating the structure of a real-world indoor space based on a dense point cloud obtained from video captured with an RGB-D camera. Many studies currently attempt to reconstruct 3D virtual spaces using RGB-D sensors, but an indoor environment consists of various real objects (chairs, desks, and so on) as well as walls, and modeling each real-world object is a big challenge. Among the objects constituting an indoor space, walls are an important clue for understanding its structure, and in particular, the boundary of the floor forms the basis of the indoor space structure. We propose an efficient method for estimating floor geometry from point clouds obtained through indoor spatial scanning. The boundary of the floor is predicted, and the structure of the 3D space is estimated, by refining the points expressing floor information within the point cloud representing the entire indoor space.
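The floor-extraction step above can be illustrated with a toy heuristic: assuming a gravity-aligned cloud, take the dominant height bin as the floor plane and keep the points near it. This is an assumption-laden sketch; a real pipeline would fit the plane robustly (e.g., with RANSAC):

```python
# Toy sketch of floor extraction from a point cloud: histogram the
# vertical (y) coordinates, take the most populated bin as the floor
# height, and keep points near it. Illustrative only; the paper's
# refinement method is not reproduced here.

def floor_points(points, bin_size=0.05, tol=0.05):
    """points: list of (x, y, z) with y as the vertical axis (meters)."""
    bins = {}
    for _, y, _ in points:
        bins[round(y / bin_size)] = bins.get(round(y / bin_size), 0) + 1
    best = max(bins.values())
    floor_y = min(b for b, n in bins.items() if n == best) * bin_size
    return [p for p in points if abs(p[1] - floor_y) <= tol]

cloud = [(0, 0.0, 0), (1, 0.0, 1), (2, 0.01, 3), (0, 1.2, 0)]
print(len(floor_points(cloud)))  # 3
```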
- AR to XR: Real Object Control in AR Space
- Marker-less AR (Augmented Reality) systems utilize SLAM technology for camera tracking, which has the advantage of recognizing a larger real-world space than a 2D marker-based AR system. We propose an object control method that can recognize objects that exist in a SLAM-based AR environment and change the recognized actual objects through control in the AR space. A technology in which changes in the virtual world lead to changes in the real world can be defined as eXtended Reality. To examine this possibility, we build a SLAM-based AR space using a point cloud from an image stream, together with a system that calculates and continuously tracks the 3D pose of predefined real objects in real time. We also built and tested an Arduino-based experimental environment to simulate changes in real objects.
- A Study on Building Training Dataset to Improve Edge-Based Object Detection
- In computer vision tasks, object detection models need to be stable with respect to environmental factors. One of the most popular uses of object detection is the road traffic monitoring domain, where hazardous situations must be detected 24 hours a day. For road traffic monitoring, where time and place change continuously, we propose a dataset training method for generalization and compare training datasets. For the experiment, we used a traffic road dataset including pedestrians, buses, cars, and bicycles. As a result, training the specific node on a balanced base dataset showed stable accuracy, with performance similar to the model trained using the same node, without overfitting.
- Image Data Augmentation Comparison for Deep Learning
- In deep learning tasks, image data augmentation methods such as rotation, resizing, and hue and contrast adjustment can improve the accuracy of object detection. However, in classification tasks involving similar objects, some methods can actually hurt detection, because the features representing each class are narrowly located in the feature distribution. To address this problem, we compared suitable data augmentation methods for similar objects, using a face image dataset for the experiment. As a result, the CutMix method showed the highest accuracy, while the Blur method showed lower accuracy than the training model without any data augmentation.
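CutMix, which the abstract reports as the best-performing method, pastes a rectangular patch from one image into another and mixes the labels in proportion to the patch area. A minimal sketch with images as 2D pixel lists (the patch location is fixed here; CutMix samples it randomly):

```python
# Minimal sketch of the CutMix idea: paste a rectangular patch from
# image B into image A and weight each label by its share of pixels.
# Images are 2D lists; real CutMix samples the patch box randomly.
import copy

def cutmix(img_a, label_a, img_b, label_b, top, left, h, w):
    mixed = copy.deepcopy(img_a)
    for r in range(top, top + h):
        for c in range(left, left + w):
            mixed[r][c] = img_b[r][c]
    area = h * w / (len(img_a) * len(img_a[0]))
    # Soft label: each class weighted by its fraction of the pixels.
    label = {label_a: 1 - area, label_b: area}
    return mixed, label

a = [[0] * 4 for _ in range(4)]
b = [[1] * 4 for _ in range(4)]
img, label = cutmix(a, "cat", b, "dog", top=0, left=0, h=2, w=2)
print(label)  # {'cat': 0.75, 'dog': 0.25}
```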
Tuesday, August 24 12:00 - 13:30
Tuesday, August 24 13:30 - 15:00
- A Study on Improving Efficiency in HTCondor Multi-Group Environment of CMS Analysis Using CRIU
- We used HTCondor's preemption function to build an environment in which resources are shared by several groups. However, due to this function, a job that has been running for a long time can be canceled by very short jobs from another group. We studied how to minimize this problem through job checkpointing using CRIU. Specifically, we studied whether the programs popular among CMS users checkpoint and restore well through CRIU on a virtual cluster that reproduces the environment actually provided to CMS users as closely as possible. We found problems due to differences between HTCondor system environments and regular environments and solved them using simple tricks.
- Migrating to HTCondor Based Systems for WLCG Tier-1 Workloads
- High Throughput Computing (HTC) is a common way to deal with an enormous number of tasks with simulation, data reprocessing, and data analysis workloads simultaneously in the field of High Energy and Nuclear Physics. In particular, the Worldwide LHC Computing Grid (WLCG) is an international collaboration of about 170 HTC centers formed to share responsibility for the data storage and data processing workloads of the experiments using the Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN). The Korea Institute of Science and Technology Information (KISTI) is a Tier-1 center of the WLCG and has contributed significant computational capacity. CREAM-CE and Torque/Maui had been used for a decade to process workloads from the Grid; however, the community version of Torque/Maui is already far behind the state of the art in PBS technology, and community support for CREAM-CE ended in December 2020. Therefore, we adopted HTCondor-CE and the HTCondor batch system to replace the existing workload processing systems. We present the migration to HTCondor-based systems and the upgrade due to the end-of-life, then discuss expectations for future workload processing.
- A Study on Optimizing the Environment of LDG Tier-2/Tier-3 Systems
- Recent data center trends provide a structure for sharing infrastructure and synchronizing observation execution, data collection, and analysis to address common problems. This contrasts with the previous approach of dedicating one monopolized system to one study or problem. KISTI-GSDC is a data center established as a national funding project to promote fundamental research activities in Korea, and it supports WLCG Tier-1 ALICE, WLCG Tier-2/Tier-3 CMS, Belle II, LIGO Tier-2/Tier-3, RENO, Genome, PAL, and TEM. Recently, parts of the WLCG ALICE and CMS systems have been integrated in line with the latest data center trends, and the LIGO experiments have also received integrated Tier-2/Tier-3 support. In this paper, we identify problems in the LDG Tier-2/Tier-3 integrated system environment operated by KISTI-GSDC and study optimization measures.
- Learning to Rank in Social Network
- With the explosive growth of social media platforms, social network analysis has become pervasive, and many of its successes have been made possible by advances in machine learning. Various machine learning algorithms have been proposed for social network applications such as link prediction, user attribute prediction, item recommendation, and community detection. In this work, we propose a model that infers a shared latent representation that describes multi-modal data well. Specifically, we consider two modalities: the link structure and the rankings of users. We evaluate our model using real-world data and predict the rankings, outperforming other baseline models.
- An Efficient Neural Network Based on Early Compression of Sparse CT Slice Images
- Recently, research on diagnosing diseases through artificial intelligence has been conducted in various medical fields, including thyroid-associated ophthalmopathy. We introduce a computationally efficient CNN architecture that is optimized for CT images and designed especially for mobile devices with very limited computing power. The proposed model is designed to focus only on the important slices in the entire CT volume through channel squeezing. The architecture utilizes three operations, pointwise convolution, depth-wise separable convolution, and channel shuffle, to reduce the computation cost of handling a series of CT image slices for a patient. On CT images, the proposed model achieves a ∼3.5× actual speedup over ShuffleNet-v2 without degrading prediction accuracy.
- Automatic Stroke Medical Ontology Augmentation with Standard Medical Terminology and Unstructured Textual Medical Knowledge
- The need for a medical ontology providing stroke knowledge is increasing, as much research has recently been conducted on rapidly predicting stroke using AI technology. A medical ontology serves as a medical explanation of predictions, in conjunction with machine learning and deep learning methods that analyze clinical data obtained from the medical field and from medical imaging devices (MRI, CT, ultrasound, etc.). However, existing medical ontologies focus on is-a relationships in a taxonomy to define the classification system for diseases, symptoms, and anatomical structures. Such ontologies are insufficient to explain the complex organic relationships among diseases, symptoms, body structures, and patients, which form the knowledge structure for predicting disease. Furthermore, although professional standard terms exist in medicine, electronic medical records (EMR), electronic health records (EHR), professional medical books, and medical papers that use common terms to express professional concepts are mostly in unstructured form. To overcome this limitation, in this paper we propose an automatic augmentation method for a stroke medical ontology, via unstructured textual medical knowledge, using a lowest instance-level medical term ontology and a top-level schema-level medical ontology for stroke disease prediction through standard medical terms.
- Security Vulnerabilities on Magnetic Secure Transmission
- Magnetic Secure Transmission (MST) is a mobile payment technology available on Samsung smartphones. It mimics a card swipe by emitting magnetic signals that encode payment information similar to that stored on magnetic stripe cards. However, the physical and structural nature of MST exposes payment transactions to severe security threats. In this paper, we summarize the vulnerabilities that allow hackers and intruders to obtain payment information and to produce tampered transactions for malicious purposes, survey the approaches used to acquire payment information, and categorize the vulnerabilities by attack mechanism.
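Part of the risk described above comes from MST replaying the same Track 2 data a magnetic stripe carries, so a sniffed emission directly yields payment fields. A minimal sketch of that layout, following the ISO/IEC 7813 Track 2 format (the card number below is a well-known test PAN, and real tracks carry additional discretionary data; this is not the paper's analysis method):

```python
import re

# Track 2 layout per ISO/IEC 7813: start sentinel ';', PAN (up to 19 digits),
# field separator '=', expiry (YYMM), 3-digit service code, numeric
# discretionary data, end sentinel '?'.
TRACK2_RE = re.compile(
    r"^;(?P<pan>\d{1,19})=(?P<expiry>\d{4})(?P<service>\d{3})(?P<discretionary>\d*)\?$"
)

def parse_track2(track: str) -> dict:
    """Split a Track 2 string into its named fields; raise on malformed input."""
    m = TRACK2_RE.match(track)
    if m is None:
        raise ValueError("not a valid Track 2 string")
    return m.groupdict()

# Test PAN 4111111111111111, expiry 2025-12, service code 101.
fields = parse_track2(";4111111111111111=2512101000000000?")
print(fields["pan"], fields["expiry"], fields["service"])
```

Because these fields are transmitted essentially in the clear as a magnetic waveform, anyone who can record the emission can recover them, which is the structural weakness the paper's vulnerability catalog builds on.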
- Characteristic AI Agent Generation Based on Game Player Data
- Recently, research on improving the performance of artificial intelligence agents has been actively conducted in fields related to the 4th industrial revolution as well as in games. However, obtaining data for reinforcement learning in the real world, including in complex games, is difficult, and learning complex behaviors poses many challenges. Research that simulates the various characteristics of gamers through imitation learning is therefore attracting attention. In this paper, we propose a method for generating an artificial intelligence agent that satisfies different expected behaviors through imitation learning, according to a classification that reflects the various characteristics of gamers derived from game data.
- A Comparative Analysis of Tree-Based Models for Day-Ahead Solar Irradiance Forecasting
- Recently, solar photovoltaic (PV) techniques have attracted much attention for sustainable development, and solar irradiance forecasting is crucial for estimating PV output. Accurate forecasting is challenging, however, because solar irradiance exhibits complex patterns driven by various weather factors. Decision tree (DT)-based methods can effectively learn complex internal and external factors and have therefore been widely used in energy forecasting. In this paper, we develop several solar irradiance forecasting models using tree-based methods: DT, bagging, random forest, gradient boosting machine, extreme gradient boosting, and Cubist. We then compare their prediction performance in terms of mean squared error (MSE), root mean squared error (RMSE), and normalized RMSE. Experimental results for two regions on Jeju Island show that Cubist delivers better day-ahead hourly solar irradiance predictions than the other tree-based methods.
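The three metrics used in the comparison above can be written out for a toy forecast. Note one assumption: nRMSE is taken here as RMSE divided by the mean observed irradiance, whereas the paper may normalize by the observed range instead.

```python
import math

def mse(actual, predicted):
    """Mean squared error over paired observations and forecasts."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root mean squared error, in the same units as the data."""
    return math.sqrt(mse(actual, predicted))

def nrmse(actual, predicted):
    """RMSE normalized by the mean observed value (assumed divisor)."""
    return rmse(actual, predicted) / (sum(actual) / len(actual))

obs = [100.0, 300.0, 500.0, 400.0]   # illustrative hourly irradiance, W/m^2
fc  = [120.0, 280.0, 510.0, 380.0]   # illustrative day-ahead forecasts
print(round(mse(obs, fc), 2), round(rmse(obs, fc), 2), round(nrmse(obs, fc), 4))
```

Normalizing by the mean makes scores comparable across the two Jeju regions even if their typical irradiance levels differ.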
Tuesday, August 24 15:00 - 15:30
Tuesday, August 24 15:30 - 17:00
A good data representation can typically reveal the latent structure of data and facilitate further processes such as clustering, classification, and recognition. Nonnegative matrix factorization (NMF), a fundamental approach to data representation, has attracted great attention. Despite its strong performance, traditional NMF fails to explore the semantic information of multiple components and the diversity among them, which would be of great benefit for understanding data comprehensively and in depth. In fact, real data are usually complex and contain various components; for example, face images have expressions and genders. Each component mainly reflects one aspect of the data and provides information the others do not have. In this talk, I will present an approach called multi-component nonnegative matrix factorization (MCNMF). Instead of seeking only one representation of the data, MCNMF learns multiple representations simultaneously, where each representation corresponds to one component. By integrating the multiple representations, a more comprehensive representation is then established.
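The talk builds on the standard NMF baseline, which can be sketched with the classic Lee-Seung multiplicative updates minimizing the Frobenius reconstruction error. This is the single-representation starting point only, not the speaker's MCNMF; matrix sizes and iteration count are illustrative.

```python
import numpy as np

def nmf(X, r, iters=200, seed=0, eps=1e-10):
    """Factor nonnegative X (n x m) into W (n x r) and H (r x m)
    by multiplicative updates minimizing ||X - WH||_F."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, r)) + eps
    H = rng.random((r, m)) + eps
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update H with W fixed
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update W with H fixed
    return W, H

# Toy nonnegative data matrix and a rank-3 factorization.
X = np.random.default_rng(1).random((8, 6))
W, H = nmf(X, r=3)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
print(f"relative reconstruction error: {err:.3f}")
```

The multiplicative form keeps W and H nonnegative throughout, which is what gives NMF its parts-based, interpretable factors; MCNMF replaces the single H with one representation per component.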
Wednesday, August 25
Wednesday, August 25 9:30 - 12:00
- A Brief Survey of Machine Learning Based Approaches for Forest Disease Prediction
- Forests play a key role in a sustainable Earth. One of the main issues in forest management is monitoring the health and growth of trees against forest pests and diseases, forestry problems caused by insects and pathogens, primarily fungi. All parts of a tree can become infected, leading to decreased wood quality and productivity and, occasionally, death. This paper presents a brief review of machine learning-based approaches to predicting forest pests and diseases. We introduce the major forest pests and diseases and survey studies that detect them using artificial intelligence.
- Connectivity in Metaverse: Revisited
- As the preference for contactless environments has intensified during the COVID-19 pandemic, the metaverse is becoming a reality. Thanks to immersive experiences and dramatic technological advances, the metaverse is gradually blurring the boundary between the physical and virtual worlds. As that gap closes, the distance among active users in metaverse applications can play an important role in distinguishing the features of application services and can be a source of new ones. In this letter, we explore the metaverse platform with a focus on connectivity and project the potential of distance as an important criterion in developing location-based services in the metaverse.
- A Study on the Integrated Care Service Model for Elderly Cognitive Disability Rehabilitation Coaching Based on ICT Convergence
- In this paper, we propose an Integrated Care Service Model that links the non-contact smartization of home care for the elderly with the rehabilitation care of elderly people with cognitive disabilities, combining a caring ToyBot with cognitive rehabilitation coaching based on a digital multi-touch table to ensure the continuity of integrated care and coaching. By presenting the components and main service functions of the proposed model and discussing its potential uses, we show that it can serve as an integrated care and rehabilitation coaching service for elderly people with cognitive impairment in the community, and as a depression diagnosis and management service in medical institutions for the elderly.
- A Signal Collection System for Deep Learning-Based Wireless Signal Classification
- The Internet of Things is an emerging technology of the fourth industrial revolution. With this trend, wireless communication is steadily increasing, creating congested wireless environments. Deep learning can provide intelligent wireless signal classification for such environments. In this paper, we devise a wireless signal collector that gathers a large dataset across diverse scenarios for deep learning-based signal classification, implementing the collection scheme on the USRP and GNU Radio platforms. By collecting various wireless signals in real-world environments, we can also build an accurate signal classification model that helps avoid wireless signal interference.
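A collector like the one described needs a labeling step for the raw complex (IQ) samples it records. A toy version of that step, using an FFT to flag occupied frequency bins, is sketched below; the sample rate, tone frequency, and threshold are illustrative, and the paper's actual USRP/GNU Radio pipeline is not reproduced here.

```python
import numpy as np

def occupied_bins(iq, threshold_db=-20.0):
    """Return indices of FFT bins whose amplitude is within threshold_db
    of the strongest bin, in fftshifted (negative-to-positive) order."""
    spectrum = np.fft.fftshift(np.fft.fft(iq))
    mag_db = 20 * np.log10(np.abs(spectrum) / np.abs(spectrum).max() + 1e-12)
    return np.flatnonzero(mag_db > threshold_db)

fs, n = 1_000_000, 4096                        # 1 MS/s, 4096-sample window
t = np.arange(n) / fs
iq = np.exp(2j * np.pi * 100_000 * t)          # synthetic tone at +100 kHz
iq += 0.001 * (np.random.default_rng(0).standard_normal(n)
               + 1j * np.random.default_rng(1).standard_normal(n))

bins = occupied_bins(iq)
freqs = np.fft.fftshift(np.fft.fftfreq(n, 1 / fs))
print(freqs[bins])                             # frequencies near +100 kHz
```

In a real collector the IQ windows would come from the USRP, and the detected-band labels would accompany the raw samples in the training dataset.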
- Design of Terrain and Fishing Contents for Making a Healing Life Fishing Game in Jeju Island
- Currently, fishing is an activity the whole family can enjoy together in online games, and more and more people are fishing both online and offline. In particular, demand for online fishing games has grown as COVID-19 keeps many people from outdoor activities. Fishing games are characterized by realistic implementation and realistic play, offering enthusiasts the pleasure of waiting for a fish to bite and reeling it in. In this study, we propose a method of providing content for a fishing game unique to Jeju Island. To this end, we analyze Jeju Island's tourism resources, classify them into interactive content, and provide a system form suited to online play. Based on this design, game contents that operate online are developed, and each developed content module is designed so that it can be assembled with the others. Building such a system requires analyzing the tourism resources of Jeju Island and, from them, constructing a terrain model and content for fish objects. In this study, we analyze fishing points and characteristic areas of Jeju Island and, based on that analysis, propose a base content composition for a fishing game unique to Jeju Island.