Time (Tokyo) | Room A (Main Room, No. 8-9) | Room B (No. 11) | Room C (No. 21) | Virtual Conference |
---|---|---|---|---|
Monday, April 21 | | | | |
13:00-16:30 | R1: Registration Days | | | |
Tuesday, April 22 | | | | |
09:00-09:10 | K1: Opening ceremony by Assoc. Prof. Athikom Roeksabutr | | | |
09:10-09:50 | K2: Keynote speech by Prof. Dr.-Ing. habil. Dr. h.c. Herwig Unger | | | |
10:00-12:00 | S1: AI, ML, and Data Science | S2: Computer Science for Daily Life | S3: Software Engineering, Knowledge and Data Management | V1: Computer Science for Daily Life; V3: AI in Daily Life |
13:00-15:00 | S4: AI, ML, and Data Science | S5: Networks, Security, Embedded Systems, and Internet of Things (IoT) | S6: Artificial Intelligence and Machine Learning | V2: AI and Data Science (Thai track) |
15:00-17:00 | S7: AI and Machine Learning | | | |
The official opening ceremony for AJCC2025 is scheduled for April 22, 2025, between 9:00 AM and 9:10 AM Tokyo Time. This brief but significant event will be led by Assoc. Prof. Athikom Roeksabutr, the esteemed President of the Electrical Engineering Academic Association (Thailand).
The keynote talk by Prof. Herwig Unger offers insights into cutting-edge research at the Chair of Communication Networks at FernUniversität in Hagen, focusing on advances in brain-inspired methods and the development of GraphLearner, a neuromorphic sequence analyser and generator. Beginning with a review of earlier research, including text-representing centroids and a decentralized search engine, we highlight why brain-inspired approaches remain significant despite the success of conventional deep learning techniques.
Drawing on the work of Hawkins and others, we introduce GraphLearner as a promising alternative to the traditional "black box" models of deep neural networks. By combining Markov models with Bloom filters for efficient probability calculations, GraphLearner offers improved explainability and adaptability. The talk examines the architecture and operation of GraphLearner and showcases its performance on natural language processing tasks. We also discuss potential extensions and applications of GraphLearner in other domains.
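As a rough illustration of the idea named in the abstract (Markov-style transition probabilities backed by Bloom filters), the Python sketch below stores the counts of a toy first-order Markov model in a counting Bloom filter and derives next-token probabilities from them. All names here (CountingBloomFilter, BloomMarkovModel, their parameters) are hypothetical and chosen for this example; this is not the GraphLearner implementation, only a minimal sketch under those assumptions.

```python
# Hypothetical sketch: approximate Markov transition probabilities via a
# counting Bloom filter. Not the GraphLearner implementation described in
# the keynote; names and parameters are illustrative only.
import hashlib


class CountingBloomFilter:
    """Approximate counter: may overestimate a count, never underestimates it."""

    def __init__(self, size=100_000, num_hashes=4):
        self.size = size
        self.num_hashes = num_hashes
        self.counts = [0] * size

    def _indices(self, key):
        # Derive several independent bucket indices from one key.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, key):
        for idx in self._indices(key):
            self.counts[idx] += 1

    def count(self, key):
        # The minimum over all hash positions is an upper bound on the true count.
        return min(self.counts[idx] for idx in self._indices(key))


class BloomMarkovModel:
    """First-order Markov model whose counts live in counting Bloom filters."""

    def __init__(self):
        self.transitions = CountingBloomFilter()  # counts of (prev, next) pairs
        self.contexts = CountingBloomFilter()     # counts of prev tokens

    def train(self, tokens):
        for prev, nxt in zip(tokens, tokens[1:]):
            self.transitions.add((prev, nxt))
            self.contexts.add(prev)

    def probability(self, prev, nxt):
        total = self.contexts.count(prev)
        if total == 0:
            return 0.0
        return self.transitions.count((prev, nxt)) / total


model = BloomMarkovModel()
model.train("the cat sat on the mat".split())
print(model.probability("the", "cat"))  # approx. 0.5 on this toy corpus
```

The appeal of this combination, as far as the abstract indicates, is that the Bloom filter keeps memory use fixed regardless of vocabulary size, at the cost of occasional overestimated counts; the talk itself covers how GraphLearner actually realises and extends this idea.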