Abstract: A wide range of learning algorithms is available in the literature, including sophisticated structures based on feedforward, recurrent, or convolutional neural networks. The performance of these architectures matches or exceeds human performance in many important applications. However, they are susceptible to adversarial attacks that can drive them to erroneous decisions under minimal perturbations. They are also often trained with data that arise from homogeneous statistical distributions. And, once trained, the internal structure of these systems remains fixed, and they are expected to deliver reliable decisions thereafter. For all practical purposes, learning is turned off following training. Contrast these situations with learning by humans: they learn from different types of data, and even minimal clues are sufficient in many instances. Humans are also more difficult to fool with small perturbations, and they continue to learn and accumulate experience over time.
Motivated by these considerations, we will discuss one architecture for learning that exploits important characteristics of social interactions. We refer to the new framework as Social Machine Learning, and it consists of two main connected blocks. One block represents the memory component of the learning machine since it will learn the underlying clues, store them, and regularly update them. This ability adds a new level of richness to the learning process and is different from traditional boosting techniques because the processing is fully decentralized.
A second block represents the processing component of the social learning machine, and it consists of a graph structure linking the various clue models. This block performs classification by exploiting repeated social interactions among agents connected by a graph topology. The agents observe heterogeneous data arising from different statistical sources. This operation differs from neural network structures, where information flows in a fixed direction rather than arbitrarily over the graph edges; moreover, the feature data feeding into the graph here is highly heterogeneous. The interactions among the agents on the graph also take advantage of the "wisdom of the crowd" paradigm, which should lead to more robust learning: it is more difficult to deceive a group of agents than an individual agent, especially when different parts of the group are observing different clues, not all of which can be perturbed similarly.
Analyses based on statistical learning theory indicate that, under reasonable conditions, the social machine learning structure can learn with high confidence. Moreover, the proposed architecture handles heterogeneity in data more gracefully, is able to learn with performance guarantees, is more resilient to attacks by exploiting the power of the group, and enables continuous learning.
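To make the graph-based belief exchange concrete, the following is a minimal Python sketch of one social-learning iteration of the adapt-then-combine type alluded to above; the fully connected combination matrix, the number of hypotheses, and the synthetic likelihoods are illustrative assumptions, not the speaker's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
H, K = 3, 5                      # hypotheses, agents
A = np.full((K, K), 1.0 / K)     # doubly stochastic combination matrix (fully connected here)
beliefs = np.full((K, H), 1.0 / H)

def social_learning_step(beliefs, likelihoods, A):
    """One adapt-then-combine step: local Bayesian update, then geometric averaging."""
    # Adapt: multiply each agent's belief by the likelihood of its private observation
    psi = beliefs * likelihoods
    psi /= psi.sum(axis=1, keepdims=True)
    # Combine: geometric averaging of neighbors' intermediate beliefs over the graph
    mu = np.exp(A @ np.log(psi))
    return mu / mu.sum(axis=1, keepdims=True)

# Example: heterogeneous agents whose private data (noisily) favor hypothesis 0
for _ in range(50):
    likelihoods = rng.uniform(0.1, 1.0, size=(K, H))
    likelihoods[:, 0] += 0.5
    beliefs = social_learning_step(beliefs, likelihoods, A)
print(beliefs.round(3))          # beliefs concentrate on hypothesis 0
```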
Bio: A. H. Sayed is Dean of Engineering at EPFL, Switzerland, where he also leads the Adaptive Systems Laboratory (https://asl.ep.ch/). He is a member of the US National Academy of Engineering (NAE) and The World Academy of Sciences (TWAS). He served as President of the IEEE Signal Processing Society in 2018 and 2019. His research areas cover adaptation and learning theories, data and network sciences, and statistical inference. His work has been recognized with several awards including the 2022 IEEE Fourier Award, the 2020 Norbert Wiener Society Award, the 2015 Education Award, and the 2012 Technical Achievement Award from the IEEE Signal Processing Society. He also received the 2014 Papoulis Award from the European Association for Signal Processing, the 2005 Terman Award from the American Society for Engineering Education, and several Best Paper Awards. He is a Fellow of IEEE, EURASIP, and the American Association for the Advancement of Science (AAAS).
Abstract: As we look forward to the future use of our limited spectral resources, flexibility and efficiency are key. We want RF systems that fluidly adapt to the environment and users' needs. We also want the option of simultaneously providing multiple functions [for example: communications; radar; and positioning, navigation, and timing (PNT)] with the same RF signals. To achieve this vision of RF convergence, we need flexible RF frontends and processing for dynamic waveforms. Here, we focus on the computational architecture required to achieve flexibility efficiently. For motivation, we consider simplified RF convergence examples of vehicular systems and augmented reality. While software-defined radio (SDR) systems seem to be the obvious solution for processing, they are typically orders of magnitude less efficient than the rigid full-custom systems-on-chip (SoCs) used in many modern communications systems. I introduce and describe our RF-convergence-enabling Domain-focused Advanced Software-reconfigurable Heterogeneous (DASH) SoC and software framework, which enables flexible and efficient processing for future RF systems.
Bio: Prof. Daniel W. Bliss is the Motorola Endowed Professor at Arizona State University in the School of Electrical, Computer, and Energy Engineering. He is also the Director of ASU's Center for Wireless Information Systems and Computational Architectures (WISCA). Dan received his Ph.D. and M.S. in Physics from the University of California at San Diego (1997 and 1995), and his B.S. in Electrical Engineering from ASU (1989). Dan is a Fellow of the IEEE and received the 2021 IEEE Warren D. White Award for Excellence in Radar Engineering. He has published two textbooks and more than 200 technical articles. He is responsible for foundational work in adaptive multiple-input multiple-output (MIMO) radar, MIMO communications, electronic protection, distributed-coherent systems, in-band full-duplex systems, and RF convergence. He has led coarse-scale heterogeneous system-on-chip (SoC) development programs. Before joining ASU, Dan was a Senior Member of the Technical Staff at MIT Lincoln Laboratory (1997-2012). Dan has also worked on avionics for rockets (Atlas-Centaur), magnetic field optimization for high-energy particle-accelerator superconducting magnets, high-energy particle physics, and lattice-gauge-theory calculations.
Abstract: This talk discusses recent breakthroughs and challenges in transforming memory into data rates for wireless communication networks. Recent research has offered profound progress toward understanding the inner workings of cache-aided downlink communications. In some astounding instances, this research suggests that preemptive use of distributed data storage at the receiving communication nodes can offer unprecedented throughput gains by efficiently handling the majority of interference. This thematic talk will argue that, while multi-antenna arrays have undoubtedly been the driving force behind advanced communications technologies, we are now presented with a new, highly abundant, and highly complementary resource in the form of the ever-increasing storage capabilities available across communicating nodes. In addressing this (often contentious) topic, the talk will seek to answer a simple question: Under a fixed set of antenna and SNR resources, what is the multiplicative boost in the throughput of downlink MISO systems --- where these systems can themselves naturally enjoy optimized exploitation of multiplexing and beamforming gains --- when we are now allowed to add reasonably-sized receiver-side storage capabilities?
This talk will also advocate that any such gains come at a time when there is an abundance of paradigm-shifting applications that demand new solutions. One such application is immersive extended reality, which is considered by many to be a main 6G driver. In the talk, I will highlight challenging theoretical and practical open problems, including various mathematical challenges (of combinatorial nature or otherwise) whose resolution would undoubtedly have a direct impact on the performance of real multi-antenna systems. The main purpose of this talk is to gently advocate that research toward unearthing the hidden duality between the PHY and the preemptive use of memory has the potential to directly translate the continuously increasing data-storage capabilities into gains in wireless network capacity.
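For orientation, a representative (and idealized) reference point from the coded caching literature may help quantify the question posed above; the following is stated as background under standard high-SNR assumptions, not as the talk's own result.

```latex
% Representative reference point (idealized): L transmit antennas serve K
% single-antenna receivers, each caching a fraction $\gamma$ of the content
% library. Cache-aided MISO results suggest an achievable sum degrees of
% freedom of the form
\[
  \mathrm{DoF}(L, K, \gamma) \;=\; L + K\gamma,
\]
% so the multiplicative throughput boost over the cacheless multiplexing
% gain $L$ is
\[
  \frac{L + K\gamma}{L} \;=\; 1 + \frac{K\gamma}{L}.
\]
```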
Bio: Petros Elia received the B.Sc. degree from the Illinois Institute of Technology, and the M.Sc. and Ph.D. degrees in electrical engineering from the University of Southern California (USC), Los Angeles, in 2001 and 2006 respectively. He is now a professor with the Department of Communication Systems at EURECOM in Sophia Antipolis, France. His latest research deals with the intersection of coded caching and feedback-aided communications in multiuser settings. He has also worked in the area of complexity-constrained communications, MIMO, queueing theory and cross-layer design, coding theory, information theoretic limits in cooperative communications, and surveillance networks. He is a Fulbright scholar, the co-recipient of the NEWCOM++ distinguished achievement award 2008-2011 for a sequence of publications on the topic of complexity in wireless communications, and the recipient of the ERC Consolidator Grant 2017-2022 on cache-aided wireless communications.
Abstract: Nearly passive metasurfaces have attracted great interest recently given their ability to tune the RF propagation environment and enhance the capabilities of wireless communication systems. Most work on such reconfigurable intelligent surfaces (RIS) has focused on designs that (almost) fully reflect all energy that impinges on the surface, in order to maximize performance metrics such as the network sum rate. More recently, focus has shifted to hybrid RIS architectures that sense or at least redirect some of the impinging RF energy in order for the RIS to (1) extract information for local processing (e.g., channel estimation), (2) refract some of it for transmission on the other side of the RIS, or (3) simply absorb it. In this talk we will examine these alternative architectures and particularly focus on the advantages of partial absorption at the RIS for scenarios requiring interference mitigation.
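As background, the narrowband downlink model commonly used in the RIS literature makes the role of partial absorption explicit; the notation below is illustrative and not taken from the talk.

```latex
% Common narrowband RIS model: direct channel h_d, BS-RIS channel G,
% RIS-user channel h_r, and an N-element RIS with per-element amplitudes
% beta_n and phases theta_n:
\[
  y \;=\; \big( \mathbf{h}_d^{\mathsf H} + \mathbf{h}_r^{\mathsf H} \boldsymbol{\Phi} \mathbf{G} \big) \mathbf{x} + n,
  \qquad
  \boldsymbol{\Phi} = \operatorname{diag}\!\big( \beta_1 e^{j\theta_1}, \ldots, \beta_N e^{j\theta_N} \big).
\]
% A conventional fully reflective RIS sets beta_n = 1; a hybrid or absorptive
% RIS allows beta_n in [0,1], leaving the fraction 1 - beta_n^2 of the
% impinging power per element available for sensing, refraction, or absorption.
```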
Bio: Lee Swindlehurst received the B.S. (1985) and M.S. (1986) degrees in Electrical Engineering from Brigham Young University (BYU), and the PhD (1991) degree in Electrical Engineering from Stanford University. He was with the Department of Electrical and Computer Engineering at BYU from 1990-2007, where he served as Department Chair from 2003-06. During 1996-97, he held a joint appointment as a visiting scholar at Uppsala University and the Royal Institute of Technology in Sweden. From 2006-07, he was on leave working as Vice President of Research for ArrayComm LLC in San Jose, California. Since 2007 he has been a Professor in the Electrical Engineering and Computer Science Department at the University of California Irvine, where he served as Associate Dean for Research and Graduate Studies in the Samueli School of Engineering from 2013-16. During 2014-17 he was also a Hans Fischer Senior Fellow in the Institute for Advanced Studies at the Technical University of Munich. In 2016, he was elected as a Foreign Member of the Royal Swedish Academy of Engineering Sciences (IVA). His research focuses on array signal processing for radar, wireless communications, and biomedical applications, and he has over 300 publications in these areas. Dr. Swindlehurst is a Fellow of the IEEE and was the inaugural Editor-in-Chief of the IEEE Journal of Selected Topics in Signal Processing. He received the 2000 IEEE W. R. G. Baker Prize Paper Award, the 2006 IEEE Communications Society Stephen O. Rice Prize in the Field of Communication Theory, the 2006, 2010 and 2022 IEEE Signal Processing Society's Best Paper Awards, the 2017 IEEE Signal Processing Society Donald G. Fink Overview Paper Award, and a Best Paper award at the 2020 IEEE International Conference on Communications.
Abstract: We are in the midst of a tidal transformation in the conditions in which wireless systems operate, with a determined push towards much higher frequencies (today mmWave, tomorrow sub-terahertz), with shrinking transmission ranges, and with much denser antenna arrays. This is stretching, even breaking, time-honored modelling assumptions such as that of planar wavefronts over the arrays. And, once the local curvature of those wavefronts is nonnegligible, a new opportunity arises for spatial multiplexing without any need for scattering or for multipath components. Conveniently, spatial multiplexing can then rely on the line-of-sight propagation path or the strong specular reflections that tend to dominate at those high frequencies and over short ranges. This presentation dwells on the physical underpinnings of this phenomenon, on the signal processing necessary to harness it for communication purposes, and on its potential implications for future systems.
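Two textbook relations behind this premise may help fix intuition; they are stated here as background with illustrative notation, not taken from the talk itself.

```latex
% Wavefront curvature over an array of aperture $D_a$ at wavelength $\lambda$
% stops being negligible once the link range $d$ falls below the Rayleigh
% distance
\[
  d_{\mathrm{R}} = \frac{2 D_a^2}{\lambda},
\]
% and, per the classical LOS-MIMO design rule, a uniform linear array of $N$
% elements can support $N$ line-of-sight spatial streams at range $d$ when
% its element spacing satisfies
\[
  s = \sqrt{\frac{\lambda d}{N}}.
\]
```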
Bio: Angel Lozano is a Professor at Univ. Pompeu Fabra (UPF) in Barcelona. Prof. Lozano received a Ph.D. from Stanford University in 1998. In 1999, he joined Bell Labs (Lucent Technologies, now Nokia), where he was a member of the Wireless Communications Research Department until 2008. Between 2005 and 2008 he was also an Adj. Associate Professor at Columbia University. Prof. Lozano is a Fellow of the IEEE. He is an editor for the IEEE ComSoc Technology News, an area editor for the IEEE Trans. Wireless Communications and a former associate editor for the IEEE Trans. Inform. Theory (2011-2014) and the IEEE Trans. Communications (1999-2009). He was the Chair of the IEEE Communication Theory Technical Committee (2013-2014) and an elected member of the Board of Governors of the IEEE Communications Society (2012-2014). Prof. Lozano holds 16 patents and is the coauthor of the textbook "Foundations of MIMO Communication," released by Cambridge University Press in 2019. His papers have received several awards, including the 2009 Stephen O. Rice prize to the best paper published in the IEEE Trans. Communications, the 2016 Fred W. Ellersick prize to the best paper published in the IEEE Communications Magazine, and the 2016 Communications Society & Information Theory Society joint paper award. He also received an ERC Advanced Grant for the period 2016-2021 and was a 2017 Highly Cited Researcher.
Abstract: Millimeter wave (mmWave) communication and MIMO technology offer additional benefits beyond high data rate communications. The large arrays at high frequencies provide the angle and delay resolvability that can enable accurate localization of users and objects in the environment as a byproduct of communication. In this talk, I provide an overview of how signal processing and machine learning techniques can be integrated to achieve high-accuracy joint localization and channel estimation in mmWave wireless networks. First, to drastically reduce the complexity of the channel estimation stage and enable operation with large planar arrays, I introduce the recently developed multidimensional orthogonal matching pursuit (MOMP) algorithm, which operates with a separate dictionary for each dimension instead of one large dictionary, as conventional OMP would do. Then, I introduce a deep learning approach that predicts the order of the estimated channel paths, so that the line-of-sight path and first-order reflections can be selected to apply the corresponding geometric transformations and obtain an estimate of the device position. An additional data-driven stage that refines the position estimate is also described. Finally, RIS-aided joint localization and communication is discussed as a potential avenue to further increase position estimation accuracy, one that can also benefit from integrating data- and model-driven approaches.
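For context, here is a minimal Python sketch of classical OMP, the baseline that MOMP generalizes by replacing the single large dictionary below with independent per-dimension dictionaries (angles, delays, etc.); the sizes and the random dictionary are illustrative.

```python
import numpy as np

def omp(y, A, n_paths):
    """Greedy sparse recovery: pick the best-correlated atom, then re-fit."""
    residual, support = y.copy(), []
    for _ in range(n_paths):
        # Atom most correlated with the current residual
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        # Least-squares re-fit of the coefficients on the selected atoms
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    return support, x_s

rng = np.random.default_rng(1)
m, n, k = 64, 256, 3                 # measurements, dictionary atoms, paths
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)       # unit-norm atoms
x_true = np.zeros(n, complex)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
support, coeffs = omp(A @ x_true, A, k)
print(sorted(support), np.flatnonzero(x_true))   # recovered vs. true support
```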
Bio: Nuria González Prelcic received her Ph.D. in Electrical Engineering in 2000 from the University of Vigo, Spain. She joined the faculty at NC State as an Associate Professor in 2020. She was previously an Associate Professor in the Signal Theory and Communications Department at the University of Vigo, Spain, and also held visiting positions at the University of Texas at Austin and the University of New Mexico. She was also the founding director of the Atlantic Research Center for Information and Communication Technologies (atlanTTic) at the University of Vigo (2008-2017). She is an Editor for IEEE Transactions on Communications. She is an elected member of the IEEE Sensor Array and Multichannel Technical Committee and the IEEE Signal Processing for Communications and Networking Technical Committee. She is a member of the IEEE SPS Integrated Sensing and Communication Technical Working Group. Her main research interests include signal processing theory and signal processing and machine learning for wireless communications: filter banks, compressive sampling and estimation, multicarrier modulation, massive MIMO, MIMO processing for millimeter-wave communication, including vehicle-to-everything (V2X), air-to-everything (A2X) and LEO satellite communication. She is also interested in joint localization and communication, joint radar and communication, and sensor-assisted communication. She has published more than 120 papers on the topic of signal processing for millimeter-wave communications, including a highly cited tutorial published in the IEEE Journal of Selected Topics in Signal Processing, which has received the 2020 IEEE SPS Donald G. Fink Overview Paper Award.
Welcome reception kindly sponsored by the city of Oulu.
Location: Aleksinkulma https://kartta.ouka.fi/IMS/fi?layers=$urlOpaskartta&cp=7213030,474945&title=Aleksinkulma&z=5
Abstract: With the ongoing deployment of 5G systems, the attention of the research community is turning to the next generation, 6G. One of the first steps in investigating new systems is the analysis of the propagation channels that they are operating in, and the requirements of 6G will create a lot of new scenarios that need to be investigated: from new frequency ranges (e.g., Terahertz), to new ways of deployment (e.g., distributed massive MIMO), to new mobility models (e.g., base stations mounted on drones). It is axiomatic that efficient signal processing needs to be tuned to, and exploit, the special properties of the propagation channels. This talk will thus survey the measurements and models in the new types of propagation channels and discuss their impact on various types of signal processing, from channel estimation algorithms to scalable MIMO decoding to computations with low-resolution ADCs. A discussion of some open topics will wrap up this talk.
Bio: Andreas F. Molisch received his PhD and habilitation from TU Vienna in 1994 and 1999, respectively. After 10 years in industry he joined the University of Southern California, where he is now the Solomon Golomb - Andrew and Erna Viterbi Chair Professor. His research interest is wireless communications, with emphasis on wireless propagation channels, multi-antenna systems, ultrawideband signaling and localization, novel modulation methods, machine learning, caching for wireless content distribution, and edge computing. He has published five books and more than 650 research papers, which have been cited more than 58,000 times (h-index >100); he has also authored numerous standards contributions and been granted 70 patents. He is a Fellow of the National Academy of Inventors, IEEE, AAAS, and IET, as well as Member of the Austrian Academy of Sciences and recipient of numerous awards.
Abstract: In the current noisy intermediate-scale quantum (NISQ) era, quantum machine learning is emerging as a dominant paradigm to program gate-based quantum computers. In quantum machine learning, the gates of a quantum circuit are parametrized, and the parameters are tuned via classical optimization based on data and on measurements of the outputs of the circuit. Parametrized quantum circuits (PQCs) can efficiently address combinatorial optimization problems, implement probabilistic generative models, and carry out inference (classification and regression). This talk provides a short introduction to quantum machine learning by focusing on key algorithmic principles. The presentation follows the speaker's monograph "An Introduction to Quantum Machine Learning for Engineers", which can be found at the link https://arxiv.org/abs/2205.09510.
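As a self-contained illustration of the PQC training loop described above, the following Python sketch tunes a single-qubit circuit consisting of one RY(theta) gate by classical gradient descent on the Z-expectation of the output state, using the standard parameter-shift rule; the one-qubit setup and cost function are illustrative choices, not taken from the monograph.

```python
import numpy as np

Z = np.diag([1.0, -1.0])          # observable measured at the circuit output

def ry(theta):
    """Single-qubit RY rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def cost(theta):
    """<psi|Z|psi> for |psi> = RY(theta)|0>; analytically equals cos(theta)."""
    psi = ry(theta) @ np.array([1.0, 0.0])
    return float(psi @ Z @ psi)

theta, lr = 0.1, 0.2
for _ in range(100):
    # Parameter-shift rule: exact gradient from two shifted circuit evaluations
    grad = 0.5 * (cost(theta + np.pi / 2) - cost(theta - np.pi / 2))
    theta -= lr * grad
print(theta, cost(theta))          # converges toward theta = pi, cost = -1
```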
Bio: Osvaldo Simeone is a Professor of Information Engineering with the Centre for Telecommunications Research at the Department of Engineering of King's College London, where he directs the King's Communications, Learning and Information Processing lab. He received an M.Sc. degree (with honors) and a Ph.D. degree in information engineering from Politecnico di Milano, Milan, Italy, in 2001 and 2005, respectively. From 2006 to 2017, he was a faculty member of the Electrical and Computer Engineering (ECE) Department at New Jersey Institute of Technology (NJIT), where he was affiliated with the Center for Wireless Information Processing (CWiP). His research interests include information theory, machine learning, wireless communications, neuromorphic computing, and quantum machine learning. Dr Simeone is a co-recipient of the 2022 IEEE Communications Society Outstanding Paper Award, the 2021 IEEE Vehicular Technology Society Jack Neubauer Memorial Award, the 2019 IEEE Communication Society Best Tutorial Paper Award, the 2018 IEEE Signal Processing Best Paper Award, the 2017 JCN Best Paper Award, the 2015 IEEE Communication Society Best Tutorial Paper Award and of the Best Paper Awards of IEEE SPAWC 2007 and IEEE WRECOM 2007. He was awarded a Consolidator grant by the European Research Council (ERC) in 2016. His research has been also supported by the U.S. National Science Foundation, the Vienna Science and Technology Fund, the European Space Agency, as well as by a number of industrial collaborations including with Intel Labs and InterDigital. He is the Chair of the Signal Processing for Communications and Networking Technical Committee of the IEEE Signal Processing Society and of the UK & Ireland Chapter of the IEEE Information Theory Society. He is currently a Distinguished Lecturer of the IEEE Communications Society, and he was a Distinguished Lecturer of the IEEE Information Theory Society in 2017 and 2018. Dr Simeone is the author of the textbook "Machine Learning for Engineers" to be published by Cambridge University Press, three monographs, two edited books, and more than 170 research journal and magazine papers. He is a Fellow of the IET and of the IEEE.
Abstract: Evaluation of virtually every wireless system requires statistical channel models to assess performance in realistic propagation environments. Channel modeling is particularly challenging at mmWave and THz frequencies, as communication and imaging systems operate at wide bandwidths with high-dimensional arrays and complex dynamics. This talk will describe recent efforts to build fully data-driven mmWave wireless channel models derived from extensive ray tracing. This work includes training state-of-the-art neural-network-based variational auto-encoders (VAEs) and generative adversarial networks (GANs) to generate the full double-directional channel parameters, meaning the path losses, delays, and angles of arrival and departure for all the propagation paths. Importantly, the method makes minimal statistical assumptions and can learn complex relationships among the paths and the environment. Extensions to multi-frequency models and models for high-rank LOS MIMO are also discussed. We discuss several interesting open problems, including ray tracing calibration and modeling from partial visual information.
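For reference, the generated parameters plug directly into the standard double-directional channel model; the notation below is illustrative.

```latex
% Double-directional narrowband MIMO channel built from L propagation paths
% with complex gains g_l (path loss), delays tau_l, and angles of arrival
% and departure:
\[
  \mathbf{H}(f) \;=\; \sum_{\ell=1}^{L} g_\ell \,
  \mathbf{a}_{\mathrm{rx}}\!\big(\theta^{\mathrm{rx}}_\ell, \phi^{\mathrm{rx}}_\ell\big)\,
  \mathbf{a}_{\mathrm{tx}}\!\big(\theta^{\mathrm{tx}}_\ell, \phi^{\mathrm{tx}}_\ell\big)^{\mathsf H}\,
  e^{-j 2\pi f \tau_\ell}.
\]
% A generative model that outputs the per-path parameters therefore fully
% specifies H(f) for any array geometry via the steering vectors a_tx, a_rx.
```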
Bio: Sundeep Rangan received the B.A.Sc. at the University of Waterloo, Canada and the M.Sc. and Ph.D. at the University of California, Berkeley, all in Electrical Engineering. He has held postdoctoral appointments at the University of Michigan, Ann Arbor and Bell Labs. In 2000, he co-founded (with four others) Flarion Technologies, a spin-off of Bell Labs that developed Flash OFDM, one of the first cellular OFDM data systems and a precursor to 4G systems including LTE and WiMAX. In 2006, Flarion was acquired by Qualcomm Technologies, where Dr. Rangan was a Senior Director of Engineering involved in OFDM infrastructure products. He joined the ECE department at NYU Tandon (formerly NYU Polytechnic) in 2010. He is a Fellow of the IEEE and an Associate Director of NYU WIRELESS, an academic-industry research center researching next-generation wireless systems. His research interests are in wireless communications, signal processing, information theory and control theory.
Abstract: AI and 6G form a perfect storm that will revolutionize services and applications by 2030. With the success of machine-learning technology and the potential for massive applications based on deep learning, we have the opportunity to re-architect wireless networks to enable native AI rather than add-on AI. In this talk, we address three areas of novel native-AI-based 6G wireless design: (1) an AI-based end-to-end link with a novel deep pre-coded transmitter and receiver for real-world wireless fading channels; (2) in-network learning and in-network inferencing, where we present the methodology and an evaluation of the network and computing key performance indicators (KPIs); and (3) a 6G data governance framework for the raw data sets collected from billions of devices and for the deep learning models created by training and utilized for inferencing, where we present a new principle, "model-follows-data", and its relation to information bottleneck theory. Finally, we will address how to minimize the carbon emissions associated with in-network computing for deep learning.
Bio: Dr. Wen Tong is the CTO of Huawei Wireless and the head of Huawei wireless research. In 2011, Dr. Tong was appointed Head of the Communications Technologies Labs of Huawei; currently, he is the Huawei 5G chief scientist and has led Huawei's 10-year-long 5G wireless technologies research and development. Prior to joining Huawei in 2009, Dr. Tong was a Nortel Fellow and head of the Network Technology Labs at Nortel. He joined the Wireless Technology Labs at Bell Northern Research in Canada in 1995. Dr. Tong is an industry-recognized leader in the invention of advanced wireless technologies; he was elected a Huawei Fellow and an IEEE Fellow. He was the recipient of the IEEE Communications Society Industry Innovation Award in 2014, and of the IEEE Communications Society Distinguished Industry Leader Award for "pioneering technical contributions and leadership in the mobile communications industry and innovation in 5G mobile communications technology" in 2018. He is also the recipient of the R.A. Fessenden Medal. For the past three decades, he has pioneered fundamental technologies from 1G to 5G wireless, with more than 530 awarded US patents. Dr. Tong is a Fellow of the Canadian Academy of Engineering, and he serves on the Board of Directors of the Wi-Fi Alliance.
Abstract: Designing optimized policies in large scale wireless networks is challenging due to unknown or time-varying dynamics. While wireless communication networks can be well-modeled by Markov Decision Processes (MDPs), this approach induces a large state space which challenges policy optimization. Herein, we review strategies exploiting graph signal processing for network optimization including new representations for wireless network behavior. Our new representations effectively capture the influences of multiple hops in the network graph. We show that the novel representations allow for efficient graph reduction by projecting onto a lower dimensional subspace that accurately captures the behavior of the network while strongly reducing complexity for policy optimization. A novel on-line/off-line ensemble Q-learning methodology is proposed based on the new graph representation for wireless networks. The graph representations allow for the efficient creation of synthetic trajectories that accurately capture network behavior without the need for excessive trajectory sampling of the actual network. The approach enables the learning of multiple policies which can be efficiently fused. The proposed hybrid strategy offers significantly improved convergence rates and performance. The hope is that this approach can be generalized for other directed graphs.
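To fix ideas, here is a minimal Python sketch of tabular Q-learning operated on a reduced state obtained by projecting the raw network state onto a low-dimensional subspace; the random projection, the stand-in environment, and all sizes are illustrative assumptions, not the talk's actual representations.

```python
import numpy as np

rng = np.random.default_rng(2)
n_raw, n_red, n_actions = 1000, 10, 4
P = rng.standard_normal((n_red, n_raw))   # stand-in for the learned graph projection

def reduce_state(x, n_bins=3):
    """Project a raw network state onto the subspace, then coarsely quantize."""
    z = P @ x
    return int(np.digitize(z[0], np.linspace(-1, 1, n_bins - 1)))  # 1-D code for brevity

Q = np.zeros((3, n_actions))              # 3 quantization bins -> 3 reduced states
alpha, gamma, eps = 0.1, 0.9, 0.1
s = reduce_state(rng.standard_normal(n_raw) / np.sqrt(n_raw))
for _ in range(5000):
    # Epsilon-greedy action selection on the reduced state
    a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
    # Stand-in environment: random next raw state, reward favoring action 0
    s_next = reduce_state(rng.standard_normal(n_raw) / np.sqrt(n_raw))
    r = 1.0 if a == 0 else 0.0
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next
print(Q.argmax(axis=1))                   # greedy policy per reduced state (picks 0)
```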
Bio: Urbashi Mitra received the B.S. and the M.S. degrees from the University of California at Berkeley and her Ph.D. from Princeton University. Dr. Mitra is currently the Gordon S. Marshall Professor in Engineering at the University of Southern California with appointments in Electrical & Computer Engineering and Computer Science. She was the inaugural Editor-in-Chief for the IEEE Transactions on Molecular, Biological and Multi-scale Communications. She has been a member of the IEEE Information Theory Society's Board of Governors (2002-2007, 2012-2017), the IEEE Signal Processing Society's Technical Committee on Signal Processing for Communications and Networks (2012-2016), the IEEE Signal Processing Society's Awards Board (2017-2018), and the Chair/Vice-Chair of the IEEE Communication Theory Technical Committee (2017-2020). Dr. Mitra is a Fellow of the IEEE. She is the recipient of: the 2021 USC Viterbi School of Engineering Senior Research Award, the 2017 IEEE Women in Communications Engineering Technical Achievement Award, a 2015 UK Royal Academy of Engineering Distinguished Visiting Professorship, a 2015 US Fulbright Scholar Award, a 2015-2016 UK Leverhulme Trust Visiting Professorship, IEEE Communications Society Distinguished Lecturer, 2012 Globecom Signal Processing for Communications Symposium Best Paper Award, 2012 US National Academy of Engineering Lillian Gilbreth Lectureship, the 2009 DCOSS Applications & Systems Best Paper Award, 2001 Okawa Foundation Award, 2000 Ohio State University's College of Engineering Lumley Award for Research, and a 1996 National Science Foundation CAREER Award. Her research interests are in wireless communications, structured statistical methods, communication and sensor networks, biological communication systems, detection and estimation and the interface of communication, sensing and control.
Abstract: Recent years have witnessed a dramatically growing interest in machine learning (ML) methods. These data-driven trainable structures have demonstrated an unprecedented empirical success in various applications, including computer vision and speech processing. The benefits of ML-driven techniques over traditional model-based approaches are twofold: First, ML methods are independent of the underlying stochastic model, and thus can operate efficiently in scenarios where this model is unknown, or its parameters cannot be accurately estimated; Second, when the underlying model is extremely complex, ML algorithms have demonstrated the ability to extract and disentangle the meaningful semantic information from the observed data. Nonetheless, not every problem can and should be solved using deep neural networks (DNNs). In fact, in scenarios for which model-based algorithms exist and are computationally feasible, these analytical methods are typically preferable over ML schemes due to their theoretical performance guarantees and possible proven optimality. A notable application area where model-based schemes are typically preferable, and whose characteristics are fundamentally different from conventional deep learning applications, is wireless communications. In this talk, I will present methods for combining DNNs with traditional wireless communications algorithms. We will show how hybrid model-based/data-driven implementations arise from classical methods in communications and show how fundamental classic techniques can be implemented without knowledge of the underlying statistical model while achieving improved inference speed and robustness to uncertainty.
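As a concrete illustration of the hybrid model-based/data-driven theme, here is a minimal Python sketch of deep unfolding, one popular such approach: a few projected-gradient iterations for MIMO detection are unrolled into a fixed-depth pipeline whose per-layer step sizes would normally be learned from data. Here the steps are simply fixed, and the sizes and soft projection are illustrative, not the speaker's actual design.

```python
import numpy as np

rng = np.random.default_rng(3)
n_tx, n_rx, layers = 4, 8, 10
H = rng.standard_normal((n_rx, n_tx))
x_true = rng.choice([-1.0, 1.0], n_tx)            # BPSK symbols
y = H @ x_true + 0.1 * rng.standard_normal(n_rx)

def unfolded_detector(y, H, steps):
    """Fixed-depth unfolding of x <- proj(x + step * H^T (y - H x))."""
    x = np.zeros(H.shape[1])
    for step in steps:                            # one 'layer' per iteration
        x = x + step * H.T @ (y - H @ x)          # model-based gradient step
        x = np.tanh(3.0 * x)                      # soft projection onto {-1, +1}
    return x

steps = np.full(layers, 0.05)                     # would be trained end-to-end
x_hat = unfolded_detector(y, H, steps)
print(np.sign(x_hat) == x_true)                   # detected symbols vs. truth
```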
Bio: Nir Shlezinger is an assistant professor in the School of Electrical and Computer Engineering at Ben-Gurion University, Israel. He received his B.Sc., M.Sc., and Ph.D. degrees in 2011, 2013, and 2017, respectively, from Ben-Gurion University, Israel, all in electrical and computer engineering. From 2017 to 2019 he was a postdoctoral researcher at the Technion, and from 2019 to 2020 he was a postdoctoral researcher at the Weizmann Institute of Science, where he was awarded the FGS prize for outstanding achievements in postdoctoral research. His research interests lie at the intersection of signal processing, machine learning, communications, and information theory.
Conference banquet at Maikkula mansion
Bus transportation from the conference hotel and back.
https://maikkulankartano.fi/en/
Seated dinner followed by a Finnish sauna experience.
Abstract: When switching from odd- to even-numbered generations, cellular standards have introduced revolutionary changes in the radio access network. The main reason is that odd-numbered generations break ground for a new communications paradigm, and even-numbered ones drive cost and energy down to democratize it for consumers. This requires the radical changes in radio access we have experienced. Looking forward, a breakthrough is required for 6G radio access, which must also support features such as "sensing as a service". The good news is that we can exploit the vast differences in how data requirements are to be serviced over the coverage area as well as over time (24/7). This points to a possible solution, the "Gearbox PHY", with gears spanning from one supporting extreme data rates down to one with extreme energy efficiency using analog impulse radio. The Gearbox PHY is designed to always serve the service needs in an energy-optimal way.
Bio: Gerhard P. Fettweis has been a Vodafone Chair Professor at TU Dresden since 1994 and the founding director of the Barkhausen Institute since 2018. He earned his Ph.D. under H. Meyr from RWTH Aachen in 1990. After being a postdoc at IBM Research, San Jose, CA, he moved to TCSI Inc., Berkeley, CA. He coordinates the 5G Lab Germany. In 2019 he was elected into the DFG Senate. His research focuses on wireless transmission and chip design for wireless/IoT platforms, with 20 companies from Asia/Europe/US sponsoring his research. He also serves on the board of National Instruments Corp and advises other companies. Gerhard is a member of the German Academy of Sciences (Leopoldina) and the German Academy of Engineering (acatech), and has received multiple IEEE recognitions as well as the VDE ring of honor and the Semi Europe award. In Dresden, his team has spun out nineteen start-ups and set up funded projects with a volume of close to EUR 0.5 billion.
Abstract: 5G and beyond wireless communication systems will rely on large antenna arrays at Base Stations (BSs) to serve multiple users with high data rates. BSs can accordingly collect Channel State Information (CSI) of very high dimension during operation. By applying dimensionality reduction to such CSI databases, channel charting can be performed. The resulting Channel Chart (CC) reveals which CSI samples come from nearby spatial locations. In this talk, the principles of channel charting will be presented, and example use cases of applying channel charting to radio resource management will be discussed, concentrating on predicting the best beam in a mmWave system.
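As a toy illustration of the dimensionality-reduction step, the following Python sketch builds a two-dimensional channel chart from synthetic CSI features via PCA; real channel charting typically uses richer features and nonlinear embeddings, and everything here (array size, BS location, feature choice) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(4)
n_ant, n_samples = 32, 500
positions = rng.uniform(0, 100, (n_samples, 2))   # unknown user locations

def csi_feature(pos):
    """Synthetic ULA channel for a user position; feature = vectorized h h^H."""
    bs = np.array([50.0, -10.0])                  # assumed BS location
    d = np.linalg.norm(pos - bs)
    angle = np.arctan2(pos[1] - bs[1], pos[0] - bs[0])
    h = np.exp(1j * np.pi * np.arange(n_ant) * np.sin(angle)) / d
    R = np.outer(h, h.conj())
    return np.concatenate([R.real.ravel(), R.imag.ravel()])

F = np.array([csi_feature(p) for p in positions])
F -= F.mean(axis=0)
# PCA: project the CSI features onto the top-2 principal directions
_, _, Vt = np.linalg.svd(F, full_matrices=False)
chart = F @ Vt[:2].T                              # the 2-D channel chart
print(chart.shape)   # (500, 2): nearby users land near each other in the chart
```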
Bio: Olav Tirkkonen is an Associate Professor in communication theory at the Department of Communications and Networking at Aalto University, Finland, where he has held a faculty position since August 2006. He received his M.Sc. and Ph.D. degrees in theoretical physics from Helsinki University of Technology in 1990 and 1994, respectively. Between 1994 and 1999 he held post-doctoral positions at the University of British Columbia, Vancouver, Canada, and the Nordic Institute for Theoretical Physics, Copenhagen, Denmark. From 1999 to 2010 he was with Nokia Research Center (NRC), Helsinki, Finland, most recently as a Research Fellow. In 2016-2017 he was a Visiting Associate Professor at Cornell University, Ithaca, NY, USA. He has published some 200 papers, is the co-inventor of some 80 families of patents and patent applications, and is a coauthor of the book "Multiantenna transceiver techniques for 3G and beyond".