Title: Challenges in Developing B5G/6G Communication Systems
Speaker: Hideyuki Tokuda (National Institute of Information and Communications Technology, Japan)
Abstract: The 5G mobile communication system has evolved from a communication infrastructure for people into a social infrastructure, driven by the evolution of IoT and AI. In the next generation, Beyond 5G/6G (B5G/6G), targeted for around 2030, cyberspace and physical space will be integrated and will play an important role as the reliable social infrastructure of a super-smart society, called Society 5.0. While 5G provides eMBB (Enhanced Mobile Broadband), URLLC (Ultra-Reliable Low-Latency Communications), and mMTC (Massive Machine Type Communications) features, B5G/6G may improve these functions by at least a factor of ten through terahertz communications, low power consumption, and high-accuracy positioning to support next-generation services and businesses. B5G/6G will also cover NTN (Non-Terrestrial Network) communications such as HAPS (high-altitude platform stations) and GEO/LEO satellites, providing anywhere, anytime connectivity by offering wide-area coverage and ensuring service availability, continuity, and scalability. In this context, satisfying diverse user requests and providing the desired Quality of Service (QoS) anytime and anywhere is one of the main challenges for B5G/6G communication systems. NICT has compiled a draft research and development plan for B5G/6G, called the NICT B5G/6G White Paper. We derived the requirements by backcasting from the needs of the future society envisioned for around 2030, while repeatedly verifying the fundamental technologies by forecasting from the technology seeds. Various B5G/6G use cases have also been created. In this talk, we will introduce the outline of the NICT B5G/6G White Paper and explain B5G/6G use cases such as the Cybernetic Avatar Society, Working at a Moon Base, and Beyond Space and Time. We will also discuss NICT's research and development of the fundamental technologies supporting the B5G/6G infrastructure.
Bio: Hideyuki Tokuda is President of the National Institute of Information and Communications Technology (NICT) and Professor Emeritus of Keio University, Japan. He obtained his B.S. (1975) and M.S. (1977) from Keio University and his Ph.D. in Computer Science (1983) from the University of Waterloo, Canada. After completing his Ph.D. in 1983, he joined the School of Computer Science at Carnegie Mellon University, where he worked on distributed real-time operating systems such as Real-Time Mach and the ARTS Kernel. In 1990, he returned to Keio University. His research and teaching interests include ubiquitous computing systems, operating systems, sensor networks, IoT, cyber-physical systems, and smart cities. He was a Professor of the Faculty of Environment and Information Studies and Executive Vice President of Keio University. He was also an advisor to NISC (National center of Incident readiness and Strategy for Cybersecurity). For his research contributions, he was awarded the Motorola Foundation Award, the IBM Faculty Award, the IPSJ Achievement Award, the Information Security Cultural Award, and the MEXT Award in Japan. He is a member of the Science Council of Japan, and a Fellow of IPSJ, JSSST, and JFES.
Title: Enhancing Scalability and Liquidation in QoS Lightning Networks
Speaker: Jie Wu (Temple University, USA)
Abstract: The Lightning Network (LN) is a special network in Bitcoin that uses off-chain micropayment channels to scale the blockchain's capability to perform instant transactions without a global block confirmation process. However, QoS measures such as micropayment scalability in a large LN and liquidation for small nodes still remain major challenges for the LN. In this talk, we introduce the notion of supernodes and the corresponding supernode-based pooling to address these challenges. To achieve high adaptivity and low maintenance cost in a dynamic LN where users join and leave, supernodes are constructed locally, without any global information or label propagation. Each supernode, together with a subset of its non-supernode neighbors, forms a supernode-based pool, and these pools constitute a partition of the LN. Additionally, the supernodes are connected among themselves. Micropayment scalability is supported through node-set reduction, as only supernodes are involved in searching and in payments with other supernodes. Liquidation is enhanced through pooling, which redistributes funds within a pool to the external channels of its supernode. The efficacy of the proposed scheme is validated through both simulations and a testbed in terms of routing success ratio.
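The abstract's construction, supernodes chosen locally and every other node attached to a neighboring supernode's pool, can be illustrated with a minimal Python sketch. The local selection rule used here (a node whose degree is at least that of every neighbor becomes a supernode) and the breadth-first pool assignment are illustrative assumptions, not the scheme's actual criteria:

```python
def select_supernodes(adj):
    """Locally pick supernodes from an adjacency dict {node: [neighbors]}.
    Assumed rule: a node becomes a supernode if its degree is at least
    that of all its neighbors (purely local, no global information)."""
    return {node for node, nbrs in adj.items()
            if all(len(adj[node]) >= len(adj[n]) for n in nbrs)}

def build_pools(adj, supernodes):
    """Partition the network into supernode-based pools: each non-supernode
    joins the pool of a nearest supernode, found by a multi-source BFS."""
    pool = {s: s for s in supernodes}   # each supernode anchors its own pool
    frontier = list(supernodes)
    while frontier:
        nxt = []
        for node in frontier:
            for nbr in adj[node]:
                if nbr not in pool:     # unassigned: inherit this pool
                    pool[nbr] = pool[node]
                    nxt.append(nbr)
        frontier = nxt
    return pool

# Toy channel graph: hub 'h' with three small nodes attached.
adj = {'h': ['x', 'y', 'z'], 'x': ['h'], 'y': ['h'], 'z': ['h']}
supernodes = select_supernodes(adj)
pools = build_pools(adj, supernodes)
```

In this toy graph only the hub qualifies as a supernode, so the three small nodes all join its pool; routing and fund redistribution would then operate on the (much smaller) supernode overlay.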
Bio: Jie Wu is the Director of the Center for Networked Computing and Laura H. Carnell Professor at Temple University. He also serves as the Director of International Affairs at the College of Science and Technology. He previously served as Chair of the Department of Computer and Information Sciences and as Associate Vice Provost for International Affairs. Prior to joining Temple University, he was a program director at the National Science Foundation and a distinguished professor at Florida Atlantic University. His current research interests include mobile computing and wireless networks, routing protocols, cloud and green computing, network trust and security, and social network applications. Dr. Wu regularly publishes in scholarly journals, conference proceedings, and books. He serves on several editorial boards, including IEEE Transactions on Mobile Computing and IEEE Transactions on Services Computing. Dr. Wu was general co-chair for IEEE MASS 2006, IEEE IPDPS 2008, IEEE ICDCS 2013, ACM MobiHoc 2014, ICPP 2016, IEEE CNS 2016, WiOpt 2021, and ICDCN 2022, as well as program co-chair for IEEE INFOCOM 2011, CCF CNCC 2013, and ICCCN 2020. He was an IEEE Computer Society Distinguished Visitor, an ACM Distinguished Speaker, and chair of the IEEE Technical Committee on Distributed Processing (TCDP). He was also the recipient of the 2011 China Computer Federation (CCF) Overseas Outstanding Achievement Award. Dr. Wu is a Fellow of the AAAS and a Fellow of the IEEE.
Title: Brain-inspired Networking and QoE Control
Speaker: Masayuki Murata (Osaka University, Japan)
Abstract: Machine learning is now actively applied to "Industry 4.0" and "smart city" initiatives to establish the next ICT-enabled world. However, since the neural network was "invented" in the mid-1980s, brain science has progressed greatly thanks to advances in high-precision measurement devices such as EEG and fMRI. We are now ready to develop the next generation of machine learning approaches. The most striking feature of the human brain is known to be its ability to handle uncertainty in dynamic environments instead of pursuing optimality. It also accumulates "confidence" toward a final decision on the target task, which gives it the flexibility to make decisions under various sources of uncertainty. Furthermore, the environment may be changed by the decision itself, so that the human faces a new environment; this can be viewed as a feedback control system, an insight that should be exploited in artificial control systems. In this talk, brain-inspired approaches to networking problems are introduced in two steps. First, the "Yuragi" (meaning fluctuation in Japanese) concept is introduced. It is a universal feature of adaptability found in natural systems, including various biological systems and the human brain. It is formulated in Yuragi theory as a simple canonical formula and can be used for network control in situations where adaptability matters much more than optimality. Second, Yuragi theory is extended into a machine learning approach (which we call Yuragi Learning) by incorporating the Bayesian attractor model. It is then applied to real-time QoE control of a video-streaming service, in which the user's current emotional state is obtained with recently developed lightweight devices such as EEG headsets, and an agent controls the video quality on the user's behalf. Of course, the human brain is not perfect; one famous example is cognitive bias. Finally, the problems of dealing with cognitive bias in QoE control are addressed.
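The "simple canonical formula" of Yuragi-style attractor selection is often written as dx/dt = f(x) * activity + noise: when activity (a measure of how well the current state performs) is high, the deterministic drift dominates and the system stays at its attractor; when activity is low, noise dominates and the system explores. The Python sketch below is a one-dimensional Euler-step illustration under that assumption; the linear pull toward a single attractor and all parameter values are illustrative, not the formulation from the talk:

```python
import math
import random

def yuragi_step(x, attractor, activity, noise_scale=0.5, dt=0.1):
    """One Euler step of the canonical Yuragi dynamics
        dx/dt = f(x) * activity + noise,
    where f(x) is taken here as a simple pull toward the attractor.
    High activity -> drift dominates (exploit the attractor);
    low activity  -> noise dominates (explore by fluctuation)."""
    drift = (attractor - x) * activity
    noise = random.gauss(0.0, noise_scale) * math.sqrt(dt)
    return x + drift * dt + noise

# With high activity, the state converges to and fluctuates around the attractor.
x = 0.0
for _ in range(500):
    x = yuragi_step(x, attractor=5.0, activity=1.0)
```

Setting activity near zero instead leaves only the noise term, so the state random-walks until a better attractor raises the activity again; this is the adaptability-over-optimality behavior the abstract describes.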
Bio: Professor Masayuki Murata received the M.E. and D.E. degrees in Information and Computer Science from Osaka University, Japan, in 1984 and 1988, respectively. In April 1984, he joined the Japan Science Institute (currently the Tokyo Research Laboratory), IBM Japan, as a researcher. He moved to Osaka University as an Assistant Professor in September 1987, and in April 1999 he became a Full Professor in the Graduate School of Engineering Science, Osaka University. Since April 2004, he has been a Full Professor in the Graduate School of Information Science and Technology, Osaka University. His research interests include computer communication network architectures inspired by biology and the human brain. He is a member of IEICE, IEEE, and ACM. He is now the Dean of the Graduate School of Information Science and Technology, Osaka University, and the Vice-Director of the Center for Information and Neural Networks (CiNet), co-founded by Osaka University and the National Institute of Information and Communications Technology (NICT), Japan. In April 2021, he published the book "Fluctuation-Induced Network Control and Learning: Applying the Yuragi Principle of Brain and Biological Systems," co-edited with Dr. Kenji Leibnitz and published by Springer.
Title: Edge Computing Meets Mission-critical Industrial Applications
Speaker: Albert Y. Zomaya (University of Sydney, Australia)
Abstract: In the past few decades, industrial automation has become a driving force in a wide range of industries. There is broad agreement that deploying computing resources close to where data is created is more business-friendly, as it can address the latency, privacy, cost, and resiliency challenges that a pure cloud computing approach cannot. This computing paradigm is now known as Edge Computing. Having said that, the full potential of this transformation for both computing and data analytics is far from being realized. Industrial requirements are much more stringent than what a simple edge computing paradigm can deliver. This is particularly true for mission-critical industrial applications with strict requirements on real-time decision making, operational technology innovation, data privacy, and the running environment. In this talk, I aim to provide a few answers by combining real-time computing strengths with modern data- and intelligence-rich computing ecosystems.
Bio: Albert Y. Zomaya is Chair Professor of High-Performance Computing & Networking in the School of Computer Science and Director of the Centre for Distributed and High-Performance Computing at the University of Sydney. To date, he has published more than 600 scientific papers and articles and is (co-)author/editor of more than 30 books. A sought-after speaker, he has delivered more than 250 keynote addresses, invited seminars, and media briefings. His research interests span several areas of parallel and distributed computing and complex systems. He is currently the Editor-in-Chief of ACM Computing Surveys and served in the past as Editor-in-Chief of the IEEE Transactions on Computers (2010-2014) and the IEEE Transactions on Sustainable Computing (2016-2020). Professor Zomaya is a decorated scholar with numerous accolades, including Fellowship of the IEEE, the American Association for the Advancement of Science, and the Institution of Engineering and Technology (UK). He is also an Elected Fellow of the Royal Society of New South Wales and an Elected Foreign Member of Academia Europaea. He is the recipient of the 1997 Edgeworth David Medal from the Royal Society of New South Wales for outstanding contributions to Australian science, the IEEE Technical Committee on Parallel Processing Outstanding Service Award (2011), the IEEE Technical Committee on Scalable Computing Medal for Excellence in Scalable Computing (2011), the IEEE Computer Society Technical Achievement Award (2014), the ACM MSWIM Reginald A. Fessenden Award (2017), and the New South Wales Premier's Prize of Excellence in Engineering and Information and Communications Technology (2019).