Program

August 28-31, 2023

Paradise Hotel, Busan, Korea

Keynote Speeches

Speaker

Alessandra Sciutti

Affiliation

Italian Institute of Technology, Italy

Position

Head of the COgNiTive Architecture for Collaborative Technologies Unit

Paper Title

Human-in-the-core Cognitive Robotics

Date & Time

August 29, 2023, Morning Session (KST)

Venue

Grand Ballroom (2F)

Dr. Alessandra Sciutti is the head of the CONTACT (COgNiTive Architecture for Collaborative Technologies) Unit of the Italian Institute of Technology (IIT), where she works with the iCub robot. After a master’s degree in Bioengineering from the University of Genova and a Ph.D. in Humanoid Technologies, she spent two research periods abroad, first at the Robotics Lab of the Rehabilitation Institute of Chicago (USA) and then at the Emergent Robotics Laboratory of Osaka University (Japan). In 2018 she was awarded an ERC Starting Grant, one of the most prestigious grants awarded by the European Research Council (ERC), for the project wHiSPER (www.whisperproject.eu), which investigated shared perception between humans and robots.

She has published more than 80 papers in international journals and conferences and is currently an Associate Editor for several journals on Cognitive Robotics and Human-Robot Interaction, including Cognitive Systems Research, the IEEE Transactions on Cognitive and Developmental Systems, and the International Journal of Social Robotics. She is the corresponding co-chair of the Technical Committee on Cognitive Robotics of the IEEE Robotics and Automation Society and a Scholar of the ELLIS (European Laboratory for Learning and Intelligent Systems) Society. Sciutti has received many awards for her research in Robotics and AI, such as the titles “Inspiring Fifty” (2018) and “Tecnovisionarie” (2021). In 2022 she appeared on the cover of Fortune Italy and was listed among the “40 under 40” young people changing the country. She is also included in the AcademiaNet database of profiles of excellent female researchers from all disciplines.

Her research aims to investigate the sensory, motor, and cognitive mechanisms underlying human social interaction, with the technological goal of developing robots able to establish mutual understanding with humans. Please check the CONTACT Unit website or her Google Scholar profile for more details on her research and the complete list of publications. For an introduction to her work, please watch https://youtu.be/LCkOjR_cvxI.

An important goal of researchers in HRI is to enable robots to predict humans’ intentions, internal states, and limitations while remaining transparent, predictable, and adaptable in their behavior. To begin with, an interactive robot would need a model of what it means to be human: how humans think, perceive, feel, and move. This knowledge alone, however, would not suffice: the robot should also be able to learn, through actual interaction, the individual needs, preferences, and desires of its human partners. This learning process should be continuous, as each person changes over their lifetime as a consequence of their interactions with others, including the robot itself.

A pathway toward cognitive robots capable of being considerate of humans starts with investigating the sensory, motor, and cognitive bases of human social abilities, the principles of human-to-human mutual understanding. In such studies, robots can “lend a hand” by serving as ideal controllable probes to test quantitatively and model the dynamics of human interaction.

These basic, common components must then be integrated into a cognitive architecture, relying on memory, internal motivation, and learning to enable every robot to autonomously adapt to its partners and learn from its own experiences.

This long-term plan calls for the joint efforts of multiple disciplines, including robotics, computer science, machine learning, neurophysiology, cognitive science, psychology, and philosophy. The ambition is to develop robots that do not necessarily look like humans but think and understand as we do.

As a result, we will obtain more intuitive and adaptable robots and contribute to a more profound comprehension of human cognition through a constructive and embodied approach.

Speaker

Sangok Seok

Affiliation

NAVER LABS, Republic of Korea

Position

CEO

Paper Title

New Connections between Humans, Spaces, and Information: Robotics, Autonomous Driving, AI, Digital Twin

Date & Time

August 30, 2023, Morning Session (KST)

Venue

Grand Ballroom (2F)

Dr. Sangok Seok, CEO of NAVER LABS, leads NAVER’s next-generation technology platform research through the integration of robotics, AI, autonomous driving, digital twin, and related technologies. He holds a bachelor’s and a master’s degree in Mechanical and Aerospace Engineering from Seoul National University and a doctorate in Mechanical Engineering from the Massachusetts Institute of Technology; his research paper on the MIT Cheetah was selected as the best paper of the IEEE/ASME Transactions on Mechatronics in 2016. After working at National Instruments and Samsung Electronics, Dr. Seok joined NAVER in 2015, spearheading NAVER’s robotics field and filing numerous robot-related patents. Since becoming the CEO of NAVER LABS (in 2019) and NAVER LABS Europe (in 2020), he has led world-class researchers from 27 countries, focusing on preparing the future of NAVER, which will connect people, machines, spaces, and information through the most innovative and advanced technologies. In 2022, Dr. Seok drew wide attention from international corporations, media, and research institutions for the “1784 Project,” under which NAVER’s second headquarters was constructed as the world’s first robot-friendly building. In recognition of the first domestic installation of local 5G networks and his contribution to the advancement of smart building technologies, he was awarded the Bronze Tower Order of Industrial Service Merit.

This lecture introduces the future in which people, spaces, and information will form new connections, and explains the core technologies required for this.

The development of high-performance sensors, AI, robots, and autonomous driving technology is rapidly blurring the boundaries between physical space and virtual space, and accelerating the automation of shipping and logistics infrastructure. Ultimately, everyday space itself will serve as a single platform, organically connecting with various services.

Two technological prerequisites for this change are the ‘digital twin’ and ‘mobility.’ A digital twin is a replica of the real world in a digital environment, and it serves as essential data for smart cities, autonomous driving, service robots, XR, and metaverses. Because establishing a digital twin of an entire city is highly time-consuming and costly, innovative solutions are needed. Equally important is the technology that performs precise localization based on this digital twin data. In particular, seamless localization should be possible with technologies such as VL (visual localization), which can accurately determine a location from a single photo, even indoors or between dense buildings where GPS does not work. Mobility, which maintains the connection with users in diverse environments, is now the role of the robot. It remains a huge challenge for robots to leave factories and coexist with humans in everyday environments.

In addition to wheels and legs for movement, robots also need safe and precise control of arms and hands in order to perform work. The software requires even further development: vision-based deep reinforcement learning that enables natural autonomous driving without expensive sensors; HRI (human-robot interaction) research that creates standards for natural coexistence between humans and robots; robot technology that expands the boundaries of movement from indoors to roads; and brainless robot technology that controls multiple robots simultaneously through the cloud and ultra-low-latency networks. Together, these will bring forward the popularization of robot services.

With these technologies, robots will store information and move on their own, becoming an innovative infrastructure that creates new connections between cities, buildings, offices, and more. Future technologies that once stayed in research labs are increasingly moving into our lives. This lecture will address these prospects and the challenges we face today.

Speaker

Tomohiro Shibata

Affiliation

Kyushu Institute of Technology, Japan

Position

Professor

Paper Title

Designing Assistive Robots that Harness Physical Interaction between Humans and Robots

Date & Time

August 31, 2023, Morning Session (KST)

Venue

Grand Ballroom (2F)

BIO

Tomohiro Shibata received his Ph.D. from the University of Tokyo, Japan, in 1996, continued his robotics studies as a JSPS (Japan Society for the Promotion of Science) researcher, and then worked on computational neuroscience research using a humanoid robot at ATR (Advanced Telecommunications Research Institute) as a JST (Japan Science and Technology Agency) researcher. After serving as an associate professor at the Nara Institute of Science and Technology, working on robotics, computational neuroscience, and assisted living, he is now a professor at the Kyushu Institute of Technology, Kitakyushu, Japan. He also organizes the Smart Life Care Co-Creation Laboratory, which the Ministry of Health, Labour and Welfare uses for a project to develop, demonstrate, and promote nursing care robots.

He has received a Young Investigator Award from the Robotics Society of Japan (1992), the Best Paper Award from the Japanese Neural Network Society (2002 and 2015), the Neuroscience Research Excellent Paper Award from the Japan Neuroscience Society (2007), the Best Application Paper Award at IROS 2015, an Excellent Paper Award from the RSJ (2020), and the Best Presentation Award at ICIEV and icIVPR (2021), and was the winner in the Healthcare Category of the Garmin Healthcare Awards (2022), among other honors.

He has served as an editorial board member of Neural Networks and an executive board member of the Robotics Society of Japan (RSJ). He is currently an executive board member of the Japanese Neural Network Society (JNNS), a fellow of the RSJ, a member of the RSJ’s International Exchange Committee, and the head of the RSJ’s special interest group on "Nursing Care Robots." He is also a member of IEEE, JSME, and the Society for Nursing Science and Engineering, and a governing council member of The Robotics Society (of India).

Abstract

The demand for assistive robots is rapidly increasing across the medical, nursing care, and welfare sectors. When designing such robots, it is crucial to address the needs and abilities of the target users while considering the specific situations and environments they will operate in. The ultimate goal is to maximize the user's potential and support their independence.

However, designing assistive robots poses significant challenges because individuals vary in their anthropometric, kinematic, peripheral nervous system, central nervous system, and other characteristics. Ideally, all of these factors would be incorporated into models, with control laws developed accordingly, but this proves challenging in practice.

To address these challenges, this keynote will focus on a design approach that leverages the physical interaction between the user and the robot. The presentation will cover research on a gait-assistive robot that prevents and alleviates gait-freezing symptoms in patients with neurological diseases, a wearable assistive suit that facilitates the learning of skilled workers' caregiving behaviors, and dual-armed robots that assist with dressing. The basic design policy is to exploit human abilities: the user's neural oscillation system in the gait-assistive robot, the user's motor learning system in the assistive suit, and the user's residual motor abilities in the dressing robot.

In summary, by emphasizing physical interaction and leveraging the abilities of the person being assisted, assistive robots can significantly improve the quality of life for those facing physical challenges. However, cost, weight, and size remain notable barriers to the widespread adoption of such assistive robots. We often employ inexpensive and lightweight pneumatic artificial muscles as actuators to overcome these issues. Other approaches will also be discussed, such as utilizing 3D printing technology and minimizing the robot's complexity.