Mobile Collaborative Robots: Addressing Real-World Challenges
The collaboration between the Shenzhen Institute of Artificial Intelligence and Robotics for Society (AIRS) and the University of Edinburgh (UoE) aims at fundamental and applied research in artificial intelligence and robotics. The current research focuses on perception, motion planning and control for high-dimensional robotic systems and human-robot interaction (HRI). The three scientific pillars of the project are Multi-Contact Planning and Control, Multi-Agent Collaborative Manipulation and Robot Perception.
Multi-Contact Planning and Control
The difficulty of the multi-contact planning problem comes mainly from three aspects: the discrete choice of the sequence of contact combinations, the continuous choice of contact locations and timings, and the continuous choice of a path between two contact combinations (the transition). The goal of this research is a general framework that can handle all of these scenarios by formulating contacts as geometric constraints and contact force variables within a general numerical optimization problem.
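To illustrate the idea of treating contact forces as decision variables in a numerical optimization, here is a minimal toy sketch (a hypothetical example, not the project's actual framework): a 2D point mass held in static equilibrium by two contact forces, with force balance as an equality constraint and linearized friction cones as inequality constraints.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy problem: find contact forces f1, f2 (each 2D, [fx, fy])
# that hold a point mass in static equilibrium while staying inside a
# linearized friction cone at each contact.
m, g, mu = 1.0, 9.81, 0.6  # mass [kg], gravity [m/s^2], friction coefficient

def objective(x):
    # Minimize total contact effort ||f||^2.
    return np.dot(x, x)

def force_balance(x):
    # Equality constraint: f1 + f2 must cancel gravity.
    f1, f2 = x[:2], x[2:]
    return f1 + f2 - np.array([0.0, m * g])

def friction_cone(x):
    # Inequality constraints (>= 0 feasible): |fx| <= mu * fy per contact,
    # written as two smooth linear constraints per contact.
    f1x, f1y, f2x, f2y = x
    return np.array([mu * f1y - f1x, mu * f1y + f1x,
                     mu * f2y - f2x, mu * f2y + f2x])

res = minimize(objective, x0=np.ones(4),
               constraints=[{"type": "eq", "fun": force_balance},
                            {"type": "ineq", "fun": friction_cone}])
f1, f2 = res.x[:2], res.x[2:]
```

In a full multi-contact planner the same pattern scales up: whole-body dynamics replace the static force balance, and contact locations and timings become additional decision variables.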
Multi-Agent Collaborative Manipulation
Multi-agent collaborative manipulation, including bimanual manipulation, multi-robot collaboration and human-robot collaboration, aims at exploring the capabilities of intelligent robotic systems in complex dynamic environments and improving human working efficiency in manufacturing settings. This research focuses mainly on: multi-phase/contact optimization for manipulating objects with large momentum, autonomous task allocation and distributed control for multi-agent collaboration, and policy learning and adaptive control for efficient human-robot collaboration.
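As a concrete baseline for the task-allocation component, the assignment of tasks to robots can be posed as a minimum-cost matching problem. The sketch below (hypothetical positions and costs, not the project's algorithm) uses the Hungarian algorithm to assign each robot the task minimizing total travel distance.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical scenario: 3 robots, 3 tasks, cost = travel distance.
robots = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])   # robot positions
tasks  = np.array([[4.0, 1.0], [0.5, 0.5], [1.0, 4.0]])   # task locations

# Cost matrix: Euclidean distance from robot i to task j.
cost = np.linalg.norm(robots[:, None, :] - tasks[None, :, :], axis=-1)

# Hungarian algorithm: minimum-cost one-to-one assignment.
row, col = linear_sum_assignment(cost)
assignment = dict(zip(row.tolist(), col.tolist()))  # robot -> task
total_cost = cost[row, col].sum()
# assignment -> {0: 1, 1: 0, 2: 2}: each robot takes its nearest task here.
```

A centralized solver like this serves as a reference point; the distributed-control research replaces it with allocation schemes that need only local communication between agents.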
Robot Perception
Perception includes proprioception and exteroception. One of the main research focuses will be state estimation from multi-modal information. For exteroceptive perception, the team mainly focuses on robot vision, which can provide rich information about the surroundings through LiDAR, depth sensors and cameras. Another important part of the research is the post-processing and understanding of sensor data. We are going to develop an efficient semantic understanding system to extract semantic information about the environment, such as planar surfaces, moving objects and the dynamic status of the environment. This information will be used as input for robot motion planning and control.
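Extracting planar surfaces from a point cloud is a standard first step in this kind of semantic understanding. The sketch below (a hypothetical routine, not the project's perception stack) fits a dominant plane to a noisy synthetic cloud with RANSAC, the classic robust-estimation approach for LiDAR and depth-sensor data.

```python
import numpy as np

def ransac_plane(points, n_iters=200, threshold=0.02, rng=None):
    """Fit the dominant plane n.p + d = 0 to an Nx3 cloud via RANSAC."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iters):
        # Candidate plane through 3 random points.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal.dot(p0)
        # Inliers: points within `threshold` of the candidate plane.
        dist = np.abs(points @ normal + d)
        inliers = dist < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers

# Synthetic cloud: a noisy ground plane z ~ 0 plus scattered outliers.
rng = np.random.default_rng(0)
ground = np.column_stack([rng.uniform(-1, 1, (200, 2)),
                          rng.normal(0, 0.005, 200)])
outliers = rng.uniform(-1, 1, (40, 3))
cloud = np.vstack([ground, outliers])
model, inliers = ransac_plane(cloud, rng=0)  # recovers the ground plane
```

In a real pipeline, planes found this way would be removed or labeled iteratively, and the remaining segments clustered into candidate objects for tracking and dynamic-status estimation.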
Principal Investigator: Prof. Sethu Vijayakumar
Group Leaders: Dr. Songyan Xin, Dr. Lei Yan
Principal Investigator: Prof. Tin Lun Lam
Group Leader: Dr. Tianwei Zhang