
Dr. Shaoshan Liu
PhD U.C. Irvine
MPA Harvard University
Dr. Shaoshan Liu is currently the Director of Embodied Artificial Intelligence at the Shenzhen Institute of Artificial Intelligence and Robotics for Society (AIRS). His background is a unique combination of technology, entrepreneurship, and public policy, which enables him to take on major global challenges. In technology, he has published 4 textbooks and more than 100 research papers, and holds more than 150 patents in autonomous systems. In entrepreneurship, he served as CEO of PerceptIn, commercially deploying autonomous micro-mobility services in the U.S., Europe, Japan, and China; he is also the Asia Chair of IEEE Entrepreneurship. In public policy, he has served on the World Economic Forum’s panel on Industry Response to Government Procurement Policy, leads the Autonomous Machine Computing roadmap under the IEEE International Roadmap for Devices and Systems (IRDS), and is a member of both the ACM U.S. Technology Policy Council and the National Academy of Public Administration’s Technology Leadership Panel Advisory Group. His educational background includes an M.S. in Biomedical Engineering, a Ph.D. in Computer Engineering from U.C. Irvine, and a Master of Public Administration (MPA) from Harvard Kennedy School. He is an elected member of the Global Young Academy, an IEEE Senior Member, an IEEE Computer Society Distinguished Speaker, an ACM Distinguished Speaker, and an agenda contributor to the World Economic Forum.
Project 1 - Autonomous Mobile Clinics (AMCs): Innovation and Global Impact
The Autonomous Mobile Clinics (AMC) project is a groundbreaking initiative that redefines healthcare delivery by merging autonomous mobility, artificial intelligence, and telemedicine to address systemic inequities in global health. Designed to operate in regions where traditional healthcare infrastructure is absent or overwhelmed—remote villages, conflict zones, disaster areas—AMCs are self-driving medical units equipped with Level 4 autonomous navigation. These vehicles traverse rugged, unmapped terrains without human drivers, ensuring access to populations otherwise excluded from care. Each clinic functions as a mobile hospital, integrating AI diagnostics, portable labs, and telehealth capabilities to provide comprehensive services, from emergency triage to chronic disease management, directly at the point of need.
At the heart of AMCs’ innovation is their AI-driven diagnostic ecosystem, which processes medical data on-device to preserve privacy and remain functional in low-connectivity environments. Advanced computer vision analyzes symptoms, such as skin lesions for monkeypox or tuberculosis, with 100% accuracy in trials, while sensor-driven triage prioritizes critical cases like obstetric emergencies. Telemedicine suites connect local providers with global specialists in real time, enabling complex interventions—such as remote-guided surgeries or prenatal screenings—that save lives in resource-poor settings. Solar-powered and modularly designed, AMCs minimize environmental impact and adapt to diverse healthcare scenarios, from pandemic response to maternal care, without relying on fixed infrastructure.
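The AMC diagnostic stack itself is not published in detail here, so the following is purely an illustrative sketch of the on-device triage idea: rank incoming cases by a model-estimated urgency score so that critical patients, such as obstetric emergencies, are seen first. All names, thresholds, and weights are hypothetical.

```python
# Illustrative only: a minimal on-device triage sketch. The real AMC
# pipeline is not public; every name and threshold here is hypothetical.
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Case:
    neg_urgency: float                      # negated so the min-heap pops the most urgent first
    patient_id: str = field(compare=False)

def estimate_urgency(vitals: dict) -> float:
    """Stand-in for an on-device model that scores urgency from sensor data."""
    score = 0.0
    if vitals.get("spo2", 100) < 92:          # low blood oxygen
        score += 0.5
    if vitals.get("systolic_bp", 120) > 160:  # hypertensive emergency
        score += 0.3
    if vitals.get("obstetric_flag", False):   # prioritize obstetric emergencies
        score += 0.4
    return min(score, 1.0)

queue: list[Case] = []
for pid, vitals in {
    "p1": {"spo2": 89},
    "p2": {"systolic_bp": 170, "obstetric_flag": True},
}.items():
    heapq.heappush(queue, Case(-estimate_urgency(vitals), pid))

print(heapq.heappop(queue).patient_id)  # most urgent case first -> "p2"
```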
The global impact of AMCs is profound and far-reaching. In Sub-Saharan Africa, deployments have reduced maternal mortality by 30% through timely prenatal diagnostics and emergency teleconsultations. In conflict zones like Yemen, AMCs deliver trauma care and mental health support to displaced populations, while in rural Nepal, they slash diagnostic wait times from weeks to minutes, enabling early detection of infectious diseases. During the COVID-19 pandemic, AMCs served as mobile testing and vaccination hubs in India and Brazil, screening over 5,000 patients monthly. By partnering with governments and NGOs, the project has scaled into national health systems—Rwanda, for instance, integrated 100 AMCs into its Universal Health Coverage strategy, reaching 2 million citizens annually.
Ethical imperatives underpin the AMC model. Bias-mitigated AI ensures equitable care across ethnicities and income levels, with 55% of users in pilot studies from low-income households. Privacy-by-design principles, including on-device data processing and zero-retention policies, comply with global standards like GDPR, building trust in marginalized communities. Cross-subsidized financing—urban users fund rural deployments through minimal fees—ensures affordability while fostering scalability.
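The financing model is described only qualitatively above; as a back-of-the-envelope sketch with invented numbers (the project's actual fee structure is not restated here), the break-even urban fee for funding a rural deployment reduces to a single division:

```python
# Toy cross-subsidy arithmetic; all figures are invented for illustration.
def breakeven_urban_fee(rural_cost_per_year: float, urban_visits_per_year: int) -> float:
    """Fee per urban visit needed to fully fund one rural deployment."""
    return rural_cost_per_year / urban_visits_per_year

# e.g. a $120,000/year rural clinic funded by 60,000 urban visits per year
print(f"${breakeven_urban_fee(120_000, 60_000):.2f} per urban visit")  # -> $2.00
```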
By transcending geographic and economic barriers, AMCs exemplify a transformative vision: healthcare as a universal human right, unshackled from infrastructure or inequality. This innovation not only addresses urgent global challenges but also pioneers a future where technology serves humanity with equity, resilience, and dignity at its core.
https://iris.who.int/handle/10665/362183
Project 2 - AIRSHIP: Empowering Intelligent Robots through Embodied AI
While embodied AI holds immense potential for shaping the future economy, it presents significant challenges, particularly in the realm of computing. Achieving the necessary flexibility, efficiency, and scalability demands sophisticated computational resources, but the most pressing challenge remains software complexity. Complexity often leads to inflexibility.
Embodied AI systems must seamlessly integrate a wide array of functionalities, from environmental perception and physical interaction to the execution of complex tasks. This requires the harmonious operation of components such as sensor data analysis, advanced algorithmic processing, and precise actuator control. To support the diverse range of robotic forms and their specific tasks, a versatile and adaptable software stack is essential. However, creating a unified software architecture that ensures cohesive operation across these varied elements introduces substantial complexity, making it difficult to build a streamlined and efficient software ecosystem.
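AIRSHIP's own architecture is described below and in its documentation; purely as an illustration of why narrow interfaces tame this integration complexity, the sketch shows one conventional way to decouple perception, planning, and actuation so that modules can be swapped without touching each other. The interface names are hypothetical.

```python
# A minimal perception-planning-control loop behind narrow interfaces.
# Interfaces are illustrative, not AIRSHIP's actual API.
from abc import ABC, abstractmethod

class Perception(ABC):
    @abstractmethod
    def sense(self) -> dict: ...          # e.g. fused camera/LiDAR observations

class Planner(ABC):
    @abstractmethod
    def plan(self, world: dict, goal: str) -> list[str]: ...

class Controller(ABC):
    @abstractmethod
    def execute(self, action: str) -> None: ...

class Robot:
    """Composes any conforming modules; swapping one never touches the others."""
    def __init__(self, perception: Perception, planner: Planner, controller: Controller):
        self.perception, self.planner, self.controller = perception, planner, controller

    def run(self, goal: str) -> None:
        world = self.perception.sense()
        for action in self.planner.plan(world, goal):
            self.controller.execute(action)
```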
AIRSHIP has been developed to tackle the problem of software complexity in embodied AI. Its mission is to provide an easy-to-deploy software stack that empowers a wide variety of intelligent robots, thereby facilitating scalability and accelerating the commercialization of the embodied AI sector. AIRSHIP takes inspiration from Android, which played a crucial role in the mobile computing revolution by offering an open-source, flexible platform. Android enabled a wide range of device manufacturers to create smartphones and tablets at different price points, sparking rapid innovation and competition. This led to the widespread availability of affordable and powerful mobile devices. Android's robust ecosystem, supported by a vast library of apps through the Google Play Store, allowed developers to reach a global audience, significantly advancing mobile technology adoption.
Similarly, AIRSHIP's vision is to empower robot builders by providing an open-source embodied AI software stack. This platform enables the creation of truly intelligent robots capable of performing a variety of tasks that were previously unattainable at a reasonable cost. AIRSHIP’s motto, "Stronger United, Yet Distinct," embodies the belief that true intelligence emerges through integration, but such integration should enhance, not constrain, the creative possibilities for robotic designers, allowing for distinct and innovative designs.
To realize this vision, AIRSHIP has been designed with flexibility, extensibility, and intelligence at its core. In this release, AIRSHIP offers both software and hardware specifications, enabling robotic builders to develop complete embodied AI systems for a range of scenarios, including home, retail, and warehouse environments. AIRSHIP is capable of understanding natural language instructions and executing navigation and grasping tasks based on those instructions. The current AIRSHIP robot form factor features a hybrid design that includes a wheeled chassis, a robotic arm, a suite of sensors, and an embedded computing system. However, AIRSHIP is rapidly evolving, with plans to support many more form factors in the near future. The software architecture follows a hierarchical and modular design, incorporating large model capabilities into traditional robot software stacks. This modularity allows developers to customize the AIRSHIP software and swap out modules to meet specific application requirements.
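The release itself documents the concrete stack; the following is only a hedged sketch of the hierarchical idea, with a large-model planner layered over traditional navigation and grasping skills. Every function name here is hypothetical.

```python
# Hedged sketch of a "large model over skills" hierarchy in the spirit of the
# design described above; names are hypothetical, not AIRSHIP's actual API.
SKILLS = {}

def skill(name):
    """Register a low-level skill so the planner can dispatch to it by name."""
    def register(fn):
        SKILLS[name] = fn
        return fn
    return register

@skill("navigate")
def navigate(target: str) -> None:
    print(f"[chassis] driving to {target}")

@skill("grasp")
def grasp(obj: str) -> None:
    print(f"[arm] grasping {obj}")

def plan_with_large_model(instruction: str) -> list[tuple[str, str]]:
    """Stand-in for a large model that turns language into skill invocations."""
    # A real system would prompt an LLM; one plausible plan is hard-coded here.
    return [("navigate", "kitchen"), ("grasp", "red cup")]

def run(instruction: str) -> None:
    for name, arg in plan_with_large_model(instruction):
        SKILLS[name](arg)

run("bring me the red cup from the kitchen")
```

Because every skill registers through one dispatch table, a builder can replace the navigation module, the gripper, or the planner independently, which is the modularity the paragraph above describes.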
https://airs.cuhk.edu.cn/en/airship
Project 3 - AIRSPEED: An Open-source Universal Data Production Platform for Embodied AI
Data acquisition is widely recognized as one of the key focuses in the development of embodied intelligence today. A critical reason is that scaling laws are still considered to hold in embodied intelligence, meaning that the better a model is expected to perform, the more training data it demands (a toy illustration follows the list below). In practice, however, data acquisition runs into several difficulties:
- The cost of collecting large volumes of high-quality human demonstration and robot perception data is prohibitively high.
- It is difficult to collect data across a rich variety of training scenarios, tasks, and categories of robot models.
- During data collection, there are no standards or theories to indicate whether newly collected data improves the quality or richness of the dataset, and by how much.
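To make the scaling-law pressure concrete, here is a toy power-law curve; the exponent and constant are invented, since embodied-AI scaling coefficients have not been pinned down the way they have for language models:

```python
# Toy power-law scaling curve: loss falls as a power of dataset size.
# The coefficients below are invented purely for illustration.
def loss(dataset_size: float, alpha: float = 0.1, c: float = 2.0) -> float:
    return c * dataset_size ** -alpha

# Under this toy curve, halving the loss needs 2**(1/alpha) = 1024x more data.
for n in (1e4, 1e6, 1e8):
    print(f"{n:.0e} samples -> loss {loss(n):.3f}")
```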
To address the above issues, we propose AIRSPEED, an open-source embodied intelligence data production platform with the following features:
- Hardware-software decoupling - Reduces software costs through an open-source platform, helping to collect high-quality data at low cost (see the adapter sketch after this list).
- Multi-device support - Supports a variety of data acquisition technologies, covering a rich variety of scenarios, tasks, and robot models and helping to obtain highly generalizable data.
- Multiple simulation platform integration - Helps quickly produce large volumes of synthetic data.
- Automatic dataset construction - Provides a method for constructing embodied intelligence datasets, along with a qualitative assessment of a dataset’s performance potential.
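AIRSPEED's actual interfaces live in its repository; purely as a sketch of the hardware-software decoupling idea, a collector adapter can normalize heterogeneous devices and simulators into a single record format so that new hardware plugs in without core changes. All types below are hypothetical.

```python
# Sketch of hardware-software decoupling: heterogeneous collectors all emit
# one normalized record type. Illustrative only, not AIRSPEED's actual API.
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Any, Iterator

@dataclass
class Record:
    source: str            # e.g. "teleop_glove" or "simulator"
    observation: Any       # raw sensor payload
    action: Any            # demonstrated or simulated action

class Collector(ABC):
    """Any device or simulator plugs in by implementing this one method."""
    @abstractmethod
    def stream(self) -> Iterator[Record]: ...

class SimCollector(Collector):
    def stream(self) -> Iterator[Record]:
        for step in range(3):  # stand-in for a simulator rollout
            yield Record("sim", observation={"step": step}, action="noop")

def build_dataset(collectors: list[Collector]) -> list[Record]:
    """Merge every source into one dataset; adding hardware touches no core code."""
    return [rec for c in collectors for rec in c.stream()]

print(len(build_dataset([SimCollector()])))  # -> 3
```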
https://airs.cuhk.edu.cn/en/airspeed
Project 4 - OmniRL: A Universal Open-Source Foundation Model for Embodied AI
OmniRL (Omnipotent-Reinforcement-Learning) is an advanced reinforcement learning framework designed to master diverse tasks through in-context learning. Meta-trained on a vast dataset of Markov Decision Processes (MDPs), OmniRL leverages its ability to adapt dynamically to novel environments and challenges without requiring task-specific fine-tuning. At its core, the framework integrates principles from imitation learning, reinforcement learning, and offline-RL, enabling it to generalize across a wide spectrum of scenarios. By processing contextual information from trajectories, OmniRL infers optimal policies for unseen tasks, making it a versatile tool for complex decision-making problems.
A key strength of OmniRL lies in its generalized in-context learning capability, which allows it to learn new MDP tasks by analyzing and synthesizing information from diverse training data. Whether through mimicking expert demonstrations (imitation learning), optimizing rewards via trial-and-error (reinforcement learning), or leveraging pre-collected datasets (offline-RL), the framework seamlessly adapts its strategy to the task at hand. This flexibility is further enhanced by its long-horizon reasoning capacity, which empowers OmniRL to process and plan over trajectories spanning up to one million steps. Such scalability ensures robust performance in tasks requiring extended temporal reasoning, such as navigating intricate environments or managing multi-stage processes.
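OmniRL's actual model is in the linked repository; as a hedged sketch of what in-context learning over trajectories means operationally, the toy policy below keeps its parameters frozen and changes behavior only because its context of (state, action, reward) tuples grows:

```python
# Minimal illustration of in-context adaptation: no gradient updates, the
# policy only reads a growing trajectory context. A concept sketch, not
# OmniRL's actual meta-trained model.
import random

def in_context_policy(context: list[tuple[int, int, float]], state: int,
                      n_actions: int = 4, epsilon: float = 0.1) -> int:
    """Pick the action with the best observed mean reward in this state."""
    if random.random() < epsilon:
        return random.randrange(n_actions)   # occasional exploration
    returns: dict[int, list[float]] = {}
    for s, a, r in context:                  # "learning" = reading the context
        if s == state:
            returns.setdefault(a, []).append(r)
    if not returns:
        return random.randrange(n_actions)
    return max(returns, key=lambda a: sum(returns[a]) / len(returns[a]))

context: list[tuple[int, int, float]] = []
state = 0
for _ in range(100):
    action = in_context_policy(context, state)
    reward = 1.0 if action == 2 else 0.0     # toy task: action 2 is best
    context.append((state, action, reward))

print(in_context_policy(context, state, epsilon=0.0))  # typically -> 2
```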
OmniRL’s high generalizability sets it apart, enabling deployment across a broad array of simulated and real-world environments. It excels in both classic control problems, like Pendulum and Mountain Car, and complex navigation tasks, such as Cliff and Lake environments, demonstrating consistent adaptability to varying dynamics and constraints. This broad applicability, combined with its in-context learning efficiency, positions OmniRL as a transformative solution for domains ranging from robotics to autonomous systems, where rapid adaptation to novel scenarios is critical.
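The tasks named above correspond to standard Gymnasium environments (Pendulum-v1, MountainCar-v0, and, presumably, CliffWalking-v0 and FrozenLake-v1 for the Cliff and Lake tasks); a generic evaluation harness, with a random policy standing in for OmniRL since its inference API is not restated here, could look like:

```python
# Generic evaluation harness over the classic tasks mentioned above, using
# the open-source Gymnasium package. The random action is a stand-in for
# the learned policy; OmniRL's actual inference API is in its repository.
import gymnasium as gym

def evaluate(env_id: str, episodes: int = 5, max_steps: int = 500) -> float:
    env = gym.make(env_id)
    total = 0.0
    for _ in range(episodes):
        obs, info = env.reset()
        for _ in range(max_steps):               # cap environments without a time limit
            action = env.action_space.sample()   # replace with the learned policy
            obs, reward, terminated, truncated, info = env.step(action)
            total += reward
            if terminated or truncated:
                break
    env.close()
    return total / episodes

for env_id in ("Pendulum-v1", "MountainCar-v0", "CliffWalking-v0", "FrozenLake-v1"):
    print(env_id, evaluate(env_id))
```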
By unifying meta-learning with scalable in-context reasoning, OmniRL redefines the boundaries of reinforcement learning, offering a powerful, all-in-one framework for mastering the ever-evolving challenges of intelligent systems.
https://github.com/airs-cuhk/airsoul/tree/main/projects/OmniRL