My research focuses on robotics, spatial AI, SLAM, 3D/4D scene representation, and Gaussian splatting. I am interested in how robots can perceive the physical world, build persistent internal models, understand geometry and dynamics, and act intelligently over long horizons. I completed my master's and PhD at Peking University and the Technical University of Munich, where I worked on nonlinear least-squares optimization and geometric computer vision.
My long-term goal is to build next-generation intelligent robots from body to mind, combining robot hardware, spatial understanding, persistent world modeling, and robot action policy learning.
I am actively recruiting motivated PhD and MPhil students, research assistants, and interns to join the Embodied Spatial AI Lab at HKUST(GZ), starting in Fall 2026 or Spring 2027.
Geometry-aware perception systems that enable robots to localize, map, and understand complex real-world environments through spatially consistent representations.
Persistent 3D and 4D world models that support long-horizon robot autonomy through geometry-centric tracking, reconstruction, and structured scene representations.
Learning-based robot systems that connect perception, representation, and reasoning with decision-making and interaction in dynamic environments.
I am actively recruiting PhD and MPhil students, research assistants, and interns to join my lab at HKUST(GZ). My research focuses on vision-language-action (VLA) models, imitation learning, reinforcement learning, SLAM, 3D/4D scene representation, and scene understanding.
Interested students are encouraged to contact me with their CV and a brief description of their research interests.
I look for self-motivation, curiosity, strong execution, and the ability to connect solid theoretical understanding with real-world robotic systems.