Ailing Zeng 曾爱玲
Biography
I'm a member of technical staff at Anuttacon, leading the development of a human-centric interactive multimodal video generation system. These models enable agents to perceive, interact with, and generate real-time, long-horizon video behaviors. Previously, I spent three wonderful years at Tencent Hunyuan & AI Lab and the International Digital Economy Academy (IDEA), leading a human-centric perception and generation research team. I obtained my Ph.D. from the Department of Computer Science and Engineering, The Chinese University of Hong Kong, supervised by Prof. Qiang Xu. I was also a visiting scholar at the Robotics Institute, Carnegie Mellon University. Selected previous research includes:
1) Human-centric visual perception with large-scale data and generic models: IDOL, AiOS, SMPLer-X, OSX, DW-Pose, ED-Pose, SmoothNet, DeciWatch
2) Large-scale multi-modality datasets: Motion-X, UBody, Uni-KPT, BallPlay, HuMMan, Human-Art
3) Human-centric generative models: MotionCraft, HumanSD, PhysHOI, Dreamwaltz, HumanTOMATO, DiffSHEG
4) Interactive AI and human-in-the-loop techniques: X-Pose, Click-Pose, Grounded-SAM
5) Earlier work on time series analysis and forecasting: LTSF-Linear, SCINet, FITS

We are hiring full-time researchers, engineers, and interns based in Mountain View or Singapore; see our open roles. Feel free to reach out if you are interested.

News
Selected Research
See the full list at Google Scholar. (* equal contribution; # corresponding author or project lead)