Qitai Wang

I am a fifth-year Ph.D. student at the NLPR, Institute of Automation, Chinese Academy of Sciences (CASIA), supervised by Prof. Tieniu Tan and Prof. Zhaoxiang Zhang. Prior to that, I received my Bachelor's degree in Automation from the Department of Automation, Tsinghua University, in 2020. I have also interned at TuSimple.

My research interests include computer vision, 3D perception, video generation models, and driving simulation. I am currently exploring fully generative driving simulation.

Email  /  Google Scholar  /  GitHub  /  Curriculum Vitae

News

  • 2025-01: One paper on scene reconstruction-based driving simulation is accepted to CVPR 2025.
  • 2025-01: One paper on generative driving simulation is accepted to ICLR 2025.
  • 2024-09: One paper on 3D perception is accepted to TPAMI.
  • 2024-09: One paper on driving world models is accepted to the NeurIPS 2024 Datasets and Benchmarks Track.
  • 2024-07: One paper on end-to-end multi-object tracking is accepted to ECCV 2024.
Research

* indicates equal contribution

FreeSim: Toward Free-viewpoint Camera Simulation in Driving Scenes
Lue Fan, Hao Zhang, Qitai Wang, Hongsheng Li, Zhaoxiang Zhang
CVPR 2025
[paper] [page]

Following FreeVS, we propose FreeSim, a generation-reconstruction hybrid method for free-viewpoint camera simulation that takes the best of both worlds.

FreeVS: Generative View Synthesis on Free Driving Trajectory
Qitai Wang, Lue Fan, Yuqi Wang, Yuntao Chen, Zhaoxiang Zhang
ICLR 2025
[paper] [page] [code]

FreeVS is the first method to support high-quality generative view synthesis on free driving trajectories, a crucial step toward generative driving simulation.

DrivingDojo Dataset: Advancing Interactive and Knowledge-Enriched Driving World Model
Yuqi Wang*, Ke Cheng*, Jiawei He*, Qitai Wang*, Hengchen Dai, Yuntao Chen, Fei Xia, Zhaoxiang Zhang
NeurIPS 2024, D&B Track
[paper] [page] [code]

The DrivingDojo dataset features video clips with a complete set of driving maneuvers, diverse multi-agent interplay, and rich open-world driving knowledge, laying a stepping stone for future world model development.

OneTrack: Demystifying the Conflict Between Detection and Tracking in End-to-End 3D Trackers
Qitai Wang, Jiawei He, Yuntao Chen, Zhaoxiang Zhang
ECCV 2024
[paper]

We resolve the long-standing issue of end-to-end multi-object trackers underperforming standalone detectors in perception, enabling a lossless unification of detection and tracking.

Uncertain Object Representation for Image-Based 3D Object Perception
Qitai Wang, Yuntao Chen, Zhaoxiang Zhang
TPAMI
[paper]

We propose an uncertain object representation that accounts for the inherent indeterminacy of localizing 3D objects from images.

Immortal Tracker: Tracklet Never Dies
Qitai Wang, Yuntao Chen, Ziqi Pang, Naiyan Wang, Zhaoxiang Zhang
arXiv
[paper] [code]

Still the best, fastest, and simplest LiDAR-based 3D multi-object tracker to date.


Website Template


    © Qitai Wang | Last updated: May 12, 2025