About the Lab
ZeroGen Lab is an academic research laboratory dedicated to advancing frontier research in artificial intelligence. Our work focuses on foundational and system-level questions related to large language models (LLMs), including reasoning, knowledge modeling, and cognitive alignment. We study how models understand, organize, and generate knowledge, with the goal of building intelligent systems that are more reliable, interpretable, and scalable.
Our current research spans a range of topics, including LLM reasoning, knowledge representation, long-term memory in large models, sequence modeling, context engineering, OCR capability evaluation for large models, and automated GUI testing. We also welcome self-proposed research directions beyond these areas and warmly invite motivated researchers and students to join us.
Research Methodology and Strengths
At ZeroGen Lab, we emphasize:
- A problem-driven research paradigm: tackling complex, real-world problems rather than focusing solely on metric optimization
- Systematic approaches: integrating algorithms, systems, and data analysis
- Reproducibility and openness: prioritizing experimental transparency and the long-term accumulation of research results
Academic Vision
We aim to advance large language models beyond high-performance text generation toward intelligent systems with robust reasoning and knowledge-understanding capabilities, and to contribute solid theoretical and methodological foundations to the development of general artificial intelligence.
Collaboration and Exchange
We welcome academic collaborations and exchanges with researchers from related fields and encourage students interested in foundational AI research to participate in our work.
If you are interested in collaboration or academic exchange, please feel free to contact us using the information provided below.
Contact Us
Renjun Hu: rjhu@dase.ecnu.edu.cn
Address: West Room 109, Mathematics Building,
3663 Zhongshan North Road, Putuo District, Shanghai