Tech talk

Building Efficient Mathematical Reasoners in the LLM Era

Tech Talk with Zhenwen Liang

Abstract

This talk highlights two innovative approaches for specialized mathematical reasoning within the context of large language models (LLMs). The first approach introduces multi-view fine-tuning, leveraging diverse mathematical problem datasets to train models like LLaMA-7B more efficiently and flexibly. This method outperforms traditional knowledge distillation techniques and demonstrates strong generalization across various datasets. The second approach employs GPT-3 as a math tutor to distill mathematical capabilities into smaller student models through targeted exercises aligned with educational principles. This technique achieves high accuracy across different benchmarks while using significantly fewer parameters. Together, these strategies present a promising direction for the efficient training of specialized mathematical reasoning models in the LLM era.

Bio

Zhenwen is a PhD student at the University of Notre Dame, working on NLP for mathematical reasoning and large language models. He was previously a research intern at the Allen Institute for AI (AI2) and Tencent AI Lab. Before Notre Dame, he earned a B.E. degree in Computer Science at the University of Electronic Science and Technology of China (UESTC) and an M.S. degree at King Abdullah University of Science and Technology (KAUST). He has a strong commitment to academic excellence, as demonstrated by his publications in IJCAI, NAACL, EMNLP, AAAI, ACL, etc., and his active participation in conferences such as NAACL-22, EMNLP-22, and AAAI-23. He was also selected to receive the AAAI travel scholarship, and he organized the math reasoning tutorial at IJCAI-23 and the MATH-AI workshop at NeurIPS'23.

Do you want to catch the Tech Talk on YouTube?

Just head over to Pi School’s YouTube channel.

Want more Tech Talks?

They’re right here!