Flexible Reasoning with Large Language Models as Informal Logic Programs

Hongyuan Mei, TTIC
Wednesday, February 21, 2024
12:00 pm - 1:00 pm
Duke Computer Science Colloquium

Lunch will be served at 11:45 AM.

Formal logic programs are useful tools in AI. However, they require users to first express the problem in a formal logic language, which is difficult to do for many real-world problems. In this talk, I will discuss an alternative paradigm, using large language models (LLMs) as informal logic programs. In this paradigm, the propositions are expressed in natural language and the reasoning steps are carried out by a prompted LLM.
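
To make the paradigm concrete, here is a minimal sketch (not the speaker's actual system; all names are hypothetical): a toy backward-chaining prover in which facts, rules, and goals are natural-language strings, and the entailment check that a formal engine would perform by symbolic unification is instead delegated to an LLM call. The `llm_entails` function stands in for a prompted LLM; it is stubbed here with exact string matching so the sketch runs offline.

```python
def llm_entails(premises, hypothesis):
    """Stand-in for a prompted LLM judging whether the premises entail
    the hypothesis. A real system would send both to an LLM and parse a
    yes/no answer; this stub uses exact string matching so it runs offline."""
    return hypothesis in premises


def prove(goal, facts, rules, entails, depth=3):
    """Backward-chain over natural-language rules to establish `goal`.

    `rules` is a list of (premise, conclusion) pairs, both plain English.
    `depth` bounds the recursion so the search always terminates."""
    # Base case: the goal follows directly from the known facts.
    if entails(facts, goal):
        return True
    if depth == 0:
        return False
    for premise_nl, conclusion_nl in rules:
        # If a rule's conclusion matches the goal (as judged informally),
        # try to establish that rule's premise instead.
        if entails([conclusion_nl], goal) and prove(
            premise_nl, facts, rules, entails, depth - 1
        ):
            return True
    return False


# Tiny example knowledge base, written entirely in natural language.
facts = ["Tom is a cat."]
rules = [
    ("Tom is a cat.", "Tom is an animal."),
    ("Tom is an animal.", "Tom is a living thing."),
]
```

With a real LLM behind `entails`, the matching becomes flexible: "Tom is a feline." could satisfy a rule about cats even though the strings differ, which is exactly what a formal logic program cannot do without manual encoding.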

This talk will present three problems effectively addressed by this paradigm: (1) event sequence modeling and prediction, the task of reasoning about future events given the past; (2) natural language entailment; and (3) embodied reasoning, in which a robot must plan multiple steps to complete a task. On all three problems, our paradigm achieves stronger results than classical methods based on formal logic programs and than LLMs used as standalone solvers.

Dr. Hongyuan Mei is a Research Assistant Professor at the Toyota Technological Institute at Chicago (TTIC). He obtained his PhD in Computer Science from Johns Hopkins University (JHU). Hongyuan's research spans machine learning and natural language processing; he is currently most interested in harnessing and improving the reasoning capabilities of large language models to solve challenging problems such as event prediction. His research has been supported by a Bloomberg Data Science PhD Fellowship, the 2020 JHU Jelinek Memorial Award, and research gifts from Adobe and Ant Group. His technical innovations have been integrated into real-world products such as Alipay, the world's largest mobile digital payment platform, which serves more than one billion users. His research has been covered by Fortune Magazine and Tech At Bloomberg.

Contact: Bhuwan Dhingra