Postdoc at Stanford HAI → Incoming Assistant Professor at UBC CS (2025)

**CV | Twitter | Scholar | [email protected] | he/him**



Bio

I am an incoming Assistant Professor (2025) at UBC and currently a postdoc at Stanford HAI. I work on Natural Language Processing and AI. My research broadly studies the capabilities and limits of large language models (and other generative AI systems).

While the popular view of LLMs is through the lens of engineering (how do we make models do what we want?), I tend to view them through a natural sciences lens (what do models want to do in the first place?). While these models are clearly very useful, much of my work operates on the assumption that we have very little idea what they contain, and how that might differ from the human intelligence we are accustomed to. Many of the questions I am working to answer are related: Where do LLMs fail while humans succeed, and vice versa? Are compact, small-scale LLMs massively underestimated? How do current alignment techniques alter or degrade language understanding?

My work has been recognized with multiple awards, including the Best Method Paper award at NAACL 2022 and Outstanding Paper awards at ACL 2023 and EMNLP 2023. I was also awarded an NSERC PGS-D fellowship, which supported my PhD in part, and received an honorable mention for the NSF GRFP.

I received my PhD from the University of Washington in 2024, working with Yejin Choi, and my BSc (Honours Computer Science) from the University of British Columbia in 2017. I have interned at the Allen Institute for AI (Mosaic team) and at Microsoft Research (Natural Language Processing group). Outside of research, I love cooking, especially making bread, pasta, ice cream, and cocktails, as well as movies of all kinds and music.

<aside> 🔔

For Prospective Students

My current work centres on the analysis, use, and limits of LLMs. If our interests might align, I encourage you to apply for a graduate position at UBC (Vancouver campus) in NLP/AI.

Please mention me in your statement if you are interested in working together. I cannot guarantee that I will respond to emails specifically about admissions, and getting in contact will not affect the admissions process. The best way to get my attention is through your grad application, which I will read carefully.

That said, I am happy (time permitting) to answer questions and give advice more generally about graduate school and the application process. It tends to be a complicated process, and an unfair barrier for many students hoping to take part in research.

</aside>


News

<aside> 🗣

03/2025

Invited talk at the TTIC NLP Seminar

</aside>

<aside> 🗣

12/2024

Served as a panelist at the Future of NLP Workshop at NeurIPS 2024

Get in touch if you are applying to UBC NLP for graduate school or if we should collaborate!

</aside>

<aside> 📢

09/2024

I am beginning a postdoc at Stanford with Chris Potts

</aside>

<aside> 📢

08/2024

I have accepted a faculty position at the UBC Computer Science Department

Prospective PhD/MSc students should apply here

</aside>

<aside> 📢

06/2024

We presented The Generative AI Paradox at ICLR 2024!

</aside>


Research Group

Join

I am seeking curious minds to join me in exploring the mysteries of large language models and generative AI systems! See my bio above for more information.


Members

Principal Investigator

Peter West

MSc Students

PhD Students


Publications

Recent Core Work (non-exhaustive)

<aside> 📄

Predicting vs. Acting: A Trade-off Between World Modeling & Agent Modeling

→ arXiv:2407.02446

Margaret Li, Weijia Shi, Artidoro Pagnoni, Peter West, Ari Holtzman

[paper]

</aside>

<aside> 📄

The Generative AI Paradox: “What It Can Create, It May Not Understand”

→ ICLR 2024

=Peter West, =Ximing Lu, =Nouha Dziri, =Faeze Brahman, =Linjie Li, Jena D. Hwang, Liwei Jiang, Jillian Fisher, Abhilasha Ravichander, Khyathi Chandu, Benjamin Newman, Pang Wei Koh, Allyson Ettinger, Yejin Choi

= co-first author

[paper]

</aside>

<aside> 📄

Generative Models as a Complex Systems Science: How can we make sense of large language model behavior?

→ arXiv:2308.00189

Ari Holtzman, Peter West, Luke Zettlemoyer

[paper]

</aside>

<aside> 📄

Impossible Distillation for Paraphrasing and Summarization: How to Make High-quality Lemonade out of Small, Low-quality Models

→ NAACL 2024

Jaehun Jung, Peter West, Liwei Jiang, Faeze Brahman, Ximing Lu, Jillian Fisher, Taylor Sorensen, Yejin Choi

[paper]

</aside>

<aside> 📄

Minding Language Models' (Lack of) Theory of Mind: A Plug-and-Play Multi-Character Belief Tracker

→ ACL 2023 Outstanding Paper Award!

Melanie Sclar, Sachin Kumar, Peter West, Alane Suhr, Yejin Choi, Yulia Tsvetkov

[paper]

</aside>

<aside> 📄

Generating Sequences by Learning to Self-Correct

→ ICLR 2023

=Sean Welleck, =Ximing Lu, ♡Peter West, ♡Faeze Brahman, Tianxiao Shen, Daniel Khashabi, Yejin Choi

= co-first author

♡ co-second author

[paper]

</aside>

<aside> 📄

Symbolic knowledge distillation: from general language models to commonsense models

→ NAACL 2022

Peter West, Chandra Bhagavatula, Jack Hessel, Jena D Hwang, Liwei Jiang, Ronan Le Bras, Ximing Lu, Sean Welleck, Yejin Choi

[paper] [code and data]

</aside>
