Hello! My name is Yangqiaoyu Zhou (I also go by Rosa). I am a 5th-year PhD student in the CHAI lab at the University of Chicago, working with Chenhao Tan. My research focuses on explainability in natural language processing. In the past, I have studied learning from natural language explanations to improve model capabilities, including robustness to out-of-distribution data, few-shot learning ability, and interpretability. More recently, I have been exploring how large language models can support scientific discovery by generating hypotheses that explain patterns in data and incorporate insights from existing literature.
Besides research, I like rock climbing, playing the erhu, and painting.
Publications
- Literature Meets Data: A Synergistic Approach to Hypothesis Generation
Haokun Liu*, Yangqiaoyu Zhou*, Mingxuan Li*, Chenfei Yuan, Chenhao Tan
In submission
- Hypothesis Generation with Large Language Models
Yangqiaoyu Zhou, Haokun Liu, Tejes Srivastava, Hongyuan Mei, Chenhao Tan
EMNLP Workshop on NLP for Science, 2024
[Poster]
- 🔥FLamE: Few Shot Learning From Explanations
Yangqiaoyu Zhou, Yiming Zhang, Chenhao Tan
ACL, 2023
Oral Presentation
[Poster] [Video]
- Learning to Ignore Adversarial Attacks
Yiming Zhang, Yangqiaoyu Zhou, Samuel Carton, Chenhao Tan
EACL, 2023
- Investigating the Effect of Natural Language Explanations on Out-of-distribution Generalization in Few-shot NLI
Yangqiaoyu Zhou and Chenhao Tan
EMNLP Workshop on Insights from Negative Results in NLP, 2021
[Code] [Video]
- GraPhyC: Using Consensus to Infer Tumor Evolution
Kiya Govek, Camden Sikes, Yangqiaoyu Zhou, Layla Oesper
IEEE/ACM Transactions on Computational Biology and Bioinformatics, 2020