Ellie Pavlick
Verified email at brown.edu - Homepage
Title | Cited by | Year
BERT rediscovers the classical NLP pipeline
I Tenney, D Das, E Pavlick
arXiv preprint arXiv:1905.05950, 2019
Cited by 1673 · 2019
BLOOM: A 176B-parameter open-access multilingual language model
T Le Scao, A Fan, C Akiki, E Pavlick, S Ilić, D Hesslow, R Castagné, ...
Cited by 1482 · 2023
Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference
RT McCoy, E Pavlick, T Linzen
arXiv preprint arXiv:1902.01007, 2019
Cited by 1226 · 2019
Beyond the imitation game: Quantifying and extrapolating the capabilities of language models
A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ...
arXiv preprint arXiv:2206.04615, 2022
Cited by 999 · 2022
What do you learn from context? Probing for sentence structure in contextualized word representations
I Tenney, P Xia, B Chen, A Wang, A Poliak, RT McCoy, N Kim, ...
arXiv preprint arXiv:1905.06316, 2019
Cited by 890 · 2019
Optimizing statistical machine translation for text simplification
W Xu, C Napoles, E Pavlick, Q Chen, C Callison-Burch
Transactions of the Association for Computational Linguistics 4, 401-415, 2016
Cited by 658 · 2016
OpenWebText Corpus
A Gokaslan, V Cohen, E Pavlick, S Tellex
Cited by 422 · 2019
PPDB 2.0: Better paraphrase ranking, fine-grained entailment relations, word embeddings, and style classification
E Pavlick, J Ganitkevitch, P Rastogi, B Van Durme, C Callison-Burch
Proceedings of ACL-IJCNLP 2015 (Volume 2: Short Papers), 425, 2015
Cited by 392 · 2015
Do prompt-based models really understand the meaning of their prompts?
A Webson, E Pavlick
arXiv preprint arXiv:2109.01247, 2021
Cited by 332 · 2021
Inherent disagreements in human textual inferences
E Pavlick, T Kwiatkowski
Transactions of the Association for Computational Linguistics 7, 677-694, 2019
Cited by 271 · 2019
What happens to BERT embeddings during fine-tuning?
A Merchant, E Rahimtoroghi, E Pavlick, I Tenney
arXiv preprint arXiv:2004.14448, 2020
Cited by 195 · 2020
An empirical analysis of formality in online communication
E Pavlick, J Tetreault
Transactions of the Association for Computational Linguistics 4, 61-74, 2016
Cited by 165 · 2016
Collecting diverse natural language inference problems for sentence representation evaluation
A Poliak, A Haldar, R Rudinger, JE Hu, E Pavlick, AS White, B Van Durme
arXiv preprint arXiv:1804.08207, 2018
Cited by 163 · 2018
Mapping language models to grounded conceptual spaces
R Patel, E Pavlick
International conference on learning representations, 2022
Cited by 122 · 2022
Simple PPDB: A Paraphrase Database for Simplification
E Pavlick, C Callison-Burch
Cited by 121 · 2016
Can you tell me how to get past Sesame Street? Sentence-level pretraining beyond language modeling
A Wang, J Hula, P Xia, R Pappagari, RT McCoy, R Patel, N Kim, I Tenney, ...
arXiv preprint arXiv:1812.10860, 2018
Cited by 120 · 2018
Probing what different NLP tasks teach machines about function word comprehension
N Kim, R Patel, A Poliak, A Wang, P Xia, RT McCoy, I Tenney, A Ross, ...
arXiv preprint arXiv:1904.11544, 2019
Cited by 104 · 2019
Measuring and reducing gendered correlations in pre-trained models
K Webster, X Wang, I Tenney, A Beutel, E Pitler, E Pavlick, J Chen, E Chi, ...
arXiv preprint arXiv:2010.06032, 2020
Cited by 103 · 2020
Can language models encode perceptual structure without grounding? A case study in color
M Abdou, A Kulmizev, D Hershcovich, S Frank, E Pavlick, A Søgaard
arXiv preprint arXiv:2109.06129, 2021
Cited by 96 · 2021
The language demographics of Amazon Mechanical Turk
E Pavlick, M Post, A Irvine, D Kachaev, C Callison-Burch
Transactions of the Association for Computational Linguistics 2, 79-92, 2014
Cited by 95 · 2014
Articles 1–20