Luke Zettlemoyer
Verified email at cs.washington.edu - Homepage
Title
Cited by
Year
RoBERTa: A robustly optimized BERT pretraining approach
Y Liu, M Ott, N Goyal, J Du, M Joshi, D Chen, O Levy, M Lewis, ...
arXiv preprint arXiv:1907.11692, 2019
Cited by 29921*, 2019
Deep contextualized word representations
ME Peters, M Neumann, M Iyyer, M Gardner, C Clark, K Lee, ...
NAACL, 2018
Cited by 16267*, 2018
BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension
M Lewis
arXiv preprint arXiv:1910.13461, 2019
Cited by 11378, 2019
Unsupervised cross-lingual representation learning at scale
A Conneau
arXiv preprint arXiv:1911.02116, 2019
Cited by 6516, 2019
OPT: Open pre-trained transformer language models
S Zhang, S Roller, N Goyal, M Artetxe, M Chen, S Chen, C Dewan, ...
arXiv preprint arXiv:2205.01068, 2022
Cited by 3395*, 2022
SpanBERT: Improving pre-training by representing and predicting spans
M Joshi, D Chen, Y Liu, DS Weld, L Zettlemoyer, O Levy
Transactions of the Association for Computational Linguistics 8, 64-77, 2020
Cited by 2257, 2020
TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension
M Joshi, E Choi, DS Weld, L Zettlemoyer
arXiv preprint arXiv:1705.03551, 2017
Cited by 2244, 2017
QLoRA: Efficient finetuning of quantized LLMs
T Dettmers, A Pagnoni, A Holtzman, L Zettlemoyer
Advances in Neural Information Processing Systems 36, 2024
Cited by 1967, 2024
Multilingual denoising pre-training for neural machine translation
Y Liu
arXiv preprint arXiv:2001.08210, 2020
Cited by 1864, 2020
AllenNLP: A deep semantic natural language processing platform
M Gardner, J Grus, M Neumann, O Tafjord, P Dasigi, N Liu, M Peters, ...
arXiv preprint arXiv:1803.07640, 2018
Cited by 1441, 2018
Toolformer: Language models can teach themselves to use tools
T Schick, J Dwivedi-Yu, R Dessì, R Raileanu, M Lomeli, E Hambro, ...
Advances in Neural Information Processing Systems 36, 2024
Cited by 1320, 2024
Knowledge-based weak supervision for information extraction of overlapping relations
R Hoffmann, C Zhang, X Ling, L Zettlemoyer, DS Weld
Proceedings of the 49th annual meeting of the association for computational …, 2011
Cited by 1253, 2011
End-to-end neural coreference resolution
K Lee, L He, M Lewis, L Zettlemoyer
arXiv preprint arXiv:1707.07045, 2017
Cited by 1185, 2017
Rethinking the role of demonstrations: What makes in-context learning work?
S Min, X Lyu, A Holtzman, M Artetxe, M Lewis, H Hajishirzi, L Zettlemoyer
arXiv preprint arXiv:2202.12837, 2022
Cited by 1178, 2022
Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars
LS Zettlemoyer, M Collins
Conference on Uncertainty in Artificial Intelligence (UAI), 2005
Cited by 1165*, 2005
QuAC: Question answering in context
E Choi, H He, M Iyyer, M Yatskar, W Yih, Y Choi, P Liang, L Zettlemoyer
arXiv preprint arXiv:1808.07036, 2018
Cited by 955, 2018
Summarizing source code using a neural attention model
S Iyer, I Konstas, A Cheung, L Zettlemoyer
54th Annual Meeting of the Association for Computational Linguistics 2016 …, 2016
Cited by 873, 2016
GPT3.int8(): 8-bit matrix multiplication for transformers at scale
T Dettmers, M Lewis, Y Belkada, L Zettlemoyer
Advances in Neural Information Processing Systems 35, 30318-30332, 2022
Cited by 843, 2022
Generalization through memorization: Nearest neighbor language models
U Khandelwal, O Levy, D Jurafsky, L Zettlemoyer, M Lewis
arXiv preprint arXiv:1911.00172, 2019
Cited by 802, 2019
Adversarial example generation with syntactically controlled paraphrase networks
M Iyyer, J Wieting, K Gimpel, L Zettlemoyer
arXiv preprint arXiv:1804.06059, 2018
Cited by 797, 2018
Articles 1–20