Luke Zettlemoyer
Title
Cited by
Year
RoBERTa: A robustly optimized BERT pretraining approach
Y Liu, M Ott, N Goyal, J Du, M Joshi, D Chen, O Levy, M Lewis, ...
arXiv preprint arXiv:1907.11692, 2019
30107* · 2019
Deep contextualized word representations
ME Peters, M Neumann, M Iyyer, M Gardner, C Clark, K Lee, ...
NAACL, 2018
16293* · 2018
BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension
M Lewis
arXiv preprint arXiv:1910.13461, 2019
11510 · 2019
Unsupervised cross-lingual representation learning at scale
A Conneau
arXiv preprint arXiv:1911.02116, 2019
6561 · 2019
OPT: Open pre-trained transformer language models
S Zhang, S Roller, N Goyal, M Artetxe, M Chen, S Chen, C Dewan, ...
arXiv preprint arXiv:2205.01068, 2022
3427* · 2022
SpanBERT: Improving pre-training by representing and predicting spans
M Joshi, D Chen, Y Liu, DS Weld, L Zettlemoyer, O Levy
Transactions of the association for computational linguistics 8, 64-77, 2020
2267 · 2020
TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension
M Joshi, E Choi, DS Weld, L Zettlemoyer
arXiv preprint arXiv:1705.03551, 2017
2259 · 2017
QLoRA: Efficient finetuning of quantized LLMs
T Dettmers, A Pagnoni, A Holtzman, L Zettlemoyer
Advances in Neural Information Processing Systems 36, 2024
2007 · 2024
Multilingual denoising pre-training for neural machine translation
Y Liu
arXiv preprint arXiv:2001.08210, 2020
1870 · 2020
AllenNLP: A deep semantic natural language processing platform
M Gardner, J Grus, M Neumann, O Tafjord, P Dasigi, N Liu, M Peters, ...
arXiv preprint arXiv:1803.07640, 2018
1443 · 2018
Toolformer: Language models can teach themselves to use tools
T Schick, J Dwivedi-Yu, R Dessì, R Raileanu, M Lomeli, E Hambro, ...
Advances in Neural Information Processing Systems 36, 2024
1338 · 2024
Knowledge-based weak supervision for information extraction of overlapping relations
R Hoffmann, C Zhang, X Ling, L Zettlemoyer, DS Weld
Proceedings of the 49th annual meeting of the association for computational …, 2011
1257 · 2011
Rethinking the role of demonstrations: What makes in-context learning work?
S Min, X Lyu, A Holtzman, M Artetxe, M Lewis, H Hajishirzi, L Zettlemoyer
arXiv preprint arXiv:2202.12837, 2022
1191 · 2022
End-to-end neural coreference resolution
K Lee, L He, M Lewis, L Zettlemoyer
arXiv preprint arXiv:1707.07045, 2017
1188 · 2017
Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars
LS Zettlemoyer, M Collins
Conference on Uncertainty in Artificial Intelligence (UAI), 2005
1165* · 2005
QuAC: Question answering in context
E Choi, H He, M Iyyer, M Yatskar, W Yih, Y Choi, P Liang, L Zettlemoyer
arXiv preprint arXiv:1808.07036, 2018
960 · 2018
Summarizing source code using a neural attention model
S Iyer, I Konstas, A Cheung, L Zettlemoyer
54th Annual Meeting of the Association for Computational Linguistics 2016 …, 2016
875 · 2016
GPT3.int8(): 8-bit matrix multiplication for transformers at scale
T Dettmers, M Lewis, Y Belkada, L Zettlemoyer
Advances in Neural Information Processing Systems 35, 30318-30332, 2022
853 · 2022
Generalization through memorization: Nearest neighbor language models
U Khandelwal, O Levy, D Jurafsky, L Zettlemoyer, M Lewis
arXiv preprint arXiv:1911.00172, 2019
807 · 2019
LIMA: Less is more for alignment
C Zhou, P Liu, P Xu, S Iyer, J Sun, Y Mao, X Ma, A Efrat, P Yu, L Yu, ...
Advances in Neural Information Processing Systems 36, 2024
802 · 2024
Articles 1–20