Eric Wallace
Title
Cited by
Year
AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts
T Shin, Y Razeghi, RL Logan IV, E Wallace, S Singh
EMNLP 2020, 2020
Cited by 1277 · 2020
Extracting Training Data from Large Language Models
N Carlini, F Tramer, E Wallace, M Jagielski, A Herbert-Voss, K Lee, ...
USENIX Security 2021, 2020
Cited by 1174 · 2020
Calibrate Before Use: Improving Few-Shot Performance of Language Models
TZ Zhao*, E Wallace*, S Feng, D Klein, S Singh
ICML 2021, 2021
Cited by 831 · 2021
Beyond the Imitation Game: Quantifying and Extrapolating the Capabilities of Language Models
A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ...
TMLR 2023, 2022
Cited by 719 · 2022
Universal Adversarial Triggers for Attacking and Analyzing NLP
E Wallace, S Feng, N Kandpal, M Gardner, S Singh
EMNLP 2019, 2019
Cited by 713 · 2019
Evaluating Models' Local Decision Boundaries via Contrast Sets
M Gardner, Y Artzi, V Basmova, J Berant, B Bogin, S Chen, P Dasigi, ...
EMNLP Findings 2020, 2020
Cited by 418 · 2020
Pretrained Transformers Improve Out-of-Distribution Robustness
D Hendrycks, X Liu, E Wallace, A Dziedzic, R Krishnan, D Song
ACL 2020, 2020
Cited by 402 · 2020
InCoder: A Generative Model for Code Infilling and Synthesis
D Fried, A Aghajanyan, J Lin, S Wang, E Wallace, F Shi, R Zhong, W Yih, ...
ICLR 2023, 2022
Cited by 354 · 2022
Pathologies of Neural Models Make Interpretations Difficult
S Feng, E Wallace, A Grissom II, M Iyyer, P Rodriguez, J Boyd-Graber
EMNLP 2018, 2018
Cited by 343 · 2018
Do NLP Models Know Numbers? Probing Numeracy in Embeddings
E Wallace*, Y Wang*, S Li, S Singh, M Gardner
EMNLP 2019, 2019
Cited by 271 · 2019
Extracting Training Data from Diffusion Models
N Carlini, J Hayes, M Nasr, M Jagielski, V Sehwag, F Tramer, B Balle, ...
USENIX Security 2023, 2023
Cited by 266 · 2023
Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers
Z Li*, E Wallace*, S Shen*, K Lin*, K Keutzer, D Klein, JE Gonzalez
ICML 2020, 2020
Cited by 244 · 2020
Trick Me If You Can: Human-in-the-loop Generation of Adversarial Question Answering Examples
E Wallace, P Rodriguez, S Feng, I Yamada, J Boyd-Graber
TACL 2019, 2019
Cited by 158* · 2019
Cutting Down on Prompts and Parameters: Simple Few-Shot Learning with Language Models
RL Logan IV, I Balažević, E Wallace, F Petroni, S Singh, S Riedel
ACL Findings 2022, 2021
Cited by 151 · 2021
Koala: A Dialogue Model for Academic Research
X Geng*, A Gudibande*, H Liu*, E Wallace*, P Abbeel, S Levine, D Song
BAIR Blog, 2023
Cited by 144 · 2023
AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models
E Wallace, J Tuyls, J Wang, S Subramanian, M Gardner, S Singh
EMNLP Demo, 2019
Cited by 144 · 2019
Large Language Models Struggle to Learn Long-Tail Knowledge
N Kandpal, H Deng, A Roberts, E Wallace, C Raffel
ICML 2023, 2022
Cited by 143 · 2022
Compositional Questions Do Not Necessitate Multi-hop Reasoning
S Min*, E Wallace*, S Singh, M Gardner, H Hajishirzi, L Zettlemoyer
ACL 2019, 2019
Cited by 142 · 2019
Deduplicating Training Data Mitigates Privacy Risks in Language Models
N Kandpal, E Wallace, C Raffel
ICML 2022, 2022
Cited by 133 · 2022
Concealed Data Poisoning Attacks on NLP Models
E Wallace*, TZ Zhao*, S Feng, S Singh
NAACL 2021, 2020
Cited by 120* · 2020