Katherine Lee
Researcher, Google Brain Research
Verified email at google.com
Title · Cited by · Year
Exploring the limits of transfer learning with a unified text-to-text transformer
C Raffel, N Shazeer, A Roberts, K Lee, S Narang, M Matena, Y Zhou, W Li, ...
The Journal of Machine Learning Research 21 (1), 5485-5551, 2020
11550 · 2020
Palm: Scaling language modeling with pathways
A Chowdhery, S Narang, J Devlin, M Bosma, G Mishra, A Roberts, ...
arXiv preprint arXiv:2204.02311, 2022
2083 · 2022
Extracting Training Data from Large Language Models
N Carlini, F Tramer, E Wallace, M Jagielski, A Herbert-Voss, K Lee, ...
USENIX Security Symposium 6, 2021
873 · 2021
PaLM 2 Technical Report
R Anil, AM Dai, O Firat, M Johnson, D Lepikhin, A Passos, S Shakeri, ...
arXiv preprint arXiv:2305.10403, 2023
296 · 2023
Quantifying Memorization Across Neural Language Models
N Carlini, D Ippolito, M Jagielski, K Lee, F Tramer, C Zhang
arXiv preprint arXiv:2202.07646, 2022
234 · 2022
Deduplicating training data makes language models better
K Lee, D Ippolito, A Nystrom, C Zhang, D Eck, C Callison-Burch, N Carlini
arXiv preprint arXiv:2107.06499, 2021
221 · 2021
WT5?! Training Text-to-Text Models to Explain their Predictions
S Narang, C Raffel, K Lee, A Roberts, N Fiedel, K Malkan
arXiv preprint arXiv:2004.14546, 2020
136 · 2020
Hallucinations in neural machine translation
K Lee, O Firat, A Agarwal, C Fannjiang, D Sussillo
94 · 2018
What Does it Mean for a Language Model to Preserve Privacy?
H Brown, K Lee, F Mireshghallah, R Shokri, F Tramèr
2022 ACM Conference on Fairness, Accountability, and Transparency, 2280-2292, 2022
71 · 2022
Propagation of information along the cortical hierarchy as a function of attention while reading and listening to stories
M Regev, E Simony, K Lee, KM Tan, J Chen, U Hasson
Cerebral Cortex 29 (10), 4017-4034, 2019
64 · 2019
Counterfactual Memorization in Neural Language Models
C Zhang, D Ippolito, K Lee, M Jagielski, F Tramèr, N Carlini
arXiv preprint arXiv:2112.12938, 2021
46 · 2021
Measuring Forgetting of Memorized Training Examples
M Jagielski, O Thakkar, F Tramèr, D Ippolito, K Lee, N Carlini, E Wallace, ...
arXiv preprint arXiv:2207.00099, 2022
39 · 2022
Are aligned neural networks adversarially aligned?
N Carlini, M Nasr, CA Choquette-Choo, M Jagielski, I Gao, A Awadalla, ...
arXiv preprint arXiv:2306.15447, 2023
36 · 2023
Preventing Verbatim Memorization in Language Models Gives a False Sense of Privacy
D Ippolito, F Tramèr, M Nasr, C Zhang, M Jagielski, K Lee, ...
arXiv preprint arXiv:2210.17546, 2022
28 · 2022
A Pretrainer's Guide to Training Data: Measuring the Effects of Data Age, Domain Coverage, Quality, & Toxicity
S Longpre, G Yauney, E Reif, K Lee, A Roberts, B Zoph, D Zhou, J Wei, ...
arXiv preprint arXiv:2305.13169, 2023
16 · 2023
MADLAD-400: A Multilingual And Document-Level Large Audited Dataset
S Kudugunta, I Caswell, B Zhang, X Garcia, CA Choquette-Choo, K Lee, ...
arXiv preprint arXiv:2309.04662, 2023
4 · 2023
Students Parrot Their Teachers: Membership Inference on Model Distillation
M Jagielski, M Nasr, C Choquette-Choo, K Lee, N Carlini
arXiv preprint arXiv:2303.03446, 2023
4 · 2023
Talkin’ ’Bout AI Generation: Copyright and the Generative AI Supply Chain
K Lee, AF Cooper, J Grimmelmann
Available at SSRN 4523551, 2023
3 · 2023
Is My Prediction Arbitrary? Measuring Self-Consistency in Fair Classification
AF Cooper, K Lee, S Barocas, C De Sa, S Sen, B Zhang
arXiv preprint arXiv:2301.11562, 2023
3 · 2023
Report of the 1st Workshop on Generative AI and Law
AF Cooper, K Lee, J Grimmelmann, D Ippolito, C Callison-Burch, ...
arXiv preprint arXiv:2311.06477, 2023
2023