Asa Cooper Stickland
Postdoctoral Researcher, New York University
Verified email at ed.ac.uk - Homepage
Title · Cited by · Year
BERT and PALs: Projected attention layers for efficient adaptation in multi-task learning
AC Stickland, I Murray
International Conference on Machine Learning, 5986-5995, 2019
Cited by 292 · 2019
The reversal curse: LLMs trained on "A is B" fail to learn "B is A"
L Berglund, M Tong, M Kaufmann, M Balesni, AC Stickland, T Korbak, ...
arXiv preprint arXiv:2309.12288, 2023
Cited by 91 · 2023
GPQA: A graduate-level Google-proof Q&A benchmark
D Rein, BL Hou, AC Stickland, J Petty, RY Pang, J Dirani, J Michael, ...
arXiv preprint arXiv:2311.12022, 2023
Cited by 77 · 2023
Recipes for adapting pre-trained monolingual and multilingual models to machine translation
AC Stickland, X Li, M Ghazvininejad
arXiv preprint arXiv:2004.14911, 2020
Cited by 39 · 2020
Multilingual domain adaptation for NMT: Decoupling language and domain information with adapters
AC Stickland, A Berard, V Nikoulina
arXiv preprint arXiv:2110.09574, 2021
Cited by 27 · 2021
Diverse ensembles improve calibration
AC Stickland, I Murray
arXiv preprint arXiv:2007.04206, 2020
Cited by 22 · 2020
Deep transformers with latent depth
X Li, A Cooper Stickland, Y Tang, X Kong
Advances in Neural Information Processing Systems 33, 1736-1746, 2020
Cited by 22 · 2020
Taken out of context: On measuring situational awareness in LLMs
L Berglund, AC Stickland, M Balesni, M Kaufmann, M Tong, T Korbak, D Kokotajlo, O Evans
2023
Cited by 16 · 2023
Taken out of context: On measuring situational awareness in LLMs
L Berglund, AC Stickland, M Balesni, M Kaufmann, M Tong, T Korbak, ...
arXiv preprint arXiv:2309.00667, 2023
Cited by 12 · 2023
When does Parameter-Efficient Transfer Learning Work for Machine Translation?
A Üstün, AC Stickland
arXiv preprint arXiv:2205.11277, 2022
Cited by 8 · 2022
Taken out of context: On measuring situational awareness in LLMs
L Berglund, AC Stickland, M Balesni, M Kaufmann, M Tong, T Korbak, D Kokotajlo, O Evans
arXiv preprint arXiv:2309.00667, 2023
Cited by 7 · 2023
Robustification of multilingual language models to real-world noise in crosslingual zero-shot settings with robust contrastive pretraining
AC Stickland, S Sengupta, J Krone, S Mansour, H He
arXiv preprint arXiv:2210.04782, 2022
Cited by 6 · 2022
Steering without side effects: Improving post-deployment control of language models
AC Stickland, A Lyzhov, J Pfau, S Mahdi, SR Bowman
arXiv preprint arXiv:2406.15518, 2024
Cited by 3 · 2024
Targeted Latent Adversarial Training Improves Robustness to Persistent Harmful Behaviors in LLMs
A Sheshadri, A Ewart, P Guo, A Lynch, C Wu, V Hebbar, H Sleight, ...
arXiv preprint arXiv:2407.15549, 2024
Cited by 1 · 2024
Robustification of Multilingual Language Models to Real-world Noise with Robust Contrastive Pretraining
AC Stickland, S Sengupta, J Krone, S Mansour, H He
arXiv preprint arXiv:2210.04782, 2022
Cited by 1 · 2022
Regularising Fisher Information Improves Cross-lingual Generalisation
AC Stickland, I Murray
Proceedings of the 1st Workshop on Multilingual Representation Learning, 238-241, 2021
Cited by 1 · 2021
Future Events as Backdoor Triggers: Investigating Temporal Vulnerabilities in LLMs
S Price, A Panickssery, S Bowman, AC Stickland
arXiv preprint arXiv:2407.04108, 2024
2024
BERT and PALs: Projected Attention Layers
AC Stickland, I Murray
Articles 1–18