Jamie Hayes
Google DeepMind
Verified email at google.com
Title
Cited by
Year
LOGAN: evaluating privacy leakage of generative models using generative adversarial networks
J Hayes, L Melis, G Danezis, E De Cristofaro
arXiv preprint arXiv:1705.07663, 506-519, 2017
Cited by 685* · 2017
k-fingerprinting: A robust scalable website fingerprinting technique
J Hayes, G Danezis
25th USENIX Security Symposium (USENIX Security 16), 1187-1203, 2016
Cited by 476 · 2016
Extracting training data from diffusion models
N Carlini, J Hayes, M Nasr, M Jagielski, V Sehwag, F Tramer, B Balle, ...
32nd USENIX Security Symposium (USENIX Security 23), 5253-5270, 2023
Cited by 451 · 2023
Generating steganographic images via adversarial training
J Hayes, G Danezis
Advances in neural information processing systems 30, 2017
Cited by 346 · 2017
The Loopix anonymity system
AM Piotrowska, J Hayes, T Elahi, S Meiser, G Danezis
26th USENIX Security Symposium (USENIX Security 17), 1199-1216, 2017
Cited by 224 · 2017
Learning universal adversarial perturbations with generative models
J Hayes, G Danezis
2018 IEEE Security and Privacy Workshops (SPW), 43-49, 2018
Cited by 176 · 2018
Unlocking high-accuracy differentially private image classification through scale
S De, L Berrada, J Hayes, SL Smith, B Balle
arXiv preprint arXiv:2204.13650, 2022
Cited by 174 · 2022
Local and central differential privacy for robustness and privacy in federated learning
M Naseri, J Hayes, E De Cristofaro
arXiv preprint arXiv:2009.03561, 2020
Cited by 156 · 2020
Reconstructing training data with informed adversaries
B Balle, G Cherubin, J Hayes
2022 IEEE Symposium on Security and Privacy (SP), 1138-1156, 2022
Cited by 137 · 2022
On visible adversarial perturbations & digital watermarking
J Hayes
Proceedings of the IEEE conference on computer vision and pattern …, 2018
Cited by 132 · 2018
Website fingerprinting defenses at the application layer
G Cherubin, J Hayes, M Juárez
Proceedings on Privacy Enhancing Technologies 2017 (2), 168-185, 2017
Cited by 106 · 2017
Contamination attacks and mitigation in multi-party machine learning
J Hayes, O Ohrimenko
Advances in neural information processing systems 31, 2018
Cited by 101 · 2018
Toward robustness and privacy in federated learning: Experimenting with local and central differential privacy
M Naseri, J Hayes, E De Cristofaro
arXiv preprint arXiv:2009.03561, 2020
Cited by 92 · 2020
Towards unbounded machine unlearning
M Kurmanji, P Triantafillou, J Hayes, E Triantafillou
Advances in neural information processing systems 36, 2024
Cited by 84 · 2024
A framework for robustness certification of smoothed classifiers using f-divergences
KD Dvijotham, J Hayes, B Balle, Z Kolter, C Qin, A Gyorgy, K Xiao, ...
International Conference on Learning Representations, 2020
Cited by 57 · 2020
Tight auditing of differentially private machine learning
M Nasr, J Hayes, T Steinke, B Balle, F Tramèr, M Jagielski, N Carlini, ...
32nd USENIX Security Symposium (USENIX Security 23), 1631-1648, 2023
Cited by 55 · 2023
Differentially private diffusion models generate useful synthetic images
S Ghalebikesabi, L Berrada, S Gowal, I Ktena, R Stanforth, J Hayes, S De, ...
arXiv preprint arXiv:2302.13861, 2023
Cited by 53 · 2023
Guard Sets for Onion Routing
J Hayes, G Danezis
Proceedings on Privacy Enhancing Technologies 2015 (2), 65–80, 2015
Cited by 38* · 2015
Bounding training data reconstruction in DP-SGD
J Hayes, B Balle, S Mahloujifar
Advances in Neural Information Processing Systems 36, 2024
Cited by 26 · 2024
Extensions and limitations of randomized smoothing for robustness guarantees
J Hayes
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2020
Cited by 26 · 2020
Articles 1–20