Yanqi Zhou
Verified email at google.com - Homepage
Title
Cited by
Year
Exploring the limits of transfer learning with a unified text-to-text transformer
C Raffel, N Shazeer, A Roberts, K Lee, S Narang, M Matena, Y Zhou, W Li, ...
Journal of machine learning research 21 (140), 1-67, 2020
Cited by 19640 · 2020
Lamda: Language models for dialog applications
R Thoppilan, D De Freitas, J Hall, N Shazeer, A Kulshreshtha, HT Cheng, ...
arXiv preprint arXiv:2201.08239, 2022
Cited by 1551 · 2022
Deep learning scaling is predictable, empirically
J Hestness, S Narang, N Ardalani, G Diamos, H Jun, H Kianinejad, ...
arXiv preprint arXiv:1712.00409, 2017
Cited by 782 · 2017
Deep Voice 2: Multi-Speaker Neural Text-to-Speech
S Arik, G Diamos, A Gibiansky, J Miller, K Peng, ..., Y Zhou
Neural Information Processing Systems (NIPS), 2017
Cited by 655* · 2017
Glam: Efficient scaling of language models with mixture-of-experts
N Du, Y Huang, AM Dai, S Tong, D Lepikhin, Y Xu, M Krikun, Y Zhou, ...
International Conference on Machine Learning, 5547-5569, 2022
Cited by 553 · 2022
Neural voice cloning with a few samples
S Arik, J Chen, K Peng, W Ping, Y Zhou
Advances in neural information processing systems 31, 2018
Cited by 461 · 2018
OpenPiton: An open source manycore research framework
J Balkind, M McKeown, Y Fu, T Nguyen, Y Zhou, A Lavrov, M Shahrad, ...
ACM SIGPLAN Notices 51 (4), 217-232, 2016
Cited by 282 · 2016
Mixture-of-experts with expert choice routing
Y Zhou, T Lei, H Liu, N Du, Y Huang, V Zhao, AM Dai, QV Le, J Laudon
Advances in Neural Information Processing Systems 35, 7103-7114, 2022
Cited by 244 · 2022
T5: Exploring the limits of transfer learning with a unified text-to-text transformer
C Raffel, N Shazeer, A Roberts, K Lee, S Narang, M Matena, Y Zhou, W Li, ...
Journal of Machine Learning Research 21, 1-67, 2020
Cited by 183 · 2020
Glam: Efficient scaling of language models with mixture-of-experts
N Du, Y Huang, AM Dai, S Tong, D Lepikhin, Y Xu, M Krikun, Y Zhou, ..., T Duke, L Dixon, K Zhang, QV Le, Y Wu, Z Chen, C Cui, 2021
Cited by 163 · 2021
Atomic In-place Updates for Non-volatile Main Memories with Kamino-Tx
A Memaripour, A Badam, A Phanishayee, Y Zhou, R Alagappan, ...
EuroSys '17 Proceedings of the Twelfth European Conference on Computer …, 2017
Cited by 128 · 2017
Lamda: Language models for dialog applications
R Thoppilan, D De Freitas, J Hall, N Shazeer, A Kulshreshtha, HT Cheng, ..., R Delos Santos
Cited by 102 · 2022
Do transformer modifications transfer across implementations and applications?
S Narang, HW Chung, Y Tay, W Fedus, T Fevry, M Matena, K Malkan, ...
arXiv preprint arXiv:2102.11972, 2021
Cited by 100 · 2021
Exploring the limits of transfer learning with a unified text-to-text transformer
C Raffel, N Shazeer, A Roberts, K Lee, S Narang, M Matena, Y Zhou, W Li, ...
arXiv preprint arXiv:1910.10683, 2019
Cited by 92 · 2019
A learned performance model for tensor processing units
S Kaufman, P Phothilimthana, Y Zhou, C Mendis, S Roy, A Sabne, ...
Proceedings of Machine Learning and Systems 3, 387-400, 2021
Cited by 82 · 2021
Resource-efficient neural architect
Y Zhou, S Ebrahimi, SÖ Arık, H Yu, H Liu, G Diamos
arXiv preprint arXiv:1806.07912, 2018
Cited by 78 · 2018
MITTS: Memory inter-arrival time traffic shaping
Y Zhou, D Wentzlaff
ACM SIGARCH Computer Architecture News 44 (3), 532-544, 2016
Cited by 63 · 2016
Mixture-of-experts meets instruction tuning: A winning combination for large language models
S Shen, L Hou, Y Zhou, N Du, S Longpre, J Wei, HW Chung, B Zoph, ...
arXiv preprint arXiv:2305.14705, 2023
Cited by 61 · 2023
Exploring the limits of transfer learning with a unified text-to-text transformer
A Roberts, C Raffel, K Lee, M Matena, N Shazeer, PJ Liu, S Narang, W Li, ...
Google, Tech. Rep., 2019
Cited by 61 · 2019
Deep learning scaling is predictable, empirically
J Hestness, S Narang, N Ardalani, G Diamos, H Jun, H Kianinejad, ...
arXiv preprint arXiv:1712.00409, 2017
Cited by 59 · 2017
Articles 1–20