Jingzhao Zhang
IIIS, Tsinghua University
Verified email at mit.edu - Homepage
Why gradient clipping accelerates training: A theoretical justification for adaptivity
J Zhang, T He, S Sra
International Conference on Learning Representations 2020, 2020
Why are adaptive methods good for attention models?
J Zhang, SP Karimireddy, A Veit, S Kim, S Reddi, S Kumar, S Sra
Advances in Neural Information Processing Systems 33, 15383-15393, 2020
3D computer-generated holography by non-convex optimization
J Zhang, N Pégard, J Zhong, H Adesnik, L Waller
Optica 4 (10), 1306-1313, 2017
Direct Runge-Kutta Discretization Achieves Acceleration
J Zhang, A Mokhtari, S Sra, A Jadbabaie
Advances in Neural Information Processing Systems 2018, 2018
Complexity of finding stationary points of nonconvex nonsmooth functions
J Zhang, H Lin, S Jegelka, S Sra, A Jadbabaie
International Conference on Machine Learning, 11173-11182, 2020
Fast federated learning in the presence of arbitrary device unavailability
X Gu, K Huang, J Zhang, L Huang
Advances in Neural Information Processing Systems 34, 12052-12064, 2021
Coping with label shift via distributionally robust optimisation
J Zhang, A Menon, A Veit, S Bhojanapalli, S Kumar, S Sra
ICLR 2021, 2020
Exposure bias versus self-recovery: Are distortions really incremental for autoregressive text generation?
T He, J Zhang, Z Zhou, J Glass
ACL 2021, 2019
Understanding the unstable convergence of gradient descent
K Ahn, J Zhang, S Sra
International Conference on Machine Learning, 247-257, 2022
R-SPIDER: A Fast Riemannian Stochastic Optimization Algorithm with Curvature Independent Rate
J Zhang, H Zhang, S Sra
arXiv preprint arXiv:1811.04194, 2018
Complexity lower bounds for nonconvex-strongly-concave min-max optimization
H Li, Y Tian, J Zhang, A Jadbabaie
Advances in Neural Information Processing Systems 34, 1792-1804, 2021
A probe towards understanding GAN and VAE models
L Mi, M Shen, J Zhang
arXiv preprint arXiv:1812.05676, 2018
Provably efficient algorithms for multi-objective competitive RL
T Yu, Y Tian, J Zhang, S Sra
International Conference on Machine Learning, 12167-12176, 2021
On bilevel optimization without lower-level strong convexity
L Chen, J Xu, J Zhang
arXiv preprint arXiv:2301.00712, 2023
Sion’s minimax theorem in geodesic metric spaces and a Riemannian extragradient algorithm
P Zhang, J Zhang, S Sra
SIAM Journal on Optimization 33 (4), 2885-2908, 2023
Achieving Acceleration in Distributed Optimization via Direct Discretization of the Heavy-Ball ODE
J Zhang, CA Uribe, A Mokhtari, A Jadbabaie
2019 American Control Conference, 2018
Efficient Sampling on Riemannian Manifolds via Langevin MCMC
X Cheng, J Zhang, S Sra
Advances in Neural Information Processing Systems 35, 5995-6006, 2022
Acceleration in First Order Quasi-strongly Convex Optimization by ODE Discretization
J Zhang, S Sra, A Jadbabaie
2019 Conference on Decision and Control, 2019
Neural Network Weights Do Not Converge to Stationary Points: An Invariant Measure Perspective
J Zhang, H Li, S Sra, A Jadbabaie
ICML 2022, 2021
Online policy optimization for robust MDP
J Dong, J Li, B Wang, J Zhang
arXiv preprint arXiv:2209.13841, 2022