Praneeth Netrapalli
Verified email at google.com · Homepage
Title | Cited by | Year
Low-rank matrix completion using alternating minimization
P Jain, P Netrapalli, S Sanghavi
Proceedings of the forty-fifth annual ACM symposium on Theory of computing …, 2013
1187 | 2013
How to escape saddle points efficiently
C Jin, R Ge, P Netrapalli, SM Kakade, MI Jordan
International conference on machine learning, 1724-1732, 2017
904 | 2017
Phase retrieval using alternating minimization
P Netrapalli, P Jain, S Sanghavi
Advances in Neural Information Processing Systems 26, 2013
690 | 2013
Morel: Model-based offline reinforcement learning
R Kidambi, A Rajeswaran, P Netrapalli, T Joachims
Advances in neural information processing systems 33, 21810-21823, 2020
569 | 2020
What is local optimality in nonconvex-nonconcave minimax optimization?
C Jin, P Netrapalli, M Jordan
International conference on machine learning, 4880-4889, 2020
398* | 2020
Non-convex robust PCA
P Netrapalli, UN Niranjan, S Sanghavi, A Anandkumar, P Jain
Advances in neural information processing systems 27, 2014
346 | 2014
The pitfalls of simplicity bias in neural networks
H Shah, K Tamuly, A Raghunathan, P Jain, P Netrapalli
Advances in Neural Information Processing Systems 33, 9573-9585, 2020
296 | 2020
Accelerated gradient descent escapes saddle points faster than gradient descent
C Jin, P Netrapalli, MI Jordan
Conference On Learning Theory, 1042-1085, 2018
273 | 2018
Learning the graph of epidemic cascades
P Netrapalli, S Sanghavi
ACM SIGMETRICS Performance Evaluation Review 40 (1), 211-222, 2012
225 | 2012
On nonconvex optimization for machine learning: Gradients, stochasticity, and saddle points
C Jin, P Netrapalli, R Ge, SM Kakade, MI Jordan
Journal of the ACM (JACM) 68 (2), 1-29, 2021
219* | 2021
Efficient algorithms for smooth minimax optimization
KK Thekumparampil, P Jain, P Netrapalli, S Oh
Advances in Neural Information Processing Systems 32, 2019
195 | 2019
Parallelizing stochastic gradient descent for least squares regression: mini-batching, averaging, and model misspecification
P Jain, SM Kakade, R Kidambi, P Netrapalli, A Sidford
Journal of machine learning research 18 (223), 1-42, 2018
194* | 2018
Learning sparsely used overcomplete dictionaries via alternating minimization
A Agarwal, A Anandkumar, P Jain, P Netrapalli
SIAM Journal on Optimization 26 (4), 2775-2799, 2016
193 | 2016
Efficient domain generalization via common-specific low-rank decomposition
V Piratla, P Netrapalli, S Sarawagi
International Conference on Machine Learning, 7728-7738, 2020
177 | 2020
Information-theoretic thresholds for community detection in sparse networks
J Banks, C Moore, J Neeman, P Netrapalli
Conference on Learning Theory, 383-416, 2016
157* | 2016
The step decay schedule: A near optimal, geometrically decaying learning rate procedure for least squares
R Ge, SM Kakade, R Kidambi, P Netrapalli
Advances in neural information processing systems 32, 2019
150 | 2019
Streaming PCA: Matching matrix Bernstein and near-optimal finite sample guarantees for Oja's algorithm
P Jain, C Jin, SM Kakade, P Netrapalli, A Sidford
Conference on learning theory, 1147-1164, 2016
146 | 2016
A short note on concentration inequalities for random vectors with subgaussian norm
C Jin, P Netrapalli, R Ge, SM Kakade, MI Jordan
arXiv preprint arXiv:1902.03736, 2019
133 | 2019
Learning sparsely used overcomplete dictionaries
A Agarwal, A Anandkumar, P Jain, P Netrapalli, R Tandon
Conference on Learning Theory, 123-137, 2014
123 | 2014
On the insufficiency of existing momentum schemes for stochastic optimization
R Kidambi, P Netrapalli, P Jain, S Kakade
2018 Information Theory and Applications Workshop (ITA), 1-9, 2018
122 | 2018
Articles 1–20