Seungyeon Kim
Google DeepMind
Verified email at google.com - Homepage
Title · Cited by · Year
Why are adaptive methods good for attention models?
J Zhang, SP Karimireddy, A Veit, S Kim, S Reddi, S Kumar, S Sra
Advances in Neural Information Processing Systems 33, 15383-15393, 2020
334* · 2020
Local low-rank matrix approximation
J Lee, S Kim, G Lebanon, Y Singer
International conference on machine learning, 82-90, 2013
260 · 2013
Local collaborative ranking
J Lee, S Bengio, S Kim, G Lebanon, Y Singer
Proceedings of the 23rd international conference on World wide web, 85-96, 2014
160 · 2014
LLORMA: Local low-rank matrix approximation
J Lee, S Kim, G Lebanon, Y Singer, S Bengio
Journal of Machine Learning Research 17 (15), 1-24, 2016
127 · 2016
A statistical perspective on distillation
AK Menon, AS Rawat, S Reddi, S Kim, S Kumar
International Conference on Machine Learning, 7632-7642, 2021
118* · 2021
ConceptVector: Text visual analytics via interactive lexicon building using word embedding
D Park, S Kim, J Lee, J Choo, N Diakopoulos, N Elmqvist
IEEE transactions on visualization and computer graphics 24 (1), 361-370, 2017
108 · 2017
Evaluations and methods for explanation through robustness analysis
CY Hsieh, CK Yeh, X Liu, P Ravikumar, S Kim, S Kumar, CJ Hsieh
arXiv preprint arXiv:2006.00442, 2020
63 · 2020
On the reproducibility of neural network predictions
S Bhojanapalli, K Wilber, A Veit, AS Rawat, S Kim, A Menon, S Kumar
arXiv preprint arXiv:2102.03349, 2021
41 · 2021
Beyond Sentiment: The Manifold of Human Emotions
S Kim, F Li, G Lebanon, I Essa
Proceedings of the 16th International Conference on Artificial Intelligence …, 2013
37 · 2013
Rankdistil: Knowledge distillation for ranking
S Reddi, RK Pasumarthi, A Menon, AS Rawat, F Yu, S Kim, A Veit, ...
International Conference on Artificial Intelligence and Statistics, 2368-2376, 2021
32 · 2021
In defense of dual-encoders for neural ranking
A Menon, S Jayasumana, AS Rawat, S Kim, S Reddi, S Kumar
International Conference on Machine Learning, 15376-15400, 2022
27 · 2022
Semantic label smoothing for sequence to sequence problems
M Lukasik, H Jain, AK Menon, S Kim, S Bhojanapalli, F Yu, S Kumar
arXiv preprint arXiv:2010.07447, 2020
21 · 2020
Automatic feature induction for stagewise collaborative filtering
J Lee, M Sun, S Kim, G Lebanon
Advances in Neural Information Processing Systems 25, 2012
17 · 2012
Supervision complexity and its role in knowledge distillation
H Harutyunyan, AS Rawat, AK Menon, S Kim, S Kumar
arXiv preprint arXiv:2301.12245, 2023
12 · 2023
Efficient training of language models using few-shot learning
SJ Reddi, S Miryoosefi, S Karp, S Krishnan, S Kale, S Kim, S Kumar
International Conference on Machine Learning, 14553-14568, 2023
11 · 2023
Matrix approximation under local low-rank assumption
J Lee, S Kim, G Lebanon, Y Singer
arXiv preprint arXiv:1301.3192, 2013
9 · 2013
Embeddistill: A geometric knowledge distillation for information retrieval
S Kim, AS Rawat, M Zaheer, S Jayasumana, V Sadhanala, W Jitkrittum, ...
arXiv preprint arXiv:2301.12005, 2023
6 · 2023
Estimating Temporal Dynamics of Human Emotions
S Kim, J Lee, G Lebanon, H Park
Proceedings of the 29th AAAI Conference on Artificial Intelligence (AAAI), 2015
5 · 2015
Local Context Sparse Coding
S Kim, J Lee, G Lebanon, H Park
Proceedings of the 29th AAAI Conference on Artificial Intelligence (AAAI), 2015
5 · 2015
Fast spammer detection using structural rank
S Kim, H Park, G Lebanon
arXiv preprint arXiv:1407.7072, 2014
4 · 2014
Articles 1–20