Jinwoo Kim
Title · Cited by · Year
Pure transformers are powerful graph learners
J Kim, TD Nguyen, S Min, S Cho, M Lee, H Lee, S Hong
Advances in Neural Information Processing Systems 35, 2022
Cited by 106 · 2022
SetVAE: learning hierarchical composition for generative modeling of set-structured data
J Kim, J Yoo, J Lee, S Hong
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2021
Cited by 55 · 2021
Transformers generalize DeepSets and can be extended to graphs and hypergraphs
J Kim, S Oh, S Hong
Advances in Neural Information Processing Systems 34, 2021
Cited by 29 · 2021
Spontaneous retinal waves can generate long-range horizontal connectivity in visual cortex
J Kim, M Song, J Jang, SB Paik
Journal of Neuroscience 40 (34), 6584-6599, 2020
Cited by 21 · 2020
Universal few-shot learning of dense prediction tasks with visual token matching
D Kim, J Kim, S Cho, C Luo, S Hong
International Conference on Learning Representations, 2023
Cited by 13 · 2023
Equivariant hypergraph neural networks
J Kim, S Oh, S Cho, S Hong
European Conference on Computer Vision, 2022
Cited by 10 · 2022
Learning probabilistic symmetrization for architecture agnostic equivariance
J Kim, TD Nguyen, A Suleymanzade, H An, S Hong
Advances in Neural Information Processing Systems 36, 2023
Cited by 4 · 2023
Transformers meet stochastic block models: attention with data-adaptive sparsity and cost
S Cho, S Min, J Kim, M Lee, H Lee, S Hong
Advances in Neural Information Processing Systems 35, 2022
Cited by 1 · 2022
Learning symmetrization for equivariance with orbit distance minimization
TD Nguyen, J Kim, H Yang, S Hong
NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations, 2023
2023
3D denoisers are good 2D teachers: molecular pretraining via denoising and cross-modal distillation
S Cho, DW Jeong, SM Ko, J Kim, S Han, S Hong, H Lee, M Lee
arXiv preprint arXiv:2309.04062, 2023
2023