| Title | Authors | Venue | Cited by | Year |
| --- | --- | --- | --- | --- |
| Pre-trained models: Past, present and future | X Han, Z Zhang, N Ding, Y Gu, X Liu, Y Huo, J Qiu, Y Yao, A Zhang, ... | AI Open 2, 225-250, 2021 | 811 | 2021 |
| Persistent B+-trees in non-volatile main memory | S Chen, Q Jin | Proceedings of the VLDB Endowment 8 (7), 786-797, 2015 | 417 | 2015 |
| Fine-grained video-text retrieval with hierarchical graph reasoning | S Chen, Y Zhao, Q Jin, Q Wu | Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2020 | 356 | 2020 |
| The SuperSID project: Exploiting high-level information for high-accuracy speaker recognition | D Reynolds, W Andrews, J Campbell, J Navratil, B Peskin, A Adami, Q Jin, ... | 2003 IEEE International Conference on Acoustics, Speech, and Signal …, 2003 | 356 | 2003 |
| Say as you wish: Fine-grained control of image caption generation with abstract scene graphs | S Chen, Q Jin, P Wang, Q Wu | Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2020 | 259 | 2020 |
| Speech emotion recognition with acoustic and lexical features | Q Jin, C Li, S Chen, H Wu | 2015 IEEE International Conference on Acoustics, Speech and Signal …, 2015 | 218 | 2015 |
| MMGCN: Multimodal fusion via deep graph convolution network for emotion recognition in conversation | J Hu, Y Liu, J Zhao, Q Jin | arXiv preprint arXiv:2107.06779, 2021 | 191 | 2021 |
| Multimodal multi-task learning for dimensional and continuous emotion recognition | S Chen, Q Jin, J Zhao, S Wang | Proceedings of the 7th Annual Workshop on Audio/Visual Emotion Challenge, 19-26, 2017 | 169 | 2017 |
| Multi-modal dimensional emotion recognition using recurrent neural networks | S Chen, Q Jin | Proceedings of the 5th International Workshop on Audio/Visual Emotion …, 2015 | 146 | 2015 |
| WenLan: Bridging vision and language by large-scale multi-modal pre-training | Y Huo, M Zhang, G Liu, H Lu, Y Gao, G Yang, J Wen, H Zhang, B Xu, ... | arXiv preprint arXiv:2103.06561, 2021 | 137 | 2021 |
| MM-Diffusion: Learning multi-modal diffusion models for joint audio and video generation | L Ruan, Y Ma, H Yang, H He, B Liu, J Fu, NJ Yuan, Q Jin, B Guo | Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2023 | 136 | 2023 |
| Missing modality imagination network for emotion recognition with uncertain missing modalities | J Zhao, R Li, Q Jin | Proceedings of the 59th Annual Meeting of the Association for Computational …, 2021 | 131 | 2021 |
| Far-field speaker recognition | Q Jin, T Schultz, A Waibel | IEEE Transactions on Audio, Speech, and Language Processing 15 (7), 2023-2032, 2007 | 125 | 2007 |
| TS2-Net: Token shift and selection transformer for text-video retrieval | Y Liu, P Xiong, L Xu, S Cao, Q Jin | European Conference on Computer Vision, 319-335, 2022 | 120 | 2022 |
| Describing videos using multi-modal fusion | Q Jin, J Chen, S Chen, Y Xiong, A Hauptmann | Proceedings of the 24th ACM International Conference on Multimedia, 1087-1091, 2016 | 119 | 2016 |
| Speaker segmentation and clustering in meetings | Q Jin, T Schultz | Interspeech 4, 597-600, 2004 | 118 | 2004 |
| UReader: Universal OCR-free visually-situated language understanding with multimodal large language model | J Ye, A Hu, H Xu, Q Ye, M Yan, G Xu, C Li, J Tian, Q Qian, J Zhang, Q Jin, ... | arXiv preprint arXiv:2310.05126, 2023 | 82 | 2023 |
| Multi-modal conditional attention fusion for dimensional emotion prediction | S Chen, Q Jin | Proceedings of the 24th ACM International Conference on Multimedia, 571-575, 2016 | 82 | 2016 |
| Speaker de-identification via voice transformation | Q Jin, AR Toth, T Schultz, AW Black | 2009 IEEE Workshop on Automatic Speech Recognition & Understanding, 529-533, 2009 | 82 | 2009 |
| Is voice transformation a threat to speaker identification? | Q Jin, AR Toth, AW Black, T Schultz | 2008 IEEE International Conference on Acoustics, Speech and Signal …, 2008 | 78 | 2008 |