Songfang Huang
Alibaba DAMO Academy
Semantic relation classification via convolutional neural networks with simple negative sampling
K Xu, Y Feng, S Huang, D Zhao
arXiv preprint arXiv:1506.07650, 2015
Question answering on freebase via relation extraction and textual evidence
K Xu, S Reddy, Y Feng, S Huang, D Zhao
arXiv preprint arXiv:1603.00957, 2016
Learning with noise: Enhance distantly supervised relation extraction with dynamic transition matrix
B Luo, Y Feng, Z Wang, Z Zhu, S Huang, R Yan, D Zhao
arXiv preprint arXiv:1705.03995, 2017
Combining graph-based learning with automated data collection for code vulnerability detection
H Wang, G Ye, Z Tang, SH Tan, S Huang, D Fang, Y Feng, L Bian, ...
IEEE Transactions on Information Forensics and Security 16, 1943-1958, 2020
MELR: Meta-learning via modeling episode-level relationships for few-shot learning
N Fei, Z Lu, T Xiang, S Huang
International Conference on Learning Representations, 2020
Hybrid question answering over knowledge base and free text
K Xu, Y Feng, S Huang, D Zhao
Proceedings of COLING 2016, the 26th International Conference on …, 2016
Hierarchical Bayesian language models for conversational speech recognition
S Huang, S Renals
IEEE Transactions on Audio, Speech, and Language Processing 18 (8), 1941-1954, 2010
E2E-VLP: end-to-end vision-language pre-training enhanced by visual learning
H Xu, M Yan, C Li, B Bi, S Huang, W Xiao, F Huang
arXiv preprint arXiv:2106.01804, 2021
IEPT: Instance-level and episode-level pretext tasks for few-shot learning
M Zhang, J Zhang, Z Lu, T Xiang, M Ding, S Huang
International Conference on Learning Representations, 2020
Marrying up regular expressions with neural networks: A case study for spoken language understanding
B Luo, Y Feng, Z Wang, S Huang, R Yan, D Zhao
arXiv preprint arXiv:1805.05588, 2018
Hierarchical Pitman-Yor language models for ASR in meetings
S Huang, S Renals
2007 IEEE Workshop on Automatic Speech Recognition & Understanding (ASRU …, 2007
StructuralLM: Structural pre-training for form understanding
C Li, B Bi, M Yan, W Wang, S Huang, F Huang, L Si
Proceedings of the 59th Annual Meeting of the Association for Computational …, 2021
VECO: Variable and flexible cross-lingual pre-training for language understanding and generation
F Luo, W Wang, J Liu, Y Liu, B Bi, S Huang, F Huang, L Si
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics, 2021
PALM: Pre-training an autoencoding&autoregressive language model for context-conditioned generation
B Bi, C Li, C Wu, M Yan, W Wang, S Huang, F Huang, L Si
arXiv preprint arXiv:2004.07159, 2020
Raise a child in large language model: Towards effective and generalizable fine-tuning
R Xu, F Luo, Z Zhang, C Tan, B Chang, S Huang, F Huang
arXiv preprint arXiv:2109.05687, 2021
Biomedical question answering: A survey of approaches and challenges
Q Jin, Z Yuan, G Xiong, Q Yu, H Ying, C Tan, M Chen, S Huang, X Liu, ...
ACM Computing Surveys (CSUR) 55 (2), 1-36, 2022
Improving biomedical pretrained language models with knowledge
Z Yuan, Y Liu, C Tan, S Huang, F Huang
arXiv preprint arXiv:2104.10344, 2021
Cross-language document summarization via extraction and ranking of multiple summaries
X Wan, F Luo, X Sun, S Huang, J Yao
Knowledge and Information Systems 58 (2), 481-499, 2019
Noisy-labeled NER with confidence estimation
K Liu, Y Fu, C Tan, M Chen, N Zhang, S Huang, S Gao
arXiv preprint arXiv:2104.04318, 2021
Noise-robust semi-supervised learning by large-scale sparse coding
Z Lu, X Gao, L Wang, JR Wen, S Huang
Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015